#linode IRC Logs for 2008-06-08

---Logopened Sun Jun 08 00:00:49 2008
---Daychanged Sun Jun 08 2008
00:00<Xel>caker: Still working on those storage nodes?
00:00<Xel>It's been a while since I last asked.
00:04<rvhi>how independent are the 2 DCs? what infrastructure do they share? it looks like they use different ISPs, so one ISP's problem is unlikely to affect another DC.
00:04<path->there are four
00:04<path->each is a different company
00:05<path->i think the only common thing is the linode manager
00:05<scorche>ISP? ;)
00:05<path->but the hosts can run independent of that i think
00:05<path->if it were to go out temporarily..
00:06<scorche>oh...linode manager
00:06<scorche>i thought you were on about his "ISP" ;)
00:07-!-lastsh [] has joined #linode
00:08<bd_>Xel: He's been saying something about Q3
00:09<rvhi>has any DC been hit by a DoS? our other nodes with another vps were hit by a DoS; to their credit, they resolved it in minutes, very impressive.
00:10<rvhi>btw, the symptom was huge packet loss to our nodes
00:10<bd_>in the past, occasionally. They get the target IP blackholed quickly enough
00:13<Peng>Linode has a lot of bandwidth, so it would take a pretty large DoS to impact things very badly.
00:14<Peng>If you get DoSed, your IP will be null routed, and if you get DoSed a lot, you'll be kindly asked to move to another provider. ;)
00:14<bd_>$20/mo does not pay for absorbing frequent DoSes :)
00:18<rvhi>haha, i don't have enemies who hate me enough to waste resources to bring me down.
00:18<Peng>That's what they all say. :P
00:18<irgeek>That's what they all say. ;)
00:19<bd_>"That's what they all say." <--That's what they all say.
00:20<Peng>That it is.
00:23<irgeek>GPRS s s s s l l l l o o o o w w w w
00:24<Peng>Nice, an Amazon EC2 instance tried an SSH dictionary attack against me.
00:24<Peng>DenyHosts got it, for once.
00:25-!-arooni-mobile [] has quit [Quit: Leaving]
00:27<lastsh>Peng: from i got it too.
00:27<Peng>lastsh: Yeah.
00:29-!-Bdragon [] has quit [Read error: Connection reset by peer]
00:30<avongauss>I have fail2ban set to reject for 1 day after 3 failed attempts - usually never sure if it's a real dictionary attack... :(
00:30-!-Bdragon [] has joined #linode
00:31<Peng>DenyHosts only polls auth.log every 30 seconds, so attackers usually get in a good number of attempts.
00:31*Peng goes to bed.
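The window Peng describes (DenyHosts only polls auth.log every 30 seconds, so an attacker gets a burst of attempts in) is easy to picture with a little log parsing. A minimal sketch, not DenyHosts itself: it scans auth.log-style lines and reports source IPs with repeated failed logins. The sample lines, IPs, and the threshold of 3 (echoing avongauss's fail2ban setting) are all illustrative.

```python
import re
from collections import Counter

# Regex for the classic OpenSSH "Failed password" line in auth.log.
FAILED_RE = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def count_failed_attempts(lines, threshold=3):
    """Return source IPs with at least `threshold` failed login attempts."""
    hits = Counter()
    for line in lines:
        m = FAILED_RE.search(line)
        if m:
            hits[m.group(1)] += 1
    return {ip: n for ip, n in hits.items() if n >= threshold}

sample = [
    "Jun  8 00:24:01 li-1 sshd[123]: Failed password for root from 10.0.0.5 port 4022 ssh2",
    "Jun  8 00:24:03 li-1 sshd[123]: Failed password for invalid user admin from 10.0.0.5 port 4023 ssh2",
    "Jun  8 00:24:05 li-1 sshd[123]: Failed password for invalid user test from 10.0.0.5 port 4024 ssh2",
    "Jun  8 00:24:07 li-1 sshd[125]: Accepted publickey for peng from 10.0.0.9 port 4100 ssh2",
]
print(count_failed_attempts(sample))  # {'10.0.0.5': 3}
```

A 30-second polling interval means everything the attacker manages in that window lands before the ban does, which is exactly the gap being discussed.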
00:34-!-Talman [~ender@] has joined #linode
00:35<Talman>Hey, I'm checking to see if I have S3 backups, if I do, anyone able to help me recover some directories from the S3 instance? I'm on a bare server.
00:36<irgeek>How did you do the backups to S3?
00:36<bd_>Talman: If you don't know, you probably don't.
00:36*Talman followed the wiki.
00:37<irgeek>The wiki where?
00:37<Talman>linode wiki.
00:37<Talman>Actually, trying to remember the name of a deb.
00:37<bd_> ?
00:37<Talman>s3backup or something silly like that.
00:38<Talman>Yeah, switched from s3sync to something python based, but lost the server. Right now, I just want to pull directories out of S3 storage.
00:38<Talman>And can't remember what the heck programs access S3. I'm basically sitting in a bar on a boat, and it's hard to think with 60-decibel rap music.
00:40<Talman>That's it!
00:40<bd_>Talman: I hope you have the private key that you encrypted it with (or disabled it)
00:41<lastsh>cool. just send me a drink from the bar.
00:41<Talman>I do.
00:41<Talman>Keys, keys I got.
00:41<bd_>okay, one moment
00:41<Talman>Installing duplicity via aptitude.
00:42<bd_>do you remember the bucket name?
00:42<bd_>okay, try:
00:43<bd_>duplicity list-current-files s3+http://bucketname
00:43<bd_>if that works, duplicity restore s3+http://bucketname targetdir
00:43<Talman>oh lol, the pkg is borked, wants boto.
00:43<Talman>Wasn't a dependency. installing.
00:44<bd_>boto's only needed for the S3 mode
00:44<bd_>duplicity supports other targets
00:44<Talman>Ah, gotta find it in apt.
00:44-!-dc0e [] has joined #linode
00:45-!-dc0e [] has quit [Remote host closed the connection]
00:45<Talman>hmm, obsoleted?
00:46<Talman>Package python-boto is not available, but is referred to by another package.
00:46<Talman>This may mean that the package is missing, has been obsoleted, or
00:46<Talman>is only available from another source
00:46<bd_>its in testing/unstable
00:47<bd_> <-- not in stable
00:47<Talman>I'm using Ubuntu 7.10, and at this point, I've lost most of what the hell I'm even thinking about. The install image doesn't use testing?
00:47<Talman>The linode install image.
00:47<bd_>oh, ubuntu
00:47<Talman>It reports lenny/sid
00:48<bd_>one sec
00:48<Talman>but it's ubuntu.
00:48<bd_>that shouldn't... where does it report that?
00:48<bd_>look in /etc/apt/sources.list
00:48<Talman>I'm on 1k/s internet, btw.
00:48<Talman>When I type apt-get or aptitude.
00:48<bd_>deb <url> <suite> <otherstuff>
00:48<bd_>what's the suite?
00:49<bd_>okay, you're on ubuntu
00:49<Talman>universe enabled
00:49<bd_>you'll need to upgrade to hardy
00:49<Talman>Is it safe to with a linode image?
00:49<Talman>What's the command again? ubuntu-upgrade or something?
00:49<bd_>what was the way to do it on ubuntu again...
00:50<bd_>install update-manager-core and run do-release-upgrade
00:50<m0unds_>works really well, just be careful of when it replaces your init scripts
00:51<Talman>That's it.
00:51<Talman>I haven't modified my init scripts.
00:52<Talman>Ok, running upgrade.
00:52<m0unds_>i meant just in terms of xen compatible stuff
00:52<bd_>m0unds_: it just breaks the lish console, right?
00:52<m0unds_>not sure, i was cautious about mine
00:54-!-Ender [~ender@] has joined #linode
00:54-!-Talman is now known as Guest1877
00:54-!-Ender is now known as Talman
00:55<Talman>Time card dance.
00:55<Talman>How badly modified are the xen scripts?
00:55-!-Guest1877 [~ender@] has quit [Remote host closed the connection]
00:57<bd_>Talman: tiny change to the console tty list
00:58<bd_>(disabling everything but tty1)
01:05-!-Talman [~ender@] has quit [Ping timeout: 480 seconds]
01:15-!-walber1 [] has left #linode []
01:15-!-TheFirst [] has joined #linode
01:19-!-^GaveUp^ [] has quit [Ping timeout: 480 seconds]
01:59<irgeek>!calc 2419200 / 3600
01:59<@linbot>irgeek: 2,419,200 / 3,600 = 672
01:59<irgeek>!calc 2419200 / (3600 * 24)
01:59<@linbot>irgeek: 2,419,200 / (3,600 * 24) = 28
02:00<@linbot>m0unds_: *click*
02:00<rvhi>on an ubuntu 8.04 node, how do i disable ipv6? i modified /etc/modprobe.d/alias and removed the ipv6 alias, still won't work
02:03<m0unds_>i imagine you could use the same command as the desktop
02:03<m0unds_>add "blacklist ipv6" to /etc/modprobe.d/blacklist.local
02:04<m0unds_>sudo sh -c 'echo blacklist ipv6 >> /etc/modprobe.d/blacklist.local'
02:04<m0unds_>if the file doesn't exist
02:06<bd_>m0unds_: no, ipv6 is built in, not a module
02:06<rvhi>that didn't work, is there a way to disable it?
02:06<bd_>not as far as I know, why?
02:07<bd_>is it breaking something?
02:07<rvhi>it also wastes resources
02:07<irgeek>Well there's your problem. FTP blows.
02:07<bd_>hmm, I'm running pure-ftpd fine, and I have ipv6 configured with a tunnel and everything
02:07<bd_>also, it doesn't waste any significant amount of resources
02:08<bd_>maybe a few kb
02:08<rvhi>do you use mysql backend for pure-ftpd?
02:08<irgeek>rvhi: Having IPv6 wastes resources? Are you serious?
02:08<rvhi>don't know too much about ipv6, but every little bit of memory and cpu counts, right?
02:09<Napta>rvhi: You can safely leave ipv6 alone if you're not using it. Disabling it will just make your OS diverge from a standard build, and perhaps even complicate things because you have to remember the customizations. I would leave it as stock as possible :)
02:10-!-jm [] has quit [Ping timeout: 480 seconds]
02:11<bd_>Also you'd have to get one of the staff to build a new non-ipv6 kernel just for you :P
02:11<irgeek>It's compiled into the kernel, so the kernel will allocate some memory no matter what. Considering how little memory that would be, I wouldn't get my panties in a twist over it.
02:11<bd_>and don't forget the reams of other unused features!
02:11<bd_>Are you using SCTP? the MARK target of iptables?
02:12<bd_>How about quota support or ipsec? The loopback driver? :D
02:12<m0unds_>i forgot about the whole kernel thing :D
02:12<irgeek>And you'd have to put it back in a year or two when IPv4 space gets *really* limited.
02:14<rvhi>eh, i have a couple of vps from another vendor, and i was able to remove it by modifying modprobe.d/alias; how is this different? it just annoys me when I use netstat and see :::25
02:15<rvhi>ipv4 still has a long, long way to go; that's a whole different story though
02:15<Napta>Well I suppose if you have multiple servers it will be better to keep everything consistent
02:15<avongauss>unless the kernel at the other vps was modular or you rolled your own, you probably didn't "remove" it. Just disabled it.
02:16<bd_>Well, a modular kernel isn't a bad idea
02:16<bd_>just, what happens if you're on latest 2.6, and that changes without you getting a net module set
02:16<rvhi>the other one was a module, i guess. i use their stock image
02:19<rvhi>is there a way to install ubuntu from scratch without using a pre-built image?
02:21<bd_>rvhi: You can use a prebuilt image to bootstrap a distribution whatever way you want
02:24-!-andrew [] has joined #linode
02:24<andrew>anybody online now?
02:25<andrew>any admin?
02:25<irgeek>You can also create an install and copy/rsync it up to your Linode.
02:25<irgeek>andrew: They may be around. What's your question?
02:25<andrew>hello irgeek:)
02:26<andrew>we just talked yesterday
02:26<andrew>finally I bought a vps from linode
02:26<andrew>however there is a question
02:26<andrew>I'm interested in which modules are loaded in my kernel
02:26<bd_>andrew: none, by default
02:26<irgeek>zcat /proc/config.gz
02:27<andrew>let me see
02:28<andrew>its not clear for me
02:28<andrew>can I compile modules into the kernel or not?
02:29<andrew>lets say fuse module
02:29<irgeek>Yes, you can compile modules.
02:29<bd_>andrew: Linode kernels have no modules loaded by default (if it's xen, which it is for you, you're free to build new modules and load them). Many features are, however, built in - including fuse.
02:29<bd_>So you don't need to build a fuse module.
02:29<andrew>is there any howto around?
02:29<bd_>It'll just work :)
02:29<bd_>if it's something other than fuse it might or might not be in
02:29<bd_>if it comes with linux 2.6.18, it's probably in
02:29<andrew>then how can I compile modules?
02:30<bd_>grab and use it as your kernel source when building the module
02:30<irgeek>CONFIG_FUSE_FS=y <-- Fuse is already in the kernel
02:30<bd_>load with modprobe -f
02:30<andrew>If there is a module which is not in the kernel, can I compile it or not?
02:30<bd_>andrew: you can, yes
02:30<andrew>no modprobe on my debian4:(
02:31<andrew>I should download the souce
02:31<bd_>install module-init-tools for modprobe
02:31<bd_>andrew: Read the module's readme, and where it says to put in the path to the kernel source, use that kernel source
02:31<bd_>basically, just like you'd do it elsewhere
02:31<andrew>hehe lsmod gave nothing
02:31<bd_>except that you'll need to use modprobe -f to load it
02:32<andrew>I see
02:32<bd_>andrew: yes, by default everything's built in, not with a module
02:32<bd_>and the reason you need -f is because the kernel is built with gcc 4.0, which is a different version than in debian. -f makes it ignore it
02:32<andrew>how long have you had a vps at linode?
02:32<andrew>I see
02:32<bd_>since january
02:33<irgeek>andrew: The stock kernels have a whole bunch of stuff compiled in, but you can add modules if you want to. You won't get anything from lsmod until you compile a module and load it.
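irgeek's `zcat /proc/config.gz` tip can be scripted. A hypothetical helper, assuming only the standard kernel config format (`CONFIG_FOO=y` for built-in, `CONFIG_FOO=m` for module, `# CONFIG_FOO is not set` for absent); the sample text below stands in for real /proc/config.gz contents.

```python
def kernel_feature(config_text, option):
    """Classify a kernel config option: 'y' (built in), 'm' (module), or None (absent)."""
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith(option + "="):
            return line.split("=", 1)[1]
        if line == f"# {option} is not set":
            return None
    return None

# In practice you'd feed it: gzip.decompress(open("/proc/config.gz", "rb").read()).decode()
config_sample = """\
CONFIG_FUSE_FS=y
CONFIG_QUOTA=y
# CONFIG_SCTP is not set
"""
print(kernel_feature(config_sample, "CONFIG_FUSE_FS"))  # y
```

A result of `'y'` is exactly the `CONFIG_FUSE_FS=y` case irgeek points out: the feature is compiled in, so no module needs to be built or loaded.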
02:33<bd_> <-- two of the most common third-party modules are here
02:33<irgeek>andrew: I've been here five years.
02:33<andrew>and what about uptime?
02:33<andrew>and performance
02:34<irgeek>I've been here five years. If there were uptime or performance problems, I'd have left.
02:35<andrew>I see
02:35<andrew>anybody familiar with pptp VPN?
02:35<irgeek>The longest downtime I've had (that wasn't my fault) is probably about half an hour.
02:36<irgeek>You're never going to get 100% uptime (anyone trying to sell you that is lying) but Linode does very well.
02:37<andrew>I see
02:38<andrew>they seem to be great, but since I have another "big" vps I just need a very small one with just 64mb ram etc..
02:38<andrew>20$ is a bit high for me
02:38<andrew>but you say it's worth it, right?
02:38<bd_>RAM is cheap these days - it doesn't really make economic sense to sell anything much smaller
02:39<irgeek>The margins on hosting services are slim enough as it is. I like Linode because they offer me what I need at a price I like, and they don't try to upsell me on a bunch of crap I don't need or want.
02:41<bd_>andrew: when there are bottlenecks on VPSes, they tend to be in disk bandwidth - if you chop up a host into 64mb linodes, that's a huge number of linodes hitting the same disk, most of which will be on distro defaults, which are not properly set to work well in 64mb. Result: IO is really, really slow. So 64mb doesn't make sense :)
02:41<Napta>Are there cheaper VPS packages out there than $20 ?? $20 USD seems very good value for money imo
02:42<irgeek>My first Linode was a Linode64 - It was almost impossible to keep it from swapping all the time.
02:42<Napta>Standard web hosting can cost more than that these days
02:42<Napta>or er, $19.95 ;)
02:42<andrew>actually I saw a $5 xen vps somewhere with 1mb unmetered bandwidth
02:42-!-Dreamer3 [] has quit [Read error: Connection reset by peer]
02:42<exor674>andrew: and how often does that server blow up? :P
02:43<bd_>andrew: oversold to hell :D
02:43<andrew>oh, yes, can't I change dc from the web interface?
02:43<Napta>andrew: The whole site / control panel is worth the extra $15 though ;)
02:43-!-Dreamer3 [] has joined #linode
02:43<bd_>andrew: you have to file a ticket
02:43<andrew>bd_: how can xen be oversold? or the network?
02:43<irgeek>I've seen cheaper VPSes, but they don't promise that there will be a maximum of 40 clients/host.
02:43<andrew>I see
02:43<bd_>andrew: xen can be oversold (disk IO or CPU)
02:43<andrew>I see
02:43<bd_>network can be oversold, although it's a bit harder
02:43<irgeek>Most are overselling like crazy.
02:43<bd_>other virtualization technology can be oversold on RAM
02:45<m0unds_>i've used a couple of other vps providers and linode > *
02:46<andrew>what do you think? if I open a ticket regarding a custom package, what will they say?
02:46<andrew>for a package with fewer resources than their cheapest one
02:47<tozz>they'll probably make it work
02:47<tozz>if possible
02:48<irgeek>andrew: I've asked. They said no. I wouldn't count on it.
02:48<tozz>or not ^^
02:49<tozz>the cheapest one is ubercheap anyway imo
02:49<irgeek>The Linode360 is likely the smallest you're ever going to get.
02:49<tozz>even as a student
02:51<irgeek>$20/month for a server you can do whatever you want with, from a company that has real people doing support, habitually upgrades plans, adds new features on a regular basis and has an active user community, is a steal.
02:52<andrew>I know I know
02:53<andrew>just opened a ticket maybe I will be lucky
02:53<andrew>we will see
02:53-!-Schroeder3 [] has quit [Ping timeout: 480 seconds]
02:54<andrew>what is the average response time?
02:54<irgeek>Sometimes I feel guilty that they only charge me $20/month. But then I get over it.
02:54<andrew>you can pay +40$ for me
02:54<irgeek>For tickets? Usually pretty quick.
02:54<bd_>I'm pretty sure they can't offer smaller plans without either a) not making a profit or b) ruining performance
02:55<bd_>but what do I know :)
02:55<bd_>basic problem is a host requires $X/mo to maintain = you need to charge at least X/n, where n is the number of linodes per host. Smaller price = higher contention = IO begins to suck, particularly with the increased memory pressure
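bd_'s break-even arithmetic is simple enough to spell out. The numbers below are invented for illustration (nothing here is Linode's actual cost structure); the point is just that halving the price per guest doubles the number of guests contending for the same disk.

```python
def min_price(host_cost, guests):
    """Break-even monthly price per guest when a host costing
    `host_cost`/mo is split `guests` ways (bd_'s X/n)."""
    return host_cost / guests

# Hypothetical: an $800/mo host split into 40 vs 160 slices.
print(min_price(800, 40))   # 20.0 -> 40 guests at $20/mo breaks even
print(min_price(800, 160))  # 5.0  -> a $5 plan needs 160 guests, 4x the I/O contention
```

That ratio is why a hypothetical 64MB plan at a quarter of the price would need roughly four times as many guests per spindle, which is bd_'s "IO begins to suck" outcome.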
02:56<irgeek>df -h
02:56<bd_>additionally, making an exception makes no sense, as they can't sell half a linode 360
02:56<bd_>so the space left by shrinking your linode just sits unused
02:57<bd_>so they'll probably politely decline the plan adjustment :)
02:57<irgeek>If you really want a Linode64, you can limit the ram your Linode boots with. :)
02:57<bd_>it'd still be $20/mo though :)
02:57<andrew>and can I limit my price?
02:58<bd_>andrew: You can delete it for half the time! then it'd only be $10/mo with a $20 deposit and a chance of running out of availability if you get unlucky. ;)
02:58-!-Dreamer3 [] has quit [Read error: Connection reset by peer]
02:58-!-Dreamer3 [] has joined #linode
02:58<andrew>bd_: you are so funny
02:58<bd_>It'd work though ;)
02:59<andrew>btw is linode planning to have some nodes in the uk?
02:59<bd_>It's like EC2 only cheaper :D
02:59<bd_>I highly doubt it - they build hosts in their office in NJ, then fedex them to the datacenter. Think of the shipping costs and taxes for (really heavy!) servers, plus all the legal red tape...
02:59<bd_>and they just opened a DC in NJ too
03:00<andrew>since they have a shit ping to europe
03:00<bd_>the speed of light sucks like that :/
03:01<andrew>the uk has better ping since it's in the uk:)
03:01<bd_>true, but it does no good if your hosting company goes out of business because they're trying to run a uk host from the us :)
03:02<andrew>they have huge profit
03:02<bd_>andrew: the margins are thinner than you might think in the hosting business :P
03:03<andrew>dont think so
03:04<irgeek>What hosting service do you run?
03:04<andrew>vpn as I said yesterday
03:04<irgeek>You must be charging a lot if your margins aren't as thin as the rest of the industry's.
03:05<bd_>andrew: they have a cash flow problem already - it's not dangerous for the business, but one month of monthly payments doesn't pay for a full server, so they're limited at the rate at which they can buy, build, and install new servers. Caker was asking how people would feel about various annual-payment discounts the other day, even...
03:05<andrew>that's why I told them I will pay annually if they can do half of my current package
03:05<irgeek>It's not a cash flow problem. It's a capital investment problem.
03:06<bd_>Think how much worse that'd be if they had to pay international shipping, customs, fight against the falling dollar, etc etc to get the server into the uk :)
03:06<bd_>irgeek: I'm a programmer, not an accountant :P
03:06<irgeek>bd_: Me too, but I like to know how to make money.
03:07<andrew>then how much would it cost to pay in EUR? would be much more expensive
03:08<bd_>I don't think they're set up for that
03:09<bd_>by which I mean, they have a merchant account denominated in USD
03:09<andrew>If they do that (I mean host servers in the uk) they could get international customers
03:09<bd_>and one customer is not going to make up the fees involved in taking euro as well
03:09<bd_>andrew: They could.... but they have to make a profit, you know?
03:09<andrew>they will
03:09<andrew>europe is huge
03:09<bd_>andrew: If they were in europe, sure, but - and here's the crucial part - they're not :)
03:10<andrew>currently, their services target us people
03:10<bd_>operating in a foreign country carries large overheads
03:10<andrew>they have to be bigger and expand
03:10<irgeek>andrew: They can't keep availability up as it is. They don't need to expand to Europe to get more customers.
03:10<bd_>andrew: They don't have the cash for that right now anyway :P
03:10<andrew>then just rent a dedi from a dc
03:10<bd_>Making it even more expensive :D
03:10<bd_>and making their hardware wildly variable
03:10<andrew>how many servers do they have in the us?
03:11<andrew>they can sell, let's say, 4 servers and get 2 servers in the uk
03:11<bd_>andrew: a few hundred, I think. something like that. There's a list on the wiki
03:11<andrew>just check the availability
03:11<bd_>andrew: availability is the unused slots :P
03:11<bd_>most of the servers are full
03:11<@linbot>Dallas360 - 0, Dallas540 - 0, Dallas720 - 0, Dallas1080 - 0, Dallas1440 - 0, Dallas2880 - 0 , Fremont360 - 0, Fremont540 - 0, Fremont720 - 6, Fremont1080 - 0, Fremont1440 - 0, Fremont2880 - 0 , Atlanta360 - 3, Atlanta540 - 1, Atlanta720 - 1, Atlanta1080 - 0, Atlanta1440 - 0, Atlanta2880 - 0 , Newark360 - 20, Newark540 - 2, Newark720 - 8, Newark1080 - 2, Newark1440 - 1, Newark2880 - 1
03:11<andrew>they have far more free space left than I ever thought
03:12<irgeek>In the last two weeks, I think 120 Newark360 slots have opened up. Now there are 20.
03:12<bd_>hmm, where *was* the list of linode host ssh keys?
03:12<bd_>for lish
03:13<bd_>andrew: also, how much does a dedi with 2x quad-core xeon, 24G of RAM, and RAID-1 with at least 600GB of space cost? :P
03:13<bd_>oh, and a few subnets while we're at it
03:15<lastsh>200 american pesos... or, approximately, GBP 2.50 ;)
03:15<bd_>you wish :P
03:15<bd_>give it another year or two :P
03:16<Nigel>wow, linode is getting really popular
03:17<irgeek>Getting? They don't even advertise anymore and they can't keep slots open.
03:17<andrew>bd_: if its colocated then not much
03:17<Nigel>irgeek: I know :)
03:18<bd_>andrew: colocated meaning bring your own server?
03:18<bd_>andrew: that's what they do now :P
03:18<andrew>okay, anyway they'd have to buy, but...
03:18<bd_>and that means shipping it to the uk!
03:18<m0unds_>they can just will it overseas
03:18<bd_>which is a lot of money
03:18<m0unds_>no shipping costs
03:18<bd_>plus the USD is falling
03:18<m0unds_>that's how i ship packages to europe
03:19<bd_>m0unds_: will it? >_>
03:19<m0unds_>yes indeed it will
03:20<bd_>anyway step one is to keep up with demand :P
03:21<m0unds_>whoops, wasn't paying attention-- i meant send the machines overseas by force of will
03:21<lastsh>yeah, the last thing caker, et al., need to spend their time doing is filling out VAT paperwork.
03:21<bd_>lastsh: yeah, they need to finish that private network backend storage thing already ;)
03:22<Nigel>2nd that
03:24<irgeek>And the API--which is coming along nicely.
03:25<bd_>I still want a bulk load function :|
03:25<irgeek>Bulk load what? DNS?
03:25<bd_>just 'Okay, toss out the old zone, here's a complete list of what I want to put in it now'
03:27<irgeek>There will be a Python library for the API with that capability which--if I get around to it--will have a CGI frontend you can use to do bulk uploads. If I don't get that into the library, you can still do it from the command line.
03:27<bd_>still... :|
03:27<bd_>seems like the kind of thing which would be more efficient on the server side
03:27<bd_>less ping-pong with the dallas DC :)
03:28<avongauss>I doubt cost would be the biggest challenge in setting up overseas dc(s); it's the bureaucracy that can get annoying and be a challenge.
03:29<irgeek>bd_: I prefer Linode's way. Don't assume how I want the human interface to work - just give me a low-level API and I'll make the interface the way I like it.
03:29<bd_>fair enough :)
03:30<bd_>still, if you're replacing a large number of RRs, it seems like it could take some time (fremont<->dallas is, what, ~80ms? 10 RRs take a full second already, unless you hit the server with a lot of parallel requests)
03:30<bd_>(or pipeline I guess)
03:31<irgeek>There's talk of being able to bundle up a bunch of requests and submit them all in a single POST. I believe caker is working on it.
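bd_'s latency arithmetic from just above (roughly 80 ms per Fremont-Dallas round trip, so 10 serial RR updates already approach a second) versus the bundled-single-POST idea irgeek mentions can be sketched numerically. The 80 ms figure is the channel's estimate, not a measurement.

```python
def serial_time(n_requests, rtt_s):
    """Total time when each record update is its own round trip."""
    return n_requests * rtt_s

def batched_time(n_requests, rtt_s):
    """Total time when all updates ride in one POST: a single round trip,
    regardless of how many records it carries."""
    return rtt_s

RTT = 0.08  # ~80 ms cross-country round trip, per the discussion
print(serial_time(10, RTT))   # ~0.8 s for 10 RRs, one request at a time
print(batched_time(10, RTT))  # 0.08 s for the same 10 RRs in one POST
```

This ignores server processing time and assumes no pipelining or parallel connections (the mitigations bd_ parenthetically mentions), but it shows why a server-side bulk operation scales so much better than per-record calls.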
03:32<exor674>I kinda wonder what Linode's host uptimes look like
03:33<Nigel>pretty good
03:33<Nigel>I remember having a node up for well over a year
03:34<irgeek>Host reboots get reported in the forums, and there aren't many.
03:34<bd_>the newarks have only been up for a bit over a week! ;)
03:35<m0unds_>how's that new datacenter been?
03:35<avongauss>has it really only been one week? time flies...
03:35<irgeek>Hasn't it been two weeks?
03:36<lastsh>they're prolly not that great right now w/ all the uml -> xen migration reboots
03:36<bd_>lastsh: those have already started?
03:36<m0unds_>New Datacenter: Newark, NJ
03:36<m0unds_>May 31, 2008 10:02 am
03:36<lastsh>oh, i dunno
03:36<lastsh>i assumed they had
03:36<irgeek>Oh, you're right. It has only been a week.
03:36<bd_>% uptime
03:36<bd_> 03:36:39 up 7 days, 23:12, 7 users, load average: 0.00, 0.01, 0.00
03:36<bd_>^^^ what I was looking at :)
03:37<irgeek>lastsh: They haven't.
03:37<irgeek>People are asking to be migrated, but you can still stay on UML if you want to .
03:37<lastsh>ah. i opened tix to migrate all of mine, so was just making an assumption.
03:38<exor674>I think they're making UML hosts into Xen hosts when they've got to reboot them anyway
03:38<bd_>exor674: I doubt they'd do that for unscheduled maintenance
03:39<irgeek>exor674: Then the UML Linodes wouldn't boot. They use different kernels on the hosts and the Linodes.
03:39<bd_>"Oh, sorry about that, some klutz at the DC tripped over the wires. By the way, we've converted you to Xen without any warning whatsoever. Enjoy!"
03:39<bd_>^^^ I don't know about you, but this would annoy me :)
03:39<exor674>I thought i read in one of the host downtime reports that they did that
03:39<exor674>mebbe I was wrong haha
03:39<avongauss>they upgraded a xen host's kernel to the latest version, but it was already xen.
03:39<bd_>exor674: they've upgraded xen hosts to newer versions of xen, since that doesn't affect the linodes on it much
03:40<m0unds_>yeah, that'd be a little aggravating for the folks who enjoy uml
03:40<exor674>ah, that's probably what I read then
03:40<irgeek>They do upgrade kernels on some of the old hosts occasionally, but it's strictly UML->New UML
03:42-!-irgeek [~irgeek@] has left #linode []
03:42-!-irgeek [~irgeek@] has joined #linode
03:43<irgeek>Having two monitors going takes some getting used to.
03:43-!-TheFirst [] has quit [Ping timeout: 480 seconds]
03:44<avongauss>I think it will be interesting to see how the "big migration" will go over.
03:45<avongauss>technical wise, they've made it extremely simple. Getting some people to press the migrate, that might be interesting...
03:46<bd_>they could do another round of upgrades to encourage people to migrate ;)
03:46<irgeek>They might decide to just consolidate the hold-outs on a couple of hosts in each DC.
03:46<m0unds_>what are the technical advantages to using UML? i didn't spend much time on it before i migrated to xen
03:46<bd_>m0unds_: It's familiar and comfy to people. >.>
03:46<avongauss>upgrades might work, but haven't there been a couple of people wandering in here saying they haven't rebooted to get their 360 from their 300?
03:46<m0unds_>aside from that?
03:47<avongauss>they could leave a couple hosts uml for a time, but they still would have to migrate them from one uml to another uml host to consolidate.
03:47<irgeek>There are people with longer uptimes than that.
03:48<avongauss>I mean this in general, but uptime is such a horrible metric.
03:48<exor674>I kinda wonder if there are people still running from the upgrade BEFORE they gave everyone 300
03:49<irgeek>exor674: There are. Specifically, the DNS servers are still Linode240s I think.
03:49<avongauss>service availability is better imo... was it available when you wanted it, then we did good.
03:50<bd_>all xen linodes are on the latest though (there were mandatory reboots/migrations at the end of the beta period as they upgraded the software)
03:50<irgeek>avongauss: That's how Google thinks. They don't care how long one machine stays up, as long as the cluster stays up everybody is happy.
03:50<bd_>so <360 would be on UML
03:51<avongauss>thinking about it, a host that hadn't been rebooted in 365 days would probably scare me.
03:51<bd_>mmm remotely exploitable security holes
03:51<irgeek>Only if one's been found.
03:51<exor674>avongauss: you can patch holes without machine reboots
03:51<exor674>some you can't but
03:52<avongauss>eh... processes are processes. you update the files on disk, that's nice, but it's got to get into active memory somehow.
03:52<bd_>this is where you patch the running code :)
03:52<avongauss>patching memory would scare me even more. if it's done right, just like routers, you can take down (x) members in a cluster without any appreciable artifacts.
03:53<bd_>theoretically xen could be used to migrate linodes between hosts without downtime - but it's hard without running off a NAS
03:54<bd_>I have this theory that it can be done with dm-mirror and nbd, but I've yet to set up xen boxes and test said theory :P
03:55<avongauss>what would be the advantage versus just multiple independent servers hosting the same sites/apps? there's probably an implicit sql cluster in that sentence though.
03:55<bd_>for a NAS?
03:56<bd_>Xen's live migration needs some kind of block device that'll be consistent between the old and new hosts
03:56<bd_>a NAS is the easiest way to achieve this
03:56<avongauss>maybe I misunderstood, I was thinking that was more of a live memory migration.
03:56<bd_>it is
03:56<irgeek>It's a live everything migration.
03:57<bd_>the idea is to copy all of RAM while the machine's running - then halt it, copy anything that changed, copy processor state, and resume on the destination
03:57<bd_>there's some performance degradation while this is all happening, but it's basically seamless
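The pre-copy loop bd_ outlines (copy all of RAM while the guest runs, then re-copy whatever got dirtied in the meantime, shrinking each round until a short stop-and-copy finishes the job) can be modeled with a toy simulation. The page counts and rates below are made up for illustration; a real Xen migration adapts its behavior dynamically rather than using fixed rates.

```python
def precopy_rounds(total_pages, dirty_rate, copy_rate, stop_threshold):
    """Simulate pre-copy live migration. Each round re-sends the pages the
    guest dirtied during the previous round: sending n pages takes
    n/copy_rate seconds, during which dirty_rate*(n/copy_rate) pages get
    dirty again. Converges geometrically when dirty_rate < copy_rate."""
    rounds = []
    to_send = total_pages  # round 0: copy everything
    while to_send > stop_threshold:
        rounds.append(to_send)
        to_send = max(1, int(to_send * dirty_rate / copy_rate))
        if len(rounds) > 30:  # dirtying outpaces copying: give up, stop-and-copy
            break
    return rounds, to_send  # iterative rounds, then the final stop-and-copy set

# Hypothetical: 256 MiB of 4 KiB pages, guest dirties pages at 1/4 the copy rate.
rounds, final = precopy_rounds(65536, 25_000, 100_000, 256)
print(rounds, final)  # [65536, 16384, 4096, 1024] 256
```

The shrinking rounds are the "performance degradation while this is all happening"; the small final set is the brief pause where the guest is halted, the remainder plus CPU state is copied, and execution resumes on the destination.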
03:57<avongauss>okay, are we talking about using this as a failover mechanism, or just a way to do live migrations so people don't have to shut down if they have a single host?
03:58<bd_>the latter :)
03:58<bd_>it's not useful for failover
03:58<avongauss>oh, that sounds like fun then... ;) Didn't Microsoft try selling the hot failover initially though? Circa 2000 ish?
03:59<bd_>with memory imaging? you need (really expensive) hardware support to keep the memory and CPU state in sync
04:00<avongauss>it's going back a bit, and I never liked it at the time, so I admit I probably tried to purposely forget it, but their proposal at the time, I believe,
04:00<avongauss>was that you would have two (or more) machines running with multiple nics. One would be a backend plane that would keep the machines in relative
04:00<avongauss>sync, and if there was a failure then another machine would take over that machine's activities,
04:01<avongauss>rather than the clustering that was there before and that we use today.
04:01<irgeek>I don't remember anything like that from MS back in 2000. They had what amounted to an IP failover system. I think it's in their proxy server hunk of shit^W code now.
04:03<avongauss>I'm not sure how far they took it, it was something they came up with when they were still learning the (Internet) server market.
04:04<irgeek>The stuff bd_ was talking about is different though. He's talking about moving a live guest instance from one host to another essentially in real time and without the guest ever realizing it.
04:04<bd_>well, the guest can tell if it wants
04:04<bd_>the block device states are reset, or something
04:05<bd_>which requires cooperation from the guest
04:05<bd_>guest kernel anyway
04:05<avongauss>I understand, and I like the idea for migration purposes. Microsoft's idea was more of a constant memory / state sync if I remember right.
04:05<bd_>right, that would require a huge amount of bandwidth - and specialized hardware if you want anything approaching performance :)
04:06<avongauss>Microsoft... Performance... Hmmm...
04:07<bd_>avongauss: doing a page fault + network transmission on EVERY MEMORY WRITE EVER is slow even by microsoft's standards :)
04:07<avongauss>Not trying to bash Microsoft, but they do know how to make things complicated at times.
04:07<exor674>in the thing that bd_ is explaining... wouldn't all the host notice be a ~1 second freeze while it does the final sync?
04:07<exor674>err, all the guest...
04:08<bd_>exor674: performance degradation during the pre-sync, and also maybe some messages on dmesg, as well
04:08<avongauss>and a bunch of hotplug events I would imagine. Migrate from a 300 to a 360 node. Somebody just plugged in a memory card.
04:08<irgeek>Having a whole hot-spare server is sort of ridiculous though. What you really want is a cluster of redundant machines with a device in front of them doing reverse address translation to spread the load. Then if one server on the back end goes down, you just stop sending traffic to it.
04:09<bd_>irgeek: yes, we know that -now- :)
04:09<avongauss>and that's where they ended up.
04:09<exor674>irgeek: and what if the front end goes boom
04:09<bd_>exor674: you have another one standing by to take over its IP and MAC address
04:09<irgeek>You can have redundant front-ends too.
04:10<exor674>and then if that data center has the power closet blow up? :P
04:11<avongauss>your other data center continues to operate...
04:11<irgeek>Cisco gear can be used to do that, but if you're allergic to spending money the same thing can be accomplished with some kind of *nix running on a couple of low-end PCs.
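The cheap *nix front-end failover irgeek describes is commonly done with VRRP, e.g. via keepalived — one possible tool among several; the interface name and service IP below are placeholders, and this is a sketch rather than a complete config:

```
# /etc/keepalived/keepalived.conf on the primary front-end.
# The standby box runs the same block with "state BACKUP" and a lower
# priority; it claims the virtual IP (and answers ARP for it) as soon
# as the master stops sending VRRP advertisements.
vrrp_instance VI_1 {
    state MASTER
    interface eth0          # placeholder NIC
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.10          # placeholder service IP clients connect to
    }
}
```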
04:12<avongauss>nobody said the entire "cluster" should be located in the same dc...
04:12<avongauss>although a lot of people do.
04:13<bd_>exor674: use BGP to hijack the frontend IP to your other datacenter :)
04:14<irgeek>BGP doesn't fail over fast enough.
04:14<irgeek>You'd have to have a link upstream to shunt traffic over.
04:14<bd_>that and DNS of course
04:14<irgeek>And that's *not* going to be cheap.
04:15<bd_>but if your DC is hit by a meteor you don't have too much else of a choice
04:15<Nigel>or if the datacentre blows up
04:15<Nigel>*cough* TP *cough*
04:15-!-dpn` [~tripped@] has quit [Quit: Leaving]
04:16<rvhi>what's considered a good value for disk io rate notification?
04:16-!-lastsh [] has left #linode []
04:16<avongauss>I still say if your site / host(s) are that mission critical and a total DC outage takes you down, you're the one that got caught with your pants down.
04:16<rvhi>i haven't done anything to my node, but got an alert already
04:17<irgeek>rvhi: Really? I've never touched the default settings, and I've never gotten an alert.
04:18<rvhi>i rebuilt the server once, not sure if it triggered it.
04:18<avongauss>using the default as well, but I got an alert the other day, but I also redeployed a template about 6 times testing an install script.
04:19<irgeek>If you mean you deployed a new OS with the LPM, I'm pretty sure that doesn't show up under your Linode's usage - it's handled on the host.
04:19<avongauss>not sure if it is, but it was a new Newark host and it was LPM deployments. Plus an apt-get upgrade per cycle.
04:20<irgeek>I just checked and my IO alerts aren't enabled. That would explain why I've never gotten an alert. :)
04:21<rvhi>newark here
04:22<irgeek>My 24 hour IO graph shows a max of about 170 and an average of 40 FWIW.
04:22<avongauss>my alert is at 300, I believe the default. The notification said I averaged 545.27 for the last two hours.
04:23<avongauss>I should also admit that I really haven't taken the time to understand how that is measured in xenland.
04:23<irgeek>Does the graph show that?
04:24<avongauss>that I pulled from the e-mail, let me look at the graph.
04:24<irgeek>I believe in UML it was IO operations and in Xen it's IO blocks.
04:25<avongauss>the graph shows the same thing, with the 30 day view it looks like it was born doing 545 and then gracefully slowed down... ;)
04:25<irgeek>On my 30 day graph, my highest 2hr average is 128. There are only about half a dozen times in the last two weeks it's gone over 100.
04:26<irgeek>What distro?
04:26<irgeek>And did you disable updatedb?
04:26<avongauss>ubuntu 8.04 , but like I said I was testing an install script that automatically sets up a linode for me.
04:27<avongauss>once I left the server alone, it's at almost zero - its still not running anything.
04:27<rvhi>what timezone is used in the graphs?
04:27<irgeek>The one you set in your profile.
04:30<rvhi>mine must be from some apt-get
04:30<rvhi>not from rebuilt
04:32<Dave>i've not found a way to change the timezone in your profile
04:32<Dave>in the new interface anyway
04:33<rvhi>click 'my profile' on the top right
04:33<rvhi>by 'logout'
04:33<Dave>ah yes
04:34<Dave>thats odd, the timezone is the right timezone for me already, but the graphs are around 6 hours out
04:35<Dave>changed it, and changed it back fixed that
04:36<rvhi>on my own server with ubuntu 8.04, if i type an uninstalled command, it gives me a hint of what package to install. but not on linode, anyone knows what package do i need to get this?
04:37<rvhi>nm, got my answer
04:41-!-Dreamr_3 [] has joined #linode
04:41-!-Dreamer3 [] has quit [Read error: Connection reset by peer]
04:59-!-xitology [~xi@] has joined #linode
05:07<rvhi>anyone uses chatzilla?
05:40-!-peleg [] has joined #linode
05:41-!-poepy [] has quit [Remote host closed the connection]
05:44<rvhi>hi, i moved my web site to linode, however when i tried to access, it returns 501 not implemented. is this related to ssl certificate?
05:44<rvhi>can i just move ssl certificate from one server to another one?
05:46<Dave>I very much doubt it
05:46<irgeek>Of course you can. There's nothing magical about a certificate.
05:47<irgeek>You need the certificate as well as the private key.
05:47<Dave>arnt they tied to a specific IP address?
05:47<irgeek>They are tied to the server *name*, not its IP.
05:47<Dave>ah, right
05:48<rvhi>not sure why got this 'invalid method in request' in apache log
05:48<irgeek>The 501 error, I'd guess, is not related to SSL.
05:49<irgeek>Are you accessing a CGI or a static page?
05:52<Dave>ruby on rails
05:52<irgeek>Um. Yeah. I'm not sure then.
05:52<irgeek>Can you access a static file at all?
05:53<rvhi>i didn't even see apache trying to access rails
05:55<irgeek>I'd figure out why Apache won't return a static page before I started fighting with the RoR 800lb gorilla.
06:00<rvhi>i got it, missing a package
06:02<rvhi>on a linode, how do i tell if it runs low on memory?
06:03<exor674>run free :P
06:03<irgeek>Install proc info.
06:03<irgeek>Er, one word. procinfo
06:04<irgeek>You can also use vmstat to watch memory over time.
06:05<Nigel>rvhi: ask the memory gods to do their voodoo
06:05<irgeek>Or you can get rrdtool going to log memory information and to help you get a better view of what's going on.
06:06<irgeek>Basically, the amount of memory you're using isn't as important as how much you are swapping.
06:06<rvhi>linux allocates all memory
06:06<irgeek>If you're constantly swapping in and out, you need more memory
06:06<rvhi>Mem: 368840 356164 12676 0 28232
06:07<rvhi>Memory: Total Used Free Shared Buffers
06:07<rvhi>there is nothing left
06:07<rvhi>Swap: 262136 24 262112
06:07<rvhi>swap is not used much
06:07<rvhi>i guess i am ok
06:07<rvhi>but still no idea how much is used
06:07<rvhi>when should i worry about upgrade
06:08<irgeek>How much you're using isn't as important as how often you are swapping.
06:08-!-xitology [~xi@] has quit [Quit: Ex-Chat]
06:08<rvhi>well, it indicates if an upgrade is needed
06:08<irgeek>Let vmstat 30 run in a window for a while.
06:08<exor674>so I'm guessing swap in : 0 // swap out: 33 is a good sign? :P
06:09<irgeek>If you are constantly swapping, but it's a two digit number every thirty seconds, you're right on the edge.
06:09<rvhi> r b swpd free buff cache si so bi bo in cs us sy id wa
06:09<rvhi> 0 0 24 13544 28304 229016 0 0 4 19 29 25 1 0 99 0
06:10<rvhi>0/0 must be good
06:11<irgeek>The swap out number is how many pages were swapped out to disk since the last line. The swap in number is how many pages were swapped in.
06:11<irgeek>vmstat only tells you about swap changes, not the overall numbers.
06:12<rvhi>that's good indication, but still want to know if i can pile more applications on the node.
06:13<irgeek>procinfo tells you how many pages have been swapped since boot.
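The si/so columns irgeek and exor674 are reading are the ones that matter. A small awk filter can flag intervals with real swap traffic; sample vmstat lines are inlined here (made-up numbers) so the script is self-contained — in practice you would pipe `vmstat 30` into it:

```shell
#!/bin/sh
# Flag vmstat intervals where pages actually moved to/from swap.
# In vmstat's default layout, si (pages swapped in) is column 7 and
# so (pages swapped out) is column 8.
check_swap() {
    awk 'NR > 1 && $7 + $8 > 0 { print "swapping:", $7, "in /", $8, "out" }'
}

printf '%s\n' \
  ' r  b swpd  free  buff  cache si so bi bo in cs us sy id wa' \
  ' 0  0   24 13544 28304 229016  0  0  4 19 29 25  1  0 99  0' \
  ' 1  0  512  8200 27000 210000 40 65  9 30 44 38  3  1 95  1' |
check_swap
```

Only the second sample interval is flagged (`swapping: 40 in / 65 out`); steady nonzero numbers every interval mean you're short on memory.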
06:13<Nigel>btw, 'sar -w' is quite nice for swap in/out
06:14<rvhi>eh... 'sar -w' got this,
06:15<irgeek>rvhi: There's no way to tell except to put it under load and see how it performs.
06:15<rvhi>Cannot open /var/log/sysstat/sa08: No such file or directory
06:15<irgeek>You need system accounting enabled to use sar.
06:15<Nigel>rvhi: you might need to poke it (/etc/init.d/sysstat start)
06:16<Levia>Is Apache supposed to have so many threads (I have about 12), and that each takes up 60mb on average?
06:17<rvhi>sar is running, need some time to get the data, i guess
06:18<rvhi>levia, my apache has 9 threads, not much memory though
06:18<rvhi>only 2% each on 360 node
06:18<irgeek>As far as memory goes, the memory listed as free isn't your only available memory. Much of the memory listed under buffers is being used to speed up IO operations, so if an application asks for more memory, the buffers are easy to drop and allocate--with performance penalties obv.
06:19<irgeek>Levia: Yes. And they aren't really using 60MB - most of that is shared memory.
06:20<Levia>irgeek: ah okay. I just thought it's a bit weird in total my 360 node uses 300 mb, out of 430, with apache seeming to use most of that memory
06:20<irgeek>The important number for processes is RSS which is the resident memory size.
06:20<Levia>for Mysql, I already disabled innodb - that helped a bunch
06:24<irgeek>If your VSZ for Apache is 60MB, you must have a lot of modules loaded.
06:24<Levia>erm yeah, that
06:24<Levia>that's true
06:25<irgeek>Mine are about 20MB VSZ and 5MB RSS
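The VSZ/RSS distinction irgeek is using comes straight out of `ps -o pid,vsz,rss,comm -C apache2`: VSZ is mapped address space (shared libraries counted once per process), RSS is what actually sits in RAM. A quick awk sum over sample output (made-up numbers, in KiB) shows the kind of arithmetic involved:

```shell
#!/bin/sh
# Sum resident memory (RSS, column 3) across Apache workers.
# The sample lines stand in for `ps -o pid,vsz,rss,comm -C apache2`.
sum_rss() {
    awk 'NR > 1 { rss += $3 } END { printf "total RSS: %d KiB\n", rss }'
}

printf '%s\n' \
  '  PID    VSZ   RSS COMMAND' \
  ' 1201  20480  5120 apache2' \
  ' 1202  20480  5240 apache2' \
  ' 1203  20480  5100 apache2' |
sum_rss
```

Even this total overstates real usage, since much of each worker's RSS is the same shared pages counted three times.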
06:25<Levia>Apache/2.2.8 (Ubuntu) DAV/2 SVN/1.4.6 PHP/5.2.4-2ubuntu5.1 with Suhosin-Patch mod_ruby/1.2.6 Ruby/1.8.6(2007-09-24) mod_ssl/2.2.8 OpenSSL/0.9.8g mod_wsgi/1.3 Python/2.5.2
06:25<Levia>I don't need ruby, I don't need DAV. two I can already disable as it seems
06:26<irgeek>The PHP, Ruby, Python & perl modules are all pretty heavy on memory I think.
06:27<irgeek>It's not really a big deal though. If your system needs more memory, the parts that don't get used much will get swapped out and just sit there.
06:28<Levia>yeah okay - but still, to be more efficient, I'm just going to disable Ruby
06:28<Levia>I don't use it
06:28<Levia>so why have it loaded
06:29<irgeek>It won't hurt to do that if you don't need it.
06:30<rvhi>i changed my reverse dns, how long will it take to propagate?
06:30<irgeek>I've heard of it taking a couple of days, but I think it's usually around 12 hours.
06:31<rvhi>how do i check the timeout value?
06:31-!-Deetz [] has joined #linode
06:31<irgeek>dig -x <ip>
06:33<irgeek>Or dig -x <ip> @<nameserver> to ask the server directly.
06:33<irgeek>!calc 86400 / (3600 *24)
06:33<@linbot>irgeek: 86,400 / (3,600 * 24) = 1
06:33<rvhi>33215, about 9 hours
06:34<irgeek>That's just the cached value in the server you're asking.
06:34<rvhi>right, 86400
06:34<rvhi>that's loooonngg
06:35<rvhi>could create some problem for email
06:35<irgeek>The default TTL for rDNS is 1 day, so 1 day after the rDNS changes on the server (which, as I said, usually takes about 12 hours) all other sites should be seeing the new value too.
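The number rvhi saw (33215) is the TTL column of the dig answer — the seconds remaining in whatever cache answered, counting down from the zone's 86400. Converting it by hand is just shell arithmetic (the IP below is a placeholder, and the dig invocation is shown as a comment since it needs the network):

```shell
#!/bin/sh
# The second field of a dig answer line is the remaining TTL in the
# cache that answered you:
#   dig -x 69.56.251.xxx +noall +answer
# Convert a cached TTL of 33215 seconds to something readable:
ttl=33215
echo "$(( ttl / 3600 ))h $(( ttl % 3600 / 60 ))m remaining"
```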
06:36<irgeek>Your IP already has correct forward and reverse DNS. It should not affect email at all.
06:37<rvhi>eh.. if i use email with linode, it is correctly forward and reverse dns, won't spam filter block it?
06:38<irgeek>Besides, it's only a couple of days. The mail shouldn't even time out in the queue by then.
06:38<irgeek>Is your address?
06:39<irgeek>If you're trying to masquerade as a domain that publishes SPF, rDNS is not going to help you.
06:39<rvhi>i wish, then i will have hundred of thousand servers to play with.
06:40<irgeek>There are sites that block email for IPs which don't have rDNS set, but there aren't many.
06:41<irgeek>Anyone who's blocking mail from an IP with correct forward and reverse DNS just because they don't match the domain of the sending email is a fscking idiot.
06:42<rvhi>what's the outbound smtp server for linode?
06:42<irgeek>There isn't one.
06:42<irgeek>You have to run your own mail server.
06:42<irgeek>That's the point of a VPS. ;)
06:45<Napta>irgeek: agreed. They should only block mail from senders with no MX record
06:45<irgeek>Uh, different subject, but true.
06:46<Napta>Well I just figured I would steer it in the rant on spam blocking criteria :)
06:46-!-xitology [~xi@] has joined #linode
06:50<irgeek>Now that I re-read that, lack of MX isn't a problem. The SMTP RFC doesn't require an MX record. If there is no MX, mail should be delivered to the A name for the top of the domain.
06:50<Napta>That is correct
06:50<Napta>but you shouldn't really be sending mail without an MX
06:50<irgeek>So if the domain doesn't resolve *and* there is no MX, then block.
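irgeek's fallback rule can be sketched as a shell function. The DNS lookups here are stubbed with canned data so the sketch runs anywhere; real code would shell out to something like `dig +short` instead, and the domains are placeholders:

```shell
#!/bin/sh
# Stub lookup standing in for `dig +short <type> <domain>`:
# example.org has an MX, example.net has only an A record.
lookup() {
    case "$1:$2" in
        "MX:example.org") echo "10 mail.example.org." ;;
        "A:example.net")  echo "192.0.2.7" ;;
    esac
}

# Where should mail for this domain be delivered?
target_for() {
    mx=$(lookup MX "$1")
    if [ -n "$mx" ]; then
        # lowest-preference MX wins
        echo "$mx" | sort -n | head -n 1 | awk '{print $2}'
    elif [ -n "$(lookup A "$1")" ]; then
        # no MX: fall back to the domain's own A record
        echo "$1"
    fi
    # neither resolves: no target -- the "then block" case
}

target_for example.org
target_for example.net
```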
06:51<Napta>I tend to just filter mail through the spamhaus Zen RBL, and then check against MX, pretty much filters 99% of it out
06:51<irgeek>Blocking on a HELO that doesn't resolve is valid I think. I'm 99% sure the RFC says that the HELO name should at least resolve to a public IP.
06:51<Napta>It's just that in 99% of spammers, they almost always do resolve :(
06:52<irgeek>Spammers aren't dumb. They know we check that.
06:53<Napta>Some times their DNS is outside of their control, though. In the case of bot nets from dynamic IP address space sending mail, they won't have an MX
06:53<Napta>The resources and infrastructure operations of top spammers are nothing short of staggering, though
06:54<Napta>almost rivals commercial data centres, heh
06:54<irgeek>Wait. You're looking up the MX for the rDNS of their IP? Don't do that!
06:54<Napta>hmm ? no
06:54<irgeek>There is no reason a server that sends mail has to be under a domain that receives mail.
06:55<Napta>sorry, elaborate on that one
06:55<Napta>< sleepy
06:55<irgeek>I could set up a service where I forward mail for people. My servers are listed in their SPF and the relay all mail through me.
06:55<irgeek>But I want to simplify things for my tech grunts.
06:56<irgeek>So I register and all my mail servers exist under that name.
06:57<irgeek>But I do business under - has no email addresses at all and doesn't receive mail.
06:57<Napta>oh right
06:57<rvhi>i just screwed up my postfix configuration, how do i rerun the setup?
06:57<Napta>debian? dpkg-reconfigure postfix? er
06:57<irgeek>But they are still the official sources of email for thousands of domains. (My business is very successful)
06:58<exor674>sacrifice a farm animal to the postfix gods... or that
06:58<rvhi>ubuntu, dpkg-reconfigure works
06:58<Napta>rvhi: *take care* with mta configuration. You don't want to open your box up to spammers :(
06:59<rvhi>ya, i only open it up for loopback
06:59<irgeek>Postfix is difficult to turn into an open relay. It actually does some sanity checking and will bitch at you if it thinks it is too open.
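Keeping Postfix loopback-only, as rvhi describes, is a couple of lines in main.cf. The option names below are stock Postfix parameters, but treat this as a sketch, not a complete configuration:

```
# /etc/postfix/main.cf -- accept SMTP only on 127.0.0.1, so nothing
# outside the box can hand this instance mail to relay:
inet_interfaces = loopback-only
mynetworks = 127.0.0.0/8
```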
06:59<Napta>or right
06:59<Napta>I'm a qmail user. For my sins
06:59<Napta>Actually, anything but postfix for some reason. Sendmail, exim, qmail
07:01<rvhi>i had some issues with exim with debian before, no issue with postfix.
07:02<irgeek>If I never see another sendmail config file, it will be too soon.
07:03<Napta>We have close to 2,400 systems running sendmail at work ;)
07:03<irgeek>'If I never see another sendmail config file, it will be too soon.'dnl
07:03<Napta>Don't be alarmed though, they're just servers and workstations with default MTAs :)
07:03<rvhi>actually i do want to setup a relay server for my iphone. AT&T mail servers won't relay if i use wifi
07:04<Napta>wifi from a dynamic location?
07:04<rvhi>ya, therefore it is not on att's network
07:04<irgeek>rvhi: Use auth with TLS/SSL and it's easy.
07:04<Napta>and would technically be a relay. You'll need SSL/TLS
07:05<Napta>or poplock ! ;)
07:05<irgeek>Is that a pop-before-smtp variant?
07:06-!-r3z` [] has quit [Ping timeout: 480 seconds]
07:06<rvhi>not available with att, nor with our beloved hosted microsoft exchange server
07:06<irgeek>I wrote a pop-before-smtp variant back in the day. One of the worst coding experiences of my life.
07:07<irgeek>rvhi: I meant use them on your Linode.
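The "auth with TLS/SSL" setup irgeek means — so a roaming client like a phone on wifi can relay through your own Linode — looks roughly like this in Postfix main.cf. All parameter names are stock Postfix; the certificate paths are placeholders, and SASL itself needs a backend (e.g. Cyrus or Dovecot) configured separately:

```
# /etc/postfix/main.cf -- authenticated relaying, auth allowed only
# over an encrypted session:
smtpd_tls_cert_file = /etc/ssl/certs/mail.pem
smtpd_tls_key_file  = /etc/ssl/private/mail.key
smtpd_sasl_auth_enable = yes
smtpd_tls_auth_only = yes
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination
```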
07:07<Napta>irgeek: Apologies. What I was actually referring to earlier is 'reject missing sender mx' (sender being the domain of the sender's email address)
07:08<irgeek>Ok. That's not as bad. Still not a requirement, but I doubt there are any legitimate domains that don't have MX records.
07:08<Napta>external ones, anyways (*smacks lotus notes*)
07:08<irgeek>Which spammers know, so they don't use domains without them.
07:08<Napta>Yeah, one of the reasons I don't bother with graylisting
07:09<Napta>Although the main reason is that it just seems entirely illegal to me
07:09<irgeek>Illegal how?
07:09<Napta>in the sense of adhering to RFCs
07:09<irgeek>Not at all.
07:10<Napta>Well if you run a mail server with the intention of deferring every new sender for a fixed time period :/
07:10<irgeek>It's a soft bounce. The server receiving mail is not required to receive every mail that comes in.
07:10<Napta>that and spam bot 2.0 will probably just retry anyways
07:10<Napta>but there is some sort of irony in deferring everything by default
07:11<Napta>perhaps not irony
07:11<irgeek>Spam bot 2.0 is in the wild, but most of them haven't gotten it correct yet. They don't wait long enough to try again.
07:11<irgeek>'but there is some sort of irony in deferring everything by default' <-- That's called management.
07:12<Napta>I suppose it depends on the level of spam your infrastructure / customers are seeing
07:12<Napta>Thankfully I've never had to adopt graylisting
07:12<irgeek>Greylisting should be a per-person opt-in feature.
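The deferral mechanism being debated — a 450 soft bounce for senders you haven't seen before, accepted on retry — is typically wired into Postfix as a policy service. This sketch assumes the postgrey daemon on its stock port; order matters, since you only want to greylist mail you'd otherwise accept:

```
# /etc/postfix/main.cf -- hand unknown client/sender/recipient
# triples to postgrey, which answers with a 450 soft bounce until
# the sender retries after its delay window:
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service inet:127.0.0.1:10023
```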
07:14<irgeek>At the end of the day however, we're not going to solve the majority of problems until the majority of domains provide authenticated relaying and publish strict SPF records.
07:14<irgeek>I do both already, but I am a small portion of the Internet.
07:15<Napta>how small? ;)
07:15<irgeek>If Google, Yahoo and Hotmail published strict SPF records tomorrow, I think a lot more admins at smaller sites would pick it up.
07:16<irgeek>Small enough that my attempts to run my servers The Right Way (C)(TM) go unnoticed by the world.
07:17<Napta>I think a few people appreciate it, from their own perspective corners.
07:18<Napta>There is something to be said about running core infrastructure such as DNS/NTP/MAIL in a correct & consistent manner
07:20<Napta>There are a lot of places (usually internally) who neglect core infrastructure because (they believe) that it does not directly generate revenue
07:21<irgeek>It's the rest of the company not generating revenue when the neglected bits break down that bites them in the ass eventually.
07:21<irgeek>Well, as fun as this is, I'm going to hit the sack.
07:22<Napta>night man
07:22<irgeek>My pillow is beckoning and it's getting hard to type.
07:28-!-Nysis [] has joined #linode
07:30-!-NetNuttt [] has quit [Remote host closed the connection]
07:31<rvhi>anyone migrated svn servers before?
07:36-!-rvhi [] has quit [Quit: rvhi]
07:36-!-rvhi [] has joined #linode
07:36<Nysis>Hi. Any ideas how can someone prevent dos or ddos attacks that eat up all the bandwidth of a linode vps in a short period of time? Ok i have read about traffic shaping & control with various ways such as squid, shorewall with tc but seems complicated and maybe require custom kernels. Any directions would be gratefully appreciated. thx
07:55-!-jimcooncat [] has joined #linode
07:58-!-Multitask [] has joined #linode
08:10-!-Multitask [] has quit [Quit: Verlassend]
08:10-!-Levia [] has quit [Ping timeout: 480 seconds]
08:16-!-Levia [] has joined #linode
08:17-!-getsmart [] has joined #linode
08:18-!-andrew [] has quit [Remote host closed the connection]
09:04-!-Nysis [] has left #linode []
09:04-!-jm [] has joined #linode
09:32-!-Schroeder [] has joined #linode
09:38-!-TheFirst [] has joined #linode
09:57-!-Schroeder [] has quit [Ping timeout: 480 seconds]
10:17-!-Schroeder [] has joined #linode
10:19<@linbot>New news from forums: Mailman consultant needed - 50.00/hour in Email/SMTP Related Forum <>
10:58-!-TheFirst [] has quit [Remote host closed the connection]
11:19<@linbot>New news from forums: iptables problem in Linux Networking <>
11:20-!-TheFirst [] has joined #linode
12:55-!-solitude| [] has quit []
12:55-!-getsmart [] has quit [Quit: Ex-Chat]
13:21<Levia>irgeek: about that Apache memory thing I talked to you about earlier - I disabled ruby
13:21<Levia>it now uses about 30 mb less ram per thread
13:21<Levia>so thanks
13:23-!-ankur [7aa3ff55@] has joined #linode
13:24<ankur>caker, anything wrong with fremont39 , can't access my linode?
13:24<zeroday>"250 mail from IP soft failed sender ID check. Please ensure this IP is authorized to send mail on behalf of [])
13:24<zeroday>" o.0
13:24<zeroday>im trying to send emails to my hotmail a/c
13:26<mwalling>read the forum
13:26<mwalling>you have to sign your first born over to ms to be able to send mail
13:27<Alucard>ankur, did you traceroute?
13:28<ankur>yeah, traceroute seems fine
13:29<Dave>tried connecting to lish?
13:29<ankur>ahh, lemme try that
13:31<ankur>lish seems fine
13:31<ankur>seems some problem with my ISP..
13:34<zeroday>mwalling, well, thats annoying
13:34<zeroday>im trying to reset a password :|
13:34<mwalling>zeroday: you sound surprised at that fact
13:34<zeroday>im surprised hotmail would do this
13:35<zeroday>when the apache process sent the email, it gave that error, when I used mail command, it said it passed fine
13:36<zeroday>no errors at all
13:38-!-ankur [7aa3ff55@] has quit [Quit: ajax IRC Client]
13:39<zeroday>i can send the email fine from my uni account
13:40<mwalling>zeroday: you sound surprised at that fact
13:54-!-Schroeder [] has quit [Ping timeout: 480 seconds]
13:56<{Shawn}>wuts the apt-get thing for gentoo?
13:59<{Shawn}>for getting stuf..
13:59<mwalling>SpaceHobo: have pity on him, for he is from canadia
14:02<{Shawn}>not canadia
14:03<fred>Did someone mention canadia?
14:05<mwalling>nope, us slackware users use dd to edit our filesystem
14:05<mwalling>SpaceHobo: and it is *very* productive, for values of productive approaching insanity :)
14:05-!-jackc- [~nop@] has quit [Read error: Connection reset by peer]
14:05-!-jackc- [] has joined #linode
14:06<mwalling>sounds about right to me
14:07<{Shawn}>not CANADIA
14:08<{Shawn}>ur on drugs
14:08<Dave>i've had a few beers if thas what you mean?
14:08<Alucard>and no, Ur was an ancient city in southern Mesopotamia (modern day Iraq).
14:09<tjfontaine>dave is part canadianite
14:09<mwalling>!proper spelling of country north of US
14:09<@linbot>The proper spelling of the country north of the US is C-A-N-A-D-I-A, canadia
14:09-!-Schroeder [] has joined #linode
14:10<Dave>tjfontaine: shh, I try to keep that under wraps :(
14:10<mwalling>if linbot says it, it must be true
14:12<{Shawn}>!properspelling of gay usa
14:12<cruxeternus>!notaverb canadia
14:12<{Shawn}>!proper spelling of gay usa
14:12<{Shawn}>proper isn't a command
14:12<@linbot>{Shawn}: you're doing it wrong.
14:13<{Shawn}>!proper spelling
14:13<{Shawn}>!proper spelling of country north of US
14:13<{Shawn}>i copied and pasted
14:13<{Shawn}>!proper spelling of country north of US
14:13<{Shawn}>i agree
14:14-!-luca [] has joined #linode
14:14<Alucard><linbot> cruxeternus:
14:15<mwalling>{Shawn}: ^^
14:21<luca>venezuela -> venezuelian
14:22<luca>hard consonant + a => ian
14:22<luca>hard consonant + ia => ian
14:23<mattt>wtf are you lot on about?
14:23<tjfontaine>luca: how would you know? you preside in canadia
14:23<luca>tjfontaine: we have no office of president so i cannot preside
14:24<luca>there is no such thing as bad canadian weed
14:24<fred>From canadia?
14:25<luca>To amerika?
14:25<luca>SpaceHobo: i was speaking politically; corporately, sure, but we intend to nationalize all corporations
14:31-!-r3z [] has joined #linode
14:36<luca>it would appear my work here is done
14:36-!-luca [] has left #linode []
14:43-!-andrew_j_w [] has joined #linode
14:44-!-weave [] has quit [Quit: Lost terminal]
14:46*CaptObviousman reads over his scrollback buffer and cringes
14:52-!-arooni-mobile [] has joined #linode
15:11-!-rvhi [] has quit [Quit: rvhi]
15:19-!-simlun [] has joined #linode
15:21-!-D[a]rkbeholder [] has joined #linode
15:26-!-darkbeholder [] has quit [Ping timeout: 480 seconds]
15:31-!-W|GGL|T [] has joined #linode
15:45-!-binel_ [] has joined #linode
15:46-!-binel [] has quit [Ping timeout: 480 seconds]
15:47-!-Pyromancer [] has quit [Quit: Leaving]
15:47-!-arooni-mobile [] has quit [Ping timeout: 480 seconds]
15:49-!-{Shawn} [~shawnc@] has quit [Ping timeout: 480 seconds]
15:49-!-simlun [] has quit [Quit: Leaving.]
15:56-!-simlun [] has joined #linode
15:56<zeroday>anyone here set up a SPF record using the Linode DNS?
15:57-!-metaperl_ [] has quit [Quit: HydraIRC -> <- IRC for those that like to be different]
15:58*irgeek raises hand
15:59<zeroday>for "The name of your TXT record." can I call it anything I want or does it need to be something specific?
15:59<irgeek>It needs to be specific - it should be the same as the way your MX is defined.
16:00<irgeek>If you're like most people, you leave it blank.
16:00<zeroday>i use google apps with this domain, so its all google's mx
16:01<irgeek>I don't think so. Hold on...
16:01<irgeek>It should be this: "v=spf1"
16:02<irgeek>Google Apps uses the same outbound servers as, and that is the SPF record for
16:02<irgeek>You can look it up with dig TXT
16:03<zeroday>I also send emails from that domain via my mailserver running on my linode ip
16:04<irgeek>Let me check that...
16:04<zeroday>found this:
16:06<irgeek>Oh. I didn't know they had a separate SPF setup for Apps.
16:08<irgeek>Then you'll probably want "v=spf1 ip4:x.x.x.x -all" where x.x.x.x is your Linode IP. Alternatively you could use an A record instead of the ip4 record.
16:09<zeroday>this is what I have: "v=spf1 ip4: a mx ~all"
16:09<irgeek>But note that I changed the ~all to -all there. That makes it a Fail instead of a SoftFail.
16:09<irgeek>SoftFail is stupid.
16:10<zeroday>google recommends ~all
16:10<tjfontaine>read, decide for yourself :)
16:11<irgeek>SoftFail basically says "I did bother to set up an SPF record, but I don't really fscking care about setting up my mail server correctly."
16:14<irgeek>If admins would take proper control over their domains, SPF could be very effective. There is no reason a user shouldn't be sending mail through one of your known gateways.
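In zonefile terms, the ~all/-all difference being argued comes down to the final mechanism of the TXT record. Both records below are hypothetical (example.com and 192.0.2.10 are placeholders), and a domain should publish only one of them:

```
; SoftFail: unlisted senders are flagged but receivers usually
; still accept the mail:
example.com.  IN TXT "v=spf1 a mx ip4:192.0.2.10 ~all"

; Fail: receivers are told to reject anything not listed:
example.com.  IN TXT "v=spf1 a mx ip4:192.0.2.10 -all"
```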
16:14<Napta>hear, hear
16:14-!-klono [] has joined #linode
16:15<zeroday>hmm..bug in linode cp I think
16:16<irgeek>What is it?
16:17<zeroday>when adding a new txt record, if you wrap it in "", when you edit it, the record isn't shown in the edit box
16:18<zeroday>only affects the record when the " is at the beginning
16:18<irgeek>Does it show up when you render the zonefile?
16:18<zeroday>escaped though
16:18<klono>Recently I setup exim to deliver mail locally, and upon logging in I got this e-mail: - is this something to worry about? or is this something I should take action against?
16:21-!-tgeller [] has joined #linode
16:21<tgeller>Hello, is anyone here who could help?
16:21<irgeek>tgeller: Ask your question and we'll try to help.
16:22<irgeek>klono: /usr/sbin/exim_tidydb is having problems when it tries to run. You should investigate why it is doing that.
16:22<tgeller>Hey there! I just signed up and poked through the documentation, but couldn't find the answer to a simple question: To what address do I SSH and/or FTP for my node? (I'm a beginner/intermediate Linux user.)
16:22<zeroday>have you set up your linode?
16:23<tgeller>I configured it for Debian, but have done nothing else.
16:23<zeroday>once you set it up, you get assigned an ip address
16:23<tgeller>But where do I see that?
16:23<irgeek>!google exim_tidydb "failed to open DB file /var/spool/exim/db/wait-remote_smtp"
16:23<@linbot>irgeek: Search took 0.40 seconds: Bug#312492: marked as done (exim4: problems with db files on upgrade): <>; cron messages re exim?: <>; Messages from cron daemon and exim_tidydb: <>; #252483 - failed to (3 more messages)
16:23<klono>Thanks irgeek
16:23<tgeller>caker: Wait... I see it. :)
16:23<tgeller>Let me try that...
16:23<zeroday>you probably need to set up a config and boot it first
16:24-!-xitology [~xi@] has quit [Quit: Ex-Chat]
16:24<tgeller>I did set up a config and boot it, and was given the IP I just tried SSHing there, but it didn't accept my password. Checking to make sure the password's right...
16:25<zeroday>did you set up a root password during config?
16:25<HoopyCat>dear ken block: plz stop breaking cars, kthx
16:26<tgeller>I thought I set up a root password during config... where can I go to change (and confirm) it?
16:27<tgeller>Ah, got it -- under caker "Settings and Utilities".
16:27<HoopyCat>tjfontaine: , third one down
16:28<tgeller>No... it still doesn't work. Hmm.
16:28<irgeek>tgeller: Don't forget to shut down your Linode before resetting the root password.
16:28<tjfontaine>HoopyCat: I see
16:29<tgeller>Ah, I didn't know that, irgeek... trying that.
16:29<irgeek>tgeller: Are you sshing as root? You need to make sure of that too.
16:30-!-LanceHaig [] has joined #linode
16:30<tgeller>I was just issuing "ssh 69.56.251.XXX", and it didn't ask me for a username.
16:31<zeroday>any error msg?
16:31<Dave>the username it uses is the one you are logged in as
16:31<zeroday>thats for Lish
16:31<tgeller>Ah, there we go! I'm in.
16:31<tgeller>Thanks, folks. :)
16:31<mwalling>tgeller: you realize you already gave us your ip above, right?
16:31<tgeller>Oh, duh. :)
16:31-!-klono_ [] has joined #linode
16:31<tgeller>Security through lack of obscurity!
16:33<klono_>irgeek: I read that the problem I'm having is with exim3, so I installed exim4 via 'apt-get install exim4' and configured it for local use. I tried to remove exim3 by typing 'apt-get remove exim3', however it failed as it couldn't find package 'exim3' - should I continue with 'apt-get remove exim' to remove the previous version which is causing issues?
16:33-!-klono [] has quit [Ping timeout: 480 seconds]
16:34<Napta>apt-get remove exim will remove exim3
16:34<HoopyCat>tjfontaine: -- 'twas there opening stages this weekend. fun stuff.
16:34<Napta>There is no "exim3" package in the apt repositories
16:34<klono_>thanks :)
16:34<irgeek>klono_: I'm pretty sure that apt is smart enough to know that you were replacing exim3, which is probably why it can't find the package.
16:35<klono_>Package exim is not installed, so not removed
16:35<irgeek>tgeller: Assuming you're on a *nix you can edit ~/.ssh/config so you don't have to remember usernames. It's especially helpful for Lish since Linode sets the name for that login.
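A minimal ~/.ssh/config along the lines irgeek suggests might look like this. Every name in it is a placeholder — substitute your own IP, the Lish gateway hostname from your Linode dashboard, and the login name Linode assigns for Lish:

```
# ~/.ssh/config -- ssh remembers the username per alias, so
# `ssh mylinode` and `ssh lish` just work:
Host mylinode
    HostName 69.56.251.xxx        # placeholder: your Linode's IP
    User root

Host lish
    HostName host123.linode.com   # placeholder: your host's Lish address
    User mylinodeusername         # placeholder: login Linode sets for Lish
```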
16:35<klono_>I'm guessing it installed exim4 over exim3? Hopefully ending my issues
16:36<Napta>It sounds like you do not have exim 3 installed.
16:36-!-LanceHaig [] has quit [Quit: Ex-Chat]
16:36<irgeek>klono_: Try running the exim cron script as root to see if it works now.
16:36<jvaughan>klono_: the exim4 package will conflict with / replace the exim 3 package, so exim 3 will be uninstalled when you install exim 4
16:37<jvaughan>not sure why exim 3 is in the linode debian image, i think exim 4 is normally the default for etch
16:38<klono_>I was using the default 'exim' preloaded within the Linode debian image - and then I installed exim4 - however I'm unable to remove 'exim3' or 'exim'
16:38<klono_>upon running the cron script, I get: bash: syntax error near unexpected token `then'
16:38<klono_>perhaps I'm not running the exim cron correctly? I used what was sent via e-mail
16:38<klono_>Subject: Cron <mail@base> if [ -x /usr/sbin/exim_tidydb ]; then /usr/sbin/exim_tidydb /var/spool/exim wait-remote_smtp >/dev/null; fi
16:39<klono_>without Subject: of course
16:39<Napta>jvaughan: yep, exim4 is the stock MTA for etch.
16:39<klono_>upon installing exim4, would it by any chance replace exim3?
16:40<jvaughan>20:36 < jvaughan> klono_: the exim4 package will conflict with / replace the exim 3 package, so exim 3 will be uninstalled when you install exim 4
16:40-!-klono [] has joined #linode
16:40<klono>lousy connection, sorry
16:40<jvaughan>klono 'dpkg -l |grep exim' will show you what's installed with exim in the name
16:41<klono>rc exim 3.36-18.2 An obsolete MTA (Mail Transport Agent), repl
16:41<klono>ahh.. exim still exists
16:41<klono>thanks for that jvaughan
16:42<klono>I will search google some more, and try and figure out how to remove 'exim'
16:42<jvaughan>klono: the first 'r' means it is removed
16:43<klono>and the "ii" before the others means = installed :)
16:43<klono>thanks everyone
16:43<Napta>klono: try dpkg --purge exim
16:43<Napta>(backup any config files you want to keep heh)
16:44<klono>dpkg - warning: while removing exim, directory `/etc/exim' not empty so not removed.
16:44<klono>root@base:/etc# ls -la | grep exim
16:44<klono>drwxr-xr-x 2 root root 1024 Jun 8 15:44 exim
16:44<klono>drwxr-xr-x 3 root root 1024 Jun 8 15:31 exim4
16:45<klono>I will keep 'exim' as a backup
16:45<Napta>thats the whole point of purge ...
16:45<Napta>(to remove configuration files)
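jvaughan's and Napta's points in one place: the two status letters at the start of each `dpkg -l` line are desired state then actual state, so `ii` means installed and `rc` means removed with config files kept, which is exactly what `dpkg --purge` finishes off. A small sketch of reading that flag (the package line is pasted in as data):

```shell
# Interpret a dpkg -l status field. First letter = desired state,
# second = actual state: "ii" installed, "rc" removed-but-conffiles-remain.
line='rc  exim  3.36-18.2  An obsolete MTA (Mail Transport Agent)'
state=${line%% *}            # first whitespace-separated field
case "$state" in
    ii) echo "installed" ;;
    rc) echo "config files still present; finish with: dpkg --purge exim" ;;
esac
```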
16:45*Napta slaps your debian
16:46-!-tgeller [] has quit [Quit: CGI:IRC 0.5.6 (2005/02/09)]
16:46<klono>I'm going to remove the e-mail from exim and hope I don't receive another one
16:46<klono>thanks everyone for the help :) I highly appreciate it
16:46<klono>for being patient, etc
16:47-!-klono_ [] has quit [Ping timeout: 480 seconds]
16:53-!-SpaceHob1 [] has joined #linode
16:59-!-arooni-mobile [] has joined #linode
17:02-!-klono [] has quit [Quit: thx]
17:04-!-{Shawn} [] has joined #linode
17:06-!-ryan8403 [] has joined #linode
17:06-!-Schroeder [] has quit [Ping timeout: 480 seconds]
17:08-!-ryan8403 [] has left #linode []
17:12<pbryan>Out of curiosity, what distro does Linode use for its hosts?
17:13-!-jvaughan [] has quit [Quit: leaving]
17:13-!-bliblok [] has joined #linode
17:14-!-jstad [] has joined #linode
17:16-!-jvaughan [] has joined #linode
17:16<irgeek>pbryan: Ubuntu
17:16-!-jstad [] has quit []
17:17<JDLSpeedy>irgeek: i thought it was debian at one point ;-)
17:17<bliblok>Is anyone able to supply me IPs to servers/VPSes in the different datacenters? I would like to test the latency from ISPs in my country.
17:19<irgeek>bliblok: ^^^
17:19*linbot slaps jkwood
17:19<bliblok>irgeek: Thanks
17:21-!-Schroeder [] has joined #linode
17:22<@linbot>irgeek: Linode360 - 20, Linode540 - 3, Linode720 - 14, Linode1080 - 2, Linode1440 - 1, Linode2880 - 1
17:22<@linbot>Dallas360 - 0, Dallas540 - 0, Dallas720 - 1, Dallas1080 - 0, Dallas1440 - 0, Dallas2880 - 0 , Fremont360 - 0, Fremont540 - 1, Fremont720 - 6, Fremont1080 - 0, Fremont1440 - 0, Fremont2880 - 0 , Atlanta360 - 3, Atlanta540 - 1, Atlanta720 - 1, Atlanta1080 - 0, Atlanta1440 - 0, Atlanta2880 - 0 , Newark360 - 17, Newark540 - 1, Newark720 - 6, Newark1080 - 2, Newark1440 - 1, Newark2880 - 1
17:24<{Shawn}>Any IPv6 Wizards here?
17:25<lucca>not a wizard, but I have it working from my linode
17:25<{Shawn}>do u use afraid?
17:25<lucca>not familiar with it
17:25<bliblok>{Shawn}: You could try stating the actual question, no one can say what level of "wizardry" you need from that kind of question.
17:25-!-TheFirst [] has quit [Ping timeout: 480 seconds]
17:26<straterra>hi {Shawn} :)
17:26<{Shawn}>ill take a screen shot
17:28-!-Hermes [] has joined #linode
17:29-!-Hermes [] has quit [Remote host closed the connection]
17:29-!-Hermes [] has joined #linode
17:30<{Shawn}>&current=5.jpg <-- Why is it like that?
17:30<{Shawn}>thats IPv6
17:30<Hermes>good evening, anyone from the linode company here?
17:30<Alucard>did you click Details?
17:30<Deckert>Hermes: the ops are all from Linode
17:30<Alucard>Hermes: anyone with a @
17:30<Alucard>except the bot
17:31<{Shawn}>yes i did Alucard
17:31<{Shawn}>it doesn't make sense :S
17:31<Hermes>thanks, i ll try to /w them
17:32-!-irgeek [~irgeek@] has left #linode []
17:32-!-irgeek [~irgeek@] has joined #linode
17:32<Alucard>well the output of the Details link would be far more useful
17:32-!-TheFirst [] has joined #linode
17:34-!-Hermes [] has quit []
17:34-!-Deckert [] has quit [Quit: Leaving]
17:34-!-Deckert [] has joined #linode
17:34<irgeek>17:31 < {Shawn}> it doesn't make sense :S <-- It may make sense to someone else here.
17:35-!-metaperl [] has joined #linode
17:36-!-arooni-mobile [] has quit [Quit: Leaving]
17:37-!-arooni-mobile [] has joined #linode
17:37<{Shawn}>if i give u details link u have to log in :S
17:38<{Shawn}>u have to log in :S
17:38<jkwood>Print Screen.
17:38<{Shawn}>ill just pastebin it
17:38<{Shawn}>but it has alota ips
17:38<irgeek>Excellent plan.
17:39<{Shawn}>should i delete the ips out of it?
17:39<bliblok>Why would you?
17:39<{Shawn}>i dunno
17:39<irgeek>Security the obscurity isn't.
17:39<irgeek>Just saying.
17:40<{Shawn}>wut do u mean?
17:41<irgeek>Scrubbing IPs from pastebins makes people think they are providing themselves with security.
17:41<{Shawn}>i know
17:41<{Shawn}>are they?
17:42<irgeek>In reality, if you have a public IP someone determined enough could figure it out.
17:42<jkwood>I could easily code an IP generator tonight, man.
17:43<{Shawn}> <--- pass is 12
17:43<cruxeternus>Yeah. And I've already rooted you anyway.
17:43<{Shawn}>someone tell me please wuts wrong with it :S
17:43<irgeek>And I just noticed I typoed that - it should have been "Security through obscurity isn't." It's the short way of saying "Security through obscurity isn't security."
17:43<jkwood>It's akin to the Ostrich Algorithm, which also doesn't work.
17:44<cruxeternus>irgeek: It made since to me the way you wrote it.
17:45<{Shawn}>cruxeternus? wut does Yeah. And I've already rooted you anyway. mean?
17:45<jkwood>I'm Ron Burgundy?
17:45<jkwood>He's making a joke.
17:45<irgeek>I think this might be part of the problem: "Could not find giving up."
17:45<jkwood>cruxeternus and I like to joke about having hacked other people.
17:45<Jeremy>you need NS records
17:45<irgeek>You might need an AAAA record for that name.
17:46<{Shawn}>oh, on my linode?
17:46<jkwood>Hmm... doesn't respond to pings...
17:47<bliblok>Do you use for DNS?
17:47<Jeremy>%host -t NS
17:47<Jeremy>Host not found: 3(NXDOMAIN)
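The missing pieces irgeek and Jeremy point at, in zone-file form: an AAAA record for the name, plus NS records delegating the reverse subnet. A BIND-style sketch using placeholder names and the 2001:db8::/32 documentation prefix:

```
; AAAA for the host name (placeholders throughout):
tunnel.example.com.        3600  IN  AAAA  2001:db8:1234::2

; the reverse zone needs NS records delegating it to your nameservers:
8.b.d.0.1.0.0.2.ip6.arpa.        IN  NS    ns1.example.com.
8.b.d.0.1.0.0.2.ip6.arpa.        IN  NS    ns2.example.com.
```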
17:49<{Shawn}>thats the pic of my Hurricane Electric tunnel config
17:49<{Shawn}>like its part of it
17:49<Jeremy>did you assign them recently?
17:50<{Shawn}>like 2 days ago
17:52<Alucard>But Major! What if the robot geishas demand a pay raise?
17:53<{Shawn}>so Jeremy, can u help me?
17:56<{Shawn}>Can anyone help me..
17:56<Jeremy>{Shawn}: that is just weird, should return NS records
17:56<{Shawn}>is it not?
17:56<Jeremy>not for your subnet
17:58<{Shawn}>im getting my friend who set it up to help me :P
17:58<Jeremy>yup, who I happen to know
18:00<{Shawn}>hes on swiftirc
18:00<Jeremy>I know
18:00<Jeremy>IPv6 :p
18:00<{Shawn}>u know him!
18:00<{Shawn}>lol who r u?
18:01<Jeremy>chaoscon on swift
18:01<rsdehart>are you really laughing out loud every single time you say that?
18:01<{Shawn}>I have said lol [7152] times!! :P
18:02<rsdehart>ah, so you fail REALLY hard
18:02<rsdehart>good of you to keep track so you can quantify your fail level
18:06<jkwood>Mmm... quantified fail level... tasty.
18:06-!-avongauss [] has quit [Quit: Coyote finally caught me]
18:07-!-avongauss [] has joined #linode
18:17<bliblok>My testing seems to indicate that Newark has the best routing to my location.
18:17<jkwood>If you're overseas, then I'm not surprised.
18:18<bliblok>I am
18:18<bliblok>Or in my mind, you are.
18:19<@linbot>Dallas360 - 0, Dallas540 - 0, Dallas720 - 1, Dallas1080 - 0, Dallas1440 - 0, Dallas2880 - 0 , Fremont360 - 0, Fremont540 - 1, Fremont720 - 6, Fremont1080 - 0, Fremont1440 - 0, Fremont2880 - 0 , Atlanta360 - 3, Atlanta540 - 1, Atlanta720 - 1, Atlanta1080 - 0, Atlanta1440 - 0, Atlanta2880 - 0 , Newark360 - 17, Newark540 - 1, Newark720 - 6, Newark1080 - 2, Newark1440 - 1, Newark2880 - 1
18:19<jkwood>Looks like at least one of each. :)
18:19<bd_>no 360s in fremont :P
18:20<jkwood>I was talking about Newark.
18:21<bliblok>The Newark and Dallas downloads both seem to max my connection, but Newark seems to have better latency.
18:22<bliblok> 193.493ms 351.76 KB/s
18:22<bliblok> 161.973ms 1.16 MB/s
18:22<bliblok> 147.719ms 442.26 KB/s
18:22<bliblok> 110.126ms 1.23 MB/s
18:24<Napta>bliblok: it would. NJ vs Dallas ;)
18:24<Napta>lower latency, anyways
18:26<bliblok>Is there anything else I should keep in mind when picking a DC?
18:26<zeroday>lol, chmod is useful on windows
18:26<zeroday>needed to update the repo location, couldnt edit the file, was read only
18:27<zeroday>windows permissions on xp home need to be done via cli >.<
18:27<jkwood>bliblok: Atlanta blocks some ports, notably irc.
18:27<jkwood>Newark is brand spanking new, and AFAIK all the hosts there are Xen.
18:27<jkwood>It's also less crowded, although the others aren't exactly packed.
18:28<bd_>jkwood: less crowded? How does that matter? :)
18:28<Napta>bd_: might do if their aircon isn't quite up to it ;)
18:28<jkwood>Not really.
18:28<bliblok>bd_: More CPU available in short term, I'd guess.
18:28<bd_>bliblok: the host contention ratios are fixed
18:28<@linbot>bd_: Newark360 - 17, Newark540 - 1, Newark720 - 6, Newark1080 - 2, Newark1440 - 1, Newark2880 - 1
18:29<Napta>It might be worth asking how far the electrical / HVAC systems are from the server area. cough TP
18:29<bd_>^^^ there's only one host with empty space for 360s in newark, for example
18:29<bd_>the rest are probably full up with 40 linodes on each
18:29<bd_>the only shared infrastructure then that you'd notice would be the network link *shrug*
18:30<bd_>and I've never heard anyone complain about contention for the network uplinks
18:30<jkwood>Oh, that's right. Linode has a free, unmetered backend network in each datacenter.
18:30<jkwood>So if you want to use a service offered by someone in Dallas, for example, it would make sense to use Dallas.
18:31<jkwood>mwalling has a Slackware repo set up there.
18:31<bd_>jkwood: yeah, but that's only relevant if you know the backend IP of said service
18:31<Napta>How many VPS systems do linode host?
18:31<jkwood>Napta: A half a brazilian.
18:31<bd_>Napta: $lots
18:31<Napta>We've just (still) migrated to them. pretty amazing so far
18:31<bd_>there are something like at least 39 hosts in fremont, 10 in newark... (based on host hostnames)
18:32<bd_>add in the legacy, subtract the old ones that have been taken out of service, multiply by contention ratios and you should have a good ballpark
18:33-!-TheFirst [] has quit [Ping timeout: 480 seconds]
18:35<pbryan>irgeek: Thanks.
18:37-!-andrew_j_w [] has quit [Remote host closed the connection]
18:58<{Shawn}>for sbnc, why wont /sbnc tcl rehash work ? it says unknown command
19:00<Peng>Napta: FWIW, caker said that Newark was quite cold. Linode's servers are in the, I think, high-density area, where it's even colder.
19:00<@caker>Yes. Will bring jacket next time (this Tuesday!)
19:01<Napta>I expected you to say warmer in the high-density area. It's very reassuring to know that the high-density area is cold :)
19:01<Napta>sounds like a good facility
19:01<@caker>The HD area makes the rest of the DC feel HOT (and outside even hotter)
19:01<@caker>I swear it felt like 45(f) in there .. + wind chill
19:02-!-Pyromancer [] has joined #linode
19:02<{Shawn}>does anyone know why this is happening? :
19:02<{Shawn}>for sbnc, why wont /sbnc tcl rehash work ? it says unknown command
19:02<Napta>caker: It's a good thing, though :D
19:03<Napta>Actually it's pretty good to see a dedicated 'HD area'
19:04-!-jm [] has quit [Ping timeout: 480 seconds]
19:04-!-exor674 is now known as exor|gone
19:05<Peng>For someone who's never been in a data center, what *is* a high-density area? I mean, as opposed to low-density? Linode uses beefy, 1U or 2U servers, right? Do people in the low-density area use Pentium Ms in 4U servers with empty space between each server?
19:06<iggy>I'd guess the difference between people that rent multiple racks vs parts of a rack/u at a time
19:06<iggy>but I've never heard the terms low and high density
19:07<@caker>most datacenter facilities utilize ambient cooling of the entire open area, or at best the concept of "cold rows" where chilled air is introduced, and "hot rows" where exhaust from servers is sucked back into the system
19:07<@caker>in those places, hot exhaust can contaminate the cooled area, thereby reducing the efficiency of cooling
19:08<@caker>this HD area uses cabinets with an exhaust chimney which goes directly back into the intake of the air handler. Cold air is introduced into the front of them. Open Us are blocked off -- so the air is forced through the servers
19:09<@caker>It's essentially a closed system
19:09<Napta>At work we classify High Density as typically clusters and blade centers. Our blade centers are 'densely packed', meaning they dissipate a lot more heat per X than a traditional server would, requiring greater or alternative cooling
19:09<iggy>cool idea blocking off the blank U
19:10<@linbot>Dallas360 - 0, Dallas540 - 0, Dallas720 - 1, Dallas1080 - 0, Dallas1440 - 0, Dallas2880 - 0 , Fremont360 - 0, Fremont540 - 1, Fremont720 - 6, Fremont1080 - 0, Fremont1440 - 0, Fremont2880 - 0 , Atlanta360 - 3, Atlanta540 - 1, Atlanta720 - 1, Atlanta1080 - 0, Atlanta1440 - 0, Atlanta2880 - 0 , Newark360 - 16, Newark540 - 1, Newark720 - 6, Newark1080 - 2, Newark1440 - 1, Newark2880 - 1
19:11<Peng>caker: Are you planning to send more servers to the other locations, or concentrate on Newark for a while?
19:12<iggy>well, even when atl was new, they were still sending servers to the others
19:12<Peng>What was the first DC?
19:12<iggy>he I think
19:12<iggy>actually, I don't know
19:12<iggy>he or tp
19:13<@caker>Let's put it this way. The HD setup is rated to 15,000 watts power consumption (and required cooling to achieve that) in the ~6 square feet a cabinet occupies. ThePlanet's equivalent limit is something like 4400 watts in the same space. HE's is 1650 watts. And Atlanta's is like 7000 watts, but we have heat issues there
19:14<{Shawn}>with the screen thing on gentoo, like the ctrl a d , how do i get the screen back?
19:14<@caker>Peng: the set going out this week is going to Newark. The set after that most of it will go to Newark also. After that we'll probably send some elsewhere
19:14<guinea-pig>screen -r
19:14<guinea-pig>man screen
19:14<Peng>caker: Okay.
19:15<{Shawn}>how do i see wut screens i have?
19:15<Peng>I use screen -D -R. I think it was the man page author's favorite.
19:15<iggy>if there's more than one it'll give you a list
19:15<Peng>(Wait, would -DR work? That would be shorter...)
19:15<Napta>Yeah typically DCs tend to be engineered to cool 3-4kW per rack... HD can require a lot more though, hehe :)
19:16<Peng>caker: Do you actually need 15 kW, or is it just nice to have the extra cooling?
19:16<{Shawn}>screen -d does?
19:16<Peng>{Shawn}: man screen.
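Since the screen questions keep coming up, the invocations being traded here as a quick reference (standard GNU screen flags):

```shell
# GNU screen quick reference (see screen(1)):
#   screen -ls        list sessions ("what screens do I have?")
#   screen -r [id]    reattach a detached session (by PID/name if several)
#   Ctrl-a d          detach the current session, leaving it running
#   screen -D -R      detach it elsewhere if attached, reattach here,
#                     and create a new session if none exists
# Handy alias combining the last one:
alias sr='screen -D -R'
```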
19:17<@caker>Peng: we're in the 10-11kwatt range
19:17<Peng>caker: Environmentalists must love you. ;P
19:17<Napta>Thats a lot of power heheh
19:17<{Shawn}>why wont my sbnc rehash? do i need to do -- emerge tcl ?
19:17<{Shawn}>like my sbnc wont TCL rehash
19:17<@caker>hey, vps is inherently a green service
19:18<iggy>too right
19:18*jkwood environmentals caker
19:19<Peng>caker: You could buy some carbon credits and start advertising as a green host. The website already has the color scheme. :D
19:19<@caker>I'd rather buy new hosts
19:19<{Shawn}>can someone please help me :)
19:19<Schroeder>"carbon credits" is a scam by the environazi left
19:20-!-ibinsad [] has joined #linode
19:20-!-ibinsad [] has quit []
19:20<Peng>I wasn't seriously suggesting it. I just wanted to make the color scheme joke. :)
19:20-!-ibinsad [] has joined #linode
19:20<ibinsad>is there anyone ?
19:21<jkwood>Nobody at all.
19:21<@linbot>just us bots
19:21<bliblok>Or, you could move your hosts to Norway, where the electricity comes from waterfalls.
19:21<Peng>Then wouldn't Sweden log all the traffic?
19:22<bliblok>Peng: ?
19:22<ibinsad>is it possible have, 3 different linode in different datacenters and load balance them ?
19:22<bliblok>1. Does Sweden log traffic in any way?
19:22<bliblok>2. Why would they?
19:22<Peng>bliblok: I read recently that Sweden is planning to go all NSA, archiving all the Internet traffic they can get their hands on.
19:22<Peng>bliblok: I dunno. Evil?
19:23<bd_>ibinsad: You could use DNS round-robin entries, I guess
19:23<Peng>How cold is Norway? Do DCs get to save on AC costs?
19:24<Napta>I read an article about an IBM DC who use the excess heat to heat a school swimming pool, or something
19:24<Napta>sounded nifty
19:24<bliblok>Norway is mostly <0C during the Winter.
19:25<ibinsad>can it be managed service cluster of 3 linode in different datacenter ?
19:25<iggy>ibinsad: that probably won't work much
19:25<bd_>ibinsad: Linode is an unmanaged service...
19:25<ibinsad>iggy: why ?
19:26<iggy>most cluster infrastructure bits I've worked with were very sensitive to latency
19:26<iggy>i.e. a non-local lan would cause it to artificially fail hosts a lot
19:26<iggy>plus you have data synchronization issues to deal with
19:26<iggy>which are normally outside the scope of the cluster software
19:27<ibinsad>iggy: i understand, how big websites do global load balance ? Do you work on it ?
19:28<ibinsad>is it there company can manage cluster of linodes ?
19:28<iggy>they normally have access to BGP and just have a main datacenter and a secondary DC which is updated data wise occasionally, then use BGP to fail over between DCs when necessary
19:29<iggy>I don't know anyone who is using a cluster of linodes across DCs
19:29<bd_>iggy: well, you could still use DNS to failover (TTL of 300 or so)
19:30<iggy>the question was about how big sites do it
19:30<iggy>and that's how the big ones I've seen work
19:30<bd_>*nod* you're not going to be able to achieve something like that with a VPS obviously
19:30<Napta>perhaps ip-failover within the DC, and DNS failover between DCs :)
19:31<{Shawn}>why is this doing this.. pass is 12
19:32<ibinsad>i am searching global load balance.. the issues for the latency and data sync are not still solved until now with latest software ?
19:34<ibinsad>i used batchsync secure for data transfer using ssh and syncronize without issues
19:35<bd_>ibinsad: The problem is when you have two copies of the database being modified at the same time
19:35<irgeek>ibinsad: If you truly need global load balancing, you're going to need a lot more hardware than a few VPSes.
19:35<bd_>if you make one the master, that works - but then you have a lot of latency from anything other than hosts in the same DC
19:35-!-Deetz [] has quit [Ping timeout: 480 seconds]
19:36-!-bliblok [] has quit [Quit: Leaving]
19:36<bd_>to really get good performance with such a setup you'd need to start looking at eventual consistency - and then things depend on your application really
19:36<bd_>but anyway - overengineering is bad, mmmkay? :)
19:37<irgeek>With two Linodes in each DC and a way to manage DNS programmatically (the Linode API for that is in development) you could get pretty close.
19:37<ibinsad>i need just always on service.. i don't need lot of ram space or performance..
19:37<bd_>there's no such thing as 100% uptime
19:37<Napta>ibinsad: what are you clustering??
19:38<bd_>now, the question is how much uptime you -really- need, and how much you can afford to pay for it :)
19:38<irgeek>And if your service is critical to need true global load balancing and failover, you're probably looking at tens of thousands of dollars to achieve it.
19:38<ibinsad>3 different dc if 2 dc have issues 1 dc is up :-)
19:38<irgeek>Good, cheap and fast - pick two.
19:39<iggy>yeah, your best bet is to hire someone who knows what they are doing and throw lots of money at it
19:39<ibinsad>i can invest 200$ monthly
19:39<iggy>you can't really expect much for that
19:40<bd_>no matter what, if a DC goes down, you'll have to wait a bit for things to reroute - even using BGP
19:40<bd_>i don't know, won't DNS failover be enough?
19:40<Napta>rsync, mysql replication, DNS failover, all easily done for $200 heh
19:40<bd_>2x hosts in each DC, in DC failover
19:40<bd_>er, in-DC IP failover
19:40<irgeek>Yeah. You're not likely to get 99.99% uptime for that.
19:40<bd_>and inter-DC DNS failover
19:40<bd_>if a DC goes down, five minutes or so for DNS updates, right?
19:41<ibinsad>i think.. mostly of costs are for manage all linode
19:41<iggy>if you don't know enough to manage this yourself, you're barking up the wrong tree
19:41<Napta>bd_: probably more like 10-15 - depending on the frequency of your heartbeat
19:42<iggy>if you are using linode's dns, more since they only update the zone files periodically
19:42<bd_>iggy: Linode's DNS in slave mode will respond to a NOTIFY packet almost immediately
19:42<bd_>and they support multiple DNS masters
19:42<Napta>something simple like a perl script combined with nsupdate(1) and dynamic updates works quite well
19:42<iggy>true, but then you still have to run DNS servers in both DCs
19:43<iggy>so you might as well do all your own DNS
19:43<bd_>iggy: Indeed. You'll have 4-6 servers, most of which are idle most of the time after all...
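The pieces above -- a low TTL, a heartbeat, and nsupdate(1) dynamic updates -- fit together roughly like this. The zone, key path and addresses are all hypothetical, and the nsupdate call itself is commented out since it needs a live DNS master:

```shell
# On heartbeat failure, repoint a 300-second-TTL record at the standby DC.
# Worst-case client failover is then roughly TTL + heartbeat interval.
cat > /tmp/failover.txt <<'EOF'
server ns1.example.com
zone example.com
update delete www.example.com. A
update add www.example.com. 300 A 203.0.113.2
send
EOF
# nsupdate -k /etc/bind/ddns.key /tmp/failover.txt
```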
19:43<iggy>and I don't think a 360 is going to cut what all you have to run to do this
19:43<ibinsad>i can manage my self.. i need to study a lot.. and use best sw for it.. like rsync..
19:44<Napta>ibinsad: what are you trying to cluster??
19:44<ibinsad>Napta: i just have different websites hosted in 1 dedicated server
19:44<iggy>he want's a website to have 99.99% uptime
19:44<iggy>already stated
19:44<ibinsad>as previous experiences
19:44<Napta>oh right
19:44<bd_>ibinsad: how critcal is this website? How much money do you lose if it's down for 30 minutes? How about 10 or 5 minutes?
19:45<bd_>This is your budget for redundancy :)
19:45<mwalling>bd_: ^^ slackware mirror
19:45<Napta>Is this a dynamic or static web site ?
19:45<mikegrb>! people still use slackware?!?!
19:45<bd_>! people still use slackware?!?!
19:45<irgeek>Is it mostly static pages or a heavily updated dynamic site?
19:45<ibinsad>bd_: for 30 minutes not so much money..
19:45<iggy>the stated monthly spending is $200
19:45<iggy>so you want the most you can get for that
19:45<iggy>that's your requirement
19:46<bd_>ibinsad: *shrug* you don't need anything fancy then, really...
19:46<ibinsad>i remember previous experiences.. 4 hours server down ... 1 week server down.. and few times 2 hour, 30 minutes etc..
19:46<ibinsad>2 different dc
19:46<iggy>1 linode would be infinitely better than that
19:46<bd_>I've never heard of a linode host being down for a week >_>;;
19:46<ibinsad>it was in 2002
19:47<ibinsad>bad experience
19:47<irgeek>ibinsad: Is it mostly static pages with occasional updates or a heavily updated dynamic site? That makes a huge difference in redundancy.
19:47<ibinsad>1 week during my holidays
19:47<exor|gone>the only downtime I have had has been caused by me with linode
19:47<ibinsad>for now occasional updates
19:47<bd_>upload static pages to S3?
19:48<irgeek>For now? Is that going to change?
19:48<iggy>I got knocked out for aobut 30 mins one time by a ddos taking out a whole rack
19:48<ibinsad>yes i mean i reduce a lot mysql queries..
19:48<iggy>the worst I've seen in ~4 years
19:48<Napta>bit of mysql master-master or master-slave replication
19:49<ibinsad>and i will reduce more in the future..
19:49<ibinsad>so reduce ram cpu, using
19:49<ibinsad>just 1 query for a page.. i will use memcached
19:49<irgeek>ibinsad: Synchronizing mysql servers is a non-trivial operation between datacenters. Especially as the UPDATE/INSERT/DELETE queries increase.
19:49<bd_>ibinsad: It's not about the number of queries, it's about what kind of syncing will be needed. Imagine on an ecommerce site, if two DB hosts lose communication and assign two orders the same ID.
19:50<ibinsad>i understand.. is there a software to manage that ?
19:51<ibinsad>or everything is customized ?
19:51<irgeek>One option would be to run the updates only in a particular DC with IP faliover and have a copy in another DC that is read-only. That way people can still get to the data if there's a problem, just not update.
19:51<ibinsad>in the case of ecommerce, forums...
19:51<bd_>ibinsad: there are components, but to an extent you'll need to adapt your software to fit the replication model, probably
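For reference, the master-slave arrangement Napta and irgeek describe is mostly two my.cnf stanzas plus one grant. Everything below (server IDs, hostnames, credentials) is a placeholder; for master-master, the auto_increment_increment / auto_increment_offset pair is the usual fix for bd_'s duplicate-ID scenario on auto-increment keys:

```
## master's my.cnf
[mysqld]
server-id = 1
log-bin   = mysql-bin

## slave's my.cnf (read-only, per irgeek's suggestion)
[mysqld]
server-id = 2
read_only = 1

## on the master:
#   GRANT REPLICATION SLAVE ON *.* TO 'repl'@'203.0.113.2' IDENTIFIED BY 'secret';
## on the slave:
#   CHANGE MASTER TO MASTER_HOST='db1.example.com', MASTER_USER='repl', MASTER_PASSWORD='secret';
#   START SLAVE;
```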
19:52<ibinsad>i know a company make cluster..
19:52<ibinsad>of vps
19:53<iggy>it's possible
19:53<iggy>nobody said it wasn't
19:53<ibinsad>usually for clusters of dedicated server cost 500$ minimum
19:53<irgeek>It's doable, but you need to weigh the costs against the benefits.
19:53<iggy>there's just a lot to know and a lot of variables that affect what you get out of it
19:53-!-Mojo1978 [] has joined #linode
19:54<ibinsad>and clusters of vps start from 160$
19:54<ibinsad>here >
19:54<ibinsad>is too much for it
19:54<iggy>that's using virtuozzo
19:54<irgeek>Run away!
19:54<iggy>B. Those are almost certainly in the same DC
19:54<ibinsad>only Total Memory 1536 MB
19:55<m0unds_>yuck yuck yuck
19:55<{Shawn}>Can someone please help me... why isn't ipv6 working? this is the paste of wut it says is broken :S i dont understand it pass == 12
19:56<ibinsad>how much can cost manage clusters of linode monthly ?
19:56<iggy>I don't know anyone who does that to tell you a price
19:57<Napta>You might find a company over on the WHT forums
19:57<irgeek>{Shawn}: "Could not find giving up." - Try creating an AAAA record for that name. I suggested that last time you asked and it's still not there.
19:57<{Shawn}>how do i do that..
19:57*iggy slaps {Shawn}
19:57<{Shawn}>im using hurricane electric and afraid
19:57<irgeek>It's like the A record you already have, but it's an AAAA record pointing to an IPv6 address.
19:58<{Shawn}>so where would i add it? on the afraid?
19:58<ibinsad>i asked to
19:58<ibinsad>but they manage only 1 server once
19:59<irgeek>{Shawn}: Wherever you added the A record that points to it.
19:59<mwalling>irgeek: forgive him, for he is from canadia
19:59<Napta>perhaps they can manage them under the guise of three individual servers
19:59<Napta>It's not a very difficult task
19:59<{Shawn}>irgeek so on hurricane electric?
19:59<irgeek>Hold on.
20:00<ibinsad>Napta yes but they don't manage clusters is very different
20:00<ibinsad>is good to ask here ? >
20:00<Napta>well you don't yet have a cluster solution :)
20:00<ibinsad>i can have.. i prefer a company manage for me.. i invest on it..
20:01<irgeek>The authoritative servers for your domain are & - You need to tell them the IPv6 address for I think.
20:01<ibinsad>i prefer more vps instead 1 strong dedicated
20:02<m0unds_>!google managed vps
20:02<@linbot>m0unds_: Search took 0.22 seconds: WiredTree – Fully Managed Dedicated Server and VPS Hosting: <>; Need managed VPS - Web Hosting Talk - The largest, most ...: <>; Fully Managed VPS Web Hosting: <>; TekTonic - Managed Virtual Private Server ( VPS ) Hosting: (2 more messages)
20:02<{Shawn}>irgeek, i dont have a domain, its a free thing
20:02<irgeek>ibinsad: Linode is an unmanaged service. You will need to find a consultant to manage them for you.
20:02<irgeek>{Shawn}: Are you certain they support IPv6 then?
20:02<{Shawn}>yes.. thats wut they are for,,.
20:03<ibinsad>irgeek yes i hope :-)
20:03<{Shawn}>RDNS Delegation NS1: [Update]
20:03<{Shawn}>RDNS Delegation NS2: [Update]
20:03<{Shawn}>RDNS Delegation NS3: [Update]
20:03<ibinsad>is the best solution to scale up and no downtime
20:03<Napta>ibinsad: There isn't a "standard" off the shelf software solution (that I know of) that will manage DNS failover for you. External DNS hosting / monitoring (like dnsmadeeasy) and a pair of linodes kept in sync with rsync & database replication seems to be the simplest solution for you, on the surface anyways
20:04<{Shawn}>irgeek ^
20:04<bd_>{Shawn}: ns[12] seem down to me
20:04<bd_>ns3 is responding:
20:04<bd_> 60 IN SOA 80608009 86400 7200 3600000 60
20:04<{Shawn}>check 4 and 5 please
20:05<ibinsad>Napta: sure the problem will be if i add more linodes and if there are issues for the replications , syncs .. in that way i need who can manage for me..
20:05<bd_>4 works, 5 doesn't
20:05<Napta>ah you want it to scale
20:05<bd_>use the linode dns manager :)
20:05<{Shawn}>bd, how?
20:05<{Shawn}>its IPv6
20:05<mwalling>by reading documentation
20:05<{Shawn}>like, i know how to use the linode thing
20:05<bd_>{Shawn}: create a zone for, add some PTR records to it, point rDNS delegation to ns[12]
20:05<ibinsad>Napta: probably yes if i need more, and have high availability
20:05<Napta>ibinsad: Sounds like you need to hire a moderately expensive consultant to design some HA solution for you, that scales well
20:05<{Shawn}>but this is freedomains from afraid
20:06<{Shawn}>so i think u have to use afraid to do it
20:06<ibinsad>Napta: i want to stay sure and sleep well
20:06<Napta>I guess the webhostingtalk forums would be a good bet to get in touch with someone for that
20:06<bd_>{Shawn}: rDNS has nothing to do with your free domain name
20:06<bd_>or well
20:06<bd_>it has something to do with it
20:06<bd_>but doesn't have to manage it
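bd_'s recipe in concrete terms (all names and the 2001:db8 prefix are placeholders): the zone to create is the routed /64 written as reversed nibbles under ip6.arpa, and each PTR names one address inside it:

```
; zone for 2001:db8:1234:5678::/64 -> 8.7.6.5.4.3.2.1.8.b.d.0.1.0.0.2.ip6.arpa
; PTR for 2001:db8:1234:5678::1 (the 16 host nibbles, least significant first):
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0    IN  PTR  host.example.com.
```

Once the zone resolves, the tunnel broker's rDNS delegation fields just need to point at the nameservers serving it.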
20:07<irgeek>ibinsad: You really are going to need a consultant of some sort to design this for you. And it's going to cost lots of money to get it done.
20:08<Napta>Once you start adding servers you will probably need a mysql cluster configuration, not just replication, more complexity, etc
20:08-!-m0unds_ [] has left #linode []
20:08<bd_>ibinsad: Have you looked into google's app engine? ;)
20:08<ibinsad>irgeek: if i need the highest security what i can do ?
20:08<irgeek>Hire a consultant.
20:09-!-m0unds_ [] has joined #linode
20:09<ibinsad>bd_: i create an account on google , they support only python
20:10<bd_>Yes. Nobody ever said you wouldn't have to redesign your application to get total scalability and redundancy :)
20:10-!-m0unds_ [] has quit []
20:10<ibinsad>irgeek: do you ever done project about clusters ?
20:10<bd_>The nice thing there is they take care of the redundancy and scalability for you - but you have to design the application to fit their system
20:10-!-m0unds_ [] has joined #linode
20:11<ibinsad>bd_: do you use google app engine ?
20:11*mwalling is still waiting for his beta account :(
20:11<irgeek>ibinsad: You need to understand the high level of complexity involved with what you are trying to do. There is no single answer that covers everything. That is why we are suggesting you find a consultant to help you.
20:11<bd_>ibinsad: I've played with it a bit. Not too much. It's a promising approach.
20:12<irgeek>ibinsad: I've done small clustered and HA projects, but it's not something I enjoy.
20:12<ibinsad>bd_: i create account called netlog
20:13<ibinsad>bd_: i just create it , never done
20:13<ibinsad>bd_: i don't know python language :-)
20:14<bd_>ibinsad: Well, GAE is probably the easy route to HA and (once they release paid quota expansions) scalability - but only if you can conform to their platform's requirements :)
20:14<bd_>Whether you want to put your app in their hands, or build your own system from scratch, is a choice only you can make.
20:14*caker wants sour patch candies
20:15-!-TheFirst [] has joined #linode
20:15<irgeek>Amazon webs services is another option, but you have to build in the HA & scalability yourself.
20:15<ibinsad>irgeek: i know, there company do for ec2
20:15<HoopyCat>caker: in hamilton, ontario, there's a candy factory about 1.3 miles before the place i usually overnight. holy crap, gummy worm day is delicious.
20:15<bd_>irgeek: that has all the same problems as linode though :)
20:15<ibinsad>they start from 500$ monthly .. too high for me
20:15<irgeek>And S3 and SimpleDB
20:15<bd_>irgeek: and they're all in the same general area
20:16<bd_>also SimpleDB is kind of useless ATM...
20:16<bd_>You issue a query, you get back primary keys. To get the data associated with these keys you need to issue another request PER KEY.
20:16<ibinsad>bd_: if google will support php and other opensource languages i can do something there.. now still in beta
20:16<bd_>Can anyone say latency? :|
20:17<bd_>You can parallelize it, but wow...
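bd_'s point in shell terms: N sequential round-trips cost roughly N times the latency, while issuing them concurrently costs roughly one. A sketch with `xargs -P`; the per-key fetch is stubbed out with `echo` (a real SimpleDB client would make one HTTP request per key):

```shell
# Simulate fetching 8 keys with up to 8 parallel workers instead of serially.
# Total wall time is then ~max(one fetch) rather than the sum of all of them.
printf 'key%s\n' 1 2 3 4 5 6 7 8 |
    xargs -P 8 -I{} sh -c 'echo "fetched {}"'
```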
20:17-!-m0unds_ [] has quit [Quit: leaving]
20:17<bd_>ibinsad: Any route to HA will involve a lot of work. Learning python is just another tradeoff :)
20:18<irgeek>Or paying other people to do it for you.
20:20<bd_>I mean, the python requirement is the least annoying bit of GAE :)
20:20<ibinsad>do you think clusters of vps will be more popular in the future? Now only a few companies do these things
20:20<bd_>ibinsad: Very few applications can scale indefinitely by simply throwing lots of hardware at them
20:20<Napta>It depends on what OSI layer you define a 'cluster'
20:20<irgeek>ibinsad: The cluster isn't the problem. It's fitting them into a solution that does what you want.
20:21<iggy>ibinsad: most people are going to move up to a dedicated server before they start trying clusters
20:23<ibinsad>other websites that do these things are:
20:25<irgeek>What don't you like about those companies?
20:26<iggy>mediatemple has a grid service too
20:26<iggy>don't know what langs they support
20:27<@caker>"It had to be "device-less." There are no servers, VPSes, operating systems, or devices for you to setup and configure. Simply upload your application, and you're off and going."
20:27<@caker>!setup even
20:27<@linbot>setup is not a verb. Please see
20:27*caker pulls hair out
20:28<ibinsad>they don't do global load balancing
20:28<irgeek>Aren't those all scalability solutions, not HA solutions?
20:28*linbot slaps jkwood
20:28<iggy>I thought we wanted both
20:29-!-Grog_SA [] has joined #linode
20:30<ibinsad>i like:
20:30<ibinsad>but it costs a lot
20:30<avongauss>bd_: this is going back a bit, but re: ipv6, linode dns and ptr records: does the Linode DNS manager allow you to enter PTR records?
20:30<HoopyCat>cheap, easy, good: pick any two
20:30<@caker>avongauss: only via slaves
20:30<HoopyCat>avongauss: nope, but you can slave
20:31<avongauss>ah, okay, thanks.
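caker's "only via slaves" answer means running your own master nameserver and letting Linode's servers transfer the reverse zone from it. On a BIND master that would look roughly like this; the zone name (built from the 2001:db8::/32 documentation prefix), file path, and transfer IPs are all placeholders, not Linode's actual addresses:

```
// named.conf on your own master server -- all values hypothetical
zone "" {
    type master;
    file "/etc/bind/db.ip6.example";
    // let the slave nameservers AXFR the zone
    allow-transfer {;; };
    // push NOTIFYs so slaves pick up changes quickly
    also-notify {;; };
};
```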
20:32-!-Grog_SA [] has quit []
20:33<HoopyCat>avongauss: catch me when i haven't had four beers and a rant session about STPR under my belt and i miiight deploy an easy ipv6 ptr service, since i really need to for personal use anyway
20:33<avongauss>I've been using as well for the IPv6 rDNS, but looking back in the log it looked like there was another possibility. Was just curious.
20:33-!-arooni-mobile [] has quit [Ping timeout: 480 seconds]
20:34<ibinsad>thank you for everything... i hope to find a solution.. else i will just buy 1 strong dedicated managed server... ( i like to manage it myself, but i prefer to dedicate time to other things )
20:34<HoopyCat>avongauss: nod, i'm just lazy and can say "ns1 and" in my sleep
20:35*irgeek goes back to attacking the API beta
20:35<ibinsad>bd_ , iggy, irgeek .. and all.. best wishes.. i have to go
20:36-!-ibinsad [] has quit [Quit: ibinsad]
20:37<HoopyCat>caker: i might hit up the API beta tomorrow or tuesday and do an IPv6 PTR thing if that's cool (and if i have CFT)
20:38<HoopyCat>i woulda gotten in on thursday, but i have an excuse
20:38-!-TheFirst [] has quit [Ping timeout: 480 seconds]
20:38<mikegrb>HoopyCat: heheh
20:38<@tasaro>he was busy with his desk trash bulk mailing
20:38<mikegrb>HoopyCat: I was thinking about making a webservice doobie for ipv6 reverse zone creation
20:39<irgeek>HoopyCat: The API doesn't support PTR records either...
20:39<bd_>irgeek: It supports slave configuration though
20:39<HoopyCat>if i can configure slaves, that'll make me happy
20:39<bd_>hmm, the slave system should support backend IPs :)
20:40<irgeek>That you can do.
20:40<bd_>er, private net IPs
20:40<HoopyCat>dnsapi.slave.gimpMask = True
20:40<mwalling>bd_: ...
20:40<mwalling>ns2 is in he
20:41<bd_>mwalling: Yes...? If there was a linode nameserver in each DC it could be achieved by having them AXFR from each other
20:41<bd_>I suppose it doesn't matter much, really
20:41<irgeek>bd_: I doubt the traffic between masters and slaves makes that a must-have feature.
20:42<bd_>I suppose not
20:42<mwalling>on that note, i should work on my public -> private zone
20:42<HoopyCat>mikegrb: as was i, but i bet yours will suck less than mine
20:42<mwalling>if anyone wants me to manually add them, let me know
20:46<HoopyCat>yegh, i am going to sleep well tonight.
20:49<mikegrb>HoopyCat: lies, mine was to be completely minimal but functioning
20:49-!-NetNuttt [] has joined #linode
20:50<HoopyCat>mikegrb: mine was to have a flash opening, a hard techno soundtrack, a half-naked anime girl mascot, and a blog-forum, but when you actually go to add a zone, it will display an "Under Construction" stage with the animated yellow diamond construction worker guy.
20:50<mikegrb>HoopyCat: awesome
20:51<HoopyCat>s/stage/page/ # goddammit, rally weekend kills my brain
20:51-!-exor|gone is now known as exor674
20:51*mikegrb sleeps
20:52<HoopyCat>mikegrb: perhaps if i share a percentage of the ad revenue with you, we can corner the IPv6 rDNS market and make millions
20:54<HoopyCat>3.40x10^38 possible addresses, at one cent per ad impression and three impressions per PTR... that's... 1.02x10^37 dollars right there
20:55<avongauss>was that four beers or four shots? ;)
20:58<HoopyCat>it was a 7% ABV debriefing
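For the record, HoopyCat's napkin math holds up: 2^128 addresses at three one-cent impressions per PTR really does land on the order of 10^37 dollars:

```python
addresses = 2 ** 128            # the full IPv6 address space
revenue = addresses * 3 * 0.01  # three impressions per PTR at $0.01 each

print(f"{addresses:.2e} addresses -> ${revenue:.2e}")
# roughly 3.40e38 addresses and 1.02e37 dollars, matching the estimate
```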
20:59-!-Schroeder [] has quit [Ping timeout: 480 seconds]
21:03<ang>so dumb question. from Cox in CT, why would i get faster ping times but slower downloads to Newark when i get slower pings but faster DL to atlanta?
21:04<exor674>pingtime != throughput :P
21:05<ang>yeah, i suppose. disappointing tho, i was contemplating migrating to newark
21:05<Napta>perhaps it's only a temporary problem
21:06<tjfontaine>or QoS by your isp
21:06<irgeek>What's the difference in latency?
21:06<ang>7-8ms faster on ping to newark than to atl
21:06<irgeek>You probably won't notice a difference that small.
21:07<irgeek>Anything under about 250ms difference is negligible.
21:11<avongauss>250 or 25ms?
21:12<irgeek>250 - A quarter of a second is pretty fast.
21:13<irgeek>If you're under a quarter second for latency, using a terminal feels natural. Once you get above that is when it starts to feel sluggish.
21:14<avongauss>I guess I am too impatient. Anything over about 100 in a terminal and I start getting annoyed.
21:16<HoopyCat>avongauss: drink fewer energy drinks and lay off the soda ;-)
21:16<irgeek>If you type pretty fast 250ms may feel a little sluggish. I'd estimate anything over about 40 wpm sustained you'd notice it.
21:17<irgeek>But for a command line where you're not typing at a sustained rate anything under 250ms feels normal.
21:17<avongauss>not an energy drink person, but there might be dew sitting near me.
21:17-!-chuck [] has joined #linode
21:17<chuck>is it possible to run virtual servers under a linode virtual server?
21:17<chuck>just wondering
21:17<iggy>be more specific
21:17<bd_>You can nest UML
21:17<iggy>like apache virtual hosts?
21:18<irgeek>It's possible, but there will be a performance hit.
21:18<chuck>iggy: what's more specific than "virtual server"?
21:18<HoopyCat>chuck: i've successfully compiled and experimentally used UML under a linode.
21:18<mwalling>chuck: 21:17 < iggy> like apache virtual hosts?
21:18<chuck>virtual server != virtual host though
21:18<iggy>chuck: like wtf are you actually trying to achieve?
21:18<iggy>I don't know you know the difference
21:18<chuck>iggy: i'm not trying to achieve anything, "just wondering"
21:19<irgeek>chuck: We don't assume people know how to ask their questions the right way around here.
21:19<mwalling>chuck: and how many people actually know that host != server?
21:19<iggy>I'd guess most of the people that come in here and ask that kind of question don't
21:19-!-TheFirst [] has joined #linode
21:19<mwalling>chuck: we get people who ask this: 13:56 < {Shawn}> wuts the apt-get thing for gentoo?
21:19*iggy pukes
21:19<mwalling>assuming the least common denominator is essential
21:20<HoopyCat>eh, i just go for the "you can't run a domU, but UML compiles fine" and see if their eyes glaze over ;-)
21:20<irgeek>The answer to that question, of course, is install Ubuntu or Debian.
21:20<iggy>or gtfu
21:20<HoopyCat>or, for our friend at 13:56, "Google"
21:20<mwalling>irgeek: but dudz, i need my USEFLAGS!
21:21<HoopyCat>!google apt-get thing for gentoo
21:21<@linbot>HoopyCat: Search took 0.25 seconds: apt-get install Net::Pcap: <>; taw's blog: apt-get upgrade: <>; TIP Converting from or to a non- gentoo distro - Gentoo Linux Wiki: <>; FromGentooToKubuntu - Community Ubuntu Documentation: (2 more messages)
21:21<mwalling>HoopyCat: read your /lastlog
21:21-!-Schroeder [] has joined #linode
21:21<HoopyCat>mwalling: was it today? i'll hit the web
21:21<mwalling>iggy: together those sound pretty good... gtfu(p) and gtfo damnit
21:22<iggy>I read that as "fracking stripes"
21:22<mwalling>HoopyCat: yeah, best was just around 1400 local
21:22<iggy>of course I've been drinking and eating a lot of chocolate
21:22*HoopyCat pops popcorn and reads
21:25<path->he got entirely too angry over that
21:25<jkwood>mwalling: <3
21:25<jkwood>fred: <3
21:26<jkwood>linbot: <3
21:26<mwalling>hey, i used to work at the first cracker barrel south of montreal on I87 (where i met my wife)...
21:26<mwalling>i've had to deal with one too many canadians
21:27<HoopyCat>mwalling: 14:09 was classic, and indeed, i <3 that cracker barrel; i met your wife there too
21:27*HoopyCat drunkenly dodges mwalling
21:27<mwalling>HoopyCat: ... i know where you live
21:29-!-irgeek [~irgeek@] has quit [Quit: irgeek]
21:31<iggy>Need a good IT objective for my resume
21:31<iggy>anybody got any good ones?
21:32<mwalling>destroy microsoft?
21:32<jkwood>"IT objective"
21:33<iggy>mwalling: for my resume, not for a general slogan
21:33<mwalling>well... what do you want to do?
21:33<HoopyCat>iggy: "To obtain a position in the field of Information Technology in which I can utilize my skills in Naval Contemplation and Microsoft Server Administration in a dynamic and fast-paced environment."
21:34<iggy>mwalling: as little as possible and make really good money doing it
21:34<iggy>what else
21:34<HoopyCat>tbh, i quit my job, started writing my resume, and got violently ill when i had to come up with an Objective section, so the only app i have in right now is to be an ice cream truck driver
21:34<iggy>"Naval Contemplation"
21:34<HoopyCat>so don't ask me
21:34<mwalling>HoopyCat: i thought you were doing the self-employed gig?
21:34<iggy>HoopyCat: the only ice cream trucks in this area drive through the ghetto
21:35<iggy>I'm too pretty to drive there
21:35<mwalling>yeah, i didnt put an objective on the resume i gave caker and ge
21:35<HoopyCat>mwalling: full-time student, part-time consultant, ice cream truck driver if i get bored next month
21:35<iggy>I wonder how many people don't
21:35<jkwood>I often don't.
21:35<HoopyCat>iggy: how pretty are you? can you send jpg?
21:35<iggy>I've noticed some of the examples I looked at didn't have an objective, it was more of a blurb about themselves
21:36<path->i thought you were a professional resume writer
21:36<jkwood>! people still use jpgs?!?!
21:36<HoopyCat>but yeah, i think Objective is a pretty pointless section, as is the "References: Available Upon Request"
21:37<mwalling>HoopyCat: i thought you had to be a sex offender to get a job for mr dingaling
21:37<HoopyCat>it's assumed that "Objective: To work for you instead of [the hellhole I work for now|eating ramen noodles and begging my [parents|wife] for beer money]" and "References: Available upon request", or you wouldn't be sending a resume
21:37<path->ice cream trucks roll through my neighborhood all 12 months.. i couldn't believe it when i moved here
21:38<path->but apparently someone down the street must give them constant business
21:38<iggy>fatties down the road?
21:39<path->dunno, i just saw their kids
21:39<path->well, the kids weren't fat from what i could tell.. but i am down aways
21:40<HoopyCat>mwalling: probably why i haven't heard back. the hiring manager seemed interested when i turned in my resume, though:
21:41<jkwood>HoopyCat: I know what you mean.
21:42-!-Mojo1978 [] has quit [Remote host closed the connection]
21:43<mwalling>my favorite line when $Wife's playing assassin's creed: "throw a frag nade!"
21:44<HoopyCat>Requirements: BS in Ice Cream Distribution or 6 years related experience, Blue Bunny Certified Solutions Engineer (BBCSE), 2 years experience with Dippin' Dots 2009, Clean driving record, Level 3+ sex offender status
21:52-!-Kassah [] has quit [Quit: Ex-Chat]
21:55<jkwood>Hmm... How do I disallow directory listing in Apache?
21:56<iggy>-Indexes maybe
21:56<Napta>don't include Indexes in your options directive, jkwood (or -Indexes)
21:56<iggy>can't remember offhand
21:56<Napta>-Indexes works
21:56<jkwood>Oh, okay.
21:57<jkwood>Ah, much better. Thanks. :)
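The fix Napta and iggy are describing goes in the relevant <Directory> block of the Apache config; the `-` prefix removes `Indexes` from whatever Options were inherited. The path below is illustrative:

```apache
# e.g. in your vhost or /etc/apache2/sites-available/... (path illustrative)
<Directory /var/www/html>
    # Remove Indexes: directory URLs without an index file now
    # return 403 Forbidden instead of an auto-generated file listing
    Options -Indexes
</Directory>
```

Note that Apache won't let you mix prefixed (`-Indexes`) and unprefixed (`FollowSymLinks`) keywords in one Options line; use `+`/`-` on all of them or none.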
22:00-!-r3z` [] has joined #linode
22:01<chuck>the map on the linode homepage should be updated
22:01<iggy>I mentioned that the other day
22:01<iggy>but nobody listens to me
22:07-!-r3z [] has quit [Ping timeout: 480 seconds]
22:07-!-exor674 is now known as exor|zzz
22:09<mwalling>na, hung wireless connection
22:11<jetlag>just imagine the map looks like this
22:12<mwalling>maybe they should go to *'s instead of shading the states
22:15-!-meff [] has quit [Quit: Into the dark I fade..]
22:26-!-NetNuttt [] has quit [Remote host closed the connection]
22:36<jetlag>can someone ping
22:36<Napta>64 bytes from ( icmp_seq=1 ttl=63 time=0.301 ms
22:37-!-Schroeder [] has quit [Ping timeout: 480 seconds]
22:37<jkwood>--- ping statistics ---
22:37<jkwood>10 packets transmitted, 10 received, 0% packet loss, time 9000ms
22:37<jkwood>rtt min/avg/max/mdev = 38.827/38.902/38.994/0.204 ms
22:37<jkwood>Solid 38.8/38.9ms ping time.
22:38<jetlag>Seems Verizon has crapped itself again.
22:41<jackc->vzn has been having transit issues of late
22:43-!-Iahova [] has joined #linode
22:43*jkwood raises an eyebrow
22:46<mwalling>verizon *ALWAYS* has issues
22:46*mwalling <-- pissed off subscriber who sold his soul to them for another 11 months
22:47<jkwood>But they have FiOS, and other neat acronyms!
22:48<encode>the last mile doesn't help much if the packets can't get there
22:49<guinea-pig>thank god verizon can't reach my soul
22:49<path->they didn't setup reverse dns for some ip's i get once in awhile
22:49<path->that's been the most annoying thing so far
22:50<path->but i don't feel like paying comcast twice as much, when i'm not a big downloader.. dsl is fine for my needs
22:50<path->that comma seems misplaced
22:51<guinea-pig>seems verizon people on other nets/channels are having connectivity issues tonight.
22:51<path->regional maybe?
22:52-!-lakin [] has joined #linode
22:58-!-mkelly32 [] has joined #linode
22:58<mkelly32>hi, i'm suddenly being told my ssh connections to both my server's sshd and the lish sshd are being refused... why might that be?
22:59<mkelly32>it's being refused at a higher level than my server's setup -- it doesn't even show up in the logs
22:59<mkelly32>(i'm only able to get in with the ajax ssh client on the website)
22:59<mkelly32>no firewall stuff changed between an hr ago and now
22:59<bd_>mkelly32: try running a tcptraceroute to the ssh port. Note that some distros install a symlink to regular traceroute by default - install tcptraceroute explicitly, then do tcptraceroute yourserver 22
22:59<mkelly32>and i can ssh out to other places
22:59<guinea-pig>might be network related between you and the host
23:00<mkelly32>bd_: don't have a linux box up locally
23:00<bd_>mkelly32: Do you have a NAT device? (often billed as a home router or something)
23:00<bd_>if so try rebooting it - I have a friend who had a similar problem with specific hosts being inaccessible over ssh
23:00<mkelly32>i have a nat device, but it hasn't caused any other ssh problems
23:00<mkelly32>i can ssh to other places right now
23:00<bd_>and rebooting it fixed it for him
23:00<path->try lish ssh to port 443 (i think)
23:00<mkelly32>just not linode
23:00<bd_>mkelly32: yes, he also could ssh to some places but not others
23:00<mkelly32>eh, i'll give it a try...
23:00<path->i think 443 is an alternative port
23:01<path->just to see if there is a block
23:01<mkelly32>that's the standard https port
23:01<guinea-pig>again, also try a traceroute to see if there's a network issue
23:01<bd_>mkelly32: yes, the hosts run ssh on it instead
23:01-!-lakin [] has quit [Quit: Ex-Chat]
23:01<mkelly32>i can get to the hosts via http
23:01<mkelly32>so it isn't a routing issue either
23:01<guinea-pig>you didn't say that. that's valuable information
23:01<mkelly32>eh, wait, now it works
23:02<mkelly32>hmm, maybe it was a weird routing hiccup then
23:02<guinea-pig>it happens
23:02<mkelly32>oh well, can't do anything now
23:02<mkelly32>sorry to waste time
23:02<mkelly32>if it comes back i'll get better diagnostics first
23:02<jkwood>Time is not wasted if it's used.
23:03<bd_>mkelly32: there are windows binaries of tcptraceroute I think, btw
23:03<mkelly32>they have a regular traceroute
23:03<mwalling>traceroute != tcptraceroute
23:03<guinea-pig>does that use UDP or TCP?
23:04<guinea-pig>or ICMP ECHO?
23:04<guinea-pig>i'm so forgetful
23:04<mkelly32>i think
23:04<bd_>tracetcp uses TCP SYN
23:04<bd_>standard tracert uses ICMP ECHO
23:04<Peng>I want an IMAP traceroute. :)
23:04<guinea-pig>some things tend to block ICMP stuffs
23:04<Peng>Is tcptraceroute different from traceroute's TCP option?
23:04<bd_>guinea-pig: well moreover when you only have one port blocked tcptraceroute is very useful
23:04<guinea-pig>traceroutes might not get through when a tcptraceroute would
23:05<bd_>Peng: probably not
23:05<bd_>I wasn't aware normal traceroute had a TCP option oO
23:05<jkwood>man traceroute
23:05<Peng>Whatever traceroute I have has lots of options.
23:06<Peng>I have less success with TCP than ICMP.
23:06<guinea-pig>i just use mtr
23:06*guinea-pig shrugs
23:06<Peng>I don't have much success with mtr either.
23:06<bd_>Peng: your ISP might block ICMP then :/
23:06<mkelly32>mtr is pretty nice
23:06<jkwood>Hmm... I don't see a TCP option there.
23:06<Peng>bd_: Regular traceroute works fine.
23:06<mkelly32>and windows' client doesn't seem to have a tcp option
23:07<bd_>and mtr doesn't? o_O
23:07<guinea-pig>apparently not. huh
23:07<Peng>jkwood: -T?
23:07<Peng>bd_: mtr shows really high packet loss.
23:07<bd_>Peng: hmm, icmp rate limiting?
23:08<Peng>I just tried an mtr from my Linode to my IP. It had packet loss too. I guess Level 3 rate-limits ICMP, or at least gives it a really low priority.
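All the traceroute variants being compared above share one mechanism: send probes with increasing TTLs and collect the ICMP Time Exceeded replies, with only the probe's payload (UDP datagram, ICMP ECHO, or TCP SYN) differing. A toy model of the TTL side; the hop list is made up:

```python
# Hypothetical 4-hop path; each entry is the router that answers
# when a probe's TTL hits zero at that hop.
PATH = ["gateway", "isp-edge", "level3-core", "destination"]

def probe(ttl):
    """Simulate one probe: each router decrements TTL, and whichever
    router sees it reach zero sends back a reply identifying itself."""
    if ttl <= len(PATH):
        return PATH[ttl - 1]   # ICMP Time Exceeded (or the final host's reply)
    return None                # probe ran past the destination unanswered

def traceroute(max_ttl=30):
    hops = []
    for ttl in range(1, max_ttl + 1):
        hop = probe(ttl)
        if hop is None:
            break
        hops.append(hop)
        if hop == PATH[-1]:    # reached the target, stop probing
            break
    return hops

print(traceroute())
```

Real routers can rate-limit or drop those Time Exceeded replies (as Peng is seeing), which shows up as loss at a middle hop even when end-to-end traffic is fine.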
23:09<guinea-pig>nifty rdns
23:10<guinea-pig>18. rbflwnpk12
23:10<guinea-pig>19. ???
23:10<Peng>Haha, yeah.
23:10<jkwood>20: Profit!!!
23:10<guinea-pig>what the heck is rbflwnpk12? :P
23:10<Peng>It's one of the first hops.
23:11<Peng>Oddly, I don't see it right now.
23:11<Bdragon>!cyborg rbflwmpk12
23:11<@linbot>Bdragon: R.B.F.L.W.M.P.K.: Robotic Biomechanical Facsimile Limited to Worldwide Mathematics and Potential Killing
23:11<guinea-pig>yeah, but why am i seeing it on a traceroute to you from the outside world?
23:11<Peng>Usually I do see it, or something similar.
23:11<bd_>It kills potential?
23:11<bd_>Giving lobotomies to promising youths, then?
23:11<Peng>Where are you tracerouting from?
23:12<Peng>"fl" could be Florida, "wmpk" Winter Park or something.
23:12<Peng>Not sure where the M would come from. Shrug.
23:12<guinea-pig>from my linode, actually. fremont
23:13<Peng>This is very weird. I don't see that weird one any more.
23:13<guinea-pig>i don't see it every time.
23:13<guinea-pig>it goes through another hop 50%
23:13<guinea-pig>that has a more normal rDNS
23:13<Peng>Which is what?
23:13<guinea-pig>it just amuses me that your ISP has something broken like that, hehe
23:14<Peng>I'm getting that one 100% of the time right now.
23:14<guinea-pig>oh wait
23:14<guinea-pig>that's the same thing
23:14<Peng>That's like 2 hops from me, including my modem or router (forget which).
23:14<guinea-pig>] host
23:14<guinea-pig>Name: rbflwnpk12
23:15<Peng>You can traceroute me successfully? It doesn't get blocked by something?
23:15<guinea-pig>no, that's my last hop. it's dropped after that
23:16<Peng>I got a new modem recently, with an even suckier web interface. I'm not sure what I did with its firewall.
23:18<Peng>Can you ping me?
23:18*jkwood pings Peng
23:19<Peng>Wait, I have a Linode. I can ping myself.
23:19<Peng>I mean, I can try to. It fails.
23:19*jkwood pings Pengs linode
23:20<Peng>Also, I do get rbflwnpk12 in a traceroute.
23:20*jkwood rbflwnpk12s Peng's traceroute
23:21<Peng>Wait, that is an N. Definitely "Florida, Winter Park", then. (It's a nearby town/city/thing, but not *very* near.)
23:21<Peng>Dunno about the rb.
23:22<Peng>"router" something?
23:22*caker used to live in Winter Park, FL
23:22<guinea-pig>can't be much of a WINTER park if it's in florida
23:22*jkwood used to winter in Caker Park, FL
23:22<guinea-pig>how much snow you get?
23:23<Peng>Interesting, if I ping from my Linode, I get a decent amount of packet loss.
23:23<Peng>Ultimately, it's probably my modem's firewall..
23:23<Peng>caker: Oh, nice.
23:23<Peng>caker: When? I bet houses are 20 times more expensive now.
23:23<@caker>Peng: um ... 1994-1998 ?
23:24<Peng>I just pinged it again. No loss. Never mind. :\
23:24<guinea-pig>i see 0% packet loss to rbflwnpk, 100% loss beyond it. make your own judgement about your modem
23:24<Peng>guinea-pig: Not only is it not winter, it's a city, not a park.
23:25<Peng>Central Park is nice though.
23:25<guinea-pig>Peng: got messed up rDNS too
23:25<Peng>They'd probably be insulted if I called them a "town".
23:25<mwalling>i've got a folder of maybe 100-150 pictures of aerial distribution structures from winter park in my desk
23:25<guinea-pig>16. crflwnpk01
23:26<guinea-pig>enough of this
23:26*guinea-pig goes off to play with packaging again
23:26<Peng>Also, I should catch the end of the basketball game. See you in five minutes. :)
23:31-!-Iahova [] has quit [Quit: This computer has gone to sleep]
23:44-!-purrdeta_ [] has joined #linode
23:46-!-purrdeta [] has quit [Ping timeout: 480 seconds]
23:51<Peng>The Celtics went from a 24-point lead to 2 at the end of the game.
23:51<Peng>They won with a 4 or 6-point lead though.
23:59-!-VS_ChanLog [] has left #linode [Rotating Logs]
23:59-!-VS_ChanLog [] has joined #linode
---Logclosed Mon Jun 09 00:00:33 2008