#uml IRC Logs for 2007-04-22

---Logopened Sun Apr 22 00:00:38 2007
02:21|-|kos_tom [~thomas@humanoidz.org] has joined #uml
02:58|-|shze [~shze@c-69-245-52-216.hsd1.tn.comcast.net] has joined #uml
04:08|-|polyonymous [~hacker@pd953913f.dip0.t-ipconnect.de] has quit [Ping timeout: 480 seconds]
04:20|-|polyonymous [~hacker@pD9538F72.dip0.t-ipconnect.de] has joined #uml
05:01|-|pat_ [~pat@pd9e64e51.dip.t-dialin.net] has joined #uml
05:02<pat_>hmm hello .. just something weird happened the third time now .. the uml just froze
05:02<pat_>while editing a simple file
05:02|-|richardw [~richardw@M363P011.adsl.highway.telekom.at] has joined #uml
05:02<pat_>even network connectivity stopped
05:03<pat_>i.e. apache times out
05:03<pat_>anyone ever seen something similar?
05:09|-|shze [~shze@c-69-245-52-216.hsd1.tn.comcast.net] has quit [Quit: Konversation terminated!]
05:13|-|richardw [~richardw@M363P011.adsl.highway.telekom.at] has quit [Quit: Leaving]
05:15|-|pat_ [~pat@pd9e64e51.dip.t-dialin.net] has quit [Quit: leaving]
08:24|-|aroscha [~aroscha@193.238.159.249] has joined #uml
08:24|-|aroscha_ [~aroscha@193.238.159.249] has joined #uml
08:24|-|aroscha [~aroscha@193.238.159.249] has quit [Read error: Connection reset by peer]
09:20|-|aroscha_ [~aroscha@193.238.159.249] has quit [Quit: aroscha_]
09:24|-|aroscha [~aroscha@193.238.159.249] has joined #uml
11:39|-|richardw [~richardw@M392P030.adsl.highway.telekom.at] has joined #uml
12:19|-|moyix [~moyix@static-72-93-243-4.bstnma.fios.verizon.net] has joined #uml
12:46|-|the_hydra [~a_mulyadi@125.164.99.136] has joined #uml
12:46<the_hydra>hi all
13:07|-|ant384 [~zen@86.107.202.160] has joined #uml
13:11<ant384>Heyas.
13:11<the_hydra>yo ant384
13:14<ant384>I see in the HOWTO that UML is used "in production systems". Is there any list of such users of UML ? I'd like to explore UML for a virtual hosting role, and I don't know where to start comparing it with other virtualization solutions.
13:15<the_hydra>check linode.com
13:15<the_hydra>IIRC they're the biggest VPS provider using UML..
13:18[~]ant384 grins
13:18<ant384>Thank you for the link, I read some.
13:18<the_hydra>??
13:33<ant384>I'm not sure I understand how UML + children can be controlled in terms of consumed CPU. I mean is nice/renice the only means of controlling a UML ? I admit I didn't rtfm thoroughly, should I ?
13:33<the_hydra>controlling CPU? not really sure, but AFAIK UML can't control that
13:33<the_hydra>maybe via another patch/tool such as CKRM
13:42|-|aTypical [~aTypical@24-205-131-112.dhcp.mtpk.ca.charter.com] has joined #uml
13:44<aTypical>Hello, all. I am looking for a site to get pre-compiled root_fs. I was looking for opensuse if anyone has it.
13:50|-|richardw [~richardw@M392P030.adsl.highway.telekom.at] has quit [Quit: Leaving]
13:55|-|the_hydra [~a_mulyadi@125.164.99.136] has quit [Ping timeout: 480 seconds]
13:56<aroscha>anybody here from linode.com ?
14:15<aroscha>another question: say I have n processes in an instance. Who schedules between these? The host, or the UML kernel?
14:30<ant384>Well, the UML kernel must have its own scheduler, otherwise it isn't much of a kernel.
14:31<ant384>So all children of the UML should be scheduled by it, while the UML kernel is scheduled by the host's scheduler.
14:31<ant384>Then again, I'm quite new to UML, so don't take my word.
14:31<aroscha>but i see the sub instance processes in the host kernel
14:32<ant384>Well, they're just children of the UML process
14:32<aroscha>so i could imagine that the instance passes the scheduling decision on or so
14:32<ant384>Should render correctly in a pstree
14:32<aroscha>true
14:32|-|aTypical [~aTypical@24-205-131-112.dhcp.mtpk.ca.charter.com] has left #uml [Leaving]
14:42<caker><-- linode.com guy
14:42<caker>also mikegrb and tasaro :)
14:43<aroscha>ah
14:43<aroscha>caker: what is the maximum number of instances that you managed to run per server?
14:44<caker>aroscha: ~40. We've had more (maybe +10) but 40 on our recent big servers (16G ram)
14:44<caker>that number isn't set because of a bottleneck in UML, just the resources we allocate to the UMLs themselves (we don't oversell the machine's RAM)
14:46<ant384>I'll take advantage of your presence as well.. I was asking earlier about CPU partitioning. Is there no problem for all instances to simply use all the processing power they want ? (nice aside)
14:48<caker>ant384: yup
14:48<caker>disk IO is another story
14:48<caker>even with CFQ
14:48[~]ant384 nods
14:53<aroscha>caker: well what I can see from running 200 instances is that they seem all to get their scheduling slice
14:54<aroscha>but starting a new process is reallllly slow
15:00<caker>I guess I'm not surprised... doing the normal performance setup stuff for UML (skas + tmpfs)?
15:17|-|baroni [~baroni@c906a072.virtua.com.br] has joined #uml
15:51<aroscha>yes, well actually I meant starting any other process, such as "ls" or so :)
15:52<aroscha>on the host
15:59|-|aroscha_ [~aroscha@77.117.97.184] has joined #uml
15:59<aroscha_>re
16:00|-|aroscha [~aroscha@193.238.159.249] has quit [Ping timeout: 480 seconds]
16:10|-|aroscha [~aroscha@77.117.84.212] has joined #uml
16:15|-|aroscha_ [~aroscha@77.117.97.184] has quit [Ping timeout: 480 seconds]
16:22|-|aroscha [~aroscha@77.117.84.212] has quit [Ping timeout: 480 seconds]
16:32|-|aroscha [~aroscha@193.238.159.249] has joined #uml
16:46|-|aroscha [~aroscha@193.238.159.249] has quit [Ping timeout: 480 seconds]
16:46|-|aroscha [~aroscha@77.118.223.183] has joined #uml
16:54|-|albertito [~net@host110.201-252-57.telecom.net.ar] has quit [Server closed connection]
16:55|-|albertito [~net@host110.201-252-57.telecom.net.ar] has joined #uml
---Logclosed Sun Apr 22 17:04:30 2007
---Logopened Sun Apr 22 17:04:33 2007
17:04|-|mikegrb [~michael@mail.thegrebs.com] has joined #uml
17:04|-|Channel #uml users: Total: 33 |-| +op [0] |-| +voice [0] |-| regular [33]
17:05|-|Channel #uml synchronized in 71 seconds
17:08|-|aroscha_ [~aroscha@193.238.159.249] has joined #uml
17:14|-|aroscha [~aroscha@77.118.223.183] has quit [Ping timeout: 480 seconds]
17:16|-|aroscha_ [~aroscha@193.238.159.249] has left #uml []
17:17|-|tchan [~tchan@c-24-13-84-219.hsd1.il.comcast.net] has quit [Server closed connection]
17:18|-|tchan [~tchan@c-24-13-84-219.hsd1.il.comcast.net] has joined #uml
17:27|-|richardw [~richardw@M390P022.adsl.highway.telekom.at] has joined #uml
17:47|-|aTypical [~aTypical@24-205-131-112.dhcp.mtpk.ca.charter.com] has joined #uml
17:48<aTypical>Hi, all. I was in earlier asking for an openSUSE root_fs. I never found one so I thought I'd check in here again. Does anyone know of one?
17:51<caker>maybe jailtime.org?
17:51[~]caker shrugs
17:54<aTypical>Thanks, caker. I'll check.
17:54<aTypical>That appears to be a Xen site.
17:56<caker>doesn't matter
17:56<caker>they'll work the same -- just create a few /dev/ubd nodes and you're set
17:57<aTypical>Oh. Nice. Thanks again.
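(A sketch of what caker describes here, assuming the UML ubd driver uses block major 98 with 16 minors per disk; root_fs and /mnt/rootfs are placeholder names:)

    # loop-mount the downloaded image and create UML block device nodes inside it
    mount -o loop root_fs /mnt/rootfs
    mknod /mnt/rootfs/dev/ubda  b 98 0
    mknod /mnt/rootfs/dev/ubda1 b 98 1
    mknod /mnt/rootfs/dev/ubdb  b 98 16
    umount /mnt/rootfs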
18:03|-|kos_tom [~thomas@humanoidz.org] has quit [Quit: I like core dumps]
18:03|-|aTypical [~aTypical@24-205-131-112.dhcp.mtpk.ca.charter.com] has left #uml [Leaving]
18:08|-|richardw [~richardw@M390P022.adsl.highway.telekom.at] has quit [Quit: Leaving]
18:36<albertito>is it possible to build an x86-uml inside an x86-64? (without using a chroot or something like that)
18:40<albertito>wow, ARCH=uml SUBARCH=i386 seems to work! (at least it's compiling)
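(For reference, a sketch of the cross-build albertito is doing; the UML architecture is normally selected with ARCH=um, and SUBARCH=i386 requests the 32-bit subarch -- the tree path and config target below are just the usual defaults:)

    cd linux-2.6.x          # your kernel tree
    make ARCH=um SUBARCH=i386 defconfig
    make ARCH=um SUBARCH=i386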
19:54|-|ada1 [~adam@CPE0050bacc0b3e-CM0f2049992093.cpe.net.cable.rogers.com] has joined #uml
20:08|-|aroscha [~aroscha@chello213047053193.30.11.tuwien.teleweb.at] has joined #uml
20:08<aroscha>hi
20:09<aroscha>q: i somehow don't find the output of the guest kernel - when it boots it should (according to the docs) tell me which /dev/pts/X I should redirect my screen session to, when I specified con=pts
20:09<aroscha>but where is that?
20:10<aroscha>so far I have only been guessing which /dev/pts/X I should connect to with screen
20:10<caker>I never use that, and just boot it in a screen session, with con0=fd:0,fd:1
20:10<aroscha>ah ok
20:11<aroscha>so you basically write "screen -S $umidname ./linux " ?
20:11<caker>also, you'll need to start a getty on tty0 in your uml's /etc/inittab
20:11<aroscha>ok
20:11<caker>yup
20:11<aroscha>i don't have that yet since I created my own mini mini busybox system
20:11<caker>man screen -- lotsa options for detaching right away, etc
20:11<aroscha>yes I know. screen rocks
20:12<aroscha>my favorite: screen -x :)
20:12<aroscha>so cool for working together on one text file hehe
20:12<aroscha>caker: thanks
20:12<caker>screen indeed rules
20:12<caker>np
20:29<aroscha>screen -d -m -S $UMIDNAME ./linux ... params....
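(Putting caker's suggestions together -- a sketch, with $UMIDNAME, root_fs and the memory size as placeholder values:)

    # boot the UML in a detached screen session, console 0 attached to stdin/stdout
    screen -d -m -S $UMIDNAME ./linux umid=$UMIDNAME ubd0=root_fs mem=64M con0=fd:0,fd:1
    # and inside the guest's /etc/inittab, a getty on tty0 so that console gets a login:
    #   c0:2345:respawn:/sbin/getty 38400 tty0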
20:33<aroscha>hmm.... I am wondering if i should recompile the host's kernel to enable the kernel preemption patch
20:33<aroscha>this way each uml instance can potentially preempt the host kernel in case it needs something.
20:33<aroscha>makes sense?
20:33<aroscha>higher response rate in the uml guest ?
20:41|-|cmantito [~gphreak@c-76-98-50-187.hsd1.nj.comcast.net] has quit [Quit: I aer quitted.]
20:47<caker>possibly. I've always left it off, but that was way back when it was buggy
20:55<aroscha>well the preemption patch has been stable for some years now as far as I remember
20:56<aroscha>linux really is a "patchwork family" hehe
20:56<caker>I say go for it -- I'm curious to know if it helps
20:57<caker>Hmm, what about the scheduler time slice settings (or jiffies or whatever it's called) -- the 100, 250, 1000 option?
20:57<caker>maybe crank that down to 100
21:04<aroscha>yes
21:04<aroscha>caker: i will try that. I will report about it.
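(The host kernel options being discussed, as a .config sketch -- exact symbol names can vary a little between kernel versions:)

    CONFIG_PREEMPT=y      # full kernel preemption
    CONFIG_HZ_100=y       # the "100" timer frequency caker suggests
    CONFIG_HZ=100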
21:05<aroscha>Currently I am still trying to find good system status monitoring and visualization software.
21:05<aroscha>so far I use munin but it is a bit slow
21:05<aroscha>so for some real numbers I would need something that measures response time
21:07<caker>sysstat?
21:09<aroscha>hm... i guess i would have to connect that to rrd or gnuplot or so?
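(One way to wire that up -- a sketch using sysstat's sar/sadf; file names and intervals are arbitrary, and sadf options differ between sysstat versions:)

    # sample CPU usage every 10 seconds for an hour into a binary log
    sar -o /tmp/sysstat.log 10 360 >/dev/null 2>&1 &
    # later, dump it in a plottable form for gnuplot or an rrd importer
    sadf -d /tmp/sysstat.log -- -u > cpu.dat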
21:11<aroscha>well, anyway.... 500 instances running now, each of them blasting out routing daemon info to a common br device. Works!
21:11<caker>sick ...
21:11<aroscha>it is indeed interesting that I spent 1000 euros just for RAM but UML does not even need it
21:11<aroscha>rather CPU power and interrupts
21:12<aroscha>irq rate is quite high
21:12<caker>What's the purpose of all of your experimentation, anyway?
21:12<aroscha>improve the cpu load of the olsr.org routing daemon
21:13<aroscha>so there actually is a real meaning behind it :)
21:13<caker>Yeah, UML chews through context switches, too
21:13<aroscha>and not only cpu load but also it is best to test for race conditions in a protocol when you really deploy it somehow
21:14<aroscha>you mean it does not like context switches?
21:24|-|baroni [~baroni@c906a072.virtua.com.br] has quit [Ping timeout: 480 seconds]
21:29<aroscha>caker: hihi! look: http://texas.funkfeuer.at/topo/ each line represents a "connection" to another instance
21:30<caker>aroscha: unreal
21:31<caker>I think each time UML changes from kernel mode <-> user space it generates a few CS
21:31<caker>so each syscall means a few CS, then back to userspace a few more, etc
21:32<aroscha>ouch
21:32<caker>it's one of UML's largest areas of overhead
21:32<caker>supposedly, KVM+UML will eliminate that
21:32<caker>along with the syscall emulation overhead, itself :)
21:32<aroscha>can that be improved? Hm... seems like I have to get reading the source
21:33<aroscha>yeah that would be great
21:33<caker>but, Jeff will need to give you the true answers on that ..
21:33<aroscha>well, in case you need a testbed ... you know where to ask now ;-)
21:33<caker>hah
21:33<aroscha>ok, 1000 now
21:33<caker>so, one large bridge with 200+ tuntaps?
21:34<aroscha>yes, at the moment this is one bridge + 500 tuntaps
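(The kind of host-side setup being described -- a sketch with tunctl and brctl; the bridge name, owner and count are examples:)

    brctl addbr uml-br0
    ip link set uml-br0 up
    for i in $(seq 0 499); do
        tunctl -t tap$i -u uml-user    # persistent tap owned by the user running the UMLs
        ip link set tap$i up
        brctl addif uml-br0 tap$i      # attach it to the common bridge
    done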
21:34<caker>doing any ebtable/iptable filtering in there?
21:34<aroscha>not yet
21:34<caker>I've been curious as to ebtable's overhead
21:34<aroscha>of course for network simulation i will use tc netem later
21:34<aroscha>ah, i will notice it i guess
21:34<aroscha>basically I want to stay as much in kernel space as possible
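(A sketch of the tc netem step aroscha mentions for later -- the delay and loss figures are made up:)

    # emulate a slow, lossy link on one guest's tap device
    tc qdisc add dev tap0 root netem delay 50ms 10ms loss 1%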
21:35<caker>For instance: http downloads of large files directly to/from the host, to a UML instance running on the same host provide less than stellar results
21:35<caker>That's with ebtable rules. Haven't tried without yet (just "discovered" this a few days ago)
21:36<aroscha>hm...
21:36<aroscha>but the normal iptables and iproute2 stuff should work also on the tapX devices, right?
21:37<caker>It does, but not on layer2, which is critical for making sure UML instances don't ARP reply for, say, the gateway address (which would be bad)
21:37<caker><-- learned from experience
21:37<caker>that's where ebtables comes in :)
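(The sort of layer-2 rule caker is alluding to -- a sketch only; $GATEWAY_IP and tap0 are placeholders and a real policy would be stricter:)

    # drop ARP replies claiming the gateway's IP if they arrive from a guest's tap
    ebtables -A FORWARD -i tap0 -p ARP --arp-opcode Reply --arp-ip-src $GATEWAY_IP -j DROP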
21:37<aroscha>ahh
21:37<aroscha>hehe
21:37<aroscha>somebody was spoofing the gw? ;-)
21:38<caker>I had layer3 filtered already (also using ebtables) and they switched gateway and IP in their config file .. it was innocent, but yeah
21:39<caker>Also, I read recently there's a way to stop packets from traversing up into iptables after they've already been through ebtables .. and that avoids double buffering or something, but haven't tried it yet
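(The knob caker seems to mean is probably the bridge-netfilter sysctls -- a sketch, untried here just as in the conversation:)

    # stop bridged frames from also being fed through iptables/ip6tables/arptables
    sysctl -w net.bridge.bridge-nf-call-iptables=0
    sysctl -w net.bridge.bridge-nf-call-ip6tables=0
    sysctl -w net.bridge.bridge-nf-call-arptables=0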
21:45<aroscha>i see
21:45<aroscha>I guess that is more important for hosting
21:46<aroscha>keeping people in their place
21:46<caker>yeah.. who knows what they're up to
21:46<aroscha>hmmm. ... i just wonder why some instances crash for me: BUG: soft lockup detected on CPU#0!
21:46<aroscha>caker: especially if you don't know them hehe
21:46<caker>Do they actually crash after that, or just BUG out?
21:46<aroscha>Call Trace:
21:46<aroscha>60a7fbc8: [<60028e6c>] wake_up_process+0x10/0x12
21:46<aroscha>60a7fbd8: [<6004a2cd>] softlockup_tick+0x8f/0x93
21:46<aroscha>60a7fbf8: [<60035de5>] run_local_timers+0x13/0x15
21:46<aroscha>60a7fc08: [<60035c10>] update_process_times+0x49/0x73
21:46<aroscha>60a7fc38: [<600138dc>] timer_handler+0x36/0x5a
21:46<aroscha>60a7fc68: [<60025be3>] sig_handler_common_skas+0xf3/0x10c
21:46<aroscha>60a7fca8: [<60022609>] real_alarm_handler+0x37/0x3b
21:47<aroscha>60a7fcc8: [<60022660>] alarm_handler+0x53/0x63
21:47<aroscha>60a7fcf8: [<60024719>] hard_handler+0x15/0x18
21:47<aroscha>60a7fe18: [<60015b75>] buffer_op+0x2e/0x5f
21:47<aroscha>60a7fef8: [<60015619>] handle_syscall+0x79/0x80
21:47<aroscha>60a7ff28: [<6002459b>] move_registers+0x3f/0x59
21:47<aroscha>60a7ff68: [<6002544c>] userspace+0x9f/0x181
21:47<aroscha>60a7ffc8: [<6001532e>] fork_handler+0x86/0x8d
21:47<aroscha>so i guess that means they really crash
21:47<caker>You can get that by simply pausing the UML for a second or two, then unpausing .. that's why I never enable the DETECT SOFTLOCKUP thing
21:47<aroscha>as a kernel option?
21:47<caker>yes
21:47<caker>Also, the UML should continue to run after that
21:47<aroscha>so I should simply recompile my uml kernel without that?
21:48<caker>it just means it didn't get CPU time within the "detect softlockup" threshold
21:48<caker>Correct
21:48<aroscha>i see
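(The guest kernel option in question, as a .config sketch:)

    # in the UML guest's config, under "Kernel hacking":
    # CONFIG_DETECT_SOFTLOCKUP is not set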
22:09|-|ada1 [~adam@CPE0050bacc0b3e-CM0f2049992093.cpe.net.cable.rogers.com] has left #uml []
22:13|-|amitg [~gud@n119.user.cis.ksu.edu] has joined #uml
22:13<amitg>how can I get more info when the kernel panics within UML?
22:21<amitg>It says it receives a SIGSEGV in sig_handler_common_skas, which I don't think is the real cause
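(One common way to dig further, assuming a skas-mode UML: run the UML binary under gdb and have it ignore the signals UML uses internally; the boot arguments are placeholders:)

    gdb ./linux
    (gdb) handle SIGSEGV pass nostop noprint
    (gdb) handle SIGUSR1 pass nostop noprint
    (gdb) run ubd0=root_fs mem=128M umid=debug1
    # when the panic hits, 'bt' should give a usable backtrace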
22:48|-|shze [~shze@c-69-245-52-216.hsd1.tn.comcast.net] has joined #uml
22:59|-|VS_ChanLog [~stats@ns.theshore.net] has left #uml [Rotating Logs]
22:59|-|VS_ChanLog [~stats@ns.theshore.net] has joined #uml
23:12|-|peterz [~peterz@i55087.upc-i.chello.nl] has quit [Server closed connection]
23:13|-|peterz [~peterz@i55087.upc-i.chello.nl] has joined #uml
23:19|-|mheinzes [~shze@c-69-245-52-216.hsd1.tn.comcast.net] has joined #uml
23:23|-|shze [~shze@c-69-245-52-216.hsd1.tn.comcast.net] has quit [Ping timeout: 480 seconds]
23:26|-|mheinzes changed nick to shze
---Logclosed Mon Apr 23 00:00:43 2007