#uml IRC Logs for 2007-04-16

---Logopened Mon Apr 16 00:00:29 2007
00:31|-|besonen_mobile [] has quit [Ping timeout: 480 seconds]
00:32|-|besonen_mobile [] has joined #uml
01:39|-|motp [~motp@] has joined #uml
02:35|-|Ancalagon [] has joined #uml
04:00|-|ram [] has joined #uml
04:05|-|motp [~motp@] has quit [Quit: Leaving]
05:20|-|polyonymous [] has quit [Ping timeout: 480 seconds]
05:26|-|aroscha [] has joined #uml
05:36|-|polyonymous [] has joined #uml
06:31|-|dgraves [] has quit [Ping timeout: 480 seconds]
06:38|-|polyonymous [] has quit [Ping timeout: 480 seconds]
06:54|-|polyonymous [] has joined #uml
06:59|-|baroni [] has quit [Remote host closed the connection]
07:15|-|motp [~motp@] has joined #uml
07:19|-|krau [~cktakahas@] has joined #uml
07:20|-|Ancalagon [] has quit [Ping timeout: 480 seconds]
07:21|-|Ancalagon [] has joined #uml
07:44|-|aroscha [] has quit [Quit: aroscha]
07:48|-|motp [~motp@] has quit [Quit: Leaving]
07:52|-|motp [~motp@] has joined #uml
08:14|-|Ancalagon [] has quit [Quit: ChatZilla [Firefox]]
08:25|-|motp [~motp@] has quit [Quit: Leaving]
09:00|-|the_hydra [~a_mulyadi@] has joined #uml
09:01<the_hydra>hi all
09:04<the_hydra>kokoko1: hi, pardon me, do you know any open source tool to generate a predefined CPU load?
09:05<pat__>hmm predefined = what .. like 65%?
09:06<the_hydra>pat__: yup
09:07<pat__>doubt that is possible
09:07<kokoko1>same here no idea
09:08<pat__>it possibly could
09:08<pat__>what do you need that for?
09:08|-|hojuruku [] has joined #uml
09:09<hojuruku>is there support for linux caps in UML?
09:09<the_hydra>pat__, kokoko1: i just tried to add a background load..
09:09<the_hydra>pat__, kokoko1: in order to test scheduling latency
09:09<pat__>ahh well
09:09<the_hydra>hojuruku: by caps, you mean capability?
09:09<pat__>then you may go for 'stress'
09:10<the_hydra>pat__: i've seen that tool at a glance, but I see no option to throttle the CPU load...
09:10<pat__>well you can play with its settings
09:10<the_hydra>pat__: or maybe, it is technically hard to achieve?
09:10<the_hydra>pat__: trying to...
09:10<pat__>well it kinda is
09:11<pat__>the problem is .. the process needs some sort of control structure that checks cpu load regularly
09:11<pat__>and reacts on it
09:11<pat__>would be interesting to write something like that .. :)
09:12<the_hydra>pat__: i also thought about that.... and the fact that the scheduler could add a twist by readjusting dynamic priorities .. our cpu hogger might not be hogging all the time like we plan.. CMIIW
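The control structure pat__ sketches above can be approximated without measuring load at all: alternate busy-spinning and sleeping inside a fixed period so the average utilization matches a target. A minimal sketch, not the `stress` tool itself; the period, the function name, and the numbers are illustrative, it assumes GNU `date` (for `%N`) and a `sleep` that accepts fractions, and, as the_hydra notes, the scheduler may still smear the load around:

```shell
# Hypothetical duty-cycle load generator: busy-spin for target% of each
# 100 ms period, then sleep for the rest of it.
gen_load() {
    target=$1 cycles=$2
    period_us=100000
    busy_us=$((period_us * target / 100))
    idle_s=$(awk "BEGIN { print ($period_us - $busy_us) / 1000000 }")
    i=0
    while [ "$i" -lt "$cycles" ]; do
        start=$(date +%s%N)
        # spin until the busy fraction of the period has elapsed
        while [ $(( ($(date +%s%N) - start) / 1000 )) -lt "$busy_us" ]; do :; done
        sleep "$idle_s"
        i=$((i + 1))
    done
    echo "approximated ${target}% load over ${cycles} periods"
}

gen_load 65 3    # roughly 65% CPU for about 0.3 s
```

The accuracy depends on how finely the scheduler timeslices the spin loop, which is exactly the twist the_hydra worries about.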
09:19|-|the_hydra [~a_mulyadi@] has left #uml []
09:20<hojuruku>hydra: yep
09:20|-|the_hydra [~mulyadi@] has joined #uml
09:20<hojuruku>caps and MMX, i just want to know so I use the right compile time options. I can't see anything about it on the UML website
09:21<the_hydra>sorry was offline
09:22<the_hydra>MMX? the MMX instruction set? yes you can use it as long as your host has them
09:23<the_hydra>pardon me, what caps do you refer to?
09:23<hojuruku>linux default capabilities
09:24<hojuruku>the caps use flag in gentoo
09:24<hojuruku>i heard some vserver implementations have problems with it
09:24[~]hojuruku is a caker customer
09:28<the_hydra>never checked it, but I think UML should have it
09:31|-|aroscha [] has joined #uml
09:40|-|aroscha [] has quit [Quit: aroscha]
10:20|-|hfb [] has joined #uml
10:26|-|jdike [] has joined #uml
10:26<jdike>Hi guys
10:28<the_hydra>hi jeff
10:29<caker>jdike: hello
10:29<caker>jdike: hojuruku wants to know if UML supports/relays MMX instructions to the host (if supported)
10:30<jdike>they are just executed
10:30<jdike>they are non-privileged
10:35|-|aroscha [] has joined #uml
10:42|-|aroscha [] has quit [Quit: aroscha]
11:01|-|tchan [] has quit [Read error: Connection reset by peer]
11:13|-|tchan [] has joined #uml
11:17|-|baroni [] has joined #uml
11:17<the_hydra>baroni: hi
11:17<the_hydra>baroni: good to hear you have finished the UML docs
11:19|-|ram [] has quit [Read error: Operation timed out]
11:23|-|aroscha [] has joined #uml
11:27|-|kos_tom [] has joined #uml
11:39|-|tchan [] has quit [Quit: WeeChat 0.2.5-cvs]
11:52|-|m0sh [] has quit [Read error: Connection reset by peer]
12:07|-|ram [] has joined #uml
12:17|-|ram [] has quit [Ping timeout: 480 seconds]
12:18|-|ram [] has joined #uml
12:34|-|the_hydra [~mulyadi@] has quit [Quit: using sirc version 2.211+KSIRC/1.3.10]
12:36|-|tchan [] has joined #uml
12:39|-|pat_ [] has joined #uml
12:45|-|pat__ [] has quit [Ping timeout: 480 seconds]
12:45|-|nikkne [] has joined #uml
12:58<aroscha>i was wondering how I can set the ip address in the guest from "outside" ?
12:59<aroscha>i tried eth0=daemon,ff:ff:10:00:00:$i,unix,/tmp/uml.ctl and $i is counting from 1 to 254 but... the mac addr in the guest does not get set
12:59<aroscha>jdike: am i simply asking a stupid question again and everything is documented somewhere anyway?
13:01<jdike>is that a legitimate MAC?
13:03<jdike>and you can't directly set the inside IP from the outside, but there are various ways to pass in a suggestion and have a script inside set the IP accordingly
13:04<aroscha>i tried umid but I could not find /tmp/uml/NAME in the guest
13:05<aroscha>jdike: uff, ff:ff:10:00:00:01 is not a valid MAC ;-)
13:06<aroscha>it works with 00:00:10:00:00:01
13:06<jdike>base the IP on the MAC
13:06<aroscha>jdike: agreed! that is the easiest way
13:06<aroscha>jdike: but anyway, i was also wondering how to do it with umid
13:06<jdike>the UML doesn't know its umid
13:07<jdike>at least not in userspace
13:07<jdike>also, anything on the command line which isn't a kernel switch ends up in the environment of the boot scripts
13:09<aroscha>ahhh... ok, did not know that
13:09<aroscha>but there is a limit according to the docs (?)
13:11<jdike>so if you have a bunch of config information to pass in, glue it together into a single option and break it up in a boot script
13:13<aroscha>i see
13:13<aroscha>and the length is limited?
13:13<aroscha>of the string
13:13<jdike>by the kernel command line, I guess
13:13<jdike>but that's 4K
13:14<aroscha>well, long enough to pass in the name of a config file via hostfs :)
13:14<jdike>that'll work too
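jdike's two suggestions above -- base the guest IP on the MAC, and remember that any non-kernel command-line option lands in the boot scripts' environment -- could look roughly like this inside the guest. A sketch only: the MAC scheme and the 10.0.0.x addresses follow this discussion, everything else (names, interface commands) is illustrative:

```shell
# Hypothetical guest boot-script helper: derive an IP from a MAC of the
# form 00:00:10:00:00:NN by reusing the last octet as the host part.
mac_to_ip() {
    last_hex=${1##*:}              # last octet of the MAC, in hex
    echo "10.0.0.$((0x$last_hex))" # shell arithmetic converts hex to decimal
}

mac_to_ip 00:00:10:00:00:0a        # prints 10.0.0.10
# on a real guest, something like:
#   ip addr add "$(mac_to_ip "$(cat /sys/class/net/eth0/address)")/24" dev eth0
# and any "foo=bar" passed on the UML command line that isn't a kernel
# switch would be visible here as $foo in the boot scripts' environment
```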
13:29|-|nikkne [] has quit [Quit: nikkne]
13:31|-|nikkne [] has joined #uml
13:54|-|turnp [] has joined #uml
14:03|-|nikkne [] has quit [Quit: nikkne]
15:03|-|cmantito [] has quit [Ping timeout: 480 seconds]
16:07<aroscha>jdike: thx! works!
16:07<aroscha> has a topology map of the virtual network setup
16:15<jdike>So, you've got 400 running?
16:23|-|da-x [] has quit [Remote host closed the connection]
16:24|-|da-x [] has joined #uml
17:20|-|kos_tom [] has quit [Quit: I like core dumps]
17:30<aroscha>jdike: at the moment now on the pic 253
17:30<aroscha>on the topology graph
17:30<aroscha>but i had 400 running before
17:31<aroscha>jdike: how is it with memory usage?
17:31|-|baroni [] has quit [Ping timeout: 480 seconds]
17:32<aroscha>the docu was a bit confusing for me. As far as I understand the docs now, they say (skas0 mode!) that UML will only use the memory that each instance actually consumes ... not the amount that mem=xM specifies. Correct?
17:38|-|baroni [] has joined #uml
17:40<aroscha>hm have to go now
17:40<aroscha>jdike: i will have to come back to that question later again. It will be quite important for having so many instances
17:41|-|aroscha [] has left #uml []
18:06|-|BillyCrook [] has joined #uml
18:08<BillyCrook>is UML the right tool for creating a virtual switch with 5 tap interfaces on it, and bridging one of them to eth0 in order to present 5 independent, MAC-addressed network cards to the network?
18:16|-|ram [] has quit [Ping timeout: 480 seconds]
18:17<jdike>BillyCrook, I don't think so, but I'm not sure
18:18<jdike>you just want a bunch of virtual interfaces, which a bunch of tap devices will give you
18:23<BillyCrook>the thing is, I want them to think they're attached to my physical network, not to some program
18:23<BillyCrook>so if they're all attached to each other through a UML network switch, and then one of them is bridged to the physical lan, I think that could do the trick, but I've never touched UML before
18:24<jdike>what are you trying to do, exactly?
18:24<BillyCrook>I tried instantiating 5 tap interfaces, and using a bridge directly globbing all five together with eth0, and that failed disastrously
18:24<BillyCrook>eth0 behaved as if it were disconnected from the lan, and the taps could never acquire a DHCP IP address
18:24<jdike>bridging should do half of what you want, but the taps still need some process to talk to
18:25<BillyCrook>I am trying to have one physical nic and one physical cable from my computer to the switch, but acquire four DHCP IP addresses
18:25<BillyCrook>could that process be a UML network switch?
18:25<jdike>that's because they are all glued together into one interface - the bridge - and that interface is what gets dhcp'd
18:26<jdike>currently, uml_switch only expects to attach to one tap device
18:26<jdike>that should be an easy limitation to lift, though
18:26<jdike>what would you be accomplishing by doing that?
18:27<BillyCrook>I basically want to do IP aliasing, but the thing is I don't necessarily know what IPs I will get via DHCP, and just grabbing ips without getting them from the official DHCP server would be rude
18:27<BillyCrook>I have several services that all must run on port 443, so they need separate IPs
18:27<BillyCrook>and I want to use one box for all my firewalling and routing
18:28<BillyCrook>at the moment, it actually HAS six NICs in it, and they can do the job, but I want to reduce cabling and increase the elegance of the solution
18:29<jdike>so, all you want is 6 IPs from the DHCP server
18:30<BillyCrook>well, I only need 4, but I figure, I'd leave two interfaces down for future use
18:31<BillyCrook>yeah, I basically want to have four virtual NICs that behave as if they were on the physical NIC's network
18:31<jdike>that's not the same thing
18:31<BillyCrook>like VMWare Virtual Machines can actually bridge to the physical lan, and have unique MAC addresses on that lan
18:32<jdike>you don't need different MACs
18:32<BillyCrook>to get different IPs through DHCP, how else can you do that?
18:32<BillyCrook>when the DHCP server sees a mac it recognises, it gives out the IP that is currently leased to it
18:33<jdike>how about changing the MAC, then DHCP-ing through it again?
18:34<BillyCrook>well, I don't know how the DHCP server determines whether an IP is still in use or not. Some ping that IP, some check by arp, some check by lease time expiration. Either way, it's important to me to have unique mac addresses
18:35<BillyCrook>I realise the physical nic will have to be in promisc mode, but that's OK to me
18:35<jdike>which of these will fail if the same NIC has DHCP-ed five times?
18:36<BillyCrook>well, whatever DHCP client I used, if I had to kill and reset it for the next mac address, would only check in with the dhcp server and renew the last IP it leased
18:39<BillyCrook>and the point also is I don't want a hacked DHCP client; with separate NICs I can also more easily firewall and monitor traffic in and out of the box
18:41<jdike>If you set up a bunch of tap devices with proxy arp, is there any reason that won't work?
18:42<BillyCrook>like bridge them together then proxy arp them?
18:42<jdike>no bridging
18:43<BillyCrook>well, I'd imagine for IP to traverse from the taps to eth0 I would have to bridge them
18:43<jdike>you'd also need dhrelay
18:44<jdike>a route to the outside world through eth0 will work
18:46<BillyCrook>hmm, if the default route through eth0 works, would it even need proxyarp seeing as the only hosts on any local lan would be on eth0 where the real lan is?
18:46<BillyCrook>I'm going to try this dhrelay idea...
18:47<jdike>you need proxy arp, otherwise the tap MACs won't be visible on the LAN
18:48<BillyCrook>ahh, proxyarp for eth0's hosts sake not for me, I see
18:48<BillyCrook>dang man, I've been trying to figure this out for a week, and I think you just hit the nail on the head
18:49|-|hfb [] has quit [Quit: Leaving]
19:01<BillyCrook>when I google "fedora dhrelay" I get 8 results, and the first one is a chatlog of you in 2003
19:02<BillyCrook>could it by chance be called dhcrelay?
19:11<BillyCrook>I'm trying to find these packages for fedora, I found dhcp-forwarder, and heard of parprouted, but i can't find it in the repos.
19:13<BillyCrook>Gotta leave the computer for a while
19:23<jdike>looks like dhcp-forwarder
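The arrangement jdike describes above -- tap devices plus proxy arp and host routes instead of a bridge, with a DHCP relay (dhcrelay / dhcp-forwarder) carrying the guests' broadcasts to the real LAN -- might be set up along these lines. Since the commands need root on a real host, this sketch only prints what it would run; the user name, device count, and DHCP server address are all illustrative assumptions:

```shell
# Hypothetical host-side setup for N tap devices with proxy arp (printed,
# not executed): no bridge; ARP for the taps is answered on eth0's LAN
# and traffic is routed rather than switched.
setup_taps() {
    n=$1
    echo "sysctl -w net.ipv4.ip_forward=1"
    i=0
    while [ "$i" -lt "$n" ]; do
        echo "tunctl -u uml-user -t tap$i"
        echo "ip link set tap$i up"
        echo "sysctl -w net.ipv4.conf.tap$i.proxy_arp=1"
        i=$((i + 1))
    done
    # relay the guests' DHCP broadcasts to the LAN's server (address is a stand-in)
    echo "dhcrelay 192.168.0.1"
    # once a guest has an address:  ip route add <guest-ip>/32 dev tapN
}

setup_taps 4
```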
19:25<pat_>hey Jeff .. I have 125mb free ram on my machine .. anyway .. once I attach 128 instead of 64mb to the UML instance .. it'll crash at some random point with the message that the tmpfs ran out of space - which is not the case
19:25<pat_>I can reproduce it tho
19:30<pat_>I am currently reproducing that case with the 'mysql-server-5.0' package from debian which is about 25mb big .. but the uml instance is not able to fetch it .. since the kernel will panic before ..
19:30<pat_>I get two 'kinds'of stacktraces
19:30<pat_>either I get
19:31<pat_>this one .. or one without the segfault with no mm .. I am not sure whether this is UML related .. I could think of broken RAM memory as well
19:32<pat_>but then again .. I wonder, because if I only attach 64mb the uml has never panicked so far .. thus neither the host system nor the uml instance can have tried to write to the broken memory parts .. which I find quite unlikely
19:35<caker>damn, debug kernels are BIG... 78M
19:36<jdike>yeah, big symbol tables
19:36<caker>vs 5.9MB
19:36<jdike>pat_, the only explanation I've ever seen for this kind of crash is no tmpfs space
19:37<jdike>how sure are you that you had room there?
19:37<caker>jdike: <-- still look good?
19:38<jdike>yup, still looks good
19:38<pat_>jdike: well
19:38<pat_>I checked right before I used aptitude or apt-get
19:38<pat_>and package I am downloading is only 25 MB
19:39<pat_>while tmpfs is 63MB and 0% in use
19:39<jdike>nothing to do with the size of the package
19:39<pat_>well if it stops immediately after I issue the downloading process
19:39<jdike>but you're running a 64M UML in that
19:39<pat_>and if it works with the 64M uml
19:39<pat_>I doubt it's a space problem at all
19:39<pat_>and no the 64M uml works just fine
19:40<jdike>as soon as the UML is using all of its memory, it will be out of tmpfs space and will crash
19:40<pat_>it only panics if I increase it to 125M
19:40<jdike>OK, that makes sense
19:40<jdike>the 64M UML just barely fits
19:41<pat_>hmm but the host system has 130MB of free, unused, not-cached ram
19:41<pat_>so why would the uml crash if I attach 125mb to it?
19:41<jdike>doesn't matter
19:41<jdike>the host could have more, it could have less
19:42<jdike>because the tmpfs mount runs out of room
19:42<pat_>heh .. why would the 128M uml run out of space .. if the 64M uml does not?
19:42<pat_>I am doing exactly the same thing to reproduce the panic
19:42<jdike>the 128M UML will get a SIGBUS from the host when it accesses the 65th meg
19:42<jdike>coz the 65th meg ain't there
19:42<jdike>or 64th
19:43<pat_>hmmm how come it isn't?
19:43<pat_>theres enough ram left on the host .. why would the host send a SIGBUS to the uml process then?
19:44<jdike>because the mount is full
19:44<pat_>and heh as I said .. it appears at some random point .. sometimes the package I am downloading gets to 85% then the kernel panics .. sometimes the kernel panics immediately after I issued the command
19:45<pat_>of the uml instance or of the host?
19:45<jdike>doesn't surprise me
19:45<jdike>of the host
19:45<pat_>hmm that makes indeed sense
19:47<pat_>hmm but then again .. why does it sometimes panic immediately? I just checked /dev/shm of the host
19:47<pat_>20% in use
19:48<pat_>you're right
19:48<jdike>you'd have to see df at the point that it panics
19:48<pat_>just checked /dev/shm while I was reproducing the panic of the uml instance
19:48<jdike>just make the mount bigger than the UML memory - if it stops panicing, that was the problem
19:49<pat_>I am sure that's the problem .. I was confused .. since the error message "Bus error - the /dev/shm or /tmp mount likely just ran out of space" does not directly refer to the host
19:50<jdike>I can add that to the error message
19:50<pat_>I always thought it's the UML that runs out of space .. which was impossible .. that's why I thought of some bug .. could be worth changing the error message to something like "Bus error - the /dev/shm or /tmp mount on the host system likely just ran out of space" or similar
19:50<pat_>would be great ;)
19:51|-|fo0bar [] has quit [Read error: Operation timed out]
19:51<pat_>thanks for the explanation anyway :)
19:51<jdike>just did
19:52<jdike>- printk("Bus error - the /dev/shm or /tmp mount likely "
19:52<jdike>- "just ran out of space\n");
19:52<jdike>+ printk("Bus error - the host /dev/shm or /tmp mount "
19:52<jdike>+ "likely just ran out of space\n");
19:53<pat_>ye .. great ;) ... do you actually have a git tree of your own for uml .. or do you just send the patches to linus to be applied to the linux-2.6 git tree?
19:53<jdike>just sending them in
19:53<pat_>ah ok
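The rule of thumb from the exchange above -- the host's /dev/shm mount must be at least as large as the UML's mem= setting, because that tmpfs backs the UML's physical memory -- can be wrapped in a small helper. A sketch only: the slack value is an arbitrary assumption, and the mount command is printed rather than run since resizing /dev/shm needs root:

```shell
# Hypothetical helper: print the remount command needed on the host so a
# UML started with mem=<N>M cannot run its tmpfs backing file out of space.
shm_remount_cmd() {
    uml_mem_mb=$1
    slack_mb=${2:-32}    # headroom beyond the UML's memory (arbitrary choice)
    echo "mount -o remount,size=$((uml_mem_mb + slack_mb))m tmpfs /dev/shm"
}

shm_remount_cmd 128      # prints: mount -o remount,size=160m tmpfs /dev/shm
```

As jdike says, if the panics stop after the remount, the tmpfs size was the problem.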
19:53<caker>jdike: I'm assuming that a core from a non-symboled kernel, fed into gdb with a symboled kernel from an identical tree + symbols enabled wouldn't work
19:54<pat_>well thanks and good night ;)
19:54<jdike>the -mm tree has the latest stuff that I consider mainline-worthy
19:54<jdike>I don't think so
19:54<caker>me too
19:54<jdike>try it, but I have my doubts
19:54|-|pat_ [] has quit [Quit: leaving]
19:55|-|fo0bar [] has joined #uml
19:55<jdike>you are wanting to avoid running 75M images?
19:55<caker>moving forward with this core dump support ... new host kernel built/tested (older one was based on 2.6.16, w/o the core_pattern business). Been testing a new UML kernel with your core dump patches ..
19:55<caker>yeah, I haven't turned on the symbols stuff yet until tonight .. was wondering why the hell it was taking so long to scp it
19:56<jdike>the symbols don't get loaded when the thing is run - they just sit on disk until gdb wants them
19:57<caker>yeah ... too bad they're not gzipped inside the file or something
19:58<caker>that's almost as big as our smallest distro (~90M debian barebones)
19:59<jdike>stop making invidious comparisons
20:03<jdike>if you strip it and get a core from it, I would expect gdb against the original, unstripped binary to work against that core
20:21|-|jdike [] has quit [Quit: Leaving]
20:26|-|aroscha [] has joined #uml
21:52|-|baroni [] has quit [Ping timeout: 480 seconds]
22:10|-|baroni [] has joined #uml
22:30|-|silug [] has quit [Ping timeout: 480 seconds]
22:59|-|VS_ChanLog [] has left #uml [Rotating Logs]
22:59|-|VS_ChanLog [] has joined #uml
---Logclosed Tue Apr 17 00:00:37 2007