#xen IRC Logs for 2005-04-01

---Logopened Fri Apr 01 00:00:39 2005
00:07--- <<-- jeroney [~jeroney@rrcs-24-173-250-83.sw.biz.rr.com] has quit (Read error: Connection reset by peer)
00:24--- ---> totempole [~TotemPole@c-24-99-72-195.hsd1.ga.comcast.net] has joined #xen
00:25totempole Hi folks, I have a very basic question: what is ioemu, and what is its relevance to xen?
00:40--- <<-- totempole [~TotemPole@c-24-99-72-195.hsd1.ga.comcast.net] has quit (Quit: Leaving)
01:08--- ---> shaleh [~shaleh@dsl017-044-048.sfo4.dsl.speakeasy.net] has joined #xen
01:37--- <<-- shaleh [~shaleh@dsl017-044-048.sfo4.dsl.speakeasy.net] has quit (Quit: Leaving)
02:11--- <<-- rusty [~rusty@bh02i525f01.au.ibm.com] has quit (Quit: Client exiting)
02:46--- ---> sleon [test@p54A15705.dip.t-dialin.net] has joined #xen
02:47sleon how do I set additional boot parameters in the domain config file?
03:30--- <<-- anthill333 [~larsr@palwebproxy2.core.hp.com] has quit (Remote host closed the connection)
03:37mael hi guys
05:00--- <<-- DEac- [~deac@xdsl-81-173-139-25.netcologne.de] has quit (Ping timeout: 480 seconds)
05:09--- ---> hebutterworth [~harry@blueice3n1.uk.ibm.com] has joined #xen
05:11--- ---> DEac- [~deac@xdsl-213-196-203-102.netcologne.de] has joined #xen
05:22hebutterworth Anyone know Mark Williamson's handle?
06:32knewt on here? MarkW
06:37--- <<-- DEac- [~deac@xdsl-213-196-203-102.netcologne.de] has quit (Quit: Leaving)
06:37hebutterworth thanks
06:38--- ---> DEac- [~deac@xdsl-213-196-203-102.netcologne.de] has joined #xen
07:04buggs on my notebook it fails to bring up eth0 (tigon3)
07:07--- ---> perry [~perry@fw.office.netland.nl] has joined #xen
07:08--- <<-- hebutterworth [~harry@blueice3n1.uk.ibm.com] has quit (Quit: Leaving)
07:25* riel yawns
07:25riel (good morning)
07:26mael hi
07:26mael its 2pm here :)
07:37riel 7 am here
07:37riel or it was, when I woke up ;)
08:43--- <<-- Method [Method@pcp0010742058pcs.howard01.md.comcast.net] has quit (Ping timeout: 480 seconds)
08:54--- ---> Method [~Method@stanford.columbia.tresys.com] has joined #xen
08:55--- User: *** riel is now known as surriel
08:56mael surriel: where are you in the States? west coast?
08:56surriel east coast
08:56surriel and about to head to over to the office
08:56surriel aka "that place they pay me to hang out at"
08:59mael :)
08:59mael I thought surriel was your work nick
09:00mael uh no
09:00mael this is with your home ip
09:33--- User: *** demon is now known as mon
09:41--- User: *** unriel is now known as riel
09:42--- ---> matta-lt [~matta@69.93.28.254] has joined #xen
09:52--- <--- perry [~perry@fw.office.netland.nl] has left #xen ()
09:55--- ---> hollis [~hollis@user-0vvde2g.cable.mindspring.com] has joined #xen
09:55--- ---> rharper [~rharper@pixpat.austin.ibm.com] has joined #xen
10:08--- <<-- plars [~plars@pixpat.austin.ibm.com] has quit (Read error: Operation timed out)
10:20--- ---> plars [~plars@pixpat.austin.ibm.com] has joined #xen
10:46knewt didn't get the job :(
10:48riel ;(
11:05--- ---> jeroney [~jeroney@pixpat.austin.ibm.com] has joined #xen
11:51--- <<-- hollis [~hollis@user-0vvde2g.cable.mindspring.com] has quit (Quit: leaving)
11:55--- <<-- plars [~plars@pixpat.austin.ibm.com] has quit (Ping timeout: 480 seconds)
12:04--- ---> plars [~plars@pixpat.austin.ibm.com] has joined #xen
12:15--- ---> hollis [~hollis@pixpat.austin.ibm.com] has joined #xen
12:28sleon hi
12:28sleon is it possible to set additional kernel boot parameters for xen's guest domain?
12:29@Sir_Ahzz yes
12:29rharper sleon: yes, you can add them to the extra variable in your domain configuration file
12:34sleon rharper, what is it called? i cannot find its name in the documentation
12:36rharper sleon: in /etc/xen/xmexample1: extra = "ro" <--- that appends ro to the kernel parameters when launching that domain. you can just add to the 'extra' variable
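
A minimal sketch of the 'extra' setting rharper describes above, assuming the stock /etc/xen/xmexample1 layout (the kernel path, name, and disk line are illustrative, not from the log):

    # /etc/xen/myguest -- Xen domain configuration (Python syntax)
    kernel = "/boot/vmlinuz-2.6-xenU"   # guest kernel image
    memory = 64                         # MB given to the guest
    name   = "myguest"
    disk   = ['phy:hda1,hda1,w']        # export dom0's hda1 to the guest
    # 'extra' is appended verbatim to the guest kernel's command line:
    extra  = "ro console=tty0"
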
12:37sleon rharper, aaha thx, it is not documented though
12:39rharper sure, sounds like a documentation patch could help, I'm sure they will take it
12:42sleon rharper, also in the .patch format?
12:43rharper generated with diff -u original_file changed_file > mychanges.patch
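
Such a patch would then be applied from the top of the tree with, for example:

    patch -p0 < mychanges.patch
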
12:45sleon rharper, i'm talking about the webpage docs, not the source docs
12:45sleon i didn't even look at the source docs
12:45sleon because they look very very strange
12:45sleon it is unclear where to get information from
12:46eigood Sir_Ahzz: I get to have fun this weekend(I hope)
12:46@Sir_Ahzz oh?
12:46eigood trying to get $10k worth of hardware into a deployable state
12:46@Sir_Ahzz you finally get a date or something? :)
12:46eigood 2 identical machines, 15 SATA slots, 3U
12:46@Sir_Ahzz well, I spose $10k in hardware counts as a date for you. ;)
12:46@Sir_Ahzz nice.
12:46@Sir_Ahzz I need one of those for here.
12:46eigood first has 8 160g drives, second has 8 300g drives
12:47rharper sleon: ahh, sure. There supposedly is a wiki (wiki.xensource.com) which might be open to those changes.
12:47sleon eigood, uiuiui
12:47eigood the way we(I) build machines now is rather slick
12:47sleon eigood, and cpu's?
12:47eigood netboot the new machine, copy the nfsroot image to the drive, reboot
12:47@Sir_Ahzz wanna rent me 3 units so I can develop out a software package for an aggregated iSCSI SAN solution? :)
12:47sleon rharper, ah nice, thx
12:47rharper sleon: welcome
12:47eigood single(but dual capable) p4-3.2g, 12 dimm slots(2 gig atm), dual gig-e
12:47eigood Sir_Ahzz: we only have 2
12:48sleon eigood, and it costs 10k$?!
12:48eigood the 8x160 machine will be the new xen fileserver; using raid10
12:48eigood sleon: 2 boxes
12:48--- Channel: Sir_Ahzz changed the topic of #xen to: Xen Homepage-> http://www.cl.cam.ac.uk/Research/SRG/netos/xen/index.html || Xen Wiki -> http://wiki.xensource.com
12:48@Sir_Ahzz eigood software or 3ware?
12:48eigood software; gives us more control
12:48@Sir_Ahzz *nod* takes more cpu time though.
12:48eigood 3ware cards are too expensive
12:49sleon eigood, how much ram?
12:49eigood plus their sata implementation is stupid; they have pata controllers combined with pata<->sata on their card
12:49@Sir_Ahzz how are you getting so many sata in one box then? pci cards, or 4-way SATA splitters?
12:49eigood an 8-port 3ware is too tall to fit in the box.
12:49sleon what are the 3ware cards?
12:49eigood Sir_Ahzz: 8 port RocketRAID cards
12:49@Sir_Ahzz eigood: wait another 2 months, 3ware has a native sata in the pipeline.
12:49eigood sleon: read scrollback
12:50sleon eigood, what is scrollback?
12:50eigood Sir_Ahzz: can't
12:50sleon eigood, sorry
12:50@Sir_Ahzz they claim the new xor engine can sustain 150MB/sec on 12 drives.
12:50@Sir_Ahzz I know doogie. 8-P
12:50eigood sleon: I already said it, at the same time I gave the cpu specs
12:50sleon eigood, aa ok :)
12:50eigood Sir_Ahzz: we get 200MB/s on these
12:50sleon eigood, excuse me
12:50cw Sir_Ahzz: 150MB/s across 12-drives is horrible
12:51eigood each drive does 50MB by itself
12:51cw you can get about 70MB/s+ from each spindle on modern disks
12:51eigood in raid10, that's 200MB, so we are getting full speed, using software
12:51@Sir_Ahzz cw: considering the current setup 3ware offers gets 80MB/sec, it's an improvement. :)
12:51eigood well, 50-55MB/s
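
For reference, a software raid10 along the lines eigood describes can be built with mdadm on a 2.6 kernel; a sketch only, with the device names assumed:

    # build a raid10 array from 8 SATA drives, then watch the resync
    mdadm --create /dev/md0 --level=raid10 --raid-devices=8 /dev/sd[a-h]1
    cat /proc/mdstat
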
12:52--- ---> shuri [sjnesjd@dsl.speedline209.226.electronicbox.net] has joined #xen
12:53sleon eigood, aaa it is serial ata raid controller
12:53* Sir_Ahzz looks at his 3 year old IDEs and pouts at their measly 22MB/sec. 8-P
12:53* eigood has a personal 3x250g raid5 set, holding tv shows
12:53sleon i will buy a dual opteron server soon
12:54sleon with tyan board
12:57@Sir_Ahzz get a quad with dual core when it's out. :)
12:58tab or wait for quad core :)
13:00@Sir_Ahzz that'll be a while I think. :)
13:00matta-lt Sir_Ahzz: how many drives with the 3ware/80MB combo?
13:00@Sir_Ahzz I'll settle for a dual with dual core for the next boxen; get single core, minimal speed cpus for now.
13:00@Sir_Ahzz matta-lt: visit 3ware.com
13:01@Sir_Ahzz they have 2,4,8,and 12 drive sata and pata controllers.
13:01matta-lt I can get about 85MB/s w/ a 3ware 8000 series with 4 7200 RPM drives... RAID-5
13:01matta-lt which I don't consider bad at all
13:01@Sir_Ahzz sounds about right from the benchmarks i've read on ata raid.
13:01matta-lt RAID-10 is a bit faster
13:01matta-lt oh, that's in a 64-bit/66mhz PCI slot
13:01@Sir_Ahzz i'm getting 72MB/sec on 6 60GB IDEs on 4 ports.
13:01matta-lt in 32-bit it's limited to about 70MB/s
13:02matta-lt Sir_Ahzz: PATA?
13:02@Sir_Ahzz pIII-700 512MB ram.
13:02@Sir_Ahzz parallel ata.
13:02--- <<-- sleon [test@p54A15705.dip.t-dialin.net] has quit (Ping timeout: 480 seconds)
13:02@Sir_Ahzz the old standard.
13:02@Sir_Ahzz funny thing is writes happen at 72MB/sec and reads at 70MB/sec
13:02@Sir_Ahzz probably limitations of the PCI bus and 2 drives per channel setup I have.
13:03@Sir_Ahzz that and the drives are only 2MB/sec max.
13:03@Sir_Ahzz err 22MB/sec
13:03@Sir_Ahzz 5400rpm
13:03@Sir_Ahzz so my setup is pretty cheap. :)
13:03@Sir_Ahzz seems to serve 3 xen boxes with 12 domains decently enough over 100Mbps ether.
13:03matta-lt well, it's xen :)
13:04@Sir_Ahzz all athlon XP 2000+ w/ 1G
13:04@Sir_Ahzz my setup is like this, 2 raid array machines disk1 and disk2, connected via bonded 100Mbps ether (2 links per machine) to a concentrator that runs EVMS (handles management of all disks) which pipes out of 2 100Mbps ethernet ports (separate IPs)
13:05@Sir_Ahzz about 1.3TB total capacity on it.
13:05@Sir_Ahzz less available due to raid-1 on a few volumes in EVMS.
13:05@Sir_Ahzz attempting to create an open source SAN management setup.
13:06@Sir_Ahzz nothing releasable yet.
13:06shuri nice project
13:06@Sir_Ahzz grew from necessity.
13:07@Sir_Ahzz raiding the entire array was too wasteful and caused IO conflicts.
13:07@Sir_Ahzz so it's inteligently split the raid1 between the two disk servers.
13:07shuri "inteligently" :)
13:07@Sir_Ahzz for high bandwidth it's striped across 6 disks 3 per disk box.
13:08@Sir_Ahzz my typing sucks today.
13:08@Sir_Ahzz I'll probably GPL it and offer support services and pre-built setups.
13:08@Sir_Ahzz primary target will be supporting Xen clusters.
13:09@Sir_Ahzz right now the PHP code and logic is rather rough. gets things wrong more often than not. 8-P
13:09@Sir_Ahzz and I can't implement auto-rebuild on a third disk target until I get another raid array machine setup.
13:10@Sir_Ahzz but for now (minus the IETD lockups) it's kept my xen cluster up non-stop for 8 months, despite losing 4 disks of the set of 18
13:10@Sir_Ahzz but IETD is improving rapidly. :)
13:10@Sir_Ahzz it's exporting via iSCSI, NFS, and samba. could export via any open source method in the future.
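
The iSCSI leg of a setup like Sir_Ahzz's would be exported through IETD roughly as follows; a sketch, with the target name and volume path invented:

    # /etc/ietd.conf -- iSCSI Enterprise Target (IETD) configuration
    Target iqn.2005-04.net.example:storage.vol1
        # present an EVMS-managed volume as LUN 0
        Lun 0 Path=/dev/evms/vol1,Type=fileio
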
13:11@Sir_Ahzz after the auto-rebuild is working i'll tackle handling redundant concentrator boxes.
13:11--- ---> sleon [test@p54A17B26.dip.t-dialin.net] has joined #xen
13:11@Sir_Ahzz that way it'll be 100% fault tolerant.
13:12@Sir_Ahzz I just wish Linux was capable of recognising and using expanded target volumes in iSCSI.
13:12@Sir_Ahzz that would make the ultimate in managed storage IMO. dirt cheap too.
13:12@Sir_Ahzz cost me ~ $5k 3 years ago for the setup.
13:21--- ---> homebaum [~michael@wbar1.sea1-4-5-031-104.sea1.dsl-verizon.net] has joined #xen
13:21aliguori hey homebaum.. i like the nick :-)
13:22homebaum Working from home today - in case it was not obvious
13:23aliguori indeed :-)
13:36--- ---> jimix [~jimix@ip13.194.susc.suscom.net] has joined #xen
13:37--- ---> Mark [~Mark@maw48.kings.cam.ac.uk] has joined #xen
13:37muli hey jimix :-)
13:37jimix muli: heya
13:38--- <--- jimix [~jimix@ip13.194.susc.suscom.net] has left #xen ()
13:38--- ---> jimix [~jimix@ip13.194.susc.suscom.net] has joined #xen
13:38matta-lt yay
13:38matta-lt x86_64!
13:39knewt anyone here in the uk know of any jobs going? (london/oxford preferred)
13:43hollis Mark: I guess nobody builds non-SMP now, huh?
13:43Mark hollis: I do
13:44hollis Mark: oh, really? there are at least 2 breaks now; Ian's opt_noht is the new one
13:44Mark but not on the latest 2.6, since at the moment I'm either preoccupied with the 2.4 kernel's USB
13:44aliguori hollis: i just built a non-SMP build and it worked fine
13:44hollis Mark: I mean the Xen core, not Linux
13:44aliguori grabbed it from bitkeeper last night
13:44Mark or the XenFS in my private tree
13:44Mark oh right, I see
13:44aliguori oh, the hv..
13:44Mark hollis: i'm not sure anyone has *ever* built non SMP Xen on a regular basis
13:45hollis Mark: hmm, ok...
13:45Mark hollis: probably somebody should do so if the option's there
13:46Mark but most of the test machines here are SMP anyhow, so for the past few years I imagine UP builds have been the exception rather than the norm
13:46hollis well it's set in include/asm-x86/config.h, so I don't think anybody would run across it by accident
13:47hollis Mark: but for a new arch port, it would be easier to do the UP port first and worry about SMP issues later
13:47hollis but not even x86 builds UP, so that's not possible right now
13:48Mark i think there's an argument for having regular UP builds here
13:48Mark so that people notice when things break
13:49hollis sounds fine to me :) I was just wondering if nobody cared any more
13:49Mark Ian might be good to ask about the test build system
13:50--- <--- cw [cw@adsl-63-202-174-57.dsl.snfc21.pacbell.net] has left #xen (grrr)
13:50Mark I don't think there's been a definitive decision not to support UP builds
13:50--- ---> cw [cw@adsl-63-202-174-57.dsl.snfc21.pacbell.net] has joined #xen
13:50--- Channel: mode/#xen [+o cw] by ChanServ
13:50hollis ok
13:50Mark but equally well, they're not really tested.
13:53tab :)
13:54tab hey Mark not in the office ? :)
13:55Mark tab: not today
13:55Mark i was there late last night squishing a bug
13:55Mark and it didn't really seem worth coming in by the time I'd got up
13:56tab so basically never when I'm in ;)
13:57Mark might be in over the weekend, probably back in during monday
13:58tab ok
14:00tab oh no, x86_64 support announced just after my vbds .. nobody going to test :)
14:14aliguori tab: don't worry, i'm testing as we speak :-)
14:29caker Is it normal for xend to be using 40M+ RAM?
14:31riel caker: not afaik ;)
14:31matta-lt caker: nope
14:31caker Yeah .. I didn't think so. It might increase with each Xen boot/reboot ..
14:31matta-lt here is an interesting bug I have found in 2.0 / -testing
14:31matta-lt boot up host
14:31matta-lt start xend
14:31matta-lt start a domain
14:31matta-lt stop xend
14:31matta-lt then try starting xend, it won't start... only xfrd will
14:32matta-lt note, if you omit the "start a domain" part you can stop/start xend to your heart's delight
14:32matta-lt not a major problem, just scary that if xend dies on a running host it then needs to be rebooted
14:32matta-lt should be easily reproducible (I know I can)
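
matta-lt's reproduction steps as a shell sketch; the init-script path and the config name are assumptions:

    /etc/init.d/xend start        # fine on a freshly booted host
    xm create /etc/xen/myguest    # start any domain
    /etc/init.d/xend stop
    /etc/init.d/xend start        # now fails; only xfrd comes up
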
14:32caker matta-lt: odd. I've noticed that with stable doing that is ok..
14:32rharper you might try killing xcs
14:33rharper I've seen that before, and if you kill off all of the progs spawned, xend will launch again
14:33matta-lt rharper: I don't believe xcs lives in anything but -unstable...
14:33rharper ah, thats probably right
14:33* rharper lives in unstable land
14:34caker My stable bug is: boot a domain with an invalid kernel (/bin/true, or whatever, doesn't matter). xend gives an error, but starts a domain with a generic name (Domain-#), including consuming the ram
14:34matta-lt caker: what distro is the host?
14:34caker matta-lt: CentOS 4
14:34matta-lt i get that also
14:34matta-lt my kernels are always valid
14:34matta-lt I wrote a script to find the dupe domain (it's always a dupe), xm destroy it twice and then start the real domain
14:35caker ahh
14:35mikegrb ahh hah!
14:35matta-lt ie.
14:35matta-lt if it lists Domain-16 which is of course... id 16
14:36matta-lt there will be another domain named 'userx' (or whatever it's named) with id of 16
14:36matta-lt other times...
14:36matta-lt if I only run xm destroy 16 once...
14:36matta-lt Domain-57 57 0 0 --p-- 0.0
14:36matta-lt i get that
14:36caker matta-lt: and when does this scenario occur?
14:37matta-lt caker: have not traced it down, seems to be when running xendomains init script... but I haven't had any reboots since last tweaking
14:37matta-lt but it seems it could happen whenever
14:37matta-lt since xendomains just runs xm create
14:37caker odd -- haven't seen that one (don't use xendomains)
14:38matta-lt on mine the domain starts, it's just a dupe...
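
A sketch of the cleanup script matta-lt describes; the 'xm list' column positions and the config path are assumptions:

    #!/bin/sh
    # destroy any auto-named "Domain-<id>" duplicates (but never Domain-0),
    # then recreate the real domain
    for id in $(xm list | awk '$1 ~ /^Domain-[0-9]+$/ && $2 != 0 {print $2}'); do
        xm destroy $id
        xm destroy $id            # matta-lt runs the destroy twice
    done
    xm create /etc/xen/userx      # hypothetical config for the real domain
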
14:49eigood I wish we had an amd64 machine
14:49matta-lt i wish i had a stable x86_64 port :)
14:49matta-lt apparently my wish will be fairly soon though :)
15:27sleon hey gals
15:27sleon what are you using xen for?
15:28hollis aliguori: you're too kind. I would have told the guy to run a LOC counter himself
15:41hollis ugh... hate header dependency trees :(
15:42eigood sleon: insert pinky and the brain reference
15:42eigood hollis: aren't using gcc's dep generator?
15:43hollis eigood: the xen core isn't, but that wouldn't help here anyways. it's the usual header1 needs header2 needs header1 problem
15:48aliguori hollis: I was a bit curious myself :-)
15:48sleon eigood, haha
15:49aliguori yeah, I agree with you hollis btw.. I don't know why headers aren't explicitly included where they're needed
15:49aliguori the problem trickles all the way down to userspace
15:49aliguori all of the public headers use u8, u16, etc types but it's not defined anywhere in the public headers
15:49aliguori it's really annoying because you have to define them yourself before you can include any xen header
15:50hollis to paraphrase Christian's explanation: there are too many implicit includes already, so we should rely on more implicit includes
15:51aliguori might as well have one 'includes.h' if that's going to be the attitude.
15:54aliguori something's really fishy with the linux smp build for domU. i get the feeling that whenever something is being scheduled on two cpus at the same time i get an oops
15:55aliguori like if i put something in the background from the shell
15:56rharper aliguori: shouldnt, I run smp domU all the time
15:56rharper you have an oops?
15:56rharper in unstable?
15:59eigood all headers should directly include headers that define symbols they directly use
15:59eigood I consider not complying with that a bug.
15:59eigood my own software follows that
16:01hollis eigood: I agree, but I was the only one on-list advocating it
16:02eigood when was this on the list?
16:02* rharper wouldn't have thought such practices needed advocating nowadays
16:02eigood it'd be nice if sparse could check for that
16:03rharper the thread subject is [Xen-devel] [patch] final header fixes
16:03eigood when did it start?
16:03rharper Wed, 23 Mar 2005
16:03hollis it's come up a few times, but the most recent was 26 Mar 2005
16:05greenrd chalk up another reason why C sucks ;)
16:06sleon eigood, are you running numeric simulations on your xen?
16:06hollis greenrd: no argument here
16:06eigood sleon: no, real apps
16:06sleon eigood, like?
16:06sleon eigood, isp?
16:07jimix greenrd: programmers suck, leave poor C alone :)
16:07sleon eigood, virtual servers for clients?
16:07eigood kaffe.brainfood.com(runs several tinderboxes), mail.brainfood.com(the free mail domains, not work domains)
16:07eigood all over nfsroot; very busy machine
16:07sleon eigood, nfsroot interesting
16:07sleon eigood, for /home?
16:07--- ---> niv [~niv@bi01p1.co.us.ibm.com] has joined #xen
16:07eigood our xen boxes are diskless
16:07eigood everything is nfs
16:08eigood even xen0, even grub
16:08sleon eigood, so you have lots of xen thin clients?
16:08sleon or how does it work
16:08eigood bios -> pxe -> grub -> nfsroot-dom0 -> nfsroot-domU
16:08shuri nice!
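
The dom0 leg of that chain uses the standard Linux nfsroot parameters; a sketch, with the server address, export path, and memory size invented:

    # grub menu.lst entry for a diskless dom0
    title Xen (diskless)
        kernel /xen.gz dom0_mem=262144
        module /vmlinuz-2.6-xen0 root=/dev/nfs nfsroot=192.168.1.1:/exports/dom0 ip=dhcp
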
16:08eigood 2 xen machines, 20(or so) instances
16:08sleon eigood, what for?
16:08sleon eigood, why do you need so many?
16:08riel gorgeous
16:08riel <flood>
16:08riel Unable to handle kernel paging request at virtual address cb4b197c
16:08riel printing eip:
16:08riel c015681a
16:08riel *pde = ma 076f7067 pa 0004b067
16:08riel *pte = ma 3678b061 pa 0b4b1061
16:08riel [<c0156999>] copy_page_range+0xc9/0x110
16:09riel [<c011b1f1>] copy_mm+0x301/0x400
16:09eigood we are interested in kexec+multiboot, which would let us use pxelinux instead of grub, 'cuz grub's networking impl. sucks ass
16:09riel [<c011bd24>] copy_process+0x524/0xe20
16:09riel [<c011c695>] do_fork+0x75/0x1f5
16:09riel [<c0216520>] copy_to_user+0x60/0xa0
16:09eigood sleon: better than having 20 separate machines
16:09riel [<c01089fc>] sys_clone+0x3c/0x40
16:09riel [<c010a0a7>] syscall_call+0x7/0xb
16:09riel </flood>
16:09sleon eigood, are these desktops?
16:09eigood dump 4g of ram in a box, and load it up
16:09eigood sleon: no, the 2 xen machines are 1U machines at a colo
16:10eigood it's simpler to make a xen context to host a client's resources, than to try to have them all hosted in a single environment
16:10sleon eigood, why can't you have one machine?
16:10sleon eigood, aha interesting
16:10rharper riel: nice one
16:10eigood the problem with doing it on a single machine, is when one client needs some special feature, you have to figure out if it will affect other configs
16:10sleon eigood, what kind of tasks are these 20 machines doing?
16:10rharper toss that Keir's way =)
16:10eigood mostly web serving
16:11sleon eigood, hmm interesting
16:11sleon eigood, why not usermode linux for each?
16:11riel rharper: now to figure out why it only happens in xenU, and not in xen0
16:11eigood all the xen instances are sitting in a dmz; we can either snat/dnat their private ip to a public one, or use squid in reverse-proxy mode to just export their web server
16:11rharper riel: hrm. smp domU?
16:11eigood sleon: xen has a 2-3% overhead; uml has 40-60%
16:11riel rharper: yes
16:11riel but smp dom0 too
16:11sleon eigood, interesting
16:12--- <<-- Nigelenki [~bluefox@pcp484971pcs.whtmrs01.md.comcast.net] has quit (Read error: Connection reset by peer)
16:12* riel tries vcpus=1
16:12@cw eigood: depends what you are doing, for some things the uml overhead is *way* more and sometimes a fair bit less
16:12sleon eigood, and it runs only on a dual cpu machines?
16:12rharper riel: right. you tried backing up a day or two? there's been a lot of changes in mm and page tables that affect smp... though why domU only is strange
16:12eigood uml lets you overcommit on memory(xen doesn't), but the hardware cost is so small when amortized out over time, that having huge amounts of real ram is a no-brainer
16:12--- ---> Nigelenki [~bluefox@pcp484971pcs.whtmrs01.md.comcast.net] has joined #xen
16:12@cw eigood: generally speaking though the uml overhead for most people is huge compared to xen
16:13@cw well, xen+balloon+vm-hacks might let you overcommit there in a sense, im not sure it's worth it
16:14riel rharper: backing up a few days? this is rawhide ... ;)
16:14rharper haha
16:14sleon eigood, how much ram do you have
16:14eigood sleon: we have an athlon xen, and a p4 xen; the later has hyperthreading
16:14riel I'm here to run into and fix bugs, not to avoid them
16:14* eigood notes xen+hyperthreading is a little stupid
16:14riel I'll leave avoiding bugs to users
16:14riel it's not a developer thing
16:14eigood hyperthreading is poor when 2 completely separate processes are running side-by-side
16:14sleon eigood, dual or single? HT means 2 cpus are simulated
16:14eigood and 2 completely separate OSes will have nothing in common
16:14rharper riel: im running 20050401 snapshot as smp domU with constant kernel compiles... what were you running in domU? repeatable?
16:14rharper riel: happen on boot?
16:14riel yeah, happens on boot
16:14riel but ... with vcpus=3
16:14rharper ouch
16:14sleon eigood, are you using simple pc hardware, or are these special boards?
16:14riel on a single cpu w/ hyperthreading system
16:14rharper ooo, whats the hardware?
16:14sleon eigood, do they only have 4 gigs ram?
16:14eigood sleon: the p4 has 4g, the athlon has 1.5(the latter doesn't support more on the mobo)
16:14riel P4 w/ HT, 1GB memory
16:14riel nothing special
16:14rharper yeah, wonder if Ian's ht cpu distribution has something to do with it
16:14eigood simple machines, ie, whiteboxes
16:14rharper you saw those changes go in yesterday?
16:14riel rharper: I suspect it's to do with #vcpus > #physicalcpus
16:14rharper I dont have an HT box
16:14sleon eigood, and they are 24/7 up?
16:14--- <<-- jimix [~jimix@ip13.194.susc.suscom.net] has quit (Quit: Download Gaim: http://gaim.sourceforge.net/)
16:14eigood sleon: definitely
16:14rharper no, I run that fine, vcpus=4,8 , no problem
16:14eigood the only problems we have are with nfs(the .nfs################ deleted file crap)
16:15sleon eigood, but the nfsroot is a raid array?
16:15riel rharper: mmm ok
16:15eigood xen 2.0 is rock solid
16:15riel rharper: and an odd number of cpus ?
16:15rharper riel: on my two-way
16:15* rharper boots domU with vcpus=7
16:15eigood sleon: yes, raid5 on the fileserver
16:15sleon eigood, have you already heard about parallel nfs ?
16:15eigood we have plans to switch to iscsi or some such thing for importing block devices, and not using nfs at all
16:16--- Netsplit xenon.oftc.net <-> oxygen.oftc.net quits: DEac-, rharper, LaidBack_01, jeroney, mael, Shaun, surriel, hollis, schweeb, muligone, (+9 more, use /NETSPLIT to show all of them)
16:16--- Netsplit xenon.oftc.net <-> oxygen.oftc.net quits: mon, muli, sleon, homebaum, sunny
16:16--- Netsplit xenon.oftc.net <-> oxygen.oftc.net quits: knewt, jonmason, JViz, riel, apw, Mark, aliguori
16:17--- Netsplit over, joins: schweeb, Nigelenki, Mark, homebaum, sleon, hollis, jeroney, rharper, matta-lt, DEac- (+21 more)
16:17sleon or coda, andrew fs
16:17eigood all this running on 100M
16:17rharper riel: http://lists.xensource.com/archives/html/xen-changelog/2005-03/msg00278.html
16:17sleon eigood, 100M???
16:18sleon eigood, maybe this is a bottleneck?
16:18eigood sleon: I never discussed the speed
16:18sleon eigood, interesting
16:18eigood the speed isn't a problem; nfs just isn't stable
16:18* riel looks
16:18sleon eigood, interesting
16:18eigood the biggest issue tho, with nfs, is when you delete an open file
16:18sleon eigood, so when i have lots of parallel accesses it breaks??
16:19sleon eigood, i heard it has a file locking mechanism
16:19eigood since the client still needs to keep the file around, it renames it to .nfs#############, and then that breaks when something is trying to delete an entire directory
16:19sleon eigood, is it nfsv3 or nfsv4?
16:19eigood 3, I believe
16:19eigood actually, that's probably just a stupid bug
16:20eigood if a dir is deleted that contains .nfs#### files, rename the dir to .nfs######
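
The .nfs###### behaviour eigood describes is NFS "silly rename", visible from any client mount; the paths and generated name here are illustrative:

    $ cd /mnt/nfs/somedir
    $ tail -f logfile &       # hold the file open from this client
    $ rm logfile              # the client renames it instead of removing it
    $ ls -a
    .  ..  .nfs0000000000102f04
    $ cd .. && rmdir somedir
    rmdir: somedir: Directory not empty
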
16:20sleon eigood, have you tried the andrew or coda filesystems?
16:20rharper riel: maybe I spoke too soon on the 7 vcpus... xen just rebooted, lemme try to repeat that test
16:20eigood I didn't think coda was being developed anymore
16:20sleon i think it is developed
16:20sleon the last time i heard about it
16:20sleon eigood, i spoke with parallel nfs developer
16:20eigood and using a network filesystem is not the way we want to go; that requires the domU kernels/OS to know how to access the remote fs, which means more config, and more things to go wrong
16:20sleon eigood, they try to solve these problems
16:21eigood using block devices means only dom0 has to know about them, and can just export as virtual devices to domU
16:21sleon eigood, understood
16:21sleon eigood, why don't you use the network block device capability of the kernel?
16:21sleon is it not fast enough?
16:21rharper riel: yeah, 7 vcpus in domU hoses xen up, hard reboot, nothing on serial either
16:22rharper riel: here's some
16:22rharper (XEN) Assertion '(x & PGT_count_mask) != 0' failed, line 1087, file mm.c
16:22rharper (XEN) BUG at mm.c:1087
16:22rharper (XEN) CPU: 1
16:22eigood the (e)nbd doesn't really support high-count setups
16:22sleon what does high-count mean?
16:22riel rharper: so an even number of vcpus works, an odd number breaks?
16:23eigood I tried AoE; it would only ever get me 5 megabyte/s, on a 100m lan; even their website confirmed that they had only gotten that in speed tests
16:23eigood which is stupid to me; nfs can do 11.5MB/s on a 100m lan easy
16:23eigood sleon: lots and lots and lots of devices
16:23sleon eigood, AoE?
16:23riel sleon: coraid.com
16:23eigood ATA over Ethernet
16:23rharper riel: lemme try it with even
16:23eigood dead simple to configure(unlike iscsi)
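
The AoE export eigood tried really is a one-liner with coraid's vblade tool; the shelf/slot numbers and device are invented:

    # serve /dev/md0 as AoE shelf 0, slot 0 on eth0
    vblade 0 0 eth0 /dev/md0 &
    # a client with the aoe module loaded sees it as /dev/etherd/e0.0
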
16:24sleon eigood, why not to export one raid device
16:24@Sir_Ahzz iscsi isn't hard to configure either.
16:24eigood it's nice that 2.6 now has kernel-side iscs initiator code
16:24sleon eigood, and then make lots of virtual discs from it in domain0?
16:24eigood but the other half of the equation still sucks for configuring
16:24eigood sleon: the fileserver hosts the separate devices, and exports them as separate devices
16:24rharper riel: vcpus = 4 failed too, I think you had it right, vcpus > physical
16:24eigood that way a domU instance could roam easily
16:24sleon eigood, i thought it was one big raid5
16:24riel rharper: ok ;)
16:24riel rharper: flushing problem I guess
16:24rharper riel: was just running with vcpus=2 fine
16:25eigood sleon: it is, we are now speaking hypothetical
16:25rharper riel: yeah =(
16:25eigood are you guys breaking xen?
16:25rharper eigood: yes
16:25riel then I don't have to worry about FC4 test2
16:25riel few people are running with vcpus>cpus
16:25rharper you have a surprise for them if they do =)
16:25eigood riel: I think that would be the point of having smp support
16:25sleon eigood, hmm interesting thank you very much for explanations
16:25eigood I mean, UML supports that
16:26eigood we've been running xen stuff for over a year now(deployed the first in february of 2004)
16:26eigood well, end of january
16:26eigood then, my dad died feb 5, and I went away for a week to his funeral
16:26eigood machine stayed up the entire time I was gone, which was good
16:27riel eigood: yeah, but how many of your virtual machines have more cpus than the physical system has ? ;)
16:27rharper eigood: not really, you can have many domUs with vcpus < physicals, such that you can run domUs in parallel, you can also run with vcpus > physical cpus, but there is a performance penalty since you will be switching cpu context more often
16:27eigood riel: running 2.0.4 on them, so none
16:27sleon eigood, what are you using for tracking if services are up or down ?
16:27sleon or behave correctly?
16:27eigood if you have 4 real cpus, 10 domU, with each one having vcpu=2, is it possible to have 4 domU running at once?
16:27sleon snmp?
16:28eigood or does xen only swap the entire domU?
16:28eigood sleon: nagios
16:28sleon eigood, is it free?
16:28eigood we don't have anything magic yet
16:28eigood apt-get install nagios
16:28rharper eigood: xen swaps vcpus within a domU and the entire domain, you have a timer for each domU, and within that slice, xen-unstable schedules each vcpu as a thread
16:28sleon eigood, because it has lots of problems when squid for example crashes due to lack of resources
16:29eigood rharper: I have no idea what that means
16:30riel ok, office social event!
16:30rharper eigood: heh, ok. I guess in short, with xen-unstable, you can overcommit your physical processors, as it allows you to have up to 32 virtual processors in each domain
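
Concretely, the overcommit rharper describes is a single line in the domain config file; a sketch with an invented value:

    # in /etc/xen/myguest, on xen-unstable: ask for 4 virtual cpus,
    # which may exceed the number of physical cpus in the box
    vcpus = 4
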
16:30* riel backs away from the bugs
16:30sleon rharper, coool
16:30rharper riel: you send an oops to the list yet?
16:30sleon rharper, so i can simulate a quad cpu system??! :D
16:30sleon JAJAJAJAJAJAJA
16:30sleon COOOl
16:31sleon :))
16:31riel rharper: not yet, will do afterwards
16:31rharper riel: ok
16:31riel rharper: unless you beat me to it ;)
16:31rharper riel: =)
16:31* rharper notes he has an office social as well
16:31sleon rharper, does it mean i can have a 4 cpu system on a 1 cpu machine?
16:31rharper sleon: yes, at the moment, there is no performance advantage for doing so...
16:31rharper sleon: yes
16:31caker What, if any, is the current solution for preventing a swap-thrashing domain from storming the host? Are there any disk QoS features planned?
16:31sleon rharper, ok but it is cool
16:31aliguori ah, ok, i'm not crazy. i was seeing the same thing
16:32rharper sleon: build unstable with CONFIG_SMP in domU config
16:32rharper aliguori: it seems =)
16:32sleon rharper, sure there is no performance advantages, but it is good for testing parallelising software
16:32aliguori not to mention the fact that once domU oopses, xend eats up 100% of the cpu in dom0
16:32rharper sleon: indeed
16:32rharper aliguori: my xen crashes hard, so no cpu gobbling
16:32sleon rharper, how can i find out if i am using stable or unstable?
16:32aliguori i don't have smp enabled in dom0
16:33rharper you should see that when you boot; try xm dmesg, unstable says Xen 3.0-devel
16:33rharper aliguori: ahh
16:33sleon rharper, 2.0.4
16:34rharper sleon: Xen version 3.0-devel <--- thats in my xm dmesg, you should see something in the 2.X range if you are testing or stable
16:34--- <<-- Method [~Method@stanford.columbia.tresys.com] has quit (Ping timeout: 480 seconds)
16:34rharper sleon: I believe that is stable
16:34sleon rharper, where should i select the CONFIG_SMP feature? in the domU kernel?
16:34sleon rharper, what prevents binary nvidia driver from working properly with xen?
16:34sleon rharper, or do they work with unstable?
16:35@cw sleon: it probably can be made to work if it doesn't
16:35rharper sleon: only with xen-unstable, yes, select CONFIG_SMP in the domU kernel config
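
That is, in the domU kernel .config, roughly the following (the NR_CPUS value is an assumption):

    CONFIG_SMP=y
    CONFIG_NR_CPUS=32   # upper bound on the vcpus the guest kernel will drive
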
16:35sleon cw, i get a kernel panic on agp
16:35--- ---> yuval [~yuval@line104-133.adsl.actcom.co.il] has joined #xen
16:35@cw sleon: with the recent agp fixes?
16:35sleon cw, when i switch agp off and only use mtrr i get errors in the module's internal functions
16:35rharper sleon: not sure, the agp drivers have been troublesome because they execute privileged instructions that xen might not emulate
16:36sleon guys
16:36sleon as i said, even without agp the nvidia binary does not work
16:36sleon i can search the chatlogs for the exact error i get
16:36rharper I didnt think the nvidia card works without an agp driver...
16:36sleon yes it does
16:37* rharper learns new things all the time
16:37sleon there is an option: nvagp, kernel agp, or no agp
16:37sleon i compiled kernel without agp
16:39sleon rharper, i can load the module now
16:39sleon it is loaded :))
16:39rharper =)
16:39sleon but as i said
16:39sleon then when i really start X server with glx
16:39sleon i get problems
16:40rharper might email the list, Ian and the rest might be able to help you out.
16:40sleon i'll try it again to reproduce the error
16:40niv eigood: was that nfs over udp or tcp?
16:40rharper cool
16:40* rharper bails for the social
16:41sleon rharper, what is "social office"?
16:41sleon (EE) NVIDIA(0): Failed to initialize the NVIDIA kernel module!
16:41sleon (EE) NVIDIA(0): *** Aborting ***
16:41sleon (EE) Screen(s) found, but none have a usable configuration.
16:42sleon one moment, i'll try to quit the current X
16:42sleon i am back in a minute
16:42--- <<-- sleon [test@p54A17B26.dip.t-dialin.net] has quit (Remote host closed the connection)
16:42--- <<-- yuval [~yuval@line104-133.adsl.actcom.co.il] has quit (Quit: Leaving)
16:46--- ---> yuval [~yuval@line104-133.adsl.actcom.co.il] has joined #xen
16:47--- <<-- yuval [~yuval@line104-133.adsl.actcom.co.il] has quit (Quit: )
16:47jonmason anyone seeing a build break on xen-unstable (downloaded 20 minutes ago)
16:47jonmason ../memory/libmemory.a(misc_mem.o)(.text+0x3fb): In function `bx_mem_c::init_memory(int)':
16:48jonmason : undefined reference to `xc_get_pfn_list'
16:51jonmason Is anyone else seeing this?
16:57jonmason guess not
16:57jonmason well, you were warned
16:57@cw jonmason: i just built it here w/o problems
16:57jonmason really
16:58jonmason I can't even get to the part where it downloads the linux kernel
16:58@cw gcc -v ?
16:59jonmason gcc version 3.3.5 (Gentoo Linux 3.3.5-r1, ssp-3.3.2-3, pie-8.7.7.1)
16:59jonmason I can run 2.0.5 w/o problems
16:59--- ---> sleon|tuX [test@p54A17B26.dip.t-dialin.net] has joined #xen
16:59sleon|tuX re
16:59@cw dunno, i just build xen and tools in the last minute w/o problems
17:00sleon|tuX so i simply get a blank screen
17:00sleon|tuX when i look at the Xorg log
17:00sleon|tuX there are no problems seen
17:00sleon|tuX it ends with the line "allocated 16 bit framebuffer"
17:00sleon|tuX rharper: ideas?
17:00sleon|tuX rharper: i have framebuffer device off
17:01sleon|tuX in kernel
17:01jonmason sleon|tuX: I think rharper stepped away from his desk
17:01sleon|tuX jonmason: hmmm :((
17:01sleon|tuX jonmason: for long time?
17:01jonmason sleon|tuX: can't be sure, free food in the cafeteria
17:02sleon|tuX hrrhrhr
17:02sleon|tuX jonmason: will he be back in 15 minutes
17:02sleon|tuX ?
17:02@cw jonmason: w/o more information it's hard to say ... check it's a clean clone of the tree
17:04sleon|tuX is 3.x.x xen stable enough for running on a home desktop?
17:05eigood 5there is no 3.x.x xen(yet)
17:05eigood s/^5//
17:05@cw -unstable calls itself 3
17:06eigood is there any interest in a frontend/backend driver for vfs stuff?
17:06eigood yes, I know you could use nfs, samba, coda, etc, but just for kicks, fun, and giggles
17:07eigood a smart implementation could use page sharing/borrowing for speed
17:07@cw eigood: what do you mean? for shared fs access?
17:07--- <<-- matta-lt [~matta@69.93.28.254] has quit (Quit: If you are going to walk on thin ice, you may as well dance.)
17:07--- ---> anthill333 [~larsr@palwebproxy2.core.hp.com] has joined #xen
17:07eigood yes
17:08@cw nfs works for most people
17:08@cw beyond that there is lustre, gfs, ocfs2, cxfs, etc. if you wanted to put some effort into it
17:09sleon|tuX pnfs
17:09sleon|tuX :))
17:09eigood lustre doesn't really support HA metadata nodes
17:09eigood gfs looks like a pain to configure(haven't tried, but the docs don't make it sound simple)
17:09eigood never heard of ocfs2 nor cxfs
17:10@cw eigood: why isn't nfs good enough for you?
17:10niv eigood: talk to Mark W about XenFS
17:11eigood cw: it'd just be an interesting experiment
17:11eigood well, if dom0 was importing a large FS, reexporting that with nfs would be poor performance; all the data would be copied
17:11eigood a smart XenFS would just do page sharing, which would be fast
17:11@cw sounds more like some gentoo-user's desire to muck about and waste time to me :)
17:11eigood cw: doing NIH can be a good way to learn a system
17:12@cw sure, a little research probably wouldn't hurt either though
17:15--- User: *** mon is now known as demon
17:16* eigood reads about ocfs2
17:16eigood not many docs
17:21eigood ocfs2 was announced/released, then it looks like development disappeared
17:22eigood is cxfs free?
17:23@cw nope
17:23@cw cxfs you get from sgi for $
17:25eigood bah
17:25eigood ocfs2 at least is gpl
17:25aliguori nfs is kinda crappy as far as network file systems go
17:25aliguori it's fast as heck but nfs < v3 has no security
17:25sleon|tuX loool
17:25aliguori plus, the posix semantics kill you.
17:25@cw jonmason: ok, i did a clean pull & build w/o problems here including letting it download, etc.
17:25aliguori if we had a sharable fs in xen, it would be very nice to make it as posix-neutral as possible
17:26* eigood hates bitkeeper
17:26eigood my free software projects don't let me work with anything that uses bk
17:26@cw i hear they are accepting patches, im sure any contributed fs work wouldn't go amiss
17:26@cw eigood: there is a free bk-pull client thingy
17:26aliguori eigood: there's an opensource bk client now
17:28eigood I heard about that; didn't actually appear to really be open
17:28aliguori turns out it's a very simple protocol
17:28eigood I'm a dpkg developer; dpkg contains dpkg-source; this could be considered a revision control system; this matches the definition in the bk license
17:29aliguori it took me about an hour to write my own client
17:30aliguori but yeah, the bk licensing is a bit evil
17:34knewt eigood: it is actually. the no whining license was just a joke. lm suggested bsd as the real one. i'm using it now which is probably a good thing cause i shouldn't really use the full thing
17:37--- <<-- homebaum [~michael@wbar1.sea1-4-5-031-104.sea1.dsl-verizon.net] has quit (Quit: Client exiting)
17:41--- <<-- DEac- [~deac@xdsl-213-196-203-102.netcologne.de] has quit (Ping timeout: 480 seconds)
17:45--- User: *** riel is now known as unriel
17:46unriel rharper: aim7 seems to run well on the xenU domain
17:46unriel rharper: (single cpu xenU domain though ...)
17:46* unriel disappears
17:52--- Channel: mode/#xen [+oo unriel surriel] by cw
17:53--- ---> DEac- [~deac@xdsl-81-173-160-118.netcologne.de] has joined #xen
18:17--- <--- jeroney [~jeroney@pixpat.austin.ibm.com] has left #xen (Leaving)
18:18jonmason cw: ya, it is clean. might be my version of gcc
18:19@cw jonmason: gentoo right?
18:19jonmason yup
18:19jonmason I'm a bit of a gentoo whore
18:19@cw well, unless they have weird patches it doesn't (looking) at the version seem that weird
18:20@cw but for whatever reason(s) gentoo people get more than their fair-share of problems
18:20jonmason of course
18:20jonmason that's the fun of running gentoo
18:21sleon|tuX what is aim7??
18:21sleon|tuX unriel:
18:24@cw aim7 is a benchmark/stress-test that's fairly popular for beating on kernels and machines
18:24--- <<-- hollis [~hollis@pixpat.austin.ibm.com] has quit (Quit: leaving)
18:25sleon|tuX cw: like spec2000?
18:25--- User: *** surriel is now known as riel
18:25@cw sleon|tuX: not really, it's more useful to beat the crap out of a system and see if it breaks or if there are regressions relative to kernel changes
18:25@cw and it's open
18:26sleon|tuX cw: thank you good to know
18:26@riel very importantly, it's ridiculously easy to start
18:27sleon|tuX can it be used to dos systems?
18:27@riel and it can search for the "breaking point" all by itself
18:27@cw true... and change around, i used to chase SHub IO bugs when i had a few hundred disks to play with
18:27@cw (and 64 cpus)
18:28@riel I just set it to go to crossover mode
18:28@riel if it survives that far, the kernel is probably robust ;)
18:28@cw riel: has anyone beaten xen on an 8x machine or thereabouts?
18:29@riel cw: dunno - but I certainly haven't
18:29aliguori 8x? gees, i don't think so
18:30aliguori xen think pae's going to prevent that on must hardware
18:30aliguori ah, s/xen/i/
18:30@cw eh? what?
18:30aliguori and must=>most.. gees, i'm glad it's friday
18:30aliguori most 8-way machines have > 4gb of memory
18:30eigood muwahahaha
18:30* eigood sends 4-1 mail
18:32--- <<-- rharper [~rharper@pixpat.austin.ibm.com] has quit (Quit: Leaving)
18:37--- <--- niv [~niv@bi01p1.co.us.ibm.com] has left #xen ()
18:43knewt pleeeeeese say that the latest -dev post is an AFJ *g*
18:44sleon|tuX does unstable crash only one time per day?
18:44eigood knewt: who, me?
18:44knewt well, /was/ the latest post. not any more
18:44eigood mips2java really *does* exist
18:46--- <<-- shuri [sjnesjd@dsl.speedline209.226.electronicbox.net] has quit (Quit: )
19:18--- <<-- sleon|tuX [test@p54A17B26.dip.t-dialin.net] has quit (Remote host closed the connection)
19:37Mark eigood: my current project is a VFS level split driver
19:37Mark sounds rather like what you're proposing
19:37Mark i should probably stick some information up on the web somewhere
19:39--- ---> shaleh [~shaleh@dsl017-044-048.sfo4.dsl.speakeasy.net] has joined #xen
19:42--- <<-- shaleh [~shaleh@dsl017-044-048.sfo4.dsl.speakeasy.net] has quit (Quit: )
20:12--- ---> Chang [~coffmant@cpe-24-93-161-148.neo.res.rr.com] has joined #xen
20:12--- ---> niv [~Nivedita_@c-67-171-167-143.hsd1.or.comcast.net] has joined #xen
20:45caker riel: no attachment on your last email, btw
20:46@riel that's correct
20:46@riel attachments are usually annoying
20:46caker ahh, read it incorrectly -- nm :)
20:48--- ---> griffinn [~griffinn@pcd584162.netvigator.com] has joined #xen
20:48mikegrb caker: you are fired
20:48caker mikegrb: who writes the checks?
20:49mikegrb oh
20:49mikegrb never mind then boss, as you were
20:49@riel luckily it only seems to happen when vcpus>cpus
20:50@riel so I can ship FC4 test2 xen this way, if there's no low risk fix
20:51* cw wonders what's the largest initramfs image he can get away with
20:51--- <<-- LaidBack_01 [~jax@69.146.20.246] has quit (Quit: Leaving)
20:51caker When running with vcpus > 1, the dom still runs on only one real cpu -- is that correct?
20:51eigood cw: default limit(compile time option in kernel) is 4meg
20:59--- <--- griffinn [~griffinn@pcd584162.netvigator.com] has left #xen ()
21:04--- <--- Chang [~coffmant@cpe-24-93-161-148.neo.res.rr.com] has left #xen ()
21:11@cw eigood: for initramfs? it's also more about the bootloader than the kernel
21:11@riel caker: no
21:11@riel caker: I think the domain can get multiple CPUs
21:12@riel if you allow xen to spread the virtual cpus around
21:13eigood oh, initramfs, no idea
21:13eigood what I said was for initrd
21:47--- ---> hollis [~hollis@user-0vvde2g.cable.mindspring.com] has joined #xen
22:08--- ---> Davidh [~chatzilla@mail.messagegate.com] has joined #xen
22:10--- ---> niv_ [~Nivedita_@c-67-171-167-143.hsd1.or.comcast.net] has joined #xen
22:12--- <--- Davidh [~chatzilla@mail.messagegate.com] has left #xen ()
22:13--- <<-- niv [~Nivedita_@c-67-171-167-143.hsd1.or.comcast.net] has quit (Read error: Operation timed out)
22:14--- <<-- plars [~plars@pixpat.austin.ibm.com] has quit (Ping timeout: 480 seconds)
22:23--- ---> plars [~plars@pixpat.austin.ibm.com] has joined #xen
22:28--- <<-- Mark [~Mark@maw48.kings.cam.ac.uk] has quit (Remote host closed the connection)
22:49--- <<-- hollis [~hollis@user-0vvde2g.cable.mindspring.com] has quit (Quit: leaving)
22:59--- ---> hollis [~hollis@user-0vvde2g.cable.mindspring.com] has joined #xen
23:24--- ---> Chang [~coffmant@cpe-24-93-161-148.neo.res.rr.com] has joined #xen
23:59--- <--- VS_ChanLog [~stats@ns.theshore.net] has left #xen (Rotating Logs)
23:59--- ---> VS_ChanLog [~stats@ns.theshore.net] has joined #xen
---Logclosed Sat Apr 02 00:00:08 2005