#xen IRC Logs for 2022-02-21

---Logopened Mon Feb 21 00:00:22 2022
00:11-!-jgross_ is now known as jgross
05:32<Knorrie>Hi, I'm wondering why we (in Debian) recommend qemu-system-x86 on arm64/armhf... What is a practical use case for using Xen+QEMU on ARM?
05:34<royger>Knorrie: AFAICT it's for the PV backends implemented in QEMU
05:39<stsquad>Knorrie: it's a throwback to the different way QEMU interacts with Xen compared to other hypervisors - we don't use the various arch CPU loops, so it doesn't really matter what the binary's target arch is. I have posted patches in the past to build a Xen-enabled qemu-system-aarch64, but currently there is no practical difference in the result
05:40*stsquad still has a desire to build a -M xen-virt model for virtio though
05:41<Knorrie>Ok, I'm not a heavy qemu user (only some windows HVM in one place with xen). Can you give a specific example of a use case and what kind of device I'd want in my arm domU where qemu helps? (just for my understanding)
05:45<stsquad>Knorrie: TBH I'm not even sure QEMU gets invoked for a standard ARM Xen guest. But as it's started in daemon mode by xencommons, you need to have it there so the tooling doesn't get confused. Possibly you could use a 16550 UART (but my kernels are using hvc0 for consoles)
05:46<royger>try using a qdisk backend for example, you should get a QEMU instance then for that domU
05:46<Knorrie>qemu is optional, I don't have it installed on all my servers
05:47<Knorrie>royger: aha, thanks for the example
05:47<royger>i.e.: backendtype=qdisk in the disk configuration line
05:47<royger>you will get that by default if using non-raw formats (qcow, vhd, ...)
05:50<gwd>Knorrie: As royger said, the most obvious reason to use QEMU would be if you wanted to use a disk format that QEMU supported; qcow2 being the most obvious one.
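A minimal sketch of what royger and gwd describe (the vdev, image path, and filename here are made up; key=value disk syntax per xl-disk-configuration(5)):

    # hypothetical xl guest config line: force the QEMU (qdisk) backend for a qcow2 image
    disk = [ 'format=qcow2, vdev=xvda, access=rw, backendtype=qdisk, target=/var/lib/xen/images/mydomu.qcow2' ]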
06:53<szy_mat>however, qemu running as root in dom0 is a pretty huge potential security vulnerability as compared to qemu-less domUs
07:09<royger>szy_mat: you also have depriv qemu, which runs the process as a regular user used only by that domain
07:11<royger>running the backends inside the dom0 kernel is even worse, as a bug can lead to the dom0 kernel crashing, or worse
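A sketch of the deprivileged setup royger mentions, assuming a Xen version that supports the dm_restrict option documented in xl.cfg(5):

    # hypothetical guest config snippet: run the device model (QEMU) deprivileged,
    # as a regular user dedicated to this domain, rather than as root in dom0
    type = "hvm"
    dm_restrict = 1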
11:39-!-ChmEarl [~prymar56@098-147-150-167.res.spectrum.com] has joined #xen
11:39-!-ChmEarl is "Mark Pryor" on #xen ##xen-packaging #mock #packaging #virt
15:42-!-neilthereildeil [~oftc-webi@pool-71-191-164-234.washdc.fios.verizon.net] has joined #xen
15:42-!-neilthereildeil is "OFTC WebIRC Client" on #xen
15:42<neilthereildeil>hey guys
15:43<neilthereildeil>im debugging an issue on Xen
15:43<neilthereildeil>i have a bunch of VMs that are stuck running at 100% as reported by xentop, but the VMs are not responsive
15:44<neilthereildeil>any ideas how to debug this?
15:44<gwd>Not responsive to pings, or not responsive on the console?
15:45<neilthereildeil>I don't see anything on the VNC console
15:45<neilthereildeil>it's black
15:46<neilthereildeil>even when I click into the VNC console
16:07<gwd>I mean, I can think of things that I would do, but they all draw on my 25 years' experience doing operating system development...
16:09<gwd>Actually the first thing to do is look at `xl dmesg` and see if there's anything obvious in there
16:09<gwd>or suspicious
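For example (a trivial sketch; the grep pattern is only a guess at what "suspicious" might look like):

    # dump the Xen hypervisor console ring and skim it for problems
    xl dmesg | grep -iE 'error|fault|warn'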
16:11<neilthereildeil>yea im pretty familiar with OS internals, so i could handle somethign comple
16:11<neilthereildeil>complex*
16:11<neilthereildeil>i dont see anything special in xl dmesg
16:12<neilthereildeil>also, this is a very, very busy server
16:12<neilthereildeil>I have a lot of domUs running
16:34<gwd>So you can attach a debugger, but I've never used that.
16:34<gwd>My go-to tool when I was doing more development was xentrace paired with xenalyze
16:35<gwd>There's also xenctx and xen-hvmctx which will tell you things like the IP and the stack
16:35<gwd>If you take a couple of "samples" you might be able to figure out where it's spinning
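A rough sketch of that workflow (flags are from memory, so check xentrace(8), xenalyze, and xenctx on your version; the domid 5, vcpu 0, and tool path are made up):

    # capture trace data for a few seconds, then summarize it from Xen's perspective
    xentrace -e all /tmp/trace.bin        # stop with Ctrl-C after a few seconds
    xenalyze --summary /tmp/trace.bin

    # take a few "samples" of domU 5, vcpu 0, to see where it's spinning
    for i in 1 2 3; do /usr/lib/xen/bin/xenctx 5 0; sleep 1; done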
16:39<neilthereildeil>ok, also if QEMU is stuck in a livelock or something, the DomU wouldn't be charged for that CPU time, right?
16:39<neilthereildeil>what would you recommend attaching a debugger to? the hypervisor, or the DomU kernel?
16:41<gwd>I would start with the domU kernel if that's what you're familiar with.
16:42<gwd>If Xen is still ticking, then xentrace / xenalyze would probably be your best bet to figure out what's happening from Xen's perspective.
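One way to attach a debugger to a domU kernel is the gdbsx stub that ships with the Xen tools (a sketch under assumptions: domid 5, port 9999, a 64-bit guest, and the vmlinux path are all made up):

    # dom0: attach gdbsx to 64-bit domain 5 and listen for gdb on port 9999
    gdbsx -a 5 64 9999

    # elsewhere: point gdb, loaded with the domU kernel's symbols, at that port
    gdb ./vmlinux
    (gdb) target remote dom0-host:9999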
16:57-!-neilthereildeil [~oftc-webi@pool-71-191-164-234.washdc.fios.verizon.net] has quit [Quit: Page closed]
18:03-!-jgross_ [~juergen_g@2a01:41e1:20c6:2c00:327:fdf9:39c5:d7f3] has joined #xen
18:03-!-jgross_ is "realname" on #xen
18:10-!-jgross [~juergen_g@2a01:41e1:20b0:700:4bf9:3d5b:7909:219f] has quit [Ping timeout: 480 seconds]
18:29-!-jgross [~juergen_g@2a01:41e1:20f0:900:2015:951a:ca1a:6ab6] has joined #xen
18:29-!-jgross is "realname" on #xen
18:33-!-jgross_ [~juergen_g@2a01:41e1:20c6:2c00:327:fdf9:39c5:d7f3] has quit [Ping timeout: 480 seconds]
22:03-!-jos2 [~jos3@host-091-097-186-159.ewe-ip-backbone.de] has joined #xen
22:03-!-jos2 is "jos4" on #xen #virt #Qubes_OS #kvm #oftc #fdroid #retroshare
22:10-!-jos1 [~jos3@dyndsl-091-248-053-047.ewe-ip-backbone.de] has quit [Ping timeout: 480 seconds]
22:11-!-jgross_ [~juergen_g@2a01:41e1:2118:d000:3441:d34e:795b:2695] has joined #xen
22:11-!-jgross_ is "realname" on #xen
22:17-!-jgross [~juergen_g@2a01:41e1:20f0:900:2015:951a:ca1a:6ab6] has quit [Ping timeout: 480 seconds]
22:47-!-rcvalle [~rcvalle@c-73-15-52-53.hsd1.ca.comcast.net] has joined #xen
22:47-!-rcvalle is "rcvalle" on #xen #vbox #kvm @#debian-dev @#xubuntu-dev #xubuntu #debian-xfce #pax #mm #llvm #kernelnewbies
22:57-!-rcvalle [~rcvalle@c-73-15-52-53.hsd1.ca.comcast.net] has quit []
22:57-!-rcvalle [~rcvalle@c-73-15-52-53.hsd1.ca.comcast.net] has joined #xen
22:57-!-rcvalle is "rcvalle" on #xen #vbox #pax #mm #llvm #kvm #kernelnewbies #debian-xfce
---Logclosed Tue Feb 22 00:00:23 2022