--- | Log | opened Fri Apr 22 00:00:47 2022 |
02:18 | -!- | jgross_ is now known as jgross |
03:25 | <mjt> | andyhhp: so on arm, qemu-system-i386 is not used for actual domUs, only on x86, right? And generally, the pre-spawned qemu is actually not used except in the cases where one has qcow domU images (which is, I guess, rare)?
04:04 | <gwd> | mjt: To be precise: 1) it's not (yet) used for emulation (although this may change) |
04:04 | <gwd> | 2) It's only spawned as a disk backend when the backendtype is set to "qdisk" |
04:05 | <gwd> | This is the default when using qcow or vhd I believe. But you could manually set it to qdisk even for other types. |
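For reference, the backendtype selection gwd describes maps onto an xl disk specification roughly like this (paths and values are illustrative, not taken from the log):

```
# Hypothetical xl disk line: qcow2 defaults to the qemu-backed "qdisk"
disk = [ 'format=qcow2, vdev=xvda, target=/srv/xen/guest.qcow2' ]

# Alternative: force the qdisk backend even for a raw image
# disk = [ 'format=raw, vdev=xvda, backendtype=qdisk, target=/srv/xen/guest.img' ]
```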
04:08 | <mjt> | I'm not a xen guy. I know next to nothing about xen - I come here "from" the qemu side. In debian we found it is a bit difficult to have xen depend on the full-blown qemu-system-i386, which pulls in the whole GUI stack, and it is difficult for regular qemu to pull in the whole of xen
04:09 | <mjt> | so I'm trying to understand what's needed by xen on the qemu side |
04:10 | <mjt> | for 2) I see a different picture: qemu-system-i386 is spawned at boot whenever the binary exists, by xen-utils-4.16's initscript
04:10 | <mjt> | if [ -x $QEMU ]; then $QEMU $QEMU_ARGS; fi |
04:11 | <mjt> | qemu-system-i386 -xen-domid 0 -xen-attach -name dom0 -nographic -M xenpv -daemonize -monitor none -serial none -parallel none |
04:11 | <gwd> | OK -- I *think* that's to help dom0 mount guest disks which may be qcow2 / vhd encoded
04:12 | <gwd> | royger may be able to give a more confident answer ^ |
04:12 | <mjt> | that's what I thought too from andyhhp's answer
04:12 | <royger> | yes, that's for pygrub. If the disk is not directly accessible dom0 will local-attach it as a xvd* in order to run pygrub against it
04:13 | <royger> | ie: not directly accessible == not in raw format |
04:13 | <mjt> | though it was a surprise for me to learn xen can work with qcow files :)
04:13 | <royger> | it should handle any format that qemu can handle |
04:13 | <royger> | qcow, vhd, qcow2 |
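The raw-vs-not-raw distinction royger draws can be sketched by sniffing image magic bytes. This is purely illustrative -- libxl actually goes by the format declared in the disk configuration, not by sniffing:

```shell
#!/bin/sh
# Sketch: guess a disk image's format from its magic bytes, roughly the
# "not in raw format" == "needs qemu" distinction described above.
image_format() {
    case "$(head -c 8 "$1")" in
        QFI*)      echo qcow ;;  # qcow/qcow2 both start with "QFI\xfb"
        conectix*) echo vhd  ;;  # dynamic VHD carries its footer up front
        *)         echo raw  ;;  # no known magic: treat as raw
    esac
}

# Demo on synthetic files:
printf 'QFI\373'    > /tmp/demo.qcow2
printf 'plain data' > /tmp/demo.raw
image_format /tmp/demo.qcow2   # -> qcow
image_format /tmp/demo.raw     # -> raw
```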
04:14 | <mjt> | can't it use something like qemu-nbd for that? just curious, I understand it is the way it is for quite some time |
04:14 | <gwd> | We heavily depend on QEMU, no reason not to use functionality that's there. :-)
04:14 | <mjt> | heh |
04:14 | <mjt> | now *that* is interesting |
04:14 | <mjt> | so far I'd heard xen depends on qemu less and less, just for some rare corner cases
04:15 | <mjt> | that's why f.e. debian xen package does not depend on qemu, but only recommends it |
04:15 | <gwd> | I mean historically we have depended on QEMU. |
04:16 | <royger> | we have new modes (like PVH) that don't depend on QEMU emulated devices, but use QEMU as a backend for some PV devices
04:16 | <royger> | in this case, disk devices that use non-raw formats |
04:16 | <mjt> | now I wonder. Should I build qemu-system-i386 for arm, or should it be qemu-system-arm instead? |
04:17 | <mjt> | (for block devices it doesn't matter) |
04:17 | <royger> | AFAICT it doesn't matter as long as it has the Xen PV backends. none of the emulation there will be used on Arm |
04:18 | <gwd> | As long as the toolstack is able to start up the binary appropriately |
04:18 | <mjt> | I'm referring to gwd's first answer here, the 1) it's not (yet) used for emulation (although this may change)
04:18 | <gwd> | (i.e., it knows the path / binary name / arguments) |
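The path / binary / arguments knob gwd mentions is exposed in the guest configuration; the binary path below is an assumption chosen for illustration:

```
# Hypothetical guest config: point the toolstack at a specific qemu
# binary (path illustrative; it still needs the Xen PV backends built in)
device_model_version  = "qemu-xen"
device_model_override = "/usr/local/bin/qemu-system-aarch64"
```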
04:19 | <gwd> | I don't work on the ARM side, but my understanding was that they were working on getting QEMU emulation on ARM working. It may be a few releases down the line.
04:19 | <gwd> | (And of course I may have misunderstood / the plan may have changed.)
04:19 | <mjt> | ok. We'll be able to adapt too when this happens
04:20 | <mjt> | when xen encounters a non-raw disk image, does it just fail if qemu isn't running? where in the code does that happen? I'd love to include something like a "please install qemu-system-xen for this to work" message there
04:20 | <mjt> | (and it'd be interesting to know how often people use qemu in this context) |
04:22 | <mjt> | Please excuse me for these stupid questions. I've never run xen before :))
04:22 | <gwd> | No, they're not stupid at all -- unfortunately there's a lot of complexity which we do our best to hide, but it can't help poking through the wallpaper pretty quickly
04:23 | <gwd> | PV / PVH guests (and ARM could be considered the latter) don't have a built-in BIOS |
04:23 | <mjt> | heh. qemu can't build qemu-system-i386 without emulation (tcg) AND without native kvm support. On arm, neither of these is useful :)
04:24 | <gwd> | So the question is how the kernel gets loaded. |
04:25 | <gwd> | You can use pvgrub, which runs inside the guest, but that's more complicated to build (and on x86 you have to know ahead of time whether you're going to boot a 32-bit or 64-bit kernel) |
04:25 | <mjt> | in debian we have: Suggests: qemu-utils [amd64], seabios [amd64], ovmf |
04:25 | <mjt> | which is *suggests*, which is even lower than recommends |
04:25 | <gwd> | Right, seabios has been the default for HVM guests |
04:26 | <gwd> | OVMF is another option, but I don't think it has feature parity yet. |
04:26 | <mjt> | (and recommends: grub-xen-host [amd64]) |
04:26 | <gwd> | Oh, maybe grub-xen-host is pvgrub? |
04:27 | <gwd> | The admin can manually specify a kernel / initrd, but that's a pain, because you can't just "apt-get update" from within the guest and get a new kernel |
04:27 | <mjt> | grub-xen-host: This package arranges for GRUB binary images which can be used to boot a Xen guest (i.e. PV-GRUB) to be present in the control domain filesystem. |
04:27 | <gwd> | So pygrub, which runs in dom0 and fishes the kernel out of the guest image, is the simplest thing to do |
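The two approaches gwd contrasts look roughly like this in a guest config (paths hypothetical):

```
# Hypothetical guest config snippets for the two kernel-loading options:

# a) pygrub: dom0 extracts the kernel from the guest's own disk image
bootloader = "pygrub"

# b) direct boot: dom0-side paths the admin must refresh on every
#    guest kernel update (the pain point mentioned above)
# kernel  = "/srv/xen/guest/vmlinuz"
# ramdisk = "/srv/xen/guest/initrd.img"
# extra   = "root=/dev/xvda1"
```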
04:27 | <gwd> | grub-xen-host> OK, good |
04:27 | <jgross> | pygrub is deprecated due to security reasons |
04:28 | <gwd> | security> Indeed it is; but it's still the most convenient. |
04:29 | <gwd> | jgross: Do you know what the default PVH boot path is now? If OVMF is sufficient maybe we should try to make that the default. |
04:29 | <jgross> | Then there is pvgrub, based on legacy grub and Mini-OS, which is deprecated, too.
04:30 | <jgross> | And grub-pv, which is upstream grub2 configured for Xen PV guest support |
04:30 | <gwd> | Actually, grub-pv in PV should be able to handle both 32 and 64-bit kernels, right? |
04:30 | <jgross> | gwd: the most convenient way is to use grub-pvh |
04:31 | <jgross> | No, grub-pv exists in two flavors, one for 32- and one for 64-bit mode |
04:31 | <jgross> | Same as pvgrub |
04:31 | <gwd> | Sorry, mis-typed; I meant grub in PVH. |
04:31 | <jgross> | Yes, grub-pvh handles both kernel flavors |
04:32 | <gwd> | OK, so in short, the Debian dependency recommendations are correct |
04:32 | <gwd> | People should be encouraged to use PVH guests (on ARM or x86) and use grub-pvh. |
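A PVH guest using grub-pvh, as recommended here, might be configured like this; the grub binary path follows Debian's grub-xen-host layout and is an assumption to be verified locally:

```
# Hypothetical PVH guest config booted via grub-pvh. The binary path is
# an assumption based on Debian's grub-xen-host package; verify locally.
type   = "pvh"
kernel = "/usr/lib/grub-xen/grub-i386-xen_pvh.bin"
memory = 1024
disk   = [ 'format=raw, vdev=xvda, target=/srv/xen/guest.img' ]
```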
04:33 | <gwd> | If you want to use qcow* / vhd files, you need some form of QEMU. |
04:33 | <jgross> | Same for pvusb |
04:33 | <gwd> | If you try to create a guest w/ a qcow2 disk on a system w/o QEMU, the toolstack will fail; I'm not sure how useful the error message will be. |
04:34 | <mjt> | i'm interested to find it out :) |
04:34 | <mjt> | Knorrie: can you try it on your testground? :) |
04:36 | <jgross> | I think it will tell you the device model couldn't be spawned. |
04:37 | <mjt> | we're moving things around a bit in debian and I don't want to let people face a failure without any hint about how to resolve it
04:37 | <gwd> | ...whereas, ideally it would say, "Please install qemu-system-i386" or something.
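A sketch of the kind of early check mjt is asking for, in the spirit of the initscript fragment quoted earlier; the message text and the "qemu-system-xen" package name are suggestions, not existing toolstack behaviour:

```shell
#!/bin/sh
# Sketch: fail early with an actionable hint when a non-raw disk is
# configured but no qemu binary is present. The message wording and
# package name are suggestions, not an existing toolstack message.
QEMU=/usr/lib/xen/bin/qemu-system-i386

check_qdisk_backend() {
    # raw disks are served without qemu, so nothing to check
    case "$1" in
        raw) return 0 ;;
    esac
    # anything else (qcow, qcow2, vhd) needs a qemu-based qdisk backend
    if [ ! -x "$QEMU" ]; then
        echo "disk format '$1' needs a qemu backend:" \
             "please install qemu-system-xen" >&2
        return 1
    fi
}
```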
04:38 | <jgross> | I think it would be rather beneficial to split backend and platform emulation for Xen. So one qemu for HVM guests doing the platform emulation, and another one acting as the backend. Basically a similar approach to an HVM domain with ioemu-stubdom.
04:39 | <jgross> | PVH and PV guests would require only the backend variant, and this could be arch agnostic and be used on Arm, too |
04:43 | <mjt> | hmm. The docs, everywhere, refer to qemu(1) manpage for details.. but there's no such manpage.. :( |
04:44 | <mjt> | I guess it was done a long time ago, when qemu provided system emulation only. This thing (the qemu(1) manpage) hasn't existed since qemu started providing user emulation too
04:44 | <gwd> | The Xen docs? |
04:44 | <mjt> | yes |
04:44 | <jgross> | mjt: on openSUSE the man page is there. |
04:45 | <mjt> | it is qemu-system and qemu-user, two *entirely* different beasts. I'd rather not provide qemu(1) because of this confusion |
04:45 | <mjt> | but that's a minor thing for sure
04:46 | <gwd> | In a number of cases we just pass guest config strings directly to QEMU; so using QEMU's docs is the best way to keep things accurate
04:46 | <gwd> | If you can recommend alternate wording, I'm sure it's an easy change to make. :-) |
04:46 | <mjt> | that's entirely okay. I'm commenting about *which* manpage to refer to :) |
04:47 | <gwd> | If Debian is renaming the QEMU man pages, they should probably be carrying a patch to maintain appropriate references as well. :-)
04:48 | <gwd> | Although it looks like 4 of the 5 references are about a single topic (keymaps) |
04:48 | <jgross> | On openSUSE the qemu man-page seems to refer to qemu-system |
04:49 | <mjt> | that's still a very minor thing. Yes it turns out it is me who renames qemu.1 in debian, and the comment there says to bring this upstream :) |
04:51 | <mjt> | I wonder how many years this comment has been there.. a bit less than 10, it looks like :)
04:51 | <mjt> | so.. n/m :) |
04:52 | <gwd> | Haha -- there's a comment in the Xen code I wrote in 2006 that says, "When we implement shadow superpages, we'll have to..." |
04:52 | <gwd> | We still haven't implemented shadow superpages...
04:53 | <mjt> | I bet the name for the manpage is a bit easier ;)) |
06:11 | <Knorrie> | in Debian, we have all the PV/PVH grub conveniently packaged, so I really am looking to try to ship the next Debian release without pygrub (but it will need some work, mostly to provide proper documentation and to reach out to users to make them change something :) )
06:47 | -!- | inisider [~inisider@176.120.99.237] has joined #xen |
06:47 | -!- | inisider is "realname" on #xen #kernelnewbies |
06:52 | <inisider> | Hello. I would like to understand how I can debug xenmem_reservation_increase(). The problem is that it returns a number of pages not equal to the number of pages that were passed as the function argument.
07:01 | <mjt> | ok. qemu-system-i386 can't be built on arm without tcg, it says "no accelerator available" |
07:01 | <mjt> | so I guess it should be qemu-system-arm not qemu-system-i386 |
09:12 | -!- | Tonux_ [~Tonux@185.195.232.155] has joined #xen |
09:12 | -!- | Tonux_ is "Tonux" on #xen #kernelnewbies #oftc #fdroid |
09:12 | -!- | Tonux [~Tonux@0002ad88.user.oftc.net] has quit [Ping timeout: 480 seconds] |
09:12 | <mjt> | ..and it turned out qemu cannot be built with xen support on non-x86 to start with, so it does not understand the -xen-voodoo options on arm, so it can't be started at boot to handle qdisks (I assume that is the right term for qemu-based disks in xen)
09:21 | <andyhhp> | 23:21 < andyhhp> and it's qemu-system-i386 because of how intertwined the Xen support is with i386 in Qemu |
09:21 | <andyhhp> | yes, it's a bug/misfeature in qemu |
09:22 | <andyhhp> | which noone has addressed |
09:23 | <andyhhp> | it would be fantastic if someone could find the time to fix this |
09:47 | -!- | ccw [~ccw@0002a6fe.user.oftc.net] has quit [Quit: ZNC - https://znc.in] |
09:48 | -!- | ccw [~ccw@97-122-67-169.hlrn.qwest.net] has joined #xen |
09:48 | -!- | ccw is "Cheyenne Wills" on #xen |
09:59 | -!- | zkrx [~slimshady@adsl-89-217-230-95.adslplus.ch] has quit [Quit: zkrx] |
12:04 | -!- | XCPngXOTeam [~XCPngXOTe@2a01:240:ab08::2] has quit [Remote host closed the connection] |
12:04 | -!- | XCPngXOTeam [~XCPngXOTe@2a01:240:ab08::2] has joined #xen |
12:04 | -!- | XCPngXOTeam is "XCPngXOTeam" on #xen #xen-orchestra #xsxoteams #xcp-ng #xensummit #xendevel |
14:56 | -!- | inisider [~inisider@176.120.99.237] has quit [Remote host closed the connection] |
15:12 | -!- | overholts [~overholts@192.18.128.79] has quit [Quit: overholts] |
15:13 | -!- | overholts [~overholts@192.18.128.79] has joined #xen |
15:13 | -!- | overholts is "overholts" on #xen #xcp-ng |
17:03 | -!- | Tonux_ [~Tonux@185.195.232.155] has quit [Remote host closed the connection] |
17:04 | -!- | Tonux [~Tonux@185.195.232.155] has joined #xen |
17:04 | -!- | Tonux is "Tonux" on #xen #kernelnewbies #oftc #fdroid |
22:21 | -!- | jgross_ [~juergen_g@2a01:41e1:2d03:bf00:e4e5:bb7d:51d3:f10d] has joined #xen |
22:21 | -!- | jgross_ is "realname" on #xen |
22:27 | -!- | jgross [~juergen_g@2a01:41e1:2cd2:d500:a6ef:e08b:dce2:ee24] has quit [Ping timeout: 480 seconds] |
--- | Log | closed Sat Apr 23 00:00:49 2022 |