

HowTo convert HVM to Paravirt/manual P2V (RHEL5/SLES10SP1)

Started by Guest , 31 December 2007 - 02:05 PM
3 replies to this topic

Guest

Posted 31 December 2007 - 02:05 PM

After a few searches on the KB, the forum, and the web, I realized there is no documentation on how to convert a Linux system running in HVM mode to paravirtual mode. So here are my notes on what worked for me. I hope they will be helpful to anyone looking to do the same; if anyone has more suggestions, please add comments.

This howto was written for XenServer 4.0.1 and works for RHEL5 and SLES10SP1. I haven't tried it on other distros, but it should work (with a few conditions met).

In order to make paravirtualization work on a Linux distro, two things need to be done: first, we need to install a kernel on the guest that supports Xen; second, we need to change some parameters on the host system to let it know that we want the guest to boot in a paravirt environment. The following procedure can be applied to a fresh installation of an HVM system, but it can also be used to manually P2V an already working system. To install the OS in HVM mode you will need to choose "other media", which allows the XenServer to install the system from a CD-ROM.

Installing the kernel:
- As a condition for paravirtualization to work, a kernel that supports the Xen hypervisor needs to be installed on the guest. Both SLES10SP1 and RHEL5 ship Xen-enabled kernels as part of their virtualization packages, but when we install the SLES i386 version (as opposed to x86_64) we'll find more than one Xen kernel. On a 32-bit guest we specifically need to install the Xen kernel whose name ends with "pae" (if you are installing an x86_64 version there is only one Xen kernel, and that is the one we need). PAE, which stands for Physical Address Extension, basically adds the support needed to run a regular 32-bit guest OS under the XenServer 4.0.1 64-bit hypervisor (for more info see http://en.wikipedia.org/wiki/Physical_Address_Extension). RHEL has only one Xen kernel, and that kernel is fine.
It is important to make a small note to ourselves of the kernel's name and where it is installed. For example, on RHEL the boot directory is a separate partition from root, so the reference to the kernel will be /vmlinuz-<version>xen, while on SLES /boot lives on the root partition, so the reference will be /boot/vmlinuz-<version>xen. The best way to verify this is by checking grub's configuration file; we will need the name and the full path of both the kernel and the initrd later.
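A small sketch of that verification step, assuming grub's config lives at /boot/grub/menu.lst or /boot/grub/grub.conf (the kernel versions in the test below are made up; your guest will show its own):

```shell
# List the kernel/initrd lines of a grub config so we can note the
# full paths for the PV-bootloader-args step later.
find_xen_boot_entries() {
    # keep only the "kernel"/"initrd" lines that mention a xen kernel
    grep -E '^[[:space:]]*(kernel|initrd)' "$1" | grep xen
}

# On the guest, grub's config is usually one of these:
for conf in /boot/grub/menu.lst /boot/grub/grub.conf; do
    if [ -f "$conf" ]; then
        find_xen_boot_entries "$conf"
    fi
done
```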

Note: other distros or releases might not have a kernel that supports the Xen hypervisor; in that case you might need to compile one yourself (haven't done it myself yet... that's probably a different howto :)

Tweaking parameters on Dom0

- Once the guest is installed, we need to tweak the Xen database in order to boot it in paravirt mode and not in HVM. Log in to the XenServer command line (ssh or local console).
Note: paravirt mode uses a customized pygrub to boot the Xen-enabled kernel; HVM uses a customized qemu to bootstrap the boot partition and proceed with a normal boot.

Run this command to list the installed VMs:
# xe vm-list

Copy the uuid of the VM that you have just installed and run:
# xe vm-param-list uuid=<vm uuid>

This command will output all the parameters available for the VM, including the ones we are about to change.
The main parameter is "HVM-boot-policy"; right now it should be set to "BIOS order". We need to empty this parameter; that is how the Xen engine knows to use a bootloader and not qemu:
# xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""

Note: by setting the parameter HVM-boot-policy back to "BIOS order" you can boot the guest OS in HVM again, which is very useful in case something doesn't work well in paravirt.
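That switch back and forth can be wrapped in a small helper; this is only a sketch (the function name is mine, and the xe commands only run when the XenServer CLI is actually present on the machine):

```shell
# Toggle a VM between paravirt and HVM boot by (un)setting
# HVM-boot-policy, then read the parameter back to verify.
set_boot_mode() {
    vm="$1"; mode="$2"
    if ! command -v xe >/dev/null 2>&1; then
        echo "xe CLI not found: run this on the XenServer host"
        return 1
    fi
    case "$mode" in
        pv)  xe vm-param-set uuid="$vm" HVM-boot-policy="" ;;
        hvm) xe vm-param-set uuid="$vm" HVM-boot-policy="BIOS order" ;;
    esac
    xe vm-param-get uuid="$vm" param-name=HVM-boot-policy
}

# e.g. set_boot_mode <vm uuid> pv    (or hvm to switch back)
```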

Note: each parameter that is set with vm-param-set can be validated with the vm-param-get command, e.g.:
# xe vm-param-get uuid=<vm uuid> param-name=HVM-boot-policy
This will print the value of the HVM-boot-policy parameter. In general it is a good idea to validate each parameter you modify.

Now we need to tell the Xen engine which bootloader to use by setting the parameter "PV-bootloader=pygrub" (it should be empty right now) with this command:
# xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub

We also need to tell the bootloader which kernel/initrd to load (this is where the note we made earlier of the full paths comes in), by setting the parameter "PV-bootloader-args" with this command:
# xe vm-param-set uuid=<vm uuid> PV-bootloader-args="--kernel <full path to xen kernel> --ramdisk <full path to xen initrd>"
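To avoid typos in that long argument string, it can help to compose it first and review it before setting it; a sketch (the helper is mine, and the paths in it are placeholders for the ones you noted from grub's config):

```shell
# Compose the PV-bootloader-args value from the kernel/initrd paths
# we noted earlier, so the string can be eyeballed before use.
pv_bootloader_args() {
    echo "--kernel $1 --ramdisk $2"
}

ARGS=$(pv_bootloader_args "/boot/vmlinuz-<version>xen" "/boot/initrd-<version>xen")
echo "$ARGS"
# then, on the XenServer host:
#   xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub
#   xe vm-param-set uuid=<vm uuid> PV-bootloader-args="$ARGS"
```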

Note: there are also the parameters "PV-kernel" and "PV-ramdisk", but they do not work for some reason... :(

It is also possible to add some kernel options using the "PV-args" parameter, for example:
# xe vm-param-set uuid=<vm uuid> PV-args="console=ttyS0 xencons=ttyS"
Note: this will cause the kernel to use serial0 as the console, but you need to make sure that you add "console" to the file /etc/securetty.
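A sketch of that securetty tweak (the function name is mine, and it takes the file as an argument so it can be tried on a copy first; on the guest you would point it at /etc/securetty):

```shell
# Allow logins on the Xen console device by listing "console" in
# securetty; only appends if the entry is not already there.
allow_console_login() {
    grep -qx 'console' "$1" || echo 'console' >> "$1"
}

# on the guest: allow_console_login /etc/securetty
```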

Another parameter that needs to be tweaked before we can boot is on the virtual disk of the VM, or, to be more accurate, on the virtual block device (VBD) of the VM, so first we need to get the correlating uuids.

Run this command to get the list of devices attached to the VM (HD/CD):
# xe vm-disk-list uuid=<vm uuid>

You should see the virtual disk (tagged as VDI) of the VM there.
Note: in case you are not sure which one is the HD you want to change, go back to XenCenter, switch to the Storage tab of the installed VM, change the "name" of the HD, and press Apply. Run the command above again and you should see the new name there.

The parameter that we want to change is not on the VDI but on the VBD of the disk, so in order to get the correlating VBD uuid, copy the uuid (note: this is not the VM uuid! it is the uuid of the virtual disk/VDI) and run this command:
# xe vdi-param-list uuid=<virtual disk/VDI uuid>

Now copy the value of the "vbd-uuids" parameter; that is the uuid of the virtual block device correlating to the virtual disk.
run the command:
# xe vbd-param-list uuid=<Virtual Block Device/VBD uuid>

The parameter that needs to be changed is "bootable", and it needs to be set to true, with the command:
# xe vbd-param-set uuid=<Virtual Block Device/VBD uuid> bootable=true
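The uuid-chasing above can be scripted; here is a sketch assuming a single VBD per disk (param_value is a hypothetical helper of mine that parses xe's "name ( RO): value" output lines, and the xe calls only run on a host where the CLI exists):

```shell
# Extract a parameter's value from xe *-param-list output on stdin.
param_value() {
    sed -n "s/^[[:space:]]*$1 ([^)]*)[[:space:]]*:[[:space:]]*//p"
}

VDI_UUID="$1"   # the virtual disk uuid from xe vm-disk-list
if command -v xe >/dev/null 2>&1 && [ -n "$VDI_UUID" ]; then
    # look up the correlating VBD and flag it bootable
    VBD_UUID=$(xe vdi-param-list uuid="$VDI_UUID" | param_value vbd-uuids)
    xe vbd-param-set uuid="$VBD_UUID" bootable=true
    xe vbd-param-get uuid="$VBD_UUID" param-name=bootable
fi
```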

OK, a few more tweaks to make it work; we need to change a few things on the guest OS. This part is specific to SLES10SP1 because of the way it handles boot, which usually differs from one distro to another, so here is what I have found so far:
On SLES10SP1 the root device name is written inside the initrd, which causes a problem because the name of the root device changes under paravirt. So before booting we need to set the PV-args parameter to include root=/dev/xvda2 (assuming a default configuration), so it would look something like this:
# xe vm-param-set uuid=<vm uuid> PV-args="console=ttyS0 xencons=ttyS root=/dev/xvda2"
We also need to change /etc/fstab accordingly.
RHEL doesn't have this problem because it uses LVM; the volumes are automatically detected on the given hard drives and the LVM names don't change.
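A sketch of the fstab change for the SLES case (the hda-to-xvda mapping is an assumption for a default single-disk layout, and the function takes the file as an argument so it can be tried on a copy before touching /etc/fstab):

```shell
# Rewrite IDE device names to the Xen virtual block device names,
# e.g. /dev/hda2 -> /dev/xvda2, in a given fstab file.
rename_root_devices() {
    sed -i 's|/dev/hda|/dev/xvda|g' "$1"
}

# on the guest: rename_root_devices /etc/fstab
```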

Of course, now would be a good time to install the Xen tools on the guest. I haven't explored exactly what they are for (I think they are mostly for monitoring), but I guess it's better with them than without.

If you would also like to enable the "Switch to X console" button in XenCenter, you will need to edit the gdm configuration. The location of the file differs between SLES and RHEL: it is /etc/opt/gnome/gdm/gdm.conf on SLES and /etc/gdm/custom.conf on RHEL, but I could only make it work on RHEL... not the same version of Xvnc, and differences in the parameters; I haven't sorted it out yet. Anyway, here is what you need to do to make RHEL work:
Open the file in your favorite editor (vi) and search for "[servers]".
Under it add the line:

Guest

Posted 11 January 2008 - 06:33 PM

Wow... I was looking for this information for a long time, and I thought it was not possible to convert afterwards. YOU ARE THE MAN!!!!

Thanks a million!

Guest

Posted 20 January 2008 - 08:52 AM

After long testing and reading, I've found that "PV-kernel" and "PV-ramdisk" are only effective if you don't specify "PV-bootloader"...
They also only apply if your image doesn't contain a kernel, a ramdisk, and a partition with the bootable flag...

In other words, PV-bootloader replaces grub, and it uses the grub menu file in /boot/grub of the HD image...

So "PV-kernel" and "PV-ramdisk" are absolute paths to kernel and ramdisk images on the HOST Xen server... not relative to the guest HD image... ;)

Satinder Singh

Posted 05 November 2012 - 01:29 PM

Where do I find these paths? Can anyone please tell me?

# xe vm-param-set uuid=<vm uuid> PV-bootloader-args="--kernel <full path to xen kernel> --ramdisk <full path to xen initrd>"

I need to use a CentOS live CD / Win XP CD as a VM inside VMware Workstation. Please tell me how to do this. :(