Chapter 8: Beyond Linux: Using Xen with Other Unix-like OSs
One major benefit of paravirtualization which we’ve thus far ignored is the ability to run multiple operating systems on a single paravirtualized physical machine. Although Linux is the most popular OS to run under Xen, it’s not the only option available. Several other Unix-like OSs can run as a dom0, and rather more have been modified to run as paravirtualized domUs.
Apart from Linux, only Solaris and NetBSD are capable of functioning as a dom0 with current versions of Xen. Some work has been done with the other BSDs and with Plan9, but these OSs either can only work as a domU or can only work with older Xen versions. Support is evolving rapidly, however. (FreeBSD seems especially close to having functional Xen bits.)
In this chapter, we’ll focus on Solaris and NetBSD. Partially this is because they have mature Xen support, with active community involvement and ongoing development. Most importantly, though, it’s because we have run them in production. In a later chapter, we’ll discuss Windows.
Solaris
Sun has been pushing Xen virtualization heavily in recent community releases of OpenSolaris, and their effort shows. Solaris works well as both a dom0 and a domU, with closely integrated Xen support. The only caveat is that, as of this writing, OpenSolaris does not support Xen 3.3 and paravirt_ops domUs.
NOTES: Sun doesn’t actually call their shipping version of Xen Xen. They use the term xVM for marketing purposes, and include the unrelated VirtualBox under the xVM label. We’re going to continue to call it Xen, however, because it’s the name we’re used to.
Only the x86 version of Solaris supports Xen—Solaris/SPARC uses alternate virtualization technologies.
VIRTUALIZATION WITH SOLARIS
Sun, being traditionally a “medium iron” company, has emphasized virtualization for a long time, with a few different, complementary technologies to implement virtualization at different levels. Here’s an overview of their non-Xen virtualization offerings.
On new UltraSPARC Niagara-based systems, pure hardware virtualization is provided by means of Logical Domains, or LDoms. These are a successor to the Dynamic System Domains found on earlier Sun Enterprise platforms, which allowed you to devote CPU and memory boards to independent OS instances. Similarly, on a reasonably new SPARC box, you can partition the CPU and memory to run multiple, independent operating systems, using the processor’s hardware virtualization support. On x86, Sun addresses full virtualization by way of their VirtualBox product. VirtualBox executes guest code directly where possible and emulates when necessary, much like VMware.
Finally, Sun addresses OS-level virtualization through Solaris Zones,* which are themselves an interesting, lightweight virtualization option. Like other OS-level virtualization platforms, Zones provide a fair amount of separation between operating environments with very little overhead.
Sun even offers the option to run Linux binaries under Solaris on x86_64, via lx branded Zones. (These lx branded Zones provide a thin compatibility layer between the Solaris kernel and Linux userspace. Pretty cool.) However, the Linux emulation isn’t perfect. For example, since lx branded Zones use the same Solaris kernel that’s running on the actual hardware, you can’t load Linux device drivers.
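Setting one up is reasonably painless. Here’s a rough sketch; the zone name, zonepath, and tarball location are our own examples rather than anything canonical:

# zonecfg -z centos
zonecfg:centos> create -t SUNWlx
zonecfg:centos> set zonepath=/zones/centos
zonecfg:centos> commit
zonecfg:centos> exit
# zoneadm -z centos install -d /path/to/centos_fs_image.tar.gz
# zoneadm -z centos boot

The install step unpacks a Linux filesystem tarball into the zonepath; from there, zlogin drops you into a running Linux userspace.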
* We tend to use the terms Zone and Container interchangeably. Technically, a Solaris Container implements system resource controls on top of Zones.
Getting Started with Solaris
To run Solaris under Xen, you’ll need to get a copy of Solaris. There are several versions, so make sure that you pick the right one.
You do not want Solaris 10, which is the current Sun version of Solaris. Although it’s a fine OS, it doesn’t have Xen support because its development lags substantially behind the bleeding edge. (In this it caters to its market segment. We are personally acquainted with people who are running Solaris 8—a welcome contrast to the prevailing Linux view that software more than six months old is some sort of historical curiosity.)
Fortunately, Solaris 10 isn’t the only option. Solaris Express acts as a preview of the next official Solaris version, and it’s a perfectly capable OS for Xen in its own right. It incorporates Xen, but is still a bit behind the latest development. It’s also not as popular as OpenSolaris.
Finally, there’s OpenSolaris. Sun released huge tracts of Solaris source code a while ago under the Common Development and Distribution License1 (CDDL), and the community’s been pounding on it ever since. OpenSolaris is the result—it’s much like Sun’s release of Solaris but with new technology and a much faster release cycle. Think of the relationship between the two as like Red Hat Enterprise Linux and Fedora, only more so.
Both Solaris Express and OpenSolaris incorporate Xen support. Solaris Express has the Xen packages included on the DVD, while OpenSolaris requires you to download Xen as an add-on. Both of them provide a fairly polished experience. Although there are other distros based on the released Solaris code, none of them are particularly Xen-oriented, so the officially blessed distros are probably the best place to start.
RUNNING SOLARIS EXPRESS
We had some trouble deciding whether to focus on OpenSolaris or Solaris Express while we were writing this chapter. We decided to go with OpenSolaris because it seemed more popular, based on a completely unscientific poll of our friends.
However, Solaris Express is still a perfectly fine OS with excellent Xen support, so we’ve also included some notes on setting it up.
Believe it or not, Xen support should exist pretty much out of the box.* When you install Solaris Express on a system that supports Xen, it installs a Xen kernel and gives you the option to boot it from GRUB—just select Solaris xVM and off you go. (The included Xen version is 3.1.4 as of snv_107.)
From there, you can install domUs normally. It’s even got virt-manager. Take a look at the next section for more details on setting up domUs. Most of these steps will apply to Solaris Express and OpenSolaris equally well.
* That’s another reason we gloss over Solaris Express: Focusing on it would not, in the words of Douglas Adams, “make for nice fat books such as the American market thrives on.”
In general, there are three possible configurations of (Open)Solaris that are of interest in our discussion of Xen.
- First, we have the Solaris dom0.
- Second, there’s the Solaris domU on a Solaris dom0. This is a fairly straightforward setup.
- Finally, you can run a Solaris domU under Linux with a minimum2 of fuss.
Let’s start by setting up an OpenSolaris dom0, since you’ll need one for the next section. (Although we suppose this applies only if you’re doing something crazy like running through all our examples in order.)
Note that we’re going to be using pfexec, the Solaris equivalent of sudo,3 for these examples, so it’s not necessary to be root for these steps.
First, download the distribution from http://opensolaris.org/os/downloads/. Follow the directions to unpack and burn it, and boot from the CD, just like virtually any other OS install.
The OpenSolaris LiveCD will probably be a familiar experience to anyone who’s installed Ubuntu. It’s really quite similar, with a standard GNOME desktop, some productivity software, and a cute Install OpenSolaris icon on the desktop. Double-click the Install OpenSolaris icon to launch the installer, then follow its directions.
When the installer finishes, it’ll prompt you to reboot.
Setting Up Xen
If, once you reboot, you notice that you don’t have Xen available, don’t panic. OpenSolaris, unlike Solaris Express, doesn’t include the Xen packages in the initial install. (Everything had to fit on a CD, after all.) You will have to install and set them up manually.
First, we create a ZFS boot environment. (If you’re not familiar with boot environments, substitute the word snapshot. The idea is, if you break your system trying to install Xen, you can reboot into the original environment and try again.)
$ pfexec beadm create -a -d xvm xvm
$ pfexec beadm mount xvm /tmp/xvm
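Should the Xen install go badly, recovery is just a matter of booting back into the old environment from the GRUB menu. You can also reactivate it by hand; here opensolaris stands in for whatever name beadm list reports for your original boot environment:

$ beadm list
$ pfexec beadm activate opensolaris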
Next, we use the OpenSolaris pkg command to install the Xen packages in the new boot environment.
$ pfexec pkg -R /tmp/xvm install xvm-gui
As of OpenSolaris 2008.11, the xvm-gui package cluster provides all the necessary Xen packages. Previous versions may require you to install the packages individually. If you need to do that, you should be able to get away with running:
# pkg install SUNWxvmhvm
# pkg install SUNWvirtinst
# pkg install SUNWlibvirt
# pkg install SUNWurlgrabber
These packages provide Xen (with HVM), virt-install, and virt-install’s dependencies.
Next, we need to update GRUB to boot the Xen kernel properly.
Under OpenSolaris, menu.lst is at /rpool/boot/grub/menu.lst. Edit the xvm menu item to look something like the following:
title xvm
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/xvm
kernel$ /boot/$ISADIR/xen.gz
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=text
module$ /platform/i86pc/$ISADIR/boot_archive
Note that we’re using extensions to GRUB that enable variables in menu.lst, such as $ISADIR (for Instruction Set Architecture). Apart from that, it’s a fairly normal Xen GRUB config, with the hypervisor, kernel, and ramdisk.
When you begin to configure a Solaris dom0, you’ll probably notice immediately that some files aren’t quite where you expect. For one thing, Solaris doesn’t have an /etc/xen directory, nor does it have the customary scripts in /etc/init.d. The various support scripts in /etc/xen/scripts instead live in /usr/lib/xen/scripts. You can keep domain configurations wherever you like. (We actually make an /etc/xen directory and put domain configurations in it.)
Instead of relying on the standard Xen config files, Solaris handles configuration and service startup via its own management framework, SMF (Service Management Facility). You can examine and change xend’s settings using the svccfg command:
# svccfg -s xend listprop
This will output a list of properties for the xend service. For example, to enable migration:
# svccfg -s xend setprop config/xend-relocation-address = ""
# svcadm refresh xend
# svcadm restart xend
You may have to enable the Xen-related services manually using svcadm, particularly if you initially booted the non-Xen kernel. To look at which services are stopped, use svcs:
# svcs -xv
If the Xen services are stopped for maintenance or disabled, you can enable them using svcadm:
# svcadm enable store
# svcadm enable xend
# svcadm enable virtd
# svcadm enable domains
# svcadm enable console
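To confirm that everything came up, you can list the xvm services; on our systems they live under svc:/system/xvm:

# svcs -a | grep xvm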
From that point, you should be able to use Solaris as a perfectly normal dom0 OS. It’s even got libvirt. Have fun.
Creating a Solaris DomU
You didn’t really think it would be that easy, did you? There are a couple of small caveats to note—things that make Xen under Solaris a slightly different animal than Xen under Linux. We’ll start by creating a Solaris domU on a Solaris dom0, then extend our discussion to a Solaris domU on a Linux dom0.
ZFS Backing Devices
First, we suggest handling virtual block devices a bit differently under Solaris. Although you can create domU filesystems as plain loopback-mounted files, ZFS is probably a better option. It’s been praised far and wide, even winning some grudging accolades from Linus Torvalds. It is, in fact, ideal for this sort of thing, and the generally accepted way to manage disks under Solaris— even more so now that OpenSolaris uses a ZFS root filesystem.
ZFS is pretty simple, at least to get started with. Users of LVM should find that creating a pool and filesystem are familiar tasks, even though the commands are slightly different. Here we’ll make a pool, create a ZFS filesystem within the pool, and set the size of the filesystem:
# zpool create guests c0d0
# zfs create guests/escalus
# zfs set quota=4g guests/escalus
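Before handing anything to Xen, it doesn’t hurt to double-check your handiwork:

# zfs list -r guests
# zfs get quota guests/escalus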
Now we can define a domain that uses the phy: device /dev/zvol/dsk/guests/escalus for its backing store, as shown in the config file.
We’ll leave further subtleties of ZFS administration to Sun’s documentation.
Installing a DomU via PyGRUB
The last thing to do before creating the domU is to write an appropriate config file. Here’s ours:
# cat /etc/xen/escalus
name = "escalus"
memory = 512
disk = [ 'file:/opt/xen/install-iso/os200805.iso,6:cdrom,r',
         'phy:/dev/zvol/dsk/guests/escalus,0,w' ]
vif = ['']
bootloader = 'pygrub'
kernel = '/platform/i86xpv/kernel/unix'
ramdisk = 'boot/x86.microroot'
extra = '/platform/i86xpv/kernel/unix -B console=ttya,livemode=text'
on_shutdown = 'destroy'
on_reboot = 'destroy'
on_crash = 'destroy'
Note that the disk specifier works differently than with Linux domUs. Rather than using symbolic device names, as under Linux:
disk = ['file:/export/home/xen/solaris.img,sda1,w']
root = "/dev/sda1"
we instead specify the disk number:
disk = ['phy:/dev/zvol/dsk/guests/escalus,0,w']
root = "/dev/dsk/c0d0s0"
Here we’re installing Solaris from an ISO image (os200805.iso) using PyGRUB to pull the correct kernel and initrd off the CD image, boot that, and proceed with a normal install.
NOTES: One thing to watch out for is that domU networking will only work if you’re using a GLD3-based network driver. The drivers that ship with Solaris are all fine in this regard—however, you may have trouble with third-party drivers.
Once the install’s done, we shut the machine down and remove the disk entry for the CD.
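With the CD entry gone, the disk line is reduced to just the ZFS volume, so subsequent boots come off the installed disk rather than the installer:

disk = [ 'phy:/dev/zvol/dsk/guests/escalus,0,w' ]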
At this point your Solaris domU should be ready to go. Setting up a Linux domU is equally straightforward, since standard Linux domU images and kernels should work unmodified under Solaris.
Next, we’ll look at setting up a Solaris domU under Linux.
Creating a Solaris DomU Under Linux
For the most part, a domU is independent of the dom0 OS, and thus the install under Linux uses much the same installation procedure as under Solaris. There are only a few pitfalls for the unwary.
First, you might have a bit more work to do to ensure that the domain can find an appropriate kernel. The Solaris image will complain bitterly, and in fact will not boot, with a Linux kernel.
If you’re using PyGRUB on a Xen 3.1 or later system, you shouldn’t need to do anything special. PyGRUB itself will load the appropriate files from OpenSolaris installation media without further intervention, just as in the previous example.
If you’re not using PyGRUB, or if you’re using the stock RHEL5.1 hypervisor, you’ll need to extract the kernel and miniroot (initrd, for Linux people) from the OpenSolaris install package and place them somewhere that Xen can load them.
# mount -o loop,ro osol200811.iso /mnt/cdrom
# cp /mnt/cdrom/boot/platform/i86xpv/kernel/unix /xen/kernels/solaris/
# cp /mnt/cdrom/x86.miniroot /xen/kernels/solaris/
# umount /mnt/cdrom
Just as under Solaris, begin by writing a config file. We’ll set up this config file to load the installer from the CD, and later alter it to boot our newly installed domU. Note that we’re grabbing the kernel from the ISO, using the kernel and ramdisk options to specify the files we need.
bootloader = '/usr/bin/pygrub'
kernel = "/platform/i86xpv/kernel/amd64/unix"
ramdisk = "/boot/x86.microroot"
extra = "/platform/i86xpv/kernel/amd64/unix -- nowin -B install_media=cdrom"
cpu_weight = 1024
memory = 1024
name = "rosaline"
vif = ['vifname=rosaline,ip=192.0.2.136,bridge=xenbr0,mac=00:16:3e:59:A7:88']
disk = [ 'file:/opt/distros/osol-0811.iso,xvdf:cdrom,r',
         'phy:/dev/verona/rosaline,xvda,w' ]
Make sure to create your backing store (/dev/verona/rosaline in this case).
Now create the domain. Next step, installation.
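If the backing store is an LVM logical volume, as the phy: path suggests, creating it and starting the install looks something like this. The volume group name verona matches our config; the 8GB size is an arbitrary choice, and we assume the config file was saved as /etc/xen/rosaline:

# lvcreate -L 8G -n rosaline verona
# xm create -c /etc/xen/rosaline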
Although OpenSolaris has a perfectly functional console when running as a domU, it unfortunately does not include a text mode installer. It does, however, include a VNC server and SSH server, either of which can be used to get a remote graphical display. Here’s how to set up VNC.
Log in at the domU console with username jack and password jack.
Once you’re in locally, set up your network. (If you’re using DHCP, it’ll probably already be set up for you, but it doesn’t hurt to make sure.)
# pfexec ifconfig xnf0
xnf0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
        inet 192.0.2.128 netmask ffffff00 broadcast 192.0.2.255
        ether aa:0:0:59:a7:88
You can see that our network is in fine shape, with the address 192.0.2.128. If it’s not set up already, assign an address manually:
pfexec ifconfig xnf0 192.0.2.128/24
The VNC server should already be running. To enable remote access to it, run the vncpasswd command:
pfexec vncpasswd /etc/X11/.vncpasswd
vncpasswd will ask you to make up a password and enter it twice. Use this password to connect to the VNC server using your favorite VNC client. You should be greeted with an OpenSolaris desktop.
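From your workstation, point the client at the domU’s address. With the stock vncviewer, that looks like this (the display number is typically 0, but may vary with your setup):

$ vncviewer 192.0.2.128:0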
Finally, click the Install OpenSolaris icon on the desktop and proceed with the graphical install.
OpenSolaris DomU Postinstall Configuration
Once the installer has done its work, you’ll be ready to shut down the domain and move to the next step: setting up the dom0 to load a kernel from a ZFS filesystem.
The problem is that in Xen 3.3, PyGRUB’s version of libfsimage isn’t able to handle recent versions of ZFS directly. Our solution was to download the Xen-unstable source tree (as of this writing, Xen 3.4-rc) from http://xenbits.xen.org/ and build PyGRUB from that. (Alternatively, you can mount the install media, extract the kernel and microroot, specify these manually in the config file, and pass the correct “extra” line to the kernel—that works just as well.)
# hg clone http://xenbits.xen.org/xen-unstable.hg
# cd xen-unstable.hg
# make tools
# cd tools/pygrub; make install
# cd ../libfsimage; make install
Now we update the domain config file. Since we went to all the trouble of updating PyGRUB, we’ll use it directly here:
bootloader = 'pygrub'
cpu_weight = 1024
memory = 1024
name = "rosaline"
vif = ['vifname=rosaline,ip=192.0.2.136,bridge=xenbr0,mac=00:16:3e:59:A7:88']
disk = [ #'file:/opt/distros/osol-0811.iso,xvdf:cdrom,r',
         'phy:/dev/verona/rosaline,xvda,w' ]
NOTES: PV-GRUB, at this time, isn’t able to load an OpenSolaris kernel properly. Use PyGRUB instead.
Start your new domain as usual with xm:
# xm create rosaline
NetBSD

NetBSD is a popular choice for a dom0 OS because of its small and versatile design, which is a good match for the dedicated virtualization server model that Xen encourages. In our experience, a dom0 running NetBSD will use less memory and be at least as stable as one running Linux.
However, Linux people often make the mistake of assuming that NetBSD is exactly like Linux. It’s not—it’s kind of close, but NetBSD is the product of an evolution as long as Linux’s, and it requires some practice to work with. In this section, we’re going to assume that you’re familiar with NetBSD’s idiosyncrasies; we’re only going to cover the Xen-related differences.
NetBSD’s Historical Xen Support
NetBSD has supported Xen for a very long time—since NetBSD version 3.0, which incorporated support for Xen2 as dom0 and as a domU. This Xen2 support is quite stable. However, it has the obvious drawback of being Xen2, which lacks Xen3 features like live migration and HVM. It’s also 32-bit only and doesn’t support PAE (Physical Address Extension). (We’ve used this version quite a bit. The first Xen setup we used for hosting at prgmr.com was a dual Xeon running NetBSD 3.1 and Xen2, supporting Linux and NetBSD domUs.) NetBSD 3.1 introduced support for Xen 3.0.x—but only as a domU.
NetBSD 4.0 added Xen 3.1 support as both a domU and a dom0, and it also introduced support for HVM. The only remaining problem with NetBSD 4.0 was that, like its predecessors, it did not support PAE or x86_64, which meant it was unable to use more than 4GB of memory. It also could not run as a domU on a 64-bit or PAE system, such as is used by Amazon’s EC2. That last bit was the real killer: it meant NetBSD 4 required a non-PAE 32-bit hypervisor, which in turn limited you to 4GB of address space, which translates to about 3.5GB of physical memory. (This limitation is so significant that Xen.org doesn’t even distribute a non-PAE binary package anymore.)
Finally, the new and shiny NetBSD 5 adds PAE support for NetBSD domUs, x86-64 support for both dom0 and domUs, and support for 32-bit domUs on 64-bit dom0s (32-on-64 in Xen parlance). Work is still being done to add features to bring NetBSD Xen support into feature parity with Linux’s Xen support, but NetBSD is already a perfectly viable platform.
Installing NetBSD as a Dom0
The basic steps to get started using NetBSD with Xen are pretty much the same as for any other OS: Download it, install it, and make it work. Again, we’re assuming that you’re familiar with the basic NetBSD install procedure, so we’re just going to outline these directions briefly.
Begin by downloading NetBSD and installing it as usual. (We opted to download and burn the ISO at http://mirror.planetunix.net/pub/NetBSD/iso/5.0/amd64cd-5.0.iso.) Configure the system according to your preference.
NOTE: ftp:// and http:// are interchangeable on all of the ftp.netbsd.org URLs. http:// gets through firewalls better, and ftp:// is slightly faster. Pick one. Also, you often get significantly better speeds using a mirror rather than the netbsd.org site. If your FTP install fails partway through, the first thing to do is to try another mirror.
However you install NetBSD, go through the installer and reboot into your new system. Next, install the Xen kernel and supporting tools using the NetBSD ports system, pkgsrc. Get pkgsrc at http://ftp.netbsd.org/pub/NetBSD/packages/pkgsrc.tar.gz. Untar pkgsrc.tar.gz, then install Xen:
# cd pkgsrc/sysutils/xenkernel3 ; make install
# cd pkgsrc/sysutils/xentools3 ; make install
After installing the Xen tools, NetBSD will remind you to create the Xen device nodes:
# cd /dev ; sh MAKEDEV xen
Now that Xen is installed, our next task is to install GRUB in place of the standard NetBSD bootloader so that we can perform the multistage boot that Xen requires:
# cd pkgsrc/sysutils/grub ; make install
Our next step is to download and install NetBSD Xen kernels—we’re already running off standard NetBSD kernels, and we’ve got the hypervisor installed, but we still need kernels for the dom0 and domUs. Download netbsd-XEN3_DOM0.gz, netbsd-XEN3_DOMU.gz, and netbsd-INSTALL_XEN3_DOMU.gz from your favorite NetBSD mirror. (We used http://mirror.planetunix.net/pub/NetBSD/NetBSD-5.0/.)
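Unzip the kernels and put them where the menu.lst we’re about to write expects them; /xen/kernels is simply the directory we chose. For the dom0 kernel (NetBSD’s ftp command is happy to fetch HTTP URLs):

# mkdir -p /xen/kernels
# ftp http://mirror.planetunix.net/pub/NetBSD/NetBSD-5.0/amd64/binary/kernel/netbsd-XEN3_DOM0.gz
# gunzip netbsd-XEN3_DOM0.gz
# mv netbsd-XEN3_DOM0 /xen/kernels/XEN3_DOM0

Repeat for the domU kernels, which can live anywhere your domU config files can reach.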
Now that we have suitable Xen kernels to go with the hypervisor and supporting tools that we installed in the previous step, we can set up GRUB in the usual way:
# grub-install --no-floppy sd0a
Edit /grub/menu.lst so that it boots the Xen kernel and loads NetBSD as a module. Here’s a complete file, with comments (adapted from a NetBSD example at http://www.netbsd.org/ports/xen/howto.html):
# Boot the first entry by default
default=1
# after 10s, boot the default entry if the user didn't hit keyboard
timeout=10
# Configure serial port to use as console. Ignore this bit if you're
# not using the serial port.
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
# Let the user select which console to use (serial or VGA). Default
# to serial after 10s
terminal --timeout=10 console serial
# An entry for NetBSD/xen, using /xen/kernels/xen.gz as the domain0
# kernel, with serial console. Domain0 will have 64MB RAM allocated.
# Assume NetBSD is installed in the first MBR partition.
title Xen 3.3 / NetBSD (sd0a, serial)
  root(hd0,0)
  kernel (hd0,a)/xen/kernels/xen.gz dom0_mem=65536 com1=115200,8n1
  module (hd0,a)/xen/kernels/XEN3_DOM0 root=sd0a ro console=ttyS0
# Same as above, but using VGA console
# We can use console=tty0 (Linux syntax) or console=pc (NetBSD syntax)
title Xen 3.3 / NetBSD (sd0a, vga)
  root(hd0,0)
  kernel (hd0,a)/xen/kernels/xenkernel3-3.1.0nb2 dom0_mem=65536 noreboot
  module (hd0,a)/xen/kernels/XEN3_DOM0 root=sd0a ro console=pc
# Load a regular NetBSD/i386 kernel. Can be useful if you end up with a
# nonworking /xen.gz
title NetBSD 5
  root (hd0,a)
  kernel (hd0,a)/netbsd-GENERIC
The important bits are the kernel name, XEN3_DOM0, and the root device, which we specify using NetBSD syntax.
NOTE: We’ve also set up this config file to use the serial console. No matter which operating system you use, we strongly recommend using a serial console with Xen, even if you prefer to use a KVM or other method of remote management normally. See Chapter 14 for more discussion of the many and varied uses of the serial console.
Copy over the basic Xen config files to the directory where the Xen tools will expect to find them:
# cp /usr/pkg/share/examples/xen/* /usr/pkg/etc/xen/
Now that we have all the parts of a NetBSD dom0, we need to start xenbackendd and xend (in that order, or it won’t work).
# cp /usr/pkg/share/examples/rc.d/xen* /etc/rc.d/
# echo "xenbackendd=YES">>/etc/rc.conf
# echo "xend=YES">>/etc/rc.conf
Finally, to get networking to work, create /etc/ifconfig.bridge0 with these contents:
create
!brconfig $int add fxp0 up
At this point you’re most likely done. Reboot to test, or start the Xen services manually:
# /etc/rc.d/xenbackendd start
Starting xenbackendd.
# /etc/rc.d/xend start
Starting xend
You should now be able to run xm list:
# xm list
Name                          ID  Mem VCPUs State   Time(s)
Domain-0                       0   64     1 r-----    282.1
Installing NetBSD as a DomU
Installing NetBSD as a domU is easy, even with a Linux dom0. In fact, because NetBSD’s INSTALL kernels include a ramdisk with everything necessary to complete the installation, we can even do it without modifying the configuration from the dom0, given a sufficiently versatile PyGRUB or PV-GRUB setup.
For this discussion, we assume that you’ve got a domU of some sort already set up—perhaps one of the generic prgmr.com Linux domains. In this domU, you’ll need to have a small boot partition that GRUB4 can read. This is where we’ll store the kernel and GRUB configuration.
First, from within your domain, download the NetBSD kernels:
# wget http://mirror.planetunix.net/pub/NetBSD/NetBSD-5.0/amd64/binary/kernel/netbsd-INSTALL_XEN3_DOMU.gz
# wget http://mirror.planetunix.net/pub/NetBSD/NetBSD-5.0/amd64/binary/kernel/netbsd-XEN3_DOMU.gz
Then, edit the domain’s GRUB menu (most likely at /boot/grub/menu.lst) to load the INSTALL kernel on next reboot. (On the reboot after that, when the installation’s done, you’ll select the NetBSD run option.)
title NetBSD install
  root (hd0,0)
  kernel /boot/netbsd-INSTALL_XEN3_DOMU

title NetBSD run
  root (hd0,0)
  kernel /boot/netbsd-XEN3_DOMU root=xbd1a
Reboot, selecting the NetBSD install option.
As if by magic, your domain will begin running the NetBSD installer, until you end up in a totally ordinary NetBSD install session. Go through the steps of the NetBSD FTP install. There’s some very nice documentation at http://netbsd.org/docs/guide/en/chap-exinst.html.
NOTE: At this point you have to be careful not to overwrite your boot device. For example, prgmr.com gives you only a single physical block device, from which you’ll need to carve a /boot partition in addition to the normal filesystem layout.
The only sticky point is that you have to be careful to set up a boot device that PyGRUB can read, in the place where PyGRUB expects it. (If you have multiple physical devices, PyGRUB will try to boot from the first one.) Since we’re installing within the standard prgmr.com domU setup, we only have a single physical block device to work with, which we’ll carve into separate /boot and / partitions. Our disklabel, with a 32 MB FFS /boot partition, looks like this:
We now have your BSD-disklabel partitions as:
This is your last chance to change them.

     Start MB   End MB  Size MB  FS type    Newfs  Mount  Mount point
     --------  -------  -------  ---------  -----  -----  -----------
  a:       31     2912     2882  FFSv1      Yes    Yes    /
  b:     2913     3040      128  swap
  c:        0     3071     3072  NetBSD partition
  d:        0     3071     3072  Whole disk
  e:        0       30       31  Linux Ext2
  f:        0        0        0  unused
  g: Show all unused partitions
  h: Change input units (sectors/cylinders/MB)
 >x: Partition sizes ok
Once the install’s done, reboot. Select the regular kernel in PyGRUB, and your domU should be ready to go.
After NetBSD’s booted, if you want to change the bootloader configuration, you can mount the ext2 partition thus:
# mount_ext2fs /dev/xbd0d /mnt
This will allow you to upgrade the domU kernel. Just remember that, whenever you want to upgrade the kernel, you need to mount the partition that PyGRUB loads the kernel from and update both that kernel and menu.lst. It would also be a good idea to install the NetBSD kernel in the usual place, in the root of the domU filesystem, but it isn’t strictly necessary.
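A kernel upgrade, then, is a short ritual along these lines (the filenames are only illustrative, and we assume the new kernel has already been downloaded into the current directory):

# mount_ext2fs /dev/xbd0d /mnt
# cp netbsd-XEN3_DOMU /mnt/boot/netbsd-XEN3_DOMU
# umount /mnt

Edit /mnt/grub/menu.lst before unmounting if the kernel name changed.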
And there you have it—a complete, fully functional NetBSD domU, without any intervention from the dom0 at all. (If you have dom0 access, you can specify the install kernel on the kernel= line of the domain config file in the usual way—but what would be the fun of that?)
Beyond Paravirtualization: HVM
In this chapter, we have outlined the general steps necessary to use Solaris and NetBSD as both dom0 and domU operating systems. This isn’t meant to exhaustively list the operating systems that work with Xen—in particular, we haven’t mentioned Plan9 or FreeBSD at all—but it does give you a good idea of the sort of differences that you might encounter and easy recipes for using at least two systems other than Linux.
Furthermore, they each have their own advantages: NetBSD is a very lightweight operating system, much better than Linux at handling low-memory conditions. This comes in handy with Xen. Solaris isn’t as light, but it is extremely robust and has interesting technologies, such as ZFS. Both of these OSs can support any OS as domU, as long as it’s been modified to work with Xen. That’s virtualization in action, if you like.
NOTE: The new Linux paravirt_ops functionality that is included in the kernel.org kernels requires Xen hypervisor version 3.3 or later, so it works with NetBSD but not with OpenSolaris.
Finally, the addition of hardware virtualization extensions to recent processors means that virtually any OS can be used as a domU, even if it hasn’t been modified specifically to work with Xen. We discuss Xen’s support for these extensions in Chapter 12 and then describe using HVM to run Windows under Xen in Chapter 13. Stay tuned.
1The CDDL is a free software license that’s GPL-incompatible but generally inoffensive.
2The temptation exists to write “elegant minimum,” but it’s simply not so.
3Anyone planning to take offense to the comparison of pfexec and sudo: Please assume that we have been utterly convinced by your rhetoric and carry on with your day-to-day life.
4More accurately, of course, your GRUB simulator. If this is PyGRUB, it relies on libfsimage.