This page explains how to get a two-node virtual machine (e.g. Xen) cluster up and running on a single domain-0 host.
Before you Begin
- fence_xvmd was designed with Xen bridged networking in mind. If you can't reach domU guests from outside the dom0 (e.g. from another physical machine), chances are fence_xvm won't work.
- Never mix domU and dom0 nodes in the same cluster. Doing so leads to ugly quorum problems. For example, if you have 3 physical nodes and 2 domUs in the same cluster (5 votes total) and both domUs are on the same physical host, losing that physical host takes down three nodes at once (the host plus both domUs), leaving 2/5 votes. That is not enough to sustain quorum, so the failure is unrecoverable.
- If you wish to migrate a clustered domU between physical hosts, those physical hosts must be clustered together. I.e. there should be two clusters: a cluster of (dom0) physical hosts, and a cluster of (domU) virtual hosts.
- Until 5.2 is out, if you wish to use fence_xvmd, you must create a cluster of one on the dom0 (configuration example below). This requirement is fixed in git.
<?xml version="1.0"?>
<cluster alias="londesktop" config_version="1" name="londesktop">
  <clusternodes>
    <clusternode name="ayanami.boston.devel.redhat.com" nodeid="1" votes="1"/>
  </clusternodes>
  <cman/>
  <fencedevices/>
  <rm/>
  <fence_xvmd/>
</cluster>
Change the host name and cluster alias/name to match your environment (the cluster name/alias can be anything as long as it doesn't collide with another cluster on the network). There, that wasn't hard, was it? Now, we need to create a key file which we will give to the guests:
# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
This key file will need to be copied to the virtual machines - speaking of which, we need to create some!
Creating virtual machines
- Create an initial image. You probably want at least 10GB per machine. I created mine using dd and gave each one 10GB:
# mkdir /guests
# dd if=/dev/zero of=/guests/default.img bs=1M count=10240
- Create some shared images. If you want to play with a quorum disk, be sure to include that.
# dd if=/dev/zero of=/guests/quorum.dsk bs=1M count=10
# dd if=/dev/zero of=/guests/shared.dsk bs=1M count=10240
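If you'd rather not wait for dd to write out gigabytes of zeros, a sparse file works too. This is a shortcut, not part of the original steps, and assumes your dom0 filesystem supports sparse files:

```shell
# count=0 with seek extends the file to the requested size without
# writing any data blocks, so the command returns instantly.
# Shown with a small demo file; substitute /guests/default.img
# and seek=10240 for the real 10GB image.
dd if=/dev/zero of=/tmp/sparse-demo.img bs=1M count=0 seek=16
ls -ls /tmp/sparse-demo.img
```

The first column of `ls -ls` shows the blocks actually allocated (near zero), while the length column shows the full apparent size.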
- Install the default image.
[root@ayanami ~]# virt-install
What is the name of your virtual machine? Default
How much RAM should be allocated (in megabytes)? 256
What would you like to use as the disk (path)? /guests/default.img
Would you like to enable graphics support? (yes or no) no
What is the install location? nfs:install.test.redhat.com:/rhel5-server-x86_64

Starting install...
Creating domain...                                                 0 B 00:04
Bootdata ok (command line is method=nfs:install.test.redhat.com:/rhel5-server-x86_64)
Linux version 2.6.18-58.el5xen (firstname.lastname@example.org) (gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)) #1 SMP Tue Nov 27 17:15:58 EST 2007
...
- After the installation is complete, make your real guest images. This way you only have to run the installer once.
# cp /guests/default.img /guests/molly.img
# cp /guests/default.img /guests/frederick.img
# cp /etc/xen/Default /etc/xen/molly
# cp /etc/xen/Default /etc/xen/frederick
- Edit the two domain configuration files you just created. You *MUST* change the uuid, name, disk, and vif lines at a minimum:
#
# name = "Default"      # Must match domain file name
#
name = "molly"
#
# uuid = "46fcd834-e5b5-17ba-c3df-a4091fd62955"
#
# ... new uuid generated by running the 'uuidgen' utility
uuid = "8d1a8210-69de-4c55-8cf8-deb2356fc7b2"
maxmem = 256
memory = 256
vcpus = 1
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ ]
#
# disk = [ "tap:aio:/guests/default.img,xvda,w" ]
#
# Note the 'w!' flags on the quorum.dsk and shared.dsk.  This means
# that multiple domains can now write to this disk.  They are effectively
# shared storage between the Xen domains.
#
disk = [ "tap:aio:/guests/molly.img,xvda,w",
         "tap:aio:/guests/quorum.dsk,xvdb,w!",
         "tap:aio:/guests/shared.dsk,xvdc,w!" ]
#
# vif = [ "mac=00:16:3e:00:cf:fd,bridge=xenbr0" ]
# I just increment this one.
#
vif = [ "mac=00:16:3e:00:cf:fe,bridge=xenbr0" ]
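Rather than editing the uuid and vif values by hand, you can generate fresh ones on the command line. This is just a sketch: 00:16:3e is the OUI reserved for Xen guests, `$RANDOM` assumes bash, and you still have to paste the output into the domain file yourself:

```shell
# Generate a new UUID and a random MAC in the Xen-reserved 00:16:3e range.
NEW_UUID=$(uuidgen)
NEW_MAC=$(printf '00:16:3e:%02x:%02x:%02x' \
          $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)))
echo "uuid = \"${NEW_UUID}\""
echo "vif = [ \"mac=${NEW_MAC},bridge=xenbr0\" ]"
```

Incrementing the last MAC byte by hand, as the comment in the config does, works just as well for a two-guest setup; random generation simply avoids collisions as the number of guests grows.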
- Boot each domain one at a time and make sure /etc/sysconfig/network-scripts/ifcfg-eth0 has a MAC address matching what is in their individual Xen domain configuration files. Once this is verified, you can boot both domains.
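Inside molly, for example, you would expect something like the following (assuming the installer recorded the MAC in a HWADDR line; the value shown is the one from the vif line above):

```
[root@molly ~]# grep HWADDR /etc/sysconfig/network-scripts/ifcfg-eth0
HWADDR=00:16:3e:00:cf:fe
```

If the HWADDR does not match the vif line in /etc/xen/molly, fix ifcfg-eth0 (or remove the HWADDR line) before bringing up the network.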
Setting up the cluster
- Install the cluster packages on the Xen domains if this has not already been done, but make sure they are turned off at boot for now.
- Configure the cluster. Be sure to use fence_xvm as the agent. My cluster configuration looks like this:
<?xml version="1.0"?>
<cluster alias="lolcats" config_version="41" name="lolcats">
  <cman expected_votes="1" two_node="1"/>
  <totem token="21000"/>
  <clusternodes>
    <clusternode name="frederick" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device domain="frederick" name="xvm"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="molly" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device domain="molly" name="xvm"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_xvm" name="xvm"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
    <service name="test" autostart="0"/>
  </rm>
</cluster>
- Copy /etc/cluster/fence_xvm.key from the host (domain-0) to the guest domains in /etc/cluster.
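One way to do this is with scp. This is a sketch; it assumes root ssh access to the guests, that the guest host names resolve from dom0, and that /etc/cluster already exists on each guest:

```
[root@ayanami ~]# for guest in molly frederick; do
>     scp /etc/cluster/fence_xvm.key root@${guest}:/etc/cluster/
> done
```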
- Test fence_xvm fencing.
- On domain-0, start the cluster software, but stop fence_xvmd:
[root@ayanami xen]# /etc/init.d/cman start
Starting cluster:
   Enabling workaround for Xend bridged networking... done
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
   Starting virtual machine fencing host... done
                                                           [  OK  ]
[root@ayanami xen]# killall fence_xvmd
[root@ayanami xen]# fence_xvmd -fdddd
Debugging threshold is now 4
-- args @ 0x7ffff6601e20 --
  args->addr = 22.214.171.124
  args->domain = (null)
  args->key_file = /etc/cluster/fence_xvm.key
  args->op = 2
  args->hash = 2
  args->auth = 2
  args->port = 1229
  args->family = 2
  args->timeout = 30
  args->retr_time = 20
  args->flags = 1
  args->debug = 4
-- end args --
Reading in key file /etc/cluster/fence_xvm.key into 0x7ffff6600e20 (4096 max size)
Actual key length = 4096 bytes
Opened ckpt vm_states
My Node ID = 1
Domain                   UUID                                 Owner State
------                   ----                                 ----- -----
Domain-0                 00000000-0000-0000-0000-000000000000 00001 00001
frederick                ef84f5cf-8589-41f6-8728-ac5170c42cbc 00001 00002
molly                    d1cf68a4-0bdf-5a88-81a7-5c41976147f6 00001 00002
Storing frederick
Storing molly
...
- Run a test from one of the domUs to see whether fence_xvmd receives and processes the request. This test will not work if fence_xvm.key is not exactly the same on the domUs and dom0. Note the '-o null' at the end - this tells fence_xvmd to not actually do anything, just send a failure response to the caller.
[root@molly ~]# fence_xvm -H frederick -ddd -o null
Debugging threshold is now 3
-- args @ 0x7fff5164af10 --
  args->addr = 126.96.36.199
  args->domain = frederick
  args->key_file = /etc/cluster/fence_xvm.key
  args->op = 0
  args->hash = 2
  args->auth = 2
  args->port = 1229
  args->family = 2
  args->timeout = 30
  args->retr_time = 20
  args->flags = 0
  args->debug = 3
-- end args --
Reading in key file /etc/cluster/fence_xvm.key into 0x7fff51649ec0 (4096 max size)
Actual key length = 4096 bytes
Ignoring fec0::2:216:3eff:fe78:b532: wrong family
Sending to 188.8.131.52 via 127.0.0.1
Sending to 184.108.40.206 via 10.12.32.98
Waiting for connection from XVM host daemon.
Issuing TCP challenge
Responding to TCP challenge
TCP Exchange + Authentication done...
Waiting for return value from XVM host
Remote: Operation failed
- On the server (dom0), you should see:
Request to fence: frederick
frederick is running locally
Plain TCP request
NULL operation: returning failure
Sending response to caller...
- If the test succeeded, restart fence_xvmd in the background on dom0 (press Control-C to stop the foreground instance, then run fence_xvmd).
- Start the cluster software in the Xen guests.
That's it! Now you can create a GFS file system on /dev/xvdc and mount it from both nodes (assuming you used /dev/xvdc for your shared disk).
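The GFS steps might look like the following. This is a sketch using the RHEL 5 GFS tools: the lock table name must be &lt;cluster name&gt;:&lt;fs name&gt; with the cluster name matching cluster.conf ("lolcats" above), "shared" is an arbitrary file system name chosen here, and -j 2 creates one journal per node:

```
[root@molly ~]# gfs_mkfs -p lock_dlm -t lolcats:shared -j 2 /dev/xvdc
[root@molly ~]# mount -t gfs /dev/xvdc /mnt
[root@frederick ~]# mount -t gfs /dev/xvdc /mnt
```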