wiki:XVM_FencingConfig

Describes how to set up fencing of virtual machine guests using the XVM fencing components: the host daemon (fence_xvmd) and the guest-side agent (fence_xvm). XVM fencing is tested and known to work with:

  • Xen
  • KVM

The default hypervisor in the stable3 branch of cluster.git is KVM; this can be changed with the -U (hypervisor URI) option on the fence_xvmd command line:

fence_xvmd -U xen:///

Or the uri parameter in cluster.conf:

<fence_xvmd uri="xen:///" />
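If the host is itself part of a cluster, the <fence_xvmd> element goes in the host cluster's cluster.conf. A minimal sketch, assuming the stable3 schema where the element sits directly under <cluster> (the cluster name and config_version here are just placeholders):

<cluster name="hosts" config_version="1">
  ...
  <fence_xvmd uri="xen:///" />
  ...
</cluster>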

Single Host using Private Networking

Private networking is the default method of operation when using libvirt/KVM in Fedora 9 and Fedora 10. Basically, libvirtd creates a gateway address (e.g. 192.168.122.1) on a dedicated bridge interface (e.g. virbr0), which NATs traffic between the virtual machines and the rest of the network.

This is consequently the easiest method to configure. Simply place the following in rc.local on the host operating system for KVM:

/sbin/fence_xvmd -LI virbr0

Or this if you are using Xen:

/sbin/fence_xvmd -LI virbr0 -U xen:///
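Either way, you can sanity-check that the daemon came up and that the libvirt NAT bridge is present (virbr0 and 192.168.122.1 are libvirt's defaults; adjust if yours differ):

# ps ax | grep [f]ence_xvmd
# ip addr show virbr0

The first command should list the running daemon; the second should show the 192.168.122.1 gateway address on virbr0.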

Then configure the cluster software on the guests:

  <clusternodes>
    <clusternode name="molly" votes="1" nodeid="1">
      <fence>
        <method name="1">
          <device name="xvm" domain="molly"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="frederick" votes="1" nodeid="2">
      <fence>
        <method name="1">
          <device name="xvm" domain="frederick"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  ...
  <fencedevices>
    <fencedevice name="xvm" agent="fence_xvm"/>
  </fencedevices>
  ...

That's it; you're done. No key generation, no key distribution, no nothing.
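If you want to verify things end to end, you can ask the cluster to fence one guest from the other; note that this really does fence the target, so only do it when that is acceptable. Assuming cman and fenced are already running on the guests, from molly:

# fence_node frederick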

Single Host using Bridged Networking

If your virtual machines use routable IP addresses, you will need to generate a key file for your cluster so that your guests cannot erroneously fence guests running on other hosts that happen to have the same name. For this configuration, there are a few additional steps:

  • We need to create a key file which we will give to the guests:
    # dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
    
  • Copy this key file to /etc/cluster on the virtual machines using scp or some other suitable utility (see the example after this list).
  • Also, depending on your networking configuration, you may need to change the fence_xvmd line in rc.local to something like the following:
    fence_xvmd -LX
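
For example, to distribute the key to the two guests from the earlier configuration (molly and frederick are just the example guest names; substitute your own):

# scp /etc/cluster/fence_xvm.key root@molly:/etc/cluster/fence_xvm.key
# scp /etc/cluster/fence_xvm.key root@frederick:/etc/cluster/fence_xvm.key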
    

Multiple Clustered Hosts using Bridged Networking

See VMClusterCookbook.

Multiple Non-Clustered Hosts using Bridged Networking

I strongly recommend using a different key for each physical host. To keep the configuration simpler, you can put the key_file argument on the fencedevice line rather than on each device line:

<clusternodes>
  <clusternode name="node1">
    <fence>
      <method>
        <device name="host-1" domain="vm-on-host-1"/>
      </method>
    </fence>
  </clusternode>
  <clusternode name="node2">
    <fence>
      <method>
        <device name="host-2" domain="vm-on-host-2"/>
      </method>
    </fence>
  </clusternode>
  ...
</clusternodes>   
<fencedevices>
  <fencedevice agent="fence_xvm" name="host-1" key_file="/etc/cluster/host-1.key" />
  <fencedevice agent="fence_xvm" name="host-2" key_file="/etc/cluster/host-2.key" />
  ...
</fencedevices>

On 'host-1' and 'host-2', generate the keys normally:

on host-1:

# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
# scp /etc/cluster/fence_xvm.key root@vm1:/etc/cluster/host-1.key
# scp /etc/cluster/fence_xvm.key root@vm2:/etc/cluster/host-1.key

on host-2:

# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
# scp /etc/cluster/fence_xvm.key root@vm1:/etc/cluster/host-2.key
# scp /etc/cluster/fence_xvm.key root@vm2:/etc/cluster/host-2.key
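
As in the single-host bridged case, each physical host keeps its own copy of the key at the default location (/etc/cluster/fence_xvm.key) and runs fence_xvmd in local mode from rc.local, for example:

/sbin/fence_xvmd -LX

(Add -U xen:/// if the host is running Xen rather than KVM.)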