Changes between Version 12 and Version 13 of VMClusterCookbook


Timestamp: 05/22/11 01:44:45
Author: digimer
Comment: Fixed the last copy from the old wiki to work on the new Trac syntax.

This page explains how to get a two-node virtual machine (e.g. Xen) cluster up and running on a single domain-0 host.

== Before you Begin ==
 * fence_xvmd was designed with Xen bridged networking in mind.  If you can't reach domU guests from outside the dom0 (e.g. from another physical machine), chances are fence_xvm won't work.
 * '''Never mix domU and dom0 nodes in the same cluster'''.  Doing so leads to ugly quorum problems.  For example, if you have 3 physical nodes and 2 domUs in the same cluster (5 votes total) and both domUs are on the same physical host, losing that physical host breaks quorum, causing an unrecoverable failure (2/5 votes is not enough to sustain quorum).
 * If you wish to migrate a clustered domU between physical hosts, those physical hosts must be clustered together.  That is, there should be two clusters: a cluster of (dom0) physical hosts and a cluster of (domU) virtual hosts.
 * Until 5.2 is out, if you wish to use fence_xvmd, you must create a ''cluster of one'' (configuration example below).  [https://bugzilla.redhat.com/show_bug.cgi?id=362351 This requirement is fixed in git].
{{{
<?xml version="1.0"?>
...
}}}
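As a starting point, a minimal one-node dom0 configuration might look roughly like the sketch below; the node name `ayanami` matches the shell prompts later on this page, while the cluster name and version are placeholders:
{{{
<?xml version="1.0"?>
<cluster alias="dom0" config_version="1" name="dom0">
        <!-- a single node with a single vote always has quorum -->
        <cman expected_votes="1"/>
        <clusternodes>
                <clusternode name="ayanami" nodeid="1" votes="1"/>
        </clusternodes>
        <fencedevices/>
        <rm/>
        <!-- this tag tells the cman init script to start fence_xvmd -->
        <fence_xvmd/>
</cluster>
}}}
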
== Creating virtual machines ==
 1. Create an initial image.  You probably want at least 10GB per machine.  I created mine using dd and gave each one 10GB:
{{{
# mkdir /guests
# dd if=/dev/zero of=/guests/default.img bs=1M count=10240
}}}
 2. Create some shared images.  If you want to play with a quorum disk, be sure to include that.
{{{
# dd if=/dev/zero of=/guests/quorum.dsk bs=1M count=10
# dd if=/dev/zero of=/guests/shared.dsk bs=1M count=10240
}}}
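If you do want to experiment with the quorum disk, it will eventually need a qdisk label.  Once the guests are up, you could initialize it from one of them, roughly like this (assuming the quorum image is attached as /dev/xvdb in the guests, and using an arbitrary label):
{{{
[root@molly ~]# mkqdisk -c /dev/xvdb -l lolcats_qdisk
}}}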
 3. Install the default image.
{{{
[root@ayanami ~]# virt-install
What is the name of your virtual machine? Default
...
Bootdata ok (command line is   method=nfs:install.test.redhat.com:/rhel5-server-x86_64)
Linux version 2.6.18-58.el5xen (brewbuilder@hs20-bc2-2.build.redhat.com) (gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)) #1 SMP Tue Nov 27 17:15:58 EST 2007
...
}}}
 4. After the installation is complete, make your '''real''' guest images.  This way you only have to run the installer once.
{{{
# cp /guests/default.img /guests/molly.img
# cp /guests/default.img /guests/frederick.img
# cp /etc/xen/Default /etc/xen/molly
# cp /etc/xen/Default /etc/xen/frederick
}}}
 5. Edit the two domain configuration files you just created.  You '''must''' change the '''uuid''', '''name''', '''disk''', and '''vif''' lines at a minimum:
{{{
#
# name = "Default"
...
vif = [ "mac=00:16:3e:00:cf:fe,bridge=xenbr0" ]
}}}
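For example, molly's file might end up looking something like the sketch below, using the images created earlier.  The uuid and MAC address are placeholders (generate real, unique ones, e.g. with `uuidgen`), and the `w!` access mode is what lets the quorum and shared images be writable from both guests at once:
{{{
name = "molly"
uuid = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"    # placeholder - must be unique per guest
memory = 512
disk = [ "tap:aio:/guests/molly.img,xvda,w",
         "tap:aio:/guests/quorum.dsk,xvdb,w!",   # w! = shared-writable
         "tap:aio:/guests/shared.dsk,xvdc,w!" ]
vif = [ "mac=00:16:3e:00:aa:01,bridge=xenbr0" ]  # placeholder - must be unique per guest
}}}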
 6. Boot each domain '''one at a time''' and make sure /etc/sysconfig/network-scripts/ifcfg-eth0 has a MAC address matching what is in its Xen domain configuration file; a quick check is sketched below.  Once this is verified, you can boot both domains.
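One way to check a guest, using the placeholder MAC from the sketch above:
{{{
[root@ayanami ~]# grep mac= /etc/xen/molly
vif = [ "mac=00:16:3e:00:aa:01,bridge=xenbr0" ]
[root@ayanami ~]# xm create molly -c
...
[root@molly ~]# grep HWADDR /etc/sysconfig/network-scripts/ifcfg-eth0
HWADDR=00:16:3e:00:aa:01
}}}
If the two disagree, fix HWADDR in ifcfg-eth0 (or the vif line) and reboot the domain.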

== Setting up the cluster ==
 1. Install the cluster packages on the Xen domains if this has not already been done, but make sure they are turned off at boot for now.
 2. Configure the cluster.  Be sure to use fence_xvm as the agent.  My cluster configuration looks like this:
{{{
<?xml version="1.0"?>
<cluster alias="lolcats" config_version="41" name="lolcats">
...
</cluster>
}}}
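The part that matters for fencing is that each node references a fence_xvm device whose `domain` attribute names the matching Xen domain on the dom0.  A sketch of that fragment, using the node names assumed on this page:
{{{
<clusternode name="molly" nodeid="1" votes="1">
        <fence>
                <method name="1">
                        <device name="xvm" domain="molly"/>
                </method>
        </fence>
</clusternode>
...
<fencedevices>
        <fencedevice agent="fence_xvm" name="xvm"/>
</fencedevices>
}}}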
 3. Copy /etc/cluster/fence_xvm.key from the host (domain-0) to the guest domains in /etc/cluster.
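The key is just a file of random data.  If it doesn't exist yet, you can create it on the dom0 and push it to the guests like this (4 KB is a common size choice, not a requirement):
{{{
[root@ayanami ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4096 count=1
[root@ayanami ~]# scp /etc/cluster/fence_xvm.key molly:/etc/cluster/
[root@ayanami ~]# scp /etc/cluster/fence_xvm.key frederick:/etc/cluster/
}}}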
 4. Test fence_xvm fencing.
    * On domain-0, start the cluster software, but stop fence_xvmd:
{{{
[root@ayanami xen]# /etc/init.d/cman start
Starting cluster: 
...
}}}
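The point is to leave the rest of the dom0 cluster stack running but take over fence_xvmd in the foreground so you can watch it handle the test; assuming the init script started it, something like:
{{{
[root@ayanami xen]# killall fence_xvmd
[root@ayanami xen]# fence_xvmd -f -ddd
}}}
Here `-f` keeps fence_xvmd in the foreground and the `-d` flags raise its debug level.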
    * From one of the domUs, run a test to see whether fence_xvmd receives the request and processes it.  This test '''will not work''' if fence_xvm.key is not exactly the same on the domUs and dom0.  Note the '''-o null''' at the end - this signals fence_xvmd to not actually do anything, just send a failure response to the caller.
{{{
[root@molly ~]# fence_xvm -H frederick -ddd -o null
Debugging threshold is now 3
...
Remote: Operation failed
}}}
    * On the server, you should see:
{{{
Request to fence: frederick
frederick is running locally
...
Sending response to caller...
}}}
 5. If the test succeeded, restart fence_xvmd in the background on dom0 (press Control-C, then run `fence_xvmd`).
 6. Start the cluster software in the Xen guests.

That's it!  Now you can make a GFS file system on /dev/xvdc and mount it from both nodes (assuming you used /dev/xvdc for your shared disk...).
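For instance, with the cluster named lolcats as in the configuration above, a two-journal GFS volume could be created and mounted roughly like this (the name after the colon is an arbitrary filesystem label):
{{{
[root@molly ~]# gfs_mkfs -p lock_dlm -t lolcats:shared -j 2 /dev/xvdc
[root@molly ~]# mount -t gfs /dev/xvdc /mnt
[root@frederick ~]# mount -t gfs /dev/xvdc /mnt
}}}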

----
CategoryHowTo