LVMFailover

Introduction

LVM can be used in two capacities: single-machine and cluster-aware. The cluster-aware version allows a group of machines to use a logical volume at the same time, and even to change the volume layout while it is in use; it is considered active-active. If the single-machine version is used, a logical volume must only be used on one machine at a time.

There are cases where a user may wish to employ an active/passive solution instead. This is possible, but it requires some specific steps to achieve the correct result. Failure to put the proper protection in place can result in fail-over not working or, worse, in corrupted LVM metadata and an inability to get at the logical volumes. Although there is no official name for it, LVM used in this active/passive way is called "highly available LVM", or HA LVM.

HA LVM

This solution does not require clustered LVM (CLVM), but a cluster infrastructure and a resource fail-over manager (rgmanager) are necessary. It avoids synchronization issues on LVM volume groups by side-stepping them entirely: only one cluster member may touch any part of a given volume group at a time. Because of this restriction, the method cannot be used with GFS or any other shared-disk cluster file system. Additionally, it is important to stress that the atomic unit here is the volume group - *NOT* the logical volume!
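
Once the setup described below is in place, the owning node's name appears as a tag on the volume group, so you can check from any member which node currently owns it. A small illustration, using the volume group created in step 1 below:

prompt> vgs -o vg_name,vg_tags my_volume_group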

Steps for creating an HA LVM solution

1) Create the logical volume. Example:

prompt> pvcreate /dev/sd[cde]1
prompt> vgcreate my_volume_group /dev/sd[cde]1
prompt> lvcreate -L 10G -n my_logical_volume my_volume_group
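
Before moving on, it is worth verifying that the physical volumes, the volume group, and the logical volume all look as expected:

prompt> pvs
prompt> vgs my_volume_group
prompt> lvs my_volume_group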

2) Edit /etc/cluster/cluster.conf to include the newly created logical volume as a resource in one of your services. (Optionally, you can use the system-config-cluster or conga GUIs.) Example resource manager section from /etc/cluster/cluster.conf:

  <service name="foo">
    <lvm name="mylvm" vg_name="my_volume_group"/>
    <fs name="myfs" device="/dev/my_volume_group/my_logical_volume" mountpoint="/mnt/myfs" fstype="ext3" />
    <script name="myscript" file="/mnt/myfs/myscript"/>
  </service>
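
Note that the snippet above shows only the service itself; in a complete /etc/cluster/cluster.conf it sits inside the <rm> (resource manager) section. Before propagating the file, you can sanity-check it. This is a hedged sketch: rg_test ships with rgmanager, while ccs_config_validate exists only on newer (cluster 3) releases:

prompt> ccs_config_validate
prompt> rg_test test /etc/cluster/cluster.conf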

3) Edit the "volume_list" field in /etc/lvm/lvm.conf. Include the name of your root volume group and your machine's name as given in /etc/cluster/cluster.conf, preceded by an "@". The "@" prefix denotes a tag match: the rgmanager lvm agent tags a volume group with the name of the node that owns its service, so each node may activate only its root volume group and volume groups tagged with its own name. Example from /etc/lvm/lvm.conf:

volume_list = [ "VolGroup00", "@neo-01" ]
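
Each node lists its own name, so this file differs from machine to machine. Assuming a second cluster member named neo-02, its /etc/lvm/lvm.conf would read:

volume_list = [ "VolGroup00", "@neo-02" ]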

4) Update your initrd on all your cluster machines so that the new volume_list setting also takes effect at boot time; otherwise the default initrd could activate the volume group before the cluster software is running. Example:

prompt> new-kernel-pkg --mkinitrd \
        --initrdfile=/boot/initrd-halvm-`uname -r`.img --install `uname -r`
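
To double-check that the new image actually contains the updated lvm.conf (this assumes the gzipped-cpio initrd format used by RHEL 5-era systems; dracut-based initramfs images need lsinitrd instead):

prompt> zcat /boot/initrd-halvm-`uname -r`.img | cpio -it | grep lvm.conf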

5) Reboot all of your machines to ensure the correct initrd is in use.
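
After the reboot, you can confirm on each node that the shared volume group was not activated automatically; the fifth character of the Attr field printed by lvs is "a" for an active volume and "-" for an inactive one:

prompt> lvs -o lv_name,lv_attr my_volume_group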

It's important to keep in mind that a volume group can only be active on one machine at a time. If you have two services and you want them to run on separate machines, then you must create two volume groups, each with its own logical volume, as sketched below.
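
A minimal sketch of what that looks like in cluster.conf, assuming two hypothetical services foo and bar with volume groups vg_foo and vg_bar:

  <service name="foo">
    <lvm name="foo_lvm" vg_name="vg_foo"/>
    <fs name="foo_fs" device="/dev/vg_foo/lv_foo" mountpoint="/mnt/foo" fstype="ext3"/>
  </service>
  <service name="bar">
    <lvm name="bar_lvm" vg_name="vg_bar"/>
    <fs name="bar_fs" device="/dev/vg_bar/lv_bar" mountpoint="/mnt/bar" fstype="ext3"/>
  </service>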


CategoryHowTo