Changes between Version 18 and Version 19 of VMware_FencingConfig


Timestamp: 05/22/11 02:45:31
Author: digimer
Comment: Fixed the last copy from the old wiki to work on the new Trac syntax.

= VMware fence agent & Red Hat Cluster Suite =
Last updated 04-May-2010

Updates history:
 * 2010-05-04: Fence_vmware in RHEL 5.5 is now the former fence_vmware_ng (same syntax, but named fence_vmware), so fence_vmware_ng is '''no longer needed'''.
 * 2010-01-15: Fence_vmware from RHEL 5.3 and 5.4 doesn't work with ESX 4.0. A working agent is included below.
 * 2009-10-07: Tested on ESX 4.0.0 and vCenter 4.0.0.
 * 2009-01-19: Fixed fence_vmware_ng. The old one could fail if somebody turned a VM on (for example, the VMware cluster itself) before the agent did; this led to an error and a failed fence operation. Now only a warning is displayed and fencing is considered successful.
 * 2009-01-15: New fence_vmware_ng. Status operations, and therefore whole fence operations, are much faster when many VMs are registered in VMware. The default type esx is now really the default.

We have two agents for fencing VMware virtual machines:
 * The first, fence_vmware, is in the RHEL 5.3 and 5.4/STABLE2 branches. It is designed and tested against VMware ESX server (not ESXi!) and Server 1.x. It is replaced by the new fence_vmware in RHEL 5.5/STABLE3.
 * The second is in the master/STABLE3 branch. It is designed and tested against VMware ESX/ESXi/VC and Server 2.x and 1.x. This is what replaced the old fence_vmware (in master/STABLE3 it is actually named fence_vmware).

== Fence_vmware ==
It is a union of two older agents, fence_vmware_vix and fence_vmware_vi.

VI (in the following text, VI API means not only the original VI Perl API, whose last version is 1.6, but the VMware vSphere SDK for Perl as well) is VMware's API for controlling their main business-class products (ESX/VC). This API is fully cluster aware (VMware cluster), so the agent can fence guest machines physically running on an ESX host but managed by VC, and it keeps working without any reconfiguration when a guest is migrated to another ESX host.

VIX is a newer API that works on VMware's "low-end" products (Server 2.x, 1.x), with some support for ESX/ESXi 3.5 update 2 and VC 2.5 update 2. This API is NOT cluster aware and is recommended only for Server 2.x and 1.x. However, if you use only a single ESX/ESXi host, or don't have a VMware cluster and never use migration, you can use this API too.

If you are using RHEL 5.5/RHEL 6, just install the fence-agents package and you are ready to use fence_vmware. For distributions with an older fence-agents, you can get this agent from the GIT (RHEL 5.5/STABLE3/master) repository (please make sure to use the current library, fencing.py, as well).

=== Pre-req ===

The VI Perl API and/or the VIX API must be installed on every node in the cluster. This is a big difference from the older agent, which did not require installing anything; in exchange, the new agent's configuration is a little less painful (and has many bonuses).

=== Running ===
If you run fence_vmware with -h, you will see something like this:
{{{
Options:
   -o <action>    Action: status, reboot (default), off or on
   -a <ip>        IP address or hostname of fencing device
   -l <name>      Login name
   -p <password>  Login password or passphrase
   -S <script>    Script to run to retrieve password
   -n <id>        Physical plug number on device or name of virtual machine
   -e             Command to execute
   -d             Type of VMware to connect
   -x             Use ssh connection
   -s             VMWare datacenter filter
   -q             Quiet mode
   -v             Verbose mode
   -D <debugfile> Debugging to output file
   -V             Output version information and exit
   -h             Display this help and exit
}}}

Now the parameters one by one, in a little more depth (format: short option - XML argument name - description).
 * o - action - Same as with any other agent.
 * a - ipaddr - Hostname/IP address of VMware ESX/ESXi/VC or Server 2.x/1.x. You can append a TCP port in the usual way (hostname:port). The port is not needed for standard ESX/ESXi/VC installations, but Server 2.x runs its management console on a non-standard port, which is why this possibility exists.
 * l - login - Login name for the management console.
 * p - passwd - Password for the management console.
 * S - passwd_script - Script which retrieves the password.
 * n - port - Virtual machine name. For the VI API this is the guest name you can see in the VI Client (for example node1). For the VIX API, this name is in the form [datacenter] path/name.vmx.
 * d - vmware_type - Type of VMware to connect to. This parameter determines which API is used (VI or VIX). Possible values are esx, server2 and server1. The default is esx.
   * esx - VI API is used. The only cluster-aware option; able to work with ESX/ESXi/VC.
   * server2 - VIX API is used. Works for Server 2.x, ESX/ESXi 3.5 update 2 and VC 2.5 update 2, but is not cluster aware!
   * server1 - VIX API. Works only for Server 1.x.
 * s - vmware_datacenter - Used to filter the available guests. The default is to show all guests in all datacenters. With this option you can fence same-named guests that live in different datacenters (so two guests named node1 are not a problem). If you never have same-named guests, this option is useless for you.
 * e - exec - Executable to run. In every mode, this agent works by forking a helper program which does the real work. For VI it is the Perl vmware_fence_helper; for VIX it is vmrun from the VIX API package. If these commands are in non-standard locations, use this option to specify where they live.
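To make the two -n (port) name formats concrete, here is a small illustrative sketch (not code from the agent; the guest names are made up) that distinguishes a VIX-style '[datacenter] path/name.vmx' value from a plain VI guest name:

```python
import re

# Illustrative only -- not code from fence_vmware. Distinguishes the two
# -n/port formats described above:
#   VI API:  plain guest name, e.g. "node1"
#   VIX API: "[datacenter] path/name.vmx", e.g. "[standard] node1/node1.vmx"
VIX_PORT_RE = re.compile(r'^\[(?P<datacenter>[^\]]+)\]\s+(?P<path>.+\.vmx)$')

def classify_port(port):
    m = VIX_PORT_RE.match(port)
    if m:
        return ('vix', m.group('datacenter'), m.group('path'))
    return ('vi', None, port)

print(classify_port('node1'))                       # VI-style name
print(classify_port('[standard] node1/node1.vmx'))  # VIX-style name
```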

Example usage of the agent in CLI mode:
Suppose you have a VC (named vccenter) with a guest node1 which you want to fence, using the Administrator account with password pass.

{{{
fence_vmware -a vccenter -l Administrator -p pass -n 'node1'
}}}

If everything works, you can modify your cluster.conf as follows (in this example you have two nodes, guest1 and guest2):

{{{
      ...
      <clusternodes>
              <clusternode name="guest1" nodeid="1" votes="1">
                      <fence>
                              <method name="1">
                                      <device name="vmware1"/>
                              </method>
                      </fence>
              </clusternode>
              <clusternode name="guest2" nodeid="2" votes="1">
                      <fence>
                              <method name="1">
                                      <device name="vmware2"/>
                              </method>
                      </fence>
              </clusternode>
      </clusternodes>
      <fencedevices>
              <fencedevice agent="fence_vmware" ipaddr="vccenter" login="Administrator" name="vmware1" passwd="pass" port="guest1"/>
              <fencedevice agent="fence_vmware" ipaddr="vccenter" login="Administrator" name="vmware2" passwd="pass" port="guest2"/>
      </fencedevices>
      ...
}}}

You can test the setup with the fence_node fqdn command.

=== Changing configuration from old fence_vmware to new fence_vmware ===
 * Install the needed VI Perl API on every node.
 * Remove the login and passwd parameters.
 * Change vmlogin to login and vmpasswd to passwd.
 * Change the port value to the shorter name (basically, remove /full/path/ and .vmx).
 * If you have vmipaddr, delete ipaddr and change vmipaddr to ipaddr.
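The checklist above can be sketched as a small attribute transformation (illustrative only; the sample values and the .vmx path are made up):

```python
import os

# Sketch of the old-agent -> new-agent fencedevice attribute migration
# described in the checklist above. Sample values are hypothetical.
def migrate_fencedevice(attrs):
    new = dict(attrs)
    if 'vmlogin' in new:          # old login/passwd are removed;
        new.pop('login', None)    # vmlogin/vmpasswd take their place
        new['login'] = new.pop('vmlogin')
    if 'vmpasswd' in new:
        new.pop('passwd', None)
        new['passwd'] = new.pop('vmpasswd')
    if 'vmipaddr' in new:         # vmipaddr replaces ipaddr
        new.pop('ipaddr', None)
        new['ipaddr'] = new.pop('vmipaddr')
    if 'port' in new:             # "/full/path/guest1.vmx" -> "guest1"
        base = os.path.basename(new['port'])
        if base.endswith('.vmx'):
            base = base[:-len('.vmx')]
        new['port'] = base
    return new

old = {'agent': 'fence_vmware', 'ipaddr': 'esx1', 'login': 'root',
       'passwd': 'secret', 'vmlogin': 'Administrator', 'vmpasswd': 'pass',
       'vmipaddr': 'vccenter', 'port': '/vmfs/volumes/store/guest1/guest1.vmx'}
print(migrate_fencedevice(old))
```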

=== Problems ===
One of the biggest problems: ESX 3.5/ESXi 3.5/VC 2.5 behave very badly when many virtual machines are registered, because getting the list of VMs simply takes too long. This makes fencing in a larger datacenter unusable; with 100+ registered VMs, a whole fence operation can take a few minutes. This appears to be fixed in ESX 4.0.0/vCenter 4.0.0 (with 200+ registered VMs, fencing one takes ~17 sec). If you don't want to upgrade, you can use a separate datacenter for each cluster.

== Old Fence_vmware ==
This is the older fence agent, which should work on any ESX server that allows ssh connections and has the vmware-cmd command available. The basic idea of this agent is to connect to the ESX server via ssh and there run vmware-cmd, which can start and shut down virtual machines.

In ESX 4.0, vmware-cmd changed a little, so the agent no longer works. You can solve this by deleting lines 32 and 33 ('if options.has_key("-A"):' and 'cmd_line+=" -v"'), or by downloading [attachment:fence_vmware.gz], unpacking it and replacing the original /sbin/fence_vmware.
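If you prefer to script that two-line deletion, a minimal sketch (assuming the stock RHEL 5.3/5.4 agent, where those really are lines 32 and 33) could look like this:

```python
# Sketch: remove given 1-based line numbers from a script's text, as in
# the fix described above. The /sbin/fence_vmware path and line numbers
# 32/33 assume the stock RHEL 5.3/5.4 agent.
def drop_lines(text, line_numbers):
    kept = [line for i, line in enumerate(text.splitlines(True), start=1)
            if i not in line_numbers]
    return ''.join(kept)

# Usage would be roughly:
# with open('/sbin/fence_vmware') as f:
#     patched = drop_lines(f.read(), {32, 33})
```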

The biggest problem with this solution is the many parameters that must be entered.

If you run fence_vmware with -h, you will see something like this:
{{{
...
}}}

Now the parameters one by one, in a little more depth (format: short option - XML argument name - description).
 * o - action - Same as with any other agent.
     
 * ...

{{{
...
}}}

== Recommendation for every VMware ==

The VMware guest machines should have VMware Tools installed, so I recommend installing VMware Tools on every cluster machine. This improves guest performance.

----