This describes some of the feature differences between [RGManager rgmanager] and Pacemaker. Pacemaker generally has more features and better dependency support than RGManager; RGManager would need a partial or full rewrite to attain the same level of resource control, so it makes far more sense to simply use Pacemaker.
|Feature||RGManager||Pacemaker|
|Resource Management Model||Resource Group||Resource-Dependency, Resource Group|
|Dependency Models||Collocation, Start-After||User-defined|
|Event Handling Model||Distributed or Centralized||Centralized|
|CLI Management||Status, Control||Status, Control, Administration|
|Fencing Model||Assumed(7)||Resource Level|
|Multi State Resources||No||Yes|
|Maximum Node Count||16~32(3)||More ;)|
|Time-Based Resource Control||No||Yes(4)|
|Resource Attribute Inheritance||Yes||Yes|
|Cloned Resources||Sort of(5)||Yes|
|Resource Agent APIs||OCF(6), SysV||OCF(6), SysV, Heartbeat|
|Multi-Partition Resource Management||No||Yes(9)|
(1) Pacemaker 1.2 feature.
(2) RIND was introduced into [RGManager rgmanager] as a way to avoid implementing a dependency engine, by making RGManager's business logic external. It works fairly well, but it is complicated and requires some programming experience to use.
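For comparison, the equivalent ordering and colocation logic in Pacemaker is declarative rather than programmed; a sketch using the pcs shell (the resource names shared_fs and app_db are hypothetical):

```shell
# Start the filesystem before the database that lives on it...
pcs constraint order start shared_fs then app_db
# ...and require the database to run on the same node as the filesystem.
pcs constraint colocation add app_db with shared_fs INFINITY
```

The cluster's policy engine enforces these constraints on every transition, so no external scripting is needed.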
(3) RGManager has a built-in distributed state replication algorithm, which is inefficient as implemented. This is used to distribute service states individually.
(4) Requires proper configuration of Pacemaker's dependencies.
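One way Pacemaker expresses time-based control is a location-constraint rule containing a date-spec expression; a sketch with pcs (resource name and time window are hypothetical, and the exact rule grammar varies between pcs versions):

```shell
# Forbid backup_job from running during business hours on weekdays
# (date-spec hour ranges are inclusive, so 9-16 covers 09:00-16:59).
pcs constraint location backup_job rule score=-INFINITY date-spec hours=9-16 weekdays=1-5
```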
(5) RGManager supports resources shared between services by using reference counts, so that only one instance of a resource is started on a node even when multiple services using it are started there. This is a different construct from a cloned resource, which defines a number of clones cluster-wide.
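A Pacemaker clone, by contrast, is a single resource definition expanded into a configured number of instances across the cluster; a sketch with pcs (resource name and counts are hypothetical):

```shell
# One definition, multiple instances cluster-wide.
pcs resource create web_server ocf:heartbeat:apache
pcs resource clone web_server clone-max=3 clone-node-max=1
```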
(6) RGManager does not implement the OCF-required monitor action; instead it uses status. It also stores its agents in an FHS/LSB location instead of the OCF one. Both Pacemaker and RGManager have diverged from the OCF RA API 1.0 draft in differing ways, in order to support features not possible with the original API draft specification.
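The monitor/status divergence is easiest to see in agent code. A minimal sketch of an OCF-style action dispatcher for a dummy agent (the agent, function name, and state-file path are all hypothetical; a real OCF agent is installed under /usr/lib/ocf/resource.d/&lt;provider&gt;/, while RGManager agents live under an FHS path and spell the health check "status"):

```shell
#!/bin/sh
# Hypothetical OCF-style agent sketch; accepts both the OCF "monitor"
# action and RGManager's "status" spelling.

dummy_ra() {
    statefile="${DUMMY_RA_STATE:-/tmp/dummy-ra.state}"
    case "$1" in
        start)
            touch "$statefile"                 # pretend to start the service
            return 0 ;;                        # OCF_SUCCESS
        stop)
            rm -f "$statefile"
            return 0 ;;                        # OCF_SUCCESS
        monitor|status)                        # OCF spelling | RGManager spelling
            if [ -f "$statefile" ]; then
                return 0                       # OCF_SUCCESS: running
            else
                return 7                       # OCF_NOT_RUNNING
            fi ;;
        *)
            return 3 ;;                        # OCF_ERR_UNIMPLEMENTED
    esac
}
```

Exit codes follow the OCF convention (0 success, 7 not running), which Pacemaker interprets to drive recovery decisions.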
(7) RGManager waits for the cluster infrastructure to complete fencing before initiating recovery, whether or not any resources exist on a given node which require fencing. Pacemaker initiates fencing based on resource allocation: if a node has no resources requiring fencing, fencing of the host is not required to perform recovery.
(8) Pacemaker lets you stop managing a resource in order to perform maintenance. This accomplishes the same task as freezing a service in RGManager.
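In pcs terms, that maintenance workflow looks like this (the resource name app_db is hypothetical):

```shell
pcs resource unmanage app_db   # Pacemaker stops acting on it: no starts/stops/recovery
# ... perform maintenance; the service itself keeps running ...
pcs resource manage app_db     # hand control back to Pacemaker
```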
(9) If the cluster partitions, Pacemaker can elect a resource manager in each partition and continue managing the already-running resources within each partition without the need for fencing. You cannot start/stop new resource instances, of course.