
Luci

Note: Official status for this page is still pending. Once granted, this page will be referred to from various places in connection with luci 0.20+.

This is the home wiki page for the new generation of luci (a rewrite based on a different framework), a web frontend for cluster/HA suite management. The rewrite originally targeted RHEL 6.0 (starting with version 0.22), offers compatibility with cluster v.2 and v.3, and is also referred to as the High Availability management tool. For further information about the related Conga package, especially its ricci component and the overall design (the original version of luci is covered there as well), refer to the Conga Home Page (not updated for quite a long time, and not everything there applies to current luci).


History of Luci, relationship to Conga

The newest stage in luci's history (the one this page refers to) started with the mentioned migration to another framework (Plone dropped in favour of !TurboGears2). Version 0.20 was chosen to mark this major logical increment from the original luci, which used to form, together with ricci, a single component called conga (see the Conga Home Page, not updated for quite a long time and not everything there applies to current luci); conga itself was versioned up to 0.1x and originally targeted RHEL 4/5.

The preference for a single aggregated component in the previous stage had its rationale: there was (and still is) a dependency of luci on ricci (a client–server relationship). However, it is quite common to run luci on a machine other than the cluster's nodes running the ricci daemon (which is what enables luci to manage that cluster), so conga has been split into the two separate parts. Still, the name conga is sometimes used to refer to the cooperating pair.

What

Basically, luci makes it easy to handle most common tasks connected with cluster/HA suite management. Thanks to it, one can avoid "manual" maintenance (either literally manual or with command-line facilities) of the cluster.conf file on each node of the managed cluster.

The administrative tasks luci provides include (some command-line counterparts are sketched after the list):

  • inspect various aspects of managed clusters (incl. reports about the current "health" of the cluster, etc.)
  • do manipulations with
    • whole clusters:
      • create (set up) a new cluster from specified nodes, add an existing cluster to or remove a selected one from the "awareness of luci"
    • cluster nodes:
      • add, remove, reboot and configure properties (e.g. set a node fencing, which includes configuring per-node properties specific to particular fence device instance)
    • services within cluster:
      • add (define), delete (undefine), start, restart, disable and configure properties (e.g. add/remove resources for particular service or set failover domain)
    • resources within cluster:
      • add (define), delete (undefine) and configure resource-specific properties
    • failover domains within cluster:
      • add (define), delete (undefine) and configure properties
    • global fence device instances within cluster:
      • add (define), delete (undefine), configure global properties specific to particular fence device instance
  • configure some properties of the whole cluster
    • general properties (e.g. ability to override automatic configuration versioning)
    • fence daemon properties (e.g. delays)
    • network configuration (e.g. setting of multicast)
    • quorum disk configuration
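
For orientation, the shell sketch below lists some command-line counterparts of these tasks that luci spares the administrator from running on the cluster nodes (cluster v.3 tooling assumed; exact tools and options vary between versions):

# inspect cluster membership and service status on a node
clustat
cman_tool status
cman_tool nodes

# validate the cluster.conf that would otherwise be maintained by hand
ccs_config_validate

# start, stop or relocate a clustered service via rgmanager
clusvcadm -e <service name>
clusvcadm -d <service name>
clusvcadm -r <service name> -m <target member>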

Where


Deployment

Pre-installation procedures

  • Ensure that there is neither a user nor a group named luci on the system prior to the installation.
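
A quick way to verify this (a minimal sketch; any equivalent lookup works):

# both commands should print nothing if the name is free
getent passwd luci
getent group luci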

Post-installation procedures

Ricci deployment on nodes of the cluster to be managed with Luci

It is a required prerequisite to have ricci deployed and started on the respective nodes before the cluster can be managed with luci. Please refer to ricci's documentation for details (installation, what firewall rules may be needed, etc.).
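
As a rough sketch only (ricci's documentation is authoritative), deployment on a RHEL 6/Fedora node typically boils down to something like:

# install, enable and start the ricci agent on each cluster node
yum install ricci
chkconfig ricci on
service ricci start

# set a password for the ricci system user; luci asks for it when the
# node is being added to a managed cluster
passwd ricci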

Enabling remote clients (optional)

On the machine where luci is installed to be accessed by remote client(s), you may need to allow TCP traffic from such client(s) to the local port 8084, where luci listens by default. For iptables, a rule like the following will handle that (for demonstration only; see the respective documentation):

iptables -I INPUT -p tcp -s <address of client(s)> -m state --state NEW -m tcp --dport 8084 -j ACCEPT
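
To keep such a rule across reboots on systems using the iptables initscript (RHEL, Fedora), the currently loaded rules can be saved (again only a sketch; adapt to whatever firewall management is actually in use):

# writes the currently loaded rules to /etc/sysconfig/iptables
service iptables save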

Correct configuration for saslauthd (SASL authentication server)

By default, it is expected that identification and authentication of luci users are arranged in the same way as direct user authentication to the system (i.e. through system account credentials). luci does this indirectly through the SASL library, which talks to saslauthd and, in turn, to the PAM backend.

In order to ensure this chain works properly, the saslauthd daemon has to be configured to use the PAM authentication mechanism (which is the default on many systems, incl. RHEL and Fedora). This can be verified in the configuration used by the respective initscript (e.g. the /etc/sysconfig/saslauthd file), where PAM should be the selected mechanism.
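
On RHEL/Fedora this amounts to checking the MECH variable and making sure the daemon runs (a sketch; the variable name and paths may differ elsewhere):

# should print MECH=pam
grep '^MECH' /etc/sysconfig/saslauthd

# make sure saslauthd is enabled and running
chkconfig saslauthd on
service saslauthd restart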

To test whether this authentication chain works properly, you can use the testsaslauthd utility (part of the standard Cyrus SASL distribution). For this purpose, it might be a good idea to work with a temporary dummy user:

# create a throwaway account (no home directory, own group, no login shell)
useradd -MU -s /sbin/nologin dummy_user
echo dummy_password | passwd --stdin dummy_user
# ask saslauthd to authenticate the user on behalf of the "luci" service
testsaslauthd -u dummy_user -p dummy_password -s luci
# clean up the throwaway account
userdel -r dummy_user

If the testsaslauthd command in the example sequence returns success (0: OK "Success."), the authentication chain works as expected. Otherwise there is a problem in the system configuration that will prevent authorized users from logging into luci (check /var/log/messages for the reason).


~+See also+~


CategoryLuci