#398 Add a way to monitor status in LDAP while CLEANRUV is executing
Closed: wontfix. Opened 11 years ago by rcritten.

Right now you can monitor the status of CLEANRUV in the errors log (see ticket #337).

It would be helpful to be able to do this online similar to replication status. This way tools can periodically check to see if the RUV cleanup has been completed.

I think for IPA I'd block while waiting for this to complete so the user would know for sure that cleanup is done.


Fix Description: Set the REPLICA_IN_CLEANRUV bit in the replica
status flag while a CLEANRUV/CLEANALLRUV task/extop is being executed.
While the bit is set, a search that returns the replica entry
includes "nsds5replicaExtopStatus: replica in cleanruv".
Otherwise, the attribute is not included in the entry.
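The flag logic described above can be sketched as follows. This is a minimal illustration, not 389-ds-base source: the bit value and the helper names are hypothetical, and only the attribute name and value come from the fix description.

```python
# Hypothetical bit in the replica status flags (value chosen for illustration).
REPLICA_IN_CLEANRUV = 0x4

def set_cleanruv(flags: int) -> int:
    """Set the bit when the CLEANRUV/CLEANALLRUV task/extop starts."""
    return flags | REPLICA_IN_CLEANRUV

def clear_cleanruv(flags: int) -> int:
    """Clear the bit when the task/extop finishes."""
    return flags & ~REPLICA_IN_CLEANRUV

def extop_status_attrs(flags: int) -> dict:
    """Include the status attribute only while the bit is set,
    mirroring the behavior described in the fix description."""
    if flags & REPLICA_IN_CLEANRUV:
        return {"nsds5replicaExtopStatus": "replica in cleanruv"}
    return {}
```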

[Usage]
While CLEANRUV is in progress:
$ ldapsearch -LLLx -h localhost -p 389 -b "cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config" -D 'cn=directory manager' -W -s base nsds5replicaExtopStatus
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
nsds5replicaextopstatus: replica in cleanruv

After the task/extop is done:
$ ldapsearch -LLLx -h localhost -p 389 -b "cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config" -D 'cn=directory manager' -W -s base nsds5replicaExtopStatus
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config

Hi Rob,

Do you think my proposal solves your issue? If there's anything that should be improved or fixed, please let me know.
Thanks!
--noriko

This looks good, but I think we are going to change how this works. We are not going to use extended ops, so maybe change the "_replicaExtopStatus" variables and attributes to "_replicaTaskStatus"?

However, I think I'm going to make CLEANALLRUV an official task - instead of modifying the replication config entry to trigger the action. So I can just update the "task" status in the new code, and that should take care of this ticket's needs.

It might be best to wait until we redo the CLEANALLRUV code before proceeding with this ticket.

Mark

My question will be: will this status reflect the whole cleanruv status or just the status on any given instance?

Replying to [comment:9 rcritten]:

My question will be: will this status reflect the whole cleanruv status or just the status on any given instance?

The status of the server which launched the CLEANALLRUV task reflects the whole CLEANRUV status in the MMR topology. (I think the "main" server is waiting for the response from the other replicas.) On the other servers, the status is just for the local instance.

Based upon Mark's comment 8, the usage might change to match the ticket #403 fix. We also need to handle the case where another replica is down... So we are holding off on this ticket until then (or merging it into #403...)

Closing this ticket, as ticket #403 uses a Slapi Task, so monitoring will be very easy.

Added initial screened field value.

Metadata Update from @nhosoi:
- Issue assigned to nhosoi
- Issue set to the milestone: 1.2.11.8

7 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/398

If you want to receive further updates on the issue, please navigate to the github issue
and click on subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: Duplicate)

3 years ago
