By default, a cleanallruv task waits for the results of cleanallruv on the other replicas before cleaning its own RUV/changelog.
This can lead to an indefinite wait if some replicas are unreachable.
Cleanallruv should not wait for the other servers to complete the task.
This would require adding the attribute/value "replica-force-cleaning: yes" to the task entry (freeipa/ipaserver/install/replication.py:cleanallruv).
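For illustration, a minimal sketch of what such a cleanAllRUV task entry could look like with the force-cleaning flag set. This is not the actual FreeIPA code; the attribute names mirror the 389-ds cleanAllRUV task, and the DN and values below are example assumptions:

```python
# Sketch only: render an LDIF snippet for a 389-ds cleanAllRUV task entry.
# The DN, base DN, and replica ID here are illustrative examples.

def build_cleanallruv_task(task_cn, base_dn, replica_id, force=True):
    """Return an LDIF string for a cleanAllRUV task entry."""
    attrs = [
        ("dn", "cn=%s,cn=cleanallruv,cn=tasks,cn=config" % task_cn),
        ("objectclass", "extensibleObject"),
        ("cn", task_cn),
        ("replica-base-dn", base_dn),
        ("replica-id", str(replica_id)),
    ]
    if force:
        # The proposed flag: do not wait for other replicas to finish.
        attrs.append(("replica-force-cleaning", "yes"))
    return "\n".join("%s: %s" % kv for kv in attrs)

print(build_cleanallruv_task("clean 4", "dc=example,dc=com", 4))
```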
Requires DS functionality that has not been pushed yet: https://fedorahosted.org/389/ticket/48218
Not a 4.3 blocker. Moving to 4.3.x, given that the DS ticket is not implemented yet.
The 389 ticket was fixed in master (1.3.5): https://fedorahosted.org/389/changeset/ec3f8da524bf406a24a261b48dd5643ccb48ec7a/
How exactly do we test this? Turn off a replica and call {{{ipa-replica-manage del}}} on the master?
For CA RUVs, try
{{{
ipa-csreplica-manage del
}}}
There should be leftover RUVs.
Calling
{{{
ipa-replica-manage del $replica
}}}
on domain level 0 will remove the domain RUV, but the CA RUV will not be deleted. On domain level 1, both RUVs will be removed.
The description is more for #5411. But yes, it can be tested by shutting down IPA on the replica before the cleanAllRUV task is started. The uninstall will handle it as well.
How to reproduce it easily (this breaks your topology):
1. Install a replica or two, with or without a CA.
2. Turn it/them off.
3. Run
{{{
ipa-replica-manage clean-ruv <RUV-id>
}}}
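After reproducing, one way to spot dangling RUVs is to look at the `nsds50ruv` values on the replication tombstone entry and extract the replica IDs. A small illustrative parser, assuming the standard `{replica <id> <url>}` format of `nsds50ruv` values (the sample values below are made up, not from a real deployment):

```python
import re

# Match e.g. "{replica 4 ldap://master.example.com:389} ..." and capture
# the replica ID and (optionally) the server URL.
RUV_RE = re.compile(r"\{replica (\d+)(?: (ldap://\S+))?\}")

def parse_ruvs(values):
    """Return (replica_id, url) tuples from a list of nsds50ruv values."""
    result = []
    for value in values:
        m = RUV_RE.search(value)
        if m:
            result.append((int(m.group(1)), m.group(2)))
    return result

sample = [
    "{replicageneration} 55e8d1b2000000040000",
    "{replica 4 ldap://master.example.com:389} 55e8d1c0 55f0aa10",
    "{replica 7 ldap://replica.example.com:389}",  # no CSNs: candidate dangling RUV
]
print(parse_ruvs(sample))
```

A replica ID that no longer corresponds to a live server in the topology is a candidate for clean-ruv.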
Moving to 4.3.2; the patch is on the list: "[PATCH 0032] Remove dangling RUVs even if replicas are offline"
master:
ipa-4-3:
Metadata Update from @tbordaz: - Issue assigned to stlaz - Issue set to the milestone: FreeIPA 4.3.2