#49062 Reset incremental update status after re-initialization
Closed: wontfix 7 years ago Opened 7 years ago by mreynolds.

Ticket was cloned from Red Hat Bugzilla (product Red Hat Enterprise Linux 7): Bug 1379424

Description of problem:

This is not a real bug; it's something that could confuse our customers when
using the ipa command line. Feel free to change the component if needed.

Sometimes we re-initialize the replicas, and when a customer uses the ipa
command line to look at the status, we can see something like this:

[root@rz1b022 ~]# ipa-csreplica-manage list -v buxv023.hv.ibb.lan
Directory Manager password:

rz2hh022.rz2hh.ibb.lan
  last init status: 0 Total update succeeded
  last init ended: 2016-09-23 06:16:10+00:00
  last update status: -1 Incremental update has failed and requires
administrator actionLDAP error: Can't contact LDAP server
  last update ended: 1970-01-01 00:00:00+00:00


So, the replica has been re-initialized, but since no incremental update took
place after that, it seems to me that the attribute
nsds5replicaLastUpdateStatus has never been updated. This confuses customers
very often.

It could be good to reset the nsds5replicaLastUpdateStatus after a re-init.
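As an illustration of what such a reset would touch, the helper below builds an LDIF change record that could be fed to ldapmodify to replace nsds5replicaLastUpdateStatus on a replication agreement entry. This is only a stdlib sketch; the agreement DN and the default status string are hypothetical examples, not taken from this ticket, and the actual fix would live inside the server, not in a client script.

```python
# Sketch: build an LDIF snippet for ldapmodify that clears the
# nsds5replicaLastUpdateStatus attribute on an agreement entry.
# The DN and default status text below are illustrative assumptions.

def build_reset_ldif(agreement_dn: str,
                     status: str = "0 Replica acquired successfully") -> str:
    """Return an LDIF change record resetting the last-update status."""
    return (
        f"dn: {agreement_dn}\n"
        "changetype: modify\n"
        "replace: nsds5replicaLastUpdateStatus\n"
        f"nsds5replicaLastUpdateStatus: {status}\n"
    )

if __name__ == "__main__":
    # Hypothetical agreement DN under cn=mapping tree,cn=config
    dn = ("cn=example-agreement,cn=replica,"
          "cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config")
    print(build_reset_ldif(dn))
```

The output of `build_reset_ldif()` can be piped to `ldapmodify -D "cn=Directory Manager" -W` in the usual way.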

But this change could be controversial, since someone might want to check the
two statuses separately.

Another possibility is to change the ipa command line not to display the bare
status but to combine the two statuses into one unified piece of information.
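As a sketch of that second option, assuming the attributes keep the "code message" shape shown in the output above, a client-side helper could prefer the init status whenever the last update end time is still the 1970 epoch placeholder. This is hypothetical display logic, not the actual ipa code:

```python
# Sketch: unify init/update status for display. If the last update ended
# at the epoch placeholder (1970-01-01), no incremental update has run
# since the re-init, so show the init status instead of a stale one.
from datetime import datetime, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def unified_status(init_status: str, update_status: str,
                   update_end: datetime) -> str:
    """Pick the status line a tool like ipa-csreplica-manage could show."""
    if update_end == EPOCH:
        return f"{init_status} (no incremental update since re-init)"
    return update_status

# Values taken from the output quoted in this ticket:
status = unified_status(
    "0 Total update succeeded",
    "-1 Incremental update has failed and requires administrator action",
    EPOCH,  # last update ended: 1970-01-01 00:00:00+00:00
)
print(status)  # -> 0 Total update succeeded (no incremental update since re-init)
```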

Also feel free to close this bug if it is not considered relevant.

Metadata Update from @mreynolds:
- Issue set to the milestone: 1.3.6.0

7 years ago

Metadata Update from @mreynolds:
- Issue assigned to mreynolds

7 years ago

@gparente I cannot reproduce the problem. After I do a reinit, the last update status is correctly reset. Can you provide reproducible steps (outside of ipa)? Thanks!

Metadata Update from @mreynolds:
- Custom field reviewstatus adjusted to needinfo
- Issue close_status updated to: None

7 years ago

mark,

if it's not reproducible, let's close this issue as not a bug.

Thanks.

German.


@gparente:

The thing is, I see where we should clear the status after a total init; I just can't get the server into the state where the status is not updated after the init, nor into the same state shown in this issue. So there could still be a problem here.

The "fix" is simple, though, and I'm tempted to apply it anyway, but I just want to make sure I'm reproducing "your" issue first.

Metadata Update from @mreynolds:
- Custom field reviewstatus adjusted to review (was: needinfo)

7 years ago

The fix looks good to me. Ack.

I wonder if the problem could be reproducible only after an incremental update failure,
for example, replicating a memberof update on a consumer where the update is invalid and rejected.

The fix looks good to me. Ack.
+1

Metadata Update from @nhosoi:
- Custom field reviewstatus adjusted to ack (was: review)

7 years ago

be93d90..3038411 master -> master

Pushed for now, I'll try and do the replication test in the near future.

Metadata Update from @mreynolds:
- Custom field reviewstatus adjusted to review (was: ack)
- Issue close_status updated to: fixed
- Issue status updated to: Closed (was: Open)

7 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/2121

If you want to receive further updates on the issue, please navigate to the github issue
and click on the subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: fixed)

3 years ago
