Steps to reproduce:
1) On the IPA replica, create 4 IPA users: A, B, C, and D. Then make a backup with 'db2ldif.pl -r ...'
2) On the IPA replica, delete user D: 'ipa user-del D'.
3) On the IPA master, delete user C: 'ipa user-del C'.
4) Now check both the IPA master and the IPA replica; each shows only two users, 'A' and 'B'. This is expected.
5) On the IPA replica, restore the backup with 'ldif2db.pl'.
6) Check the IPA replica immediately: 'ipa user-find' initially shows 4 users, 'A, B, C, D'.
7) Check the IPA master: 'ipa user-find' still shows only two users, 'A, B'.
8) Wait 3 minutes or so, then check the IPA replica again: it now shows only THREE users, 'A, B, D'. User 'C' has been deleted again -- that change propagated from the IPA master.
9) Check the IPA master again and again; it still shows only two users, 'A, B'.
10) Check the IPA replica again and again; it still shows three users, 'A, B, D' --- a state that matches neither the IPA master's 'A, B' nor the backup's 'A, B, C, D'.
If the backup was created without the '-r' option, step 8 above will always show 'A, B, C, D', the same as the backup. With the '-r' option the final result ends up in between.
I think the delete of D that first occurred on the replica should have been propagated to the master, and then back to the replica after the restore from ldif.
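The divergence in the steps above can be sketched with a toy simulation (pure illustration: `Replica`, `ruv`, `replicate`, and the CSN counter are invented stand-ins, not 389-ds internals). It assumes the behavior diagnosed later in this ticket: the supplier never re-sends changes that originated at the consumer it is updating.

```python
# Toy model of two multi-master replicas. Illustrative only; not the real
# 389-ds data structures or protocol.

class Replica:
    def __init__(self, rid):
        self.rid = rid
        self.users = set()
        self.changelog = []   # (csn, origin_rid, op, user)
        self.ruv = {}         # origin_rid -> highest csn seen from that rid

    def apply(self, csn, origin_rid, op, user):
        (self.users.add if op == "add" else self.users.discard)(user)
        self.changelog.append((csn, origin_rid, op, user))
        self.ruv[origin_rid] = max(self.ruv.get(origin_rid, 0), csn)

clock = 0
def local_op(replica, op, user):
    global clock
    clock += 1
    replica.apply(clock, replica.rid, op, user)

def replicate(supplier, consumer):
    for csn, rid, op, user in sorted(supplier.changelog):
        if csn <= consumer.ruv.get(rid, 0):
            continue              # consumer's RUV already covers this change
        if rid == consumer.rid:
            continue              # skip "the consumer's own" changes -- the bug
        consumer.apply(csn, rid, op, user)

master, replica = Replica(rid=1), Replica(rid=2)

for u in "ABCD":                  # step 1: create users on the replica
    local_op(replica, "add", u)
replicate(replica, master)
backup = (set(replica.users), dict(replica.ruv))   # db2ldif.pl -r snapshot

local_op(replica, "del", "D")     # step 2: delete D on the replica
local_op(master, "del", "C")      # step 3: delete C on the master
replicate(replica, master)
replicate(master, replica)
assert master.users == replica.users == {"A", "B"}  # step 4: both converge

replica.users, replica.ruv = backup   # step 5: ldif2db.pl restore
replicate(master, replica)            # step 8: wait for replication

print(sorted(master.users))   # ['A', 'B']
print(sorted(replica.users))  # ['A', 'B', 'D'] -- the stuck state from step 10
```

The restored RUV tells the master that the replica is missing both deletes, but the delete of D originated at the replica itself and is therefore withheld, so only the delete of C replays.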
I cannot reproduce this problem on trunk/master.
Here are my steps:
[1] Create two instances: a master and a dedicated consumer
[2] Set up replication and initialize the consumer
[3] Create 4 users on the master: a, b, c, d
[4] Do a "db2ldif -r" on the consumer
[5] On the master: delete 'c'
[6] On the consumer: delete 'd'
[7] Do an ldif2db on the consumer -> now the consumer has entries: a, b, c, d
[8] Either wait, or update entry 'a' on the master
[9] Both master and consumer end up with only entries: a, b
Are there any other steps that were not listed?
Here is the original mail list thread: https://www.redhat.com/archives/freeipa-users/2012-May/msg00223.html
There are two differences from what I did:
[1] They used some IPA tools
[2] They used the perl script versions of db2ldif/ldif2db
I retested using the perl scripts, and once again it works fine.
Replying to [comment:5 mreynolds]:
> There are two differences from what I did: [1] They used some IPA tools [2] They used the perl script versions of db2ldif/ldif2db. I retested using the perl scripts, and once again it works fine.
Since the original reporter is not listed in the CC list of this ticket, you will either need to contact him/her directly, or reply on the freeipa-users thread.
I was able to reproduce the issue by using multi-master replication. So instead of a dedicated consumer, it had to be a master as well (two masters).
Continuing investigation...
The problem is that we are intentionally skipping updates from other consumers in clcache_skip_change().
I'm not sure if there is a proper fix, or whether there is a better process. For example, instead of doing a db2ldif/ldif2db on the same replica, do the db2ldif on Master A and then the ldif2db on Master B.
Going to run this by the team...
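The skip decision can be sketched as follows (hypothetical Python pseudologic; the real code is C in cl5_clcache.c and differs in detail, and `skip_change_fixed` is only one possible relaxation, not necessarily what the attached patch does):

```python
# Hypothetical sketch of the supplier-side "should I withhold this
# changelog entry?" decision. Function names and the RUV dict are
# invented for illustration.

def skip_change_buggy(csn, origin_rid, consumer_rid, consumer_ruv):
    if csn <= consumer_ruv.get(origin_rid, 0):
        return True               # consumer's RUV already covers this change
    # Assumes the consumer always has its own writes -- false after a
    # restore from an older ldif.
    return origin_rid == consumer_rid

def skip_change_fixed(csn, origin_rid, consumer_rid, consumer_ruv):
    """Trust only the RUV: resend anything the consumer provably lacks."""
    return csn <= consumer_ruv.get(origin_rid, 0)

# After ldif2db the replica (rid 2) reports RUV {2: 4}, but its own
# delete of D carried CSN 5:
print(skip_change_buggy(5, 2, 2, {2: 4}))   # True  -> the delete never replays
print(skip_change_fixed(5, 2, 2, {2: 4}))   # False -> the delete is resent
```

With the relaxed check, the replica's own delete of D would be replayed after the restore and both masters would converge on 'A, B'.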
attachment 0001-Ticket-369-restore-of-replica-ldif-file-on-second-ma.patch
git merge ticket369
Updating 72b7e91..a640ac2
Fast-forward
 ldap/servers/plugins/replication/cl5_clcache.c |   15 +++++++++++++--
 1 files changed, 13 insertions(+), 2 deletions(-)

git push origin master
Counting objects: 13, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 1.13 KiB, done.
Total 7 (delta 5), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
   72b7e91..a640ac2  master -> master
Ticket has been cloned to Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=830335
Added initial screened field value.
Metadata Update from @mreynolds: - Issue assigned to mreynolds - Issue set to the milestone: 1.2.11
389-ds-base is moving from Pagure to Github. This means that new issues and pull requests will be accepted only in 389-ds-base's github repository.
This issue has been cloned to Github and is available here: - https://github.com/389ds/389-ds-base/issues/369
If you want to receive further updates on the issue, please navigate to the github issue and click on subscribe button.
Thank you for understanding. We apologize for all inconvenience.
Metadata Update from @spichugi: - Issue close_status updated to: wontfix (was: Fixed)