#47721 Schema Replication Issue
Closed: wontfix. Opened 10 years ago by mharmsen.

In the process of working on Dogtag TRAC Ticket #816 (pki-tomcat cannot be started after installation of an IPA replica with a CA), I believe that I have discovered a DS schema replication issue.

I was attempting to create a Dogtag Master (Fedora 19) and a Dogtag Clone (Fedora 20) using the following Directory Server versions for data storage:

  • 389-ds-base-1.3.1.17-1.fc19.x86_64 (Fedora 19)
  • 389-ds-base-1.3.2.9-1.fc20.x86_64 (Fedora 20)

I cleaned and installed a fresh Fedora 19 DS and Master CA, and verified that everything works.

I then copied the P12 backup file containing the certs and keys for clone configuration.

I cleaned and installed a fresh Fedora 20 DS and Clone CA.

Unfortunately, the CS was unable to start because '/etc/dirsrv/slapd-<fedora20>/schema/99user.ldif' had not been replicated from '/etc/dirsrv/slapd-<fedora19>/schema/99user.ldif'.

The reason, however, appears to be that DS schema replication failed before our schema could be copied from Fedora 19 to Fedora 20. I verified this by manually copying the '99user.ldif' schema file over from Fedora 19, after which the CS clone started successfully and I was able to test that it works correctly.
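
For reference, the manual workaround boiled down to copying the file and restarting the clone's DS instance. A minimal sketch, using the instance names from the logs below; the systemd unit name is an assumption, not something recorded in this ticket:

    (on fedora20.example.com)
    # scp root@fedora19.example.com:/etc/dirsrv/slapd-fedora19/schema/99user.ldif \
          /etc/dirsrv/slapd-fedora20/schema/99user.ldif
    # systemctl restart dirsrv@fedora20.service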

Schema-related log messages on the Fedora 19 DS:

    # cd /var/log/dirsrv/slapd-fedora19

    # grep -i schema *
    access:[25/Feb/2014:13:50:47 -0800] conn=1 op=69 SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="* aci aci"
    access:[25/Feb/2014:13:50:47 -0800] conn=1 op=70 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:12 -0800] conn=8 op=7 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:13 -0800] conn=8 op=8 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:13 -0800] conn=8 op=9 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:13 -0800] conn=8 op=10 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:13 -0800] conn=8 op=11 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:13 -0800] conn=8 op=12 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:14 -0800] conn=8 op=13 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:14 -0800] conn=8 op=14 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:14 -0800] conn=8 op=15 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:14 -0800] conn=8 op=16 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:14 -0800] conn=8 op=17 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:15 -0800] conn=8 op=18 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:15 -0800] conn=8 op=19 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:15 -0800] conn=8 op=20 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:15 -0800] conn=8 op=21 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:15 -0800] conn=8 op=22 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:15 -0800] conn=8 op=23 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:16 -0800] conn=8 op=24 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:16 -0800] conn=8 op=25 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:16 -0800] conn=8 op=26 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:16 -0800] conn=8 op=27 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:16 -0800] conn=8 op=28 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:17 -0800] conn=8 op=29 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:17 -0800] conn=8 op=30 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:17 -0800] conn=8 op=31 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:17 -0800] conn=8 op=32 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:17 -0800] conn=8 op=33 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:18 -0800] conn=8 op=34 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:18 -0800] conn=8 op=35 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:18 -0800] conn=8 op=36 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:18 -0800] conn=8 op=37 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:18 -0800] conn=8 op=38 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:19 -0800] conn=8 op=39 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:19 -0800] conn=8 op=40 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:19 -0800] conn=8 op=41 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:19 -0800] conn=8 op=42 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:19 -0800] conn=8 op=43 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:19 -0800] conn=8 op=44 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:20 -0800] conn=8 op=45 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:20 -0800] conn=8 op=46 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:20 -0800] conn=8 op=47 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:20 -0800] conn=8 op=48 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:20 -0800] conn=8 op=49 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:21 -0800] conn=8 op=50 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:21 -0800] conn=8 op=51 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:21 -0800] conn=8 op=52 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:21 -0800] conn=8 op=53 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:21 -0800] conn=8 op=54 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:22 -0800] conn=8 op=55 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:22 -0800] conn=8 op=56 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:22 -0800] conn=8 op=57 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:22 -0800] conn=8 op=58 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:22 -0800] conn=8 op=59 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:23 -0800] conn=8 op=60 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:23 -0800] conn=8 op=61 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:23 -0800] conn=8 op=62 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:23 -0800] conn=8 op=63 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:23 -0800] conn=8 op=64 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:24 -0800] conn=8 op=65 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:24 -0800] conn=8 op=66 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:24 -0800] conn=8 op=67 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:24 -0800] conn=8 op=68 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:24 -0800] conn=8 op=69 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:24 -0800] conn=8 op=70 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:25 -0800] conn=8 op=71 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:25 -0800] conn=8 op=72 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:25 -0800] conn=8 op=73 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:25 -0800] conn=8 op=74 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:25 -0800] conn=8 op=75 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:26 -0800] conn=8 op=76 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:26 -0800] conn=8 op=77 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:26 -0800] conn=8 op=78 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:26 -0800] conn=8 op=79 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:26 -0800] conn=8 op=80 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:27 -0800] conn=8 op=81 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:27 -0800] conn=8 op=82 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:27 -0800] conn=8 op=83 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:27 -0800] conn=8 op=84 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:27 -0800] conn=8 op=85 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:28 -0800] conn=8 op=86 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:28 -0800] conn=8 op=87 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:28 -0800] conn=8 op=88 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:28 -0800] conn=8 op=89 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:28 -0800] conn=8 op=90 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:29 -0800] conn=8 op=91 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:29 -0800] conn=8 op=92 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:29 -0800] conn=8 op=93 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:29 -0800] conn=8 op=94 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:29 -0800] conn=8 op=95 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:30 -0800] conn=8 op=96 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:30 -0800] conn=8 op=97 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:30 -0800] conn=8 op=98 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:30 -0800] conn=8 op=99 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:30 -0800] conn=8 op=100 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:31 -0800] conn=8 op=101 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:31 -0800] conn=8 op=102 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:31 -0800] conn=8 op=103 MOD dn="cn=schema"
    access:[25/Feb/2014:13:57:31 -0800] conn=8 op=104 MOD dn="cn=schema"
    errors:[25/Feb/2014:14:08:52 -0800] NSMMReplicationPlugin - agmt="cn=masterAgreement1-fedora20.example.com-pki-tomcat" (fedora20:389): Schema replication update failed: Server is unwilling to perform
    errors:[25/Feb/2014:14:08:53 -0800] NSMMReplicationPlugin - Warning: unable to replicate schema to host fedora20.example.com, port 389. Continuing with total update session.
    errors:[25/Feb/2014:14:09:05 -0800] NSMMReplicationPlugin - agmt="cn=masterAgreement1-fedora20.example.com-pki-tomcat" (fedora20:389): Schema replication update failed: Server is unwilling to perform
    errors:[25/Feb/2014:14:09:05 -0800] NSMMReplicationPlugin - agmt="cn=masterAgreement1-fedora20.example.com-pki-tomcat" (fedora20:389): Warning: unable to replicate schema: rc=1
    errors:[25/Feb/2014:14:09:09 -0800] NSMMReplicationPlugin - agmt="cn=masterAgreement1-fedora20.example.com-pki-tomcat" (fedora20:389): Schema replication update failed: Server is unwilling to perform
    errors:[25/Feb/2014:14:09:09 -0800] NSMMReplicationPlugin - agmt="cn=masterAgreement1-fedora20.example.com-pki-tomcat" (fedora20:389): Warning: unable to replicate schema: rc=1
    errors:[25/Feb/2014:14:28:08 -0800] NSMMReplicationPlugin - agmt="cn=masterAgreement1-fedora20.example.com-pki-tomcat" (fedora20:389): Schema replication update failed: Server is unwilling to perform
    errors:[25/Feb/2014:14:28:08 -0800] NSMMReplicationPlugin - agmt="cn=masterAgreement1-fedora20.example.com-pki-tomcat" (fedora20:389): Warning: unable to replicate schema: rc=1
    errors:[25/Feb/2014:14:33:10 -0800] NSMMReplicationPlugin - agmt="cn=masterAgreement1-fedora20.example.com-pki-tomcat" (fedora20:389): Schema replication update failed: Server is unwilling to perform
    errors:[25/Feb/2014:14:33:10 -0800] NSMMReplicationPlugin - agmt="cn=masterAgreement1-fedora20.example.com-pki-tomcat" (fedora20:389): Warning: unable to replicate schema: rc=1
    errors:[25/Feb/2014:14:43:08 -0800] NSMMReplicationPlugin - agmt="cn=masterAgreement1-fedora20.example.com-pki-tomcat" (fedora20:389): Schema replication update failed: Server is unwilling to perform
    errors:[25/Feb/2014:14:43:08 -0800] NSMMReplicationPlugin - agmt="cn=masterAgreement1-fedora20.example.com-pki-tomcat" (fedora20:389): Warning: unable to replicate schema: rc=1

and schema-related log messages on the Fedora 20 DS:

    # cd /var/log/dirsrv/slapd-fedora20

    # grep -i schema *
    access:[25/Feb/2014:14:06:34 -0800] conn=1 op=69 SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="* aci aci"
    access:[25/Feb/2014:14:06:34 -0800] conn=1 op=70 MOD dn="cn=schema"
    access:[25/Feb/2014:14:08:52 -0800] conn=12 op=4 SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="nsSchemaCSN"
    access:[25/Feb/2014:14:08:52 -0800] conn=12 op=5 MOD dn="cn=schema"
    access:[25/Feb/2014:14:09:05 -0800] conn=27 op=4 SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="nsSchemaCSN"
    access:[25/Feb/2014:14:09:05 -0800] conn=27 op=5 MOD dn="cn=schema"
    access:[25/Feb/2014:14:09:08 -0800] conn=27 op=13 SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="nsSchemaCSN"
    access:[25/Feb/2014:14:09:08 -0800] conn=27 op=14 MOD dn="cn=schema"
    access:[25/Feb/2014:14:28:07 -0800] conn=42 op=4 SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="nsSchemaCSN"
    access:[25/Feb/2014:14:28:08 -0800] conn=42 op=5 MOD dn="cn=schema"
    access:[25/Feb/2014:14:33:10 -0800] conn=43 op=4 SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="nsSchemaCSN"
    access:[25/Feb/2014:14:33:10 -0800] conn=43 op=5 MOD dn="cn=schema"
    access:[25/Feb/2014:14:43:07 -0800] conn=44 op=4 SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="nsSchemaCSN"
    access:[25/Feb/2014:14:43:08 -0800] conn=44 op=5 MOD dn="cn=schema"
    errors:[25/Feb/2014:14:08:53 -0800] schema - Local objectClasses must not be overwritten (set replication log for additional info)
    errors:[25/Feb/2014:14:09:05 -0800] schema - Local objectClasses must not be overwritten (set replication log for additional info)
    errors:[25/Feb/2014:14:09:09 -0800] schema - Local objectClasses must not be overwritten (set replication log for additional info)
    errors:[25/Feb/2014:14:28:08 -0800] schema - Local objectClasses must not be overwritten (set replication log for additional info)
    errors:[25/Feb/2014:14:33:10 -0800] schema - Local objectClasses must not be overwritten (set replication log for additional info)
    errors:[25/Feb/2014:14:43:08 -0800] schema - Local objectClasses must not be overwritten (set replication log for additional info)
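
The SRCH lines above show the supplier's connection reading the consumer's nsSchemaCSN before each attempted schema push. To see whether the two schemas ever converged, that attribute can be compared on both servers; a minimal check, using the same Directory Manager bind as elsewhere in this ticket:

    # ldapsearch -h fedora19.example.com -p 389 -D "cn=Directory Manager" -w <password> \
        -b "cn=schema" -s base nsSchemaCSN
    # ldapsearch -h fedora20.example.com -p 389 -D "cn=Directory Manager" -w <password> \
        -b "cn=schema" -s base nsSchemaCSN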

For further granularity, I removed the Fedora 20 DS and Clone CA, and re-installed them (this time enabling replication logging):

    # cat replication.ldif 
    dn: cn=config
    changetype: modify
    replace: nsslapd-errorlog-level
    nsslapd-errorlog-level: 8192

    # ldapmodify -h fedora20.example.com -p 389 -D "cn=Directory Manager" -w <password> -f ./replication.ldif

    # cd /var/log/dirsrv/slapd-fedora20

    # grep -i schema *
    access:[25/Feb/2014:15:21:40 -0800] conn=1 op=69 SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="* aci aci"
    access:[25/Feb/2014:15:21:40 -0800] conn=1 op=70 MOD dn="cn=schema"
    access:[25/Feb/2014:15:36:05 -0800] conn=8 op=4 SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="nsSchemaCSN"
    access:[25/Feb/2014:15:36:06 -0800] conn=8 op=5 MOD dn="cn=schema"
    access:[25/Feb/2014:15:36:13 -0800] conn=9 op=4 SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="nsSchemaCSN"
    access:[25/Feb/2014:15:36:13 -0800] conn=9 op=5 MOD dn="cn=schema"
    access:[25/Feb/2014:15:36:24 -0800] conn=9 op=12 SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="nsSchemaCSN"
    access:[25/Feb/2014:15:36:25 -0800] conn=9 op=13 MOD dn="cn=schema"
    access:[25/Feb/2014:15:36:33 -0800] conn=9 op=22 SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="nsSchemaCSN"
    access:[25/Feb/2014:15:36:33 -0800] conn=9 op=23 MOD dn="cn=schema"
    errors:[25/Feb/2014:15:36:06 -0800] schema - Attribute nsRoleScopeDN is not allowed in 'nsRoleDefinition' of the remote supplier schema
    errors:[25/Feb/2014:15:36:06 -0800] schema - Attribute winSyncDirectoryFilter is not allowed in 'nsDSWindowsReplicationAgreement' of the remote supplier schema
    errors:[25/Feb/2014:15:36:06 -0800] schema - Attribute winSyncWindowsFilter is not allowed in 'nsDSWindowsReplicationAgreement' of the remote supplier schema
    errors:[25/Feb/2014:15:36:06 -0800] schema - Attribute winSyncSubtreePair is not allowed in 'nsDSWindowsReplicationAgreement' of the remote supplier schema
    errors:[25/Feb/2014:15:36:06 -0800] schema - Local objectClasses must not be overwritten (set replication log for additional info)
    errors:[25/Feb/2014:15:36:13 -0800] schema - Attribute nsRoleScopeDN is not allowed in 'nsRoleDefinition' of the remote supplier schema
    errors:[25/Feb/2014:15:36:13 -0800] schema - Attribute winSyncDirectoryFilter is not allowed in 'nsDSWindowsReplicationAgreement' of the remote supplier schema
    errors:[25/Feb/2014:15:36:13 -0800] schema - Attribute winSyncWindowsFilter is not allowed in 'nsDSWindowsReplicationAgreement' of the remote supplier schema
    errors:[25/Feb/2014:15:36:13 -0800] schema - Attribute winSyncSubtreePair is not allowed in 'nsDSWindowsReplicationAgreement' of the remote supplier schema
    errors:[25/Feb/2014:15:36:14 -0800] schema - Local objectClasses must not be overwritten (set replication log for additional info)
    errors:[25/Feb/2014:15:36:25 -0800] schema - Attribute nsRoleScopeDN is not allowed in 'nsRoleDefinition' of the remote supplier schema
    errors:[25/Feb/2014:15:36:25 -0800] schema - Attribute winSyncDirectoryFilter is not allowed in 'nsDSWindowsReplicationAgreement' of the remote supplier schema
    errors:[25/Feb/2014:15:36:25 -0800] schema - Attribute winSyncWindowsFilter is not allowed in 'nsDSWindowsReplicationAgreement' of the remote supplier schema
    errors:[25/Feb/2014:15:36:25 -0800] schema - Attribute winSyncSubtreePair is not allowed in 'nsDSWindowsReplicationAgreement' of the remote supplier schema
    errors:[25/Feb/2014:15:36:25 -0800] schema - Local objectClasses must not be overwritten (set replication log for additional info)
    errors:[25/Feb/2014:15:36:33 -0800] schema - Attribute nsRoleScopeDN is not allowed in 'nsRoleDefinition' of the remote supplier schema
    errors:[25/Feb/2014:15:36:33 -0800] schema - Attribute winSyncDirectoryFilter is not allowed in 'nsDSWindowsReplicationAgreement' of the remote supplier schema
    errors:[25/Feb/2014:15:36:33 -0800] schema - Attribute winSyncWindowsFilter is not allowed in 'nsDSWindowsReplicationAgreement' of the remote supplier schema
    errors:[25/Feb/2014:15:36:33 -0800] schema - Attribute winSyncSubtreePair is not allowed in 'nsDSWindowsReplicationAgreement' of the remote supplier schema
    errors:[25/Feb/2014:15:36:33 -0800] schema - Local objectClasses must not be overwritten (set replication log for additional info)

These errors are reminiscent of the following two tickets:


thierry bordaz wrote:

Hi Noriko,

I know you started looking at https://fedorahosted.org/389/ticket/47721 (schema replication issue).
My understanding is that it is likely a nasty issue.
On one side we want to prevent a subset schema from overwriting a superset schema (ticket 47490), but on the other side we want a crafted schema to be propagated (like the mozillaObject) even if it is defined in a subset schema.
This ticket is possibly addressed by ticket 496, but this is a large change.

Martin asked me about ticket 47721, as I am now done with another ticket. Are you investigating it? Just to let you know, I can take it over if you want.

It'd be nice if you could take over this ticket. I've started investigating the issue, but it's pending now.

Before the vacation, you taught me how to set the replSchema config entries. I had to modify some more of them to make it run. You can see them in these dse.ldif files:
F19: /etc/dirsrv/slapd-vm-087/dse.ldif
F20: /etc/dirsrv/slapd-vm-042/dse.ldif

The issue is that F20 has extra new schema (ours); on the other hand, F19 has custom schema added by IPA/CS. The custom schema on F19 is supposed to be replicated to F20.

When I tested it, the behaviour differed depending on the modification timestamps, i.e., whether the F19 schema or the F20 schema is newer. So we should test both cases.

What I was thinking was, as you mentioned, since there is no perfect solution for now, to have the IPA/CS installer add schemaUpdateObjectclassAccept and schemaUpdateAttributeAccept to cn=consumerUpdatePolicy and cn=supplierUpdatePolicy, and then run a consumer initialization from F19 to F20. If we can verify that F20 gets the F19 custom schema on top of the F20 custom schema, we are good.
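
For illustration, such an installer change could boil down to an ldapmodify against the two update-policy entries (on each server, as needed) before running the consumer initialization. A rough sketch, assuming the policy entries live under cn=replSchema,cn=config and using made-up objectclass/attribute names as the values to accept:

    # cat schema-update-policy.ldif
    # placeholder values below; a real deployment would list its actual custom oc/at names or OIDs
    dn: cn=consumerUpdatePolicy,cn=replSchema,cn=config
    changetype: modify
    add: schemaUpdateObjectclassAccept
    schemaUpdateObjectclassAccept: exampleCustomObjectclass-oid
    -
    add: schemaUpdateAttributeAccept
    schemaUpdateAttributeAccept: exampleCustomAttribute-oid

    dn: cn=supplierUpdatePolicy,cn=replSchema,cn=config
    changetype: modify
    add: schemaUpdateObjectclassAccept
    schemaUpdateObjectclassAccept: exampleCustomObjectclass-oid
    -
    add: schemaUpdateAttributeAccept
    schemaUpdateAttributeAccept: exampleCustomAttribute-oid

    # ldapmodify -h fedora20.example.com -p 389 -D "cn=Directory Manager" -w <password> -f ./schema-update-policy.ldif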

While testing it, I ran into an obvious bug. I think it was when the F19 schema is newer (but it may be the opposite...): all of the F19 schema is sent and stored in the F20 99user.ldif. This should be fixed...

Thank you for offering to take over this ticket. Let me reassign this to you.

Here is the current status

  • the problem is reproducible (with both a lib389 test case and the IPA test case)

    On F19:
    
        ipa-server-install -p Secret123 -a Secret123
        ipa-replica-prepare vm-022.idm.lab.bos.redhat.com
    
        scp
        /var/lib/ipa/replica-info-vm-022.idm.lab.bos.redhat.com.gpg
        root@vm-022.idm.lab.bos.redhat.com:/var/lib/ipa/replica-info-vm-022.idm.lab.bos.redhat.com.gpg
    
    On F20:
    
        ipa-replica-install --setup-ca
        /var/lib/ipa/replica-info-vm-022.idm.lab.bos.redhat.com.gpg
        enable replication logs => shows the problematic oc/attributes
    
  • Root cause identified. The problem is complex because the F20 schema contains extra definitions (at/oc) compared to F19, but F19 also contains extra definitions (the custom IPA schema).

  • A design has been written/reviewed (http://directory.fedoraproject.org/wiki/Replication_of_custom_schema_%28ticket_47721%29). The fix would be to make a replica learn the oc/at it currently ignores. It remains a problem if the replica already knows the oc/at but needs to update it (more attributes/syntax...).

  • A first fix allowed the schema to be updated with unknown definitions, but it failed to update the schema file.

Here are the next steps

  • continue working on a fix

Here is the current status

  • I made a fix that could address the issue.
    During a replication session, the consumer checks the replicated schema for definitions (oc/at) that would extend its own schema, then updates its schema with those definitions. The selected definitions may create new oc/at or extend existing ones.

  • Unit tests are fine: one replica runs the 'master' branch, the other '1.3.1'.

Here are the next steps

  • backport the fix on F20 to confirm it fixes the issue
  • update the design (schema csn, update known oc/at with changed definition, implementation)
  • Check for regressions against the other schema fixes: 47490, 47676, 47541 and 47573
  • review the fix

Here is the current status

Here are the next steps

  • would you test that fix to check whether it correctly fixes this ticket
  • update the design
  • check for regression

I was finally able to test out my scenario using a Dogtag master on Fedora 19 and a Dogtag clone on Fedora 20, allowing DS to replicate the schema (using '''pki_clone_replicate_schema=True''', as we had been doing previously).

Everything worked like a charm!
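
For context, that option lives in the pkispawn deployment configuration used when setting up the clone; a minimal, hypothetical fragment (file name and section placement are assumptions, not taken from this ticket):
{{{
# clone-deployment.cfg (excerpt; hypothetical)
[CA]
pki_clone=True
pki_clone_replicate_schema=True
}}}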

Details:
Fedora 19 Master:
{{{
(1) yum-builddep 389-ds-base-1.3.2.17-20140401091755.fc17.src.rpm
(2) built the RPMS on my system
(3) installed:
389-ds-base-1.3.2.17-20140401091755.fc19.x86_64, and
389-ds-base-libs-1.3.2.17-20140401091755.fc19.x86_64
(4) configured a brand new DS instance
(5) installed, configured, and tested a Dogtag 10.0.7 CA master
using the latest packages
(created backup keys so that a clone could be created from them)
}}}
Fedora 20 Clone:
{{{
(1) yum-builddep 389-ds-base-1.3.2.17-20140401091755.fc17.src.rpm
(2) built the RPMS on my system
(3) installed:
389-ds-base-1.3.2.17-20140401091755.fc19.x86_64, and
389-ds-base-libs-1.3.2.17-20140401091755.fc19.x86_64
(4) configured a brand new DS instance
(5) installed, configured, and tested a Dogtag 10.1.1 CA clone
using the latest packages
(used the backup keys provided by the Dogtag 10.0.7 CA master)
}}}
I was able to perform all of the following tests successfully:
{{{
master EE request --> master Agent enroll
clone EE request --> clone Agent enroll
clone EE request --> master Agent enroll
master EE request --> clone Agent enroll
}}}

Here is the current status

  • The fix for 47721 contains two parts. The first part: when the replica is acting as a consumer, it "learns" the supplier definitions that extend its own schema (new or extended definitions). The second part: when the replica is acting as a supplier, it "learns" the consumer definitions that extend its own schema.

  • I made a debug fix that contained the first part only. It was successfully tested with the 389-ds tests, the IPA test case (ipa-server-install on F19 / ipa-replica-install on F20), and the Dogtag tests above (including tests with an F19 server / F20+fix replica).

  • I made a debug fix that contained both parts. It was successfully tested with the 389-ds tests, the IPA test case (ipa-server-install on F19 / ipa-replica-install on F20), and the IPA unit tests (./make-test). Not yet tested by Dogtag.

  • Backporting the fix to 1.3.2 requires backporting 47676 and 47541; otherwise the backport becomes complex. The relevant commits:

{{{
a3674f8 Ticket 47676 : (cont.) Replication of the schema fails 'master branch' -> 1.2.11 or 1.3.1
7c2b666 Ticket 47676 : Replication of the schema fails 'master branch' -> 1.2.11 or 1.3.1
8e2ed0a Ticket 47541 - Fix Jenkins errors
0802e9c Ticket 47541 - Replication of the schema may overwrite consumer 'attributetypes' even
}}}
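
For illustration, one way those prerequisites could be pulled into the 1.3.2 branch ahead of the 47721 fix itself; a sketch, assuming the commit list above is ordered newest-first (as in git log) so the picks run oldest-first:
{{{
git checkout 389-ds-base-1.3.2
git cherry-pick -x 0802e9c 8e2ed0a 7c2b666 a3674f8
}}}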

  • Design updated

Here are the next steps

  • Clean up the debug fix, sanity checking (CI + Coverity)
  • Review

{{{

1545                    if (remote_schema_objectclasses_bervals) { 
1546                            ber_bvecfree(remote_schema_objectclasses_bervals); 
1547                    }

}}}
It is ok to pass a NULL to ber_bvecfree.

{{{

6380    schema_oc_compare_strict(struct objclass *oc_1, struct objclass *oc_2, char *description)

}}}
Should be const char *description

Otherwise, ack - including test

git merge ticket47721_oc_at
Updating 991984f..1446b3e
Fast-forward
dirsrvtests/tickets/ticket47721_test.py | 492 +++++++++++++++++++++++++++++++++++
ldap/servers/plugins/replication/repl5_connection.c | 191 ++++++++------
ldap/servers/slapd/proto-slap.h | 1 +
ldap/servers/slapd/schema.c | 1003 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------------
4 files changed, 1437 insertions(+), 250 deletions(-)

git push origin master
Counting objects: 24, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (13/13), done.
Writing objects: 100% (13/13), 14.00 KiB, done.
Total 13 (delta 9), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
991984f..1446b3e master -> master

commit 1446b3e
Author: Thierry bordaz (tbordaz) tbordaz@redhat.com
Date: Tue Mar 25 18:35:46 2014 +0100

git push origin master

Counting objects: 13, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 1.10 KiB, done.
Total 7 (delta 5), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
1446b3e..48253af master -> master

Oops, missed the fix for schema.c when applying the patch

git push origin master

Counting objects: 11, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 803 bytes, done.
Total 6 (delta 4), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
48253af..ab84ab8 master -> master

git push origin '''389-ds-base-1.3.2''' (core fix)

Counting objects: 24, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (12/12), done.
Writing objects: 100% (13/13), 14.27 KiB, done.
Total 13 (delta 9), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
d26a4a0..568f4a9 389-ds-base-1.3.2 -> 389-ds-base-1.3.2

commit 568f4a9
Author: Thierry bordaz (tbordaz) tbordaz@redhat.com
Date: Tue Mar 25 18:35:46 2014 +0100

git push origin '''389-ds-base-1.3.2''' (test case)

Counting objects: 10, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (7/7), 12.81 KiB, done.
Total 7 (delta 3), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
568f4a9..b5f38ae 389-ds-base-1.3.2 -> 389-ds-base-1.3.2

commit b5f38ae
Author: Thierry bordaz (tbordaz) tbordaz@redhat.com
Date: Fri Apr 25 11:26:54 2014 +0200

git push origin '''389-ds-base-1.3.2''' (review)

Counting objects: 11, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 818 bytes, done.
Total 6 (delta 4), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
b5f38ae..9c48a3a 389-ds-base-1.3.2 -> 389-ds-base-1.3.2

commit 9c48a3a
Author: Thierry bordaz (tbordaz) tbordaz@redhat.com
Date: Fri Apr 25 11:53:27 2014 +0200

Metadata Update from @rmeggins:
- Issue assigned to tbordaz
- Issue set to the milestone: 1.3.2.17

7 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/1055

If you want to receive further updates on the issue, please navigate to the GitHub issue
and click on the subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: Fixed)

3 years ago

