#48916 DS shuts down automatically if dnaThreshold is set to 0 in a MMR setup
Closed: wontfix. Opened 7 years ago by nhosoi.

Description of problem:
DS shuts down automatically if dnaThreshold is set to 0 in an MMR setup

How reproducible:
Always

Steps to Reproduce:
This is an MMR (multi-master replication) setup with two masters.

1. Enabled the dna plugin on both instances

2. Added the required container entries for the dna plugin on both masters

3. Now on master1, which will transfer its next range to master2, I added the
DNA plugin configuration entry:
dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
objectClass: top
objectClass: dnaPluginConfig
cn: Account UIDs
dnatype: uidNumber
dnatype: gidNumber
dnafilter: (objectclass=posixAccount)
dnascope: ou=People,dc=example,dc=com
dnaNextValue: 1
dnaMaxValue: 50
dnasharedcfgdn: cn=Account UIDs,ou=Ranges,dc=example,dc=com
dnaThreshold: 0
dnaRangeRequestTimeout: 60
dnaMagicRegen: magic
dnaRemoteBindDN: uid=dnaAdmin,ou=People,dc=example,dc=com
dnaRemoteBindCred: secret123
dnaNextRange: 80-90

As can be seen in the above entry, I've set dnaThreshold to '0' and
dnaNextRange to 80-90.

4. Then on master2, I added the DNA plugin configuration entry:
dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
objectClass: top
objectClass: dnaPluginConfig
cn: Account UIDs
dnatype: uidNumber
dnatype: gidNumber
dnafilter: (objectclass=posixAccount)
dnascope: ou=People,dc=example,dc=com
dnanextvalue: 61
dnaMaxValue: 70
dnasharedcfgdn: cn=Account UIDs,ou=Ranges,dc=example,dc=com
dnaThreshold: 2
dnaRangeRequestTimeout: 60
dnaMagicRegen: magic
dnaRemoteBindDN: uid=dnaAdmin,ou=People,dc=example,dc=com
dnaRemoteBindCred: secret123

master2 has only 10 values (61-70) that it can allocate automatically for the
uidNumber and gidNumber attributes.

5. Then I added the required replication configuration entries on both
masters to configure replication.

6. Added 10 entries on master2 to exhaust its available range

7. Did an ldapsearch on master2; all 10 entries were added and the uidNumber
and gidNumber attributes were set accordingly by the DNA plugin. So far so
good.

8. Now I tried adding an 11th entry on master2; it failed with this error:
ldap_add: Operations error (1)
additional info: Allocation of a new value for range cn=account
uids,cn=distributed numeric assignment plugin,cn=plugins,cn=config failed!
Unable to proceed.

9. Checked the error logs on master2.

10. Upon investigating with status-dirsrv, I found that master1 had been killed:
[root@ds ~]# status-dirsrv mast1
● dirsrv@mast1.service - 389 Directory Server mast1.
   Loaded: loaded (/usr/lib/systemd/system/dirsrv@.service; enabled; vendor
preset: disabled)
   Active: failed (Result: signal) since Thu 2016-07-07 20:35:23 IST; 9min ago
 Main PID: 5852 (code=killed, signal=FPE)
   Status: "slapd started: Ready to process requests"

The systemd Status field, however, still shows "Ready to process requests".

11. Restarted master1 with start-dirsrv master1

12. Checked the error logs on master1.

13. Did an ldapsearch on master1 and found that only some of the 10 entries
added on master2 had been replicated to master1; the others were missing.

The fix also adds extra tracing to the module.

The fix looks good. ack

One minor issue...

The added logging is great, but I think you should use "plugin" logging instead of "trace function call" logging. Trace function call logging has so much overhead, and "technically" this is not how it's supposed to be used anyway as it's meant for logging the entering/exiting of a function.

Replying to [comment:6 mreynolds]:

One minor issue...

The added logging is great, but I think you should use "plugin" logging instead of "trace function call" logging. Trace function call logging has so much overhead, and "technically" this is not how it's supposed to be used anyway as it's meant for logging the entering/exiting of a function.

+1

How about the SLAPI_LOG_CONFIG level? I'd think this is not set by default. Or is it better to always log with SLAPI_LOG_FATAL, to inform the administrator?
{{{
slapi_log_error(SLAPI_LOG_CONFIG, DNA_PLUGIN_SUBSYSTEM,
                "----------> %s too low, setting to [%s]\n", DNA_THRESHOLD, value);
}}}
But in that case, you need to repeat that the value is for DNA_THRESHOLD, since the previous message is at the SLAPI_LOG_CONFIG level... :)

I updated these where it made sense. I left the --x bailing messages as trace, since they pair with the other trace messages.

Looks good! You have my ack.

commit 05ebb6d
Compressing objects: 100% (12/12), done.
Writing objects: 100% (12/12), 4.09 KiB | 0 bytes/s, done.
Total 12 (delta 9), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
f593ae7..05ebb6d master -> master

Metadata Update from @nhosoi:
- Issue assigned to firstyear
- Issue set to the milestone: 1.3.5.10

7 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/1975

If you want to receive further updates on the issue, please navigate to the github issue
and click on subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: Fixed)

3 years ago
