When using sssd in large LDAP environments, particularly when very large groups (>10k members) are involved, the memberof plugin can consume a lot of CPU time while updating the cache. For example, 'getent group <very_large_group>' may take minutes because of memberof updates. While the memberof operations run, sssd is non-functional, and other tasks (e.g. logins) hang until memberof finishes.
I have seen this take up to 7 minutes on a real system where several large groups (the biggest >36k members) were involved (rfc2307, no nesting). During that time, ssh logins would hang and eventually time out.
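To illustrate why a single group lookup can be so expensive: a memberof-style plugin maintains reverse memberOf links, so refreshing one group entry triggers one cache write per member. The sketch below is a hypothetical simulation of that fan-out, not SSSD code; the names and data structures are invented for illustration.

```python
def apply_memberof_updates(group_dn, member_dns, cache):
    """Simulate maintaining reverse memberOf links in a cache:
    one write per group member, per group refresh."""
    writes = 0
    for member_dn in member_dns:
        # Each member entry must be updated to point back at the group.
        cache.setdefault(member_dn, set()).add(group_dn)
        writes += 1
    return writes

# A flat group with 36,000 members (as in the report) forces
# 36,000 cache writes on every refresh of that one group.
cache = {}
n = apply_memberof_updates(
    "cn=biggroup", [f"uid=u{i}" for i in range(36000)], cache
)
print(n)  # 36000
```

With several such groups refreshed in one transaction, the total write count multiplies accordingly, which matches the minutes-long stalls observed above.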
Major optimizations may be possible if memberof is made aware of whether nesting is involved. This should be investigated in addition to general optimizations.
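The nesting-awareness idea can be sketched as follows: in the flat case (rfc2307, no nesting, as in the report above) a user's memberOf set is just the groups that list the user directly, while the nested case additionally requires a transitive walk over group-in-group edges. This is a hypothetical illustration of the difference, not SSSD's actual algorithm:

```python
def direct_memberof(user, groups):
    # Flat case: one pass over membership lists, no graph traversal.
    return {g for g, members in groups.items() if user in members}

def nested_memberof(user, groups, parents):
    # Nested case: also follow group-in-group edges transitively.
    result = direct_memberof(user, groups)
    stack = list(result)
    while stack:
        g = stack.pop()
        for parent in parents.get(g, ()):
            if parent not in result:
                result.add(parent)
                stack.append(parent)
    return result

groups = {"devs": {"alice"}, "eng": set()}
parents = {"devs": ["eng"]}  # devs is a member of eng
print(sorted(direct_memberof("alice", groups)))           # ['devs']
print(sorted(nested_memberof("alice", groups, parents)))  # ['devs', 'eng']
```

If the plugin knew nesting was disabled, it could skip the traversal bookkeeping entirely and take only the cheap direct path.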
Fields changed
milestone: NEEDS_TRIAGE => SSSD 1.6.1
owner: somebody => jzeleny
milestone: SSSD 1.6.1 => SSSD 1.6.2
rhbz: =>
status: new => assigned
milestone: SSSD 1.6.2 => SSSD 1.7.0
milestone: SSSD 1.7.0 => SSSD 1.9.0
"Nice to have" for 1.9.
blockedby: =>
blocking: =>
So far we have identified no advantage in doing this.
feature_milestone: =>
milestone: SSSD 1.9.0 => SSSD Deferred
rhbz: => 0
This was solved by not updating the groups in the first place, so we can close this ticket.
changelog: =>
design: =>
design_review: => 0
fedora_test_page: =>
mark: => 0
review: => 1
selected: =>
sensitive: => 0
resolution: => wontfix
status: assigned => closed
Metadata Update from @trondham:
- Issue assigned to jzeleny
- Issue set to the milestone: SSSD Patches welcome
SSSD is moving from Pagure to Github. This means that new issues and pull requests will be accepted only in SSSD's github repository.
This issue has been cloned to Github and is available here: - https://github.com/SSSD/sssd/issues/1925
If you want to receive further updates on the issue, please navigate to the GitHub issue and click on the subscribe button.
Thank you for understanding. We apologize for any inconvenience.