#1600 The sssd_nss process's memory consumption grows over time
Closed: Fixed. Opened 11 years ago by jhrozek.

https://bugzilla.redhat.com/show_bug.cgi?id=869443 (Red Hat Enterprise Linux 6)

Description of problem:
The requests that the responders send to the Data Providers are allocated on
the global context to ensure that even if the client disconnects, there is
still someone to read the reply. However, we forgot to free the structure that
represents the request (the tevent_req), which meant that the memory
consumption of the sssd_nss process grew over time.
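
To illustrate the pattern, here is a minimal, self-contained sketch using the public talloc/tevent APIs; it is not the actual responder code, and the names global_ctx and request_done are made up for the example. The point it shows is that a request parented to a long-lived context must be freed explicitly in its callback, because nothing else will ever free it:

```c
/* Minimal sketch of the leak pattern (not the actual SSSD patch).
 * Build (assuming pkg-config files for talloc/tevent are installed):
 *   gcc sketch.c $(pkg-config --cflags --libs talloc tevent) -o sketch
 */
#include <stdio.h>
#include <talloc.h>
#include <tevent.h>

struct global_ctx {                 /* stands in for the responder's rctx */
    struct tevent_context *ev;
};

static void request_done(struct tevent_req *req)
{
    struct global_ctx *gctx = tevent_req_callback_data(req,
                                                       struct global_ctx);

    /* Read the reply here; parenting the request to the global context is
     * what guarantees this runs even if the client that triggered the
     * request has already disconnected. */
    (void) tevent_wakeup_recv(req);

    /* The important part: free the finished request explicitly.  Because it
     * is parented to the long-lived global context, forgetting this line
     * leaks one tevent_req (plus its state) per request, which is the
     * growth described in this ticket.  SSSD's talloc_zfree() helper does
     * the same and also NULLs the pointer. */
    talloc_free(req);

    (void) gctx;                    /* gctx stays valid; only req was freed */
}

int main(void)
{
    struct global_ctx *gctx = talloc_zero(NULL, struct global_ctx);
    gctx->ev = tevent_context_init(gctx);

    /* Allocate the request on the global context, as the responders do,
     * rather than on a per-client context. */
    struct tevent_req *req = tevent_wakeup_send(gctx, gctx->ev,
                                    tevent_timeval_current_ofs(0, 1000));
    tevent_req_set_callback(req, request_done, gctx);

    tevent_loop_once(gctx->ev);     /* fires the wakeup, runs request_done */

    talloc_free(gctx);
    return 0;
}
```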

Version-Release number of selected component (if applicable):
1.9.2-4

How reproducible:
quite hard

Steps to Reproduce:
1. set a very low cache timeout
2. run account requests in parallel
3. observe the sssd_nss process growing

Actual results:
The memory consumption of the sssd_nss process keeps growing.

Expected results:
Memory consumption should stay roughly constant.

Additional info:
This is not easily reproducible, but apart from running many requests and
watching the consumption grow, a quicker, but more involved, way might be to
check with gdb that no tevent_req structures remain allocated on top of the
rctx after a request finishes (see the sketch below). Please let me know which
approach is preferable for QE.
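
For the gdb route, one possibility (the helper name dump_rctx_allocations below is illustrative, not part of the SSSD tree) is to dump the talloc hierarchy under rctx once a request has finished; libtalloc's talloc_report_full() can also be invoked directly from gdb with `call talloc_report_full(rctx, stderr)` when stopped in a frame where the responder's rctx pointer is in scope:

```c
/* Hypothetical debugging aid, not part of SSSD: print everything still
 * allocated under the responder context.  After a request completes, any
 * lingering "struct tevent_req" children in the output point to the leak
 * described above. */
#include <stdio.h>
#include <talloc.h>

static void dump_rctx_allocations(const void *rctx)
{
    /* talloc_report_full() walks the whole hierarchy below rctx and prints
     * each allocation with its talloc name and size to the given stream. */
    talloc_report_full(rctx, stderr);
}
```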

Fields changed

owner: somebody => jhrozek
patch: 0 => 1

Fields changed

milestone: NEEDS_TRIAGE => SSSD 1.9.3

resolution: => fixed
status: new => closed

Metadata Update from @jhrozek:
- Issue assigned to jhrozek
- Issue set to the milestone: SSSD 1.9.3

7 years ago

SSSD is moving from Pagure to GitHub. This means that new issues and pull requests
will be accepted only in SSSD's GitHub repository.

This issue has been cloned to Github and is available here:
- https://github.com/SSSD/sssd/issues/2642

If you want to receive further updates on the issue, please navigate to the GitHub issue
and click on the Subscribe button.

Thank you for your understanding. We apologize for any inconvenience.

