#2886 sssd-nss segfault on restart
Closed: Invalid. Opened 8 years ago by jhrozek.

Ticket was cloned from Red Hat Bugzilla (product Red Hat Enterprise Linux 6): Bug 1283769

Description of problem:

My system is very busy copying a large (3+ TB) file from one SATA disk to a USB
disk. sssd is being affected:

Nov 19 09:27:24 saga sssd: Killing service [default], not responding to pings!
Nov 19 09:28:06 saga sssd[be[default]]: Shutting down
Nov 19 09:28:06 saga sssd[be[default]]: Starting up
Nov 19 10:19:06 saga sssd: Killing service [default], not responding to pings!
Nov 19 10:20:06 saga sssd: [default][10121] is not responding to SIGTERM.
Sending SIGKILL.
Nov 19 10:20:06 saga sssd[be[default]]: Starting up
Nov 19 10:20:11 saga kernel: sssd_nss[16928]: segfault at 94 ip
00007f29e6a34fbc sp 00007ffd00ed24a0 error 4 in
libdbus-1.so.3.4.0[7f29e6a10000+40000]
Nov 19 10:20:12 saga abrt[344]: Saved core dump of pid 16928
(/usr/libexec/sssd/sssd_nss) to /var/spool/abrt/ccpp-2015-11-19-10:20:11-16928
(1462272 bytes)
Nov 19 10:20:12 saga sssd[nss]: Starting up
Nov 19 10:41:06 saga sssd: Killing service [default], not responding to pings!
Nov 19 10:41:42 saga sssd: Killing service [nss], not responding to pings!
Nov 19 10:42:06 saga sssd: [default][341] is not responding to SIGTERM. Sending
SIGKILL.
Nov 19 10:42:06 saga sssd[be[default]]: Starting up
Nov 19 10:42:12 saga sssd[nss]: Shutting down
Nov 19 10:42:12 saga sssd[nss]: Starting up
Nov 19 11:07:32 saga sssd: Killing service [nss], not responding to pings!
Nov 19 11:09:12 saga sssd: [nss][9468] is not responding to SIGTERM. Sending
SIGKILL.
Nov 19 11:09:12 saga sssd[nss]: Starting up
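
For context, the "not responding to pings" messages come from the sssd monitor, which health-checks its child services and escalates from SIGTERM to SIGKILL when a ping goes unanswered within the grace period. A minimal, self-contained sketch of that escalation pattern (hypothetical helper names, not the actual mt_svc_sigkill() code):

```c
/* Hypothetical illustration of the SIGTERM -> SIGKILL escalation performed by
 * the monitor; not the actual SSSD implementation. */
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void terminate_service(pid_t svc_pid, unsigned grace_seconds)
{
    /* Ask the service to shut down cleanly. */
    kill(svc_pid, SIGTERM);

    for (unsigned i = 0; i < grace_seconds; i++) {
        sleep(1);
        /* The monitor is the parent, so reap the child if it already exited. */
        if (waitpid(svc_pid, NULL, WNOHANG) == svc_pid) {
            return;
        }
    }

    /* Grace period expired: "[nss][9468] is not responding to SIGTERM.
     * Sending SIGKILL." */
    kill(svc_pid, SIGKILL);
    waitpid(svc_pid, NULL, 0);
}
```

The syslog excerpt above shows several such escalations while the disk copy keeps the system busy; the sssd_nss segfault happens right after one of them.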

# cat sssd.log
(Thu Nov 19 10:20:06 2015) [sssd] [mt_svc_sigkill] (0x0010): [default][10121]
is not responding to SIGTERM. Sending SIGKILL.
(Thu Nov 19 10:42:06 2015) [sssd] [mt_svc_sigkill] (0x0010): [default][341] is
not responding to SIGTERM. Sending SIGKILL.
(Thu Nov 19 11:08:32 2015) [sssd] [mt_svc_sigkill] (0x0010): [nss][9468] is not
responding to SIGTERM. Sending SIGKILL.

# cat sssd_nss.log
(Thu Nov 19 11:09:47 2015) [sssd[nss]] [dp_id_callback] (0x0010): The Monitor
returned an error [org.freedesktop.DBus.Error.NoReply]
(Thu Nov 19 11:09:47 2015) [sssd[nss]] [id_callback] (0x0010): The Monitor
returned an error [org.freedesktop.DBus.Error.NoReply]

Program terminated with signal 11, Segmentation fault.
#0  dbus_watch_handle (watch=0x90, flags=2) at dbus-watch.c:650
650       if (watch->fd < 0 || watch->flags == 0)
(gdb) bt
#0  dbus_watch_handle (watch=0x90, flags=2) at dbus-watch.c:650
#1  0x00007f29e70bbdbc in sbus_watch_handler (ev=<value optimized out>,
    fde=<value optimized out>, flags=<value optimized out>, data=<value
optimized out>)
    at src/sbus/sssd_dbus_common.c:94
#2  0x00007f29e39e7ebe in epoll_event_loop (ev=<value optimized out>,
    location=<value optimized out>) at ../tevent_epoll.c:736
#3  epoll_event_loop_once (ev=<value optimized out>, location=<value optimized
out>)
    at ../tevent_epoll.c:931
#4  0x00007f29e39e62e6 in std_event_loop_once (ev=0xe1f3e0,
    location=0x7f29e70dae80 "src/util/server.c:668") at
../tevent_standard.c:112
#5  0x00007f29e39e249d in _tevent_loop_once (ev=0xe1f3e0,
    location=0x7f29e70dae80 "src/util/server.c:668") at ../tevent.c:530
#6  0x00007f29e39e251b in tevent_common_loop_wait (ev=0xe1f3e0,
    location=0x7f29e70dae80 "src/util/server.c:668") at ../tevent.c:634
#7  0x00007f29e39e6256 in std_event_loop_wait (ev=0xe1f3e0,
    location=0x7f29e70dae80 "src/util/server.c:668") at
../tevent_standard.c:138
#8  0x00007f29e70c28d3 in server_loop (main_ctx=0xe20750) at
src/util/server.c:668
#9  0x0000000000405fc8 in main (argc=6, argv=<value optimized out>)
    at src/responder/nss/nsssrv.c:610
(gdb) print watch
$1 = (DBusWatch *) 0x90
(gdb) print *watch
Cannot access memory at address 0x90
(gdb) up
#1  0x00007f29e70bbdbc in sbus_watch_handler (ev=<value optimized out>,
    fde=<value optimized out>, flags=<value optimized out>, data=<value
optimized out>)
    at src/sbus/sssd_dbus_common.c:94
94                  dbus_watch_handle(watch->dbus_write_watch,
DBUS_WATCH_WRITABLE);
(gdb) print watch
$2 = (struct sbus_watch_ctx *) 0xe32a40
(gdb) print *watch
$3 = {prev = 0x0, next = 0x0, conn = 0xe299f0, fde = 0xe2c300, fd = 13,
dbus_read_watch = 0x0,
  dbus_write_watch = 0x90}
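
Frame #1 explains the faulting address: the tevent fd handler hands the stored DBusWatch pointers back to dbus_watch_handle(), and dbus_write_watch holds the bogus value 0x90 (the fault at 0x94 is consistent with libdbus reading watch->fd a few bytes past that pointer). Below is a simplified sketch of that dispatch path; the struct layout mirrors the gdb output, but the handler body is illustrative rather than a copy of src/sbus/sssd_dbus_common.c:

```c
#include <stdint.h>
#include <dbus/dbus.h>
#include <tevent.h>

/* Illustrative layout matching the fields printed by gdb above. */
struct sbus_watch_ctx {
    struct sbus_watch_ctx *prev, *next;
    struct sbus_connection *conn;
    struct tevent_fd *fde;
    int fd;
    DBusWatch *dbus_read_watch;
    DBusWatch *dbus_write_watch;
};

/* tevent fd callback that reports readiness back to libdbus. */
static void sbus_watch_handler(struct tevent_context *ev,
                               struct tevent_fd *fde,
                               uint16_t flags,
                               void *data)
{
    struct sbus_watch_ctx *watch = (struct sbus_watch_ctx *) data;

    if (flags & TEVENT_FD_READ) {
        dbus_watch_handle(watch->dbus_read_watch, DBUS_WATCH_READABLE);
    }
    if (flags & TEVENT_FD_WRITE) {
        /* Crash site: dbus_write_watch was 0x90 here, i.e. the DBusWatch had
         * gone away (or was never valid) without the cached pointer being
         * cleared, so libdbus dereferenced garbage. */
        dbus_watch_handle(watch->dbus_write_watch, DBUS_WATCH_WRITABLE);
    }
}
```

The pattern suggests the write watch was removed or invalidated without the cached pointer being reset; guarding the call or NULLing the pointer in the remove-watch callback would avoid the crash, but it would only paper over the lifetime problem discussed in the comments below.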

Version-Release number of selected component (if applicable):
sssd-1.12.4-47.el6_7.4.x86_64

Fields changed

blockedby: =>
blocking: =>
changelog: =>
coverity: =>
design: =>
design_review: => 0
feature_milestone: =>
fedora_test_page: =>
mark: no => 0
milestone: NEEDS_TRIAGE => SSSD 1.13.3
review: True => 0
selected: =>
testsupdated: => 0

Fields changed

cc: => orion

This still needs work and ideally a better way to reproduce, so I'm moving the ticket out of 1.13.3 before the upstream release.

milestone: SSSD 1.13.3 => SSSD 1.13.4

This will (hopefully) be mitigated by several changes being worked on:
- Simo rewrote the watchdog to be in-process
- the cache writes should be less frequent in 1.14 as well
- Pavel is changing the requests' talloc hierarchy

Because of the changes above and because we don't have a way to reproduce this problem, I'm marking this bug as minor and moving it to a later release. I would prefer to see whether we still have issues after the 1.14 changes.
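
For reference, the talloc-hierarchy change mentioned above ties a request's allocations (and any teardown logic) to a single parent context, so freeing the parent tears everything down together instead of leaving stale pointers behind. A minimal, hypothetical illustration of that ownership pattern, not the actual SSSD request code:

```c
#include <stdio.h>
#include <talloc.h>

struct request_state {
    const char *name;
};

/* Runs automatically when the request (or any ancestor) is freed; in a real
 * responder this is where per-request resources such as D-Bus watches would
 * be unregistered. */
static int request_destructor(struct request_state *req)
{
    printf("tearing down request '%s'\n", req->name);
    return 0;
}

int main(void)
{
    TALLOC_CTX *conn = talloc_new(NULL);           /* per-connection parent */
    struct request_state *req;

    req = talloc_zero(conn, struct request_state); /* request owned by conn */
    req->name = talloc_strdup(req, "getpwnam");    /* string owned by req */
    talloc_set_destructor(req, request_destructor);

    /* Freeing the connection frees the request and its string and runs the
     * destructor, so nothing can dangle past the connection's lifetime. */
    talloc_free(conn);
    return 0;
}
```

With that shape a request cannot outlive its connection, which is why the rewritten hierarchy is expected to make the dangling-watch crash above much harder to hit.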

milestone: SSSD 1.13.4 => SSSD 1.15 beta

Also linked to https://bugzilla.redhat.com/show_bug.cgi?id=1305344 (sorry, our internal cloning tool has some issues at the moment).

I suspect this is solved by Pavel's new talloc hierarchy in providers, so I suggest we close.

review: 0 => 1
selected: => Not need

Please either reopen this ticket or open a new bug if you are able to reproduce the problem with the 1.14 branch, where the new request hierarchy was introduced.

resolution: => worksforme
status: new => closed

Metadata Update from @jhrozek:
- Issue set to the milestone: SSSD Future releases (no date set yet)

7 years ago

SSSD is moving from Pagure to GitHub. This means that new issues and pull requests
will be accepted only in SSSD's GitHub repository.

This issue has been cloned to GitHub and is available here:
- https://github.com/SSSD/sssd/issues/3927

If you want to receive further updates on the issue, please navigate to the GitHub issue
and click on the Subscribe button.

Thank you for your understanding. We apologize for any inconvenience.
