#47797 DB deadlock when two threads (on separated backend) try to record changes in retroCL
Closed: wontfix. Opened 9 years ago by tbordaz.

The problem is not systematic, it can be reproduced on a VM running freeipa unit tests.
I reproduced it one time out of 4.

The problem is reproduced while testing fix https://fedorahosted.org/389/ticket/47787 on 1.3.2 with those backports:

85106f0 Ticket 47721 - Schema Replication Issue (follow up)
96ab39f Ticket 47721 - Schema Replication Issue (follow up + cleanup)
6ebea73 Ticket 47721 - Schema Replication Issue
30f47c5 Ticket 47676 : (cont.) Replication of the schema fails 'master branch' -> 1.2.11 or 1.3.1
9558f55 Ticket 47676 : Replication of the schema fails 'master branch' -> 1.2.11 or 1.3.1
0ba131b Ticket 47541 - Fix Jenkins errors
adce8c6 Ticket 47541 - Replication of the schema may overwrite consumer 'attributetypes' even if consumer definition
0d344e3 Bump version 1.3.2.17
4799be9 Ticket 47787: A replicated MOD fails (Unwilling to perform) if it targets a tombstone

Now, looking at the cause of the deadlock, the problem does not appear to be related to any of those fixes.

The scenario of the hang is related to Thread 4 and Thread 5.
Thread 4 applies a DEL on the 'userRoot' backend and updates the userRoot/objectclass index. To do this it acquires a WRITE lock on page 4 of that database. It then tries to record the DEL in the retro changelog and waits for Thread 5, which owns the retroCL lock.

Thread 5 applies a MOD on the 'ipa' backend. It records the MOD in the retroCL, having acquired the lock. While adding the entry to the changelog, it performs an internal search on 'userRoot' with 'objectclass' in a filter component. It needs to read page 4 of the userRoot/objectclass index and hangs waiting for Thread 4.

Thread 5 is MOD ("ou=ca,ou=requests,o=ipaca")
        In be_txn_post op write_replog_db -> Acquire retrocl_internal_lock
                add: "changenumber=886,cn=changelog"
                        search("cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com", one_level, "(member=changenumber=886,cn=changelog)")
                                key: 'objectclass=referral'
8000595d dd= 0 locks held 18   write locks 14   pid/thread 10992/139941434517248 flags 0    priority 100
8000595d READ          1 WAIT    userRoot/objectclass.db   page          4
...


Thread 4 is DEL ("fqdn=ipatestcert.idm.lab.bos.redhat.com,cn=computers,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com")
8000595b dd= 1 locks held 62   write locks 31   pid/thread 10992/139941426124544 flags 0    priority 100
...
8000595b WRITE        17 HELD    userRoot/objectclass.db   page          4
8000595b READ          8 HELD    userRoot/objectclass.db   page          4
...

        In be_txn_post of write_replog_db -> Wait for retrocl_internal_lock

Thread 4 is
{{{
Thread 139941426124544 (db_stat threadId)
Thread 4 (Thread 0x7f46a6fe5700 (LWP 11034)):
#0  0x00007f46de2b859d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f46de2b41af in _L_lock_1026 () from /lib64/libpthread.so.0
#2  0x00007f46de2b4151 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007f46de90c329 in PR_Lock () from /lib64/libnspr4.so
#4  0x00007f46d3691734 in write_replog_db (newsuperior=0x0, modrdn_mods=0x0, newrdn=0x0, log_e=0x0, curtime=1399376225, flag=0, log_m=0x0, dn=0x7f46c42da470 "fqdn=ipatestcert.idm.lab.bos.redhat.com,cn=computers,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com", optype=3, pb=0x7f46a6fe4ae0) at ldap/servers/plugins/retrocl/retrocl_po.c:187
#5  retrocl_postob (pb=0x7f46a6fe4ae0, optype=3) at ldap/servers/plugins/retrocl/retrocl_po.c:668
#6  0x00007f46e053c515 in plugin_call_func (list=0x7f46e2be44b0, operation=operation@entry=563, pb=pb@entry=0x7f46a6fe4ae0, call_one=call_one@entry=0) at ldap/servers/slapd/plugin.c:1489
#7  0x00007f46e053c6c8 in plugin_call_list (pb=0x7f46a6fe4ae0, operation=563, list=<optimized out>) at ldap/servers/slapd/plugin.c:1451
#8  plugin_call_plugins (pb=pb@entry=0x7f46a6fe4ae0, whichfunction=whichfunction@entry=563) at ldap/servers/slapd/plugin.c:413
#9  0x00007f46d4e053df in ldbm_back_delete (pb=0x7f46a6fe4ae0) at ldap/servers/slapd/back-ldbm/ldbm_delete.c:1113
#10 0x00007f46e04f4670 in op_shared_delete (pb=pb@entry=0x7f46a6fe4ae0) at ldap/servers/slapd/delete.c:364
#11 0x00007f46e04f4933 in do_delete (pb=pb@entry=0x7f46a6fe4ae0) at ldap/servers/slapd/delete.c:128
#12 0x00007f46e0a02dee in connection_dispatch_operation (pb=0x7f46a6fe4ae0, op=0x7f46e3129bc0, conn=0x7f46cc7c6dd0) at ldap/servers/slapd/connection.c:650
#13 connection_threadmain () at ldap/servers/slapd/connection.c:2534
#14 0x00007f46de911e1b in _pt_root () from /lib64/libnspr4.so
#15 0x00007f46de2b1f33 in start_thread () from /lib64/libpthread.so.0
#16 0x00007f46ddfdfded in clone () from /lib64/libc.so.6
}}}
Thread 5 is
{{{
Thread 139941434517248
Thread 5 (Thread 0x7f46a77e6700 (LWP 11033)):
#0  0x00007f46de2b5d20 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f46d81ac1c3 in __db_hybrid_mutex_suspend () from /lib64/libdb-5.3.so
#2  0x00007f46d81ab5a8 in __db_tas_mutex_lock () from /lib64/libdb-5.3.so
#3  0x00007f46d8255fda in __lock_get_internal () from /lib64/libdb-5.3.so
#4  0x00007f46d8256ac0 in __lock_get () from /lib64/libdb-5.3.so
#5  0x00007f46d82826da in __db_lget () from /lib64/libdb-5.3.so
#6  0x00007f46d81c94a7 in __bam_search () from /lib64/libdb-5.3.so
#7  0x00007f46d81b4126 in __bamc_search () from /lib64/libdb-5.3.so
#8  0x00007f46d81b5bdf in __bamc_get () from /lib64/libdb-5.3.so
#9  0x00007f46d826f156 in __dbc_iget () from /lib64/libdb-5.3.so
#10 0x00007f46d827e0b4 in __dbc_get_pp () from /lib64/libdb-5.3.so
#11 0x00007f46d4de5530 in idl_new_fetch (be=0x7f46e2c0cf40, db=<optimized out>, inkey=0x7f46a77d18b0, txn=0x7f46e3035ad0, a=0x7f46e2df9290, flag_err=0x7f46a77d855c, allidslimit=100000) at ldap/servers/slapd/back-ldbm/idl_new.c:231
#12 0x00007f46d4de51b5 in idl_fetch_ext (be=be@entry=0x7f46e2c0cf40, db=<optimized out>, key=key@entry=0x7f46a77d18b0, txn=txn@entry=0x7f46e3035ad0, a=<optimized out>, err=err@entry=0x7f46a77d855c, allidslimit=allidslimit@entry=100000) at ldap/servers/slapd/back-ldbm/idl_shim.c:130
#13 0x00007f46d4df36e6 in index_read_ext_allids (pb=pb@entry=0x7f46e3239830, be=be@entry=0x7f46e2c0cf40, type=type@entry=0x7f46e3216750 "objectclass", indextype=indextype@entry=0x7f46d4e36b3f "eq", val=<optimized out>, txn=txn@entry=0x7f46a77d5b60, err=err@entry=0x7f46a77d855c, unindexed=unindexed@entry=0x7f46a77d5b54, allidslimit=allidslimit@entry=100000) at ldap/servers/slapd/back-ldbm/index.c:1055
#14 0x00007f46d4dde2fe in keys2idl (pb=pb@entry=0x7f46e3239830, be=be@entry=0x7f46e2c0cf40, type=0x7f46e3216750 "objectclass", indextype=indextype@entry=0x7f46d4e36b3f "eq", ivals=<optimized out>, err=err@entry=0x7f46a77d855c, unindexed=unindexed@entry=0x7f46a77d5b54, txn=txn@entry=0x7f46a77d5b60, allidslimit=allidslimit@entry=100000) at ldap/servers/slapd/back-ldbm/filterindex.c:1003
#15 0x00007f46d4ddea53 in ava_candidates (pb=pb@entry=0x7f46e3239830, be=be@entry=0x7f46e2c0cf40, f=f@entry=0x7f46e32a9600, ftype=<optimized out>, err=0x7f46a77d855c, allidslimit=100000, range=0, nextf=0x0) at ldap/servers/slapd/back-ldbm/filterindex.c:317
#16 0x00007f46d4ddf0ba in filter_candidates_ext (pb=pb@entry=0x7f46e3239830, be=be@entry=0x7f46e2c0cf40, base=base@entry=0x7f46e323c600 "cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com", f=f@entry=0x7f46e32a9600, nextf=nextf@entry=0x0, range=range@entry=0, err=err@entry=0x7f46a77d855c, allidslimit=allidslimit@entry=100000) at ldap/servers/slapd/back-ldbm/filterindex.c:140
#17 0x00007f46d4de001b in list_candidates (pb=pb@entry=0x7f46e3239830, be=be@entry=0x7f46e2c0cf40, base=base@entry=0x7f46e323c600 "cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com", flist=flist@entry=0x7f46e3261f30, ftype=<optimized out>, err=0x7f46a77d855c, allidslimit=100000) at ldap/servers/slapd/back-ldbm/filterindex.c:837
#18 0x00007f46d4ddef80 in filter_candidates_ext (pb=pb@entry=0x7f46e3239830, be=be@entry=0x7f46e2c0cf40, base=base@entry=0x7f46e323c600 "cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com", f=f@entry=0x7f46e3261f30, nextf=nextf@entry=0x0, range=range@entry=0, err=err@entry=0x7f46a77d855c, allidslimit=allidslimit@entry=100000) at ldap/servers/slapd/back-ldbm/filterindex.c:173
#19 0x00007f46d4de001b in list_candidates (pb=pb@entry=0x7f46e3239830, be=be@entry=0x7f46e2c0cf40, base=base@entry=0x7f46e323c600 "cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com", flist=flist@entry=0x7f46e2f116b0, ftype=<optimized out>, err=0x7f46a77d855c, allidslimit=100000) at ldap/servers/slapd/back-ldbm/filterindex.c:837
#20 0x00007f46d4ddef80 in filter_candidates_ext (pb=pb@entry=0x7f46e3239830, be=be@entry=0x7f46e2c0cf40, base=base@entry=0x7f46e323c600 "cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com", f=0x7f46e2f116b0, nextf=nextf@entry=0x0, range=range@entry=0, err=err@entry=0x7f46a77d855c, allidslimit=100000, allidslimit@entry=0) at ldap/servers/slapd/back-ldbm/filterindex.c:173
#21 0x00007f46d4de058a in filter_candidates (pb=pb@entry=0x7f46e3239830, be=be@entry=0x7f46e2c0cf40, base=base@entry=0x7f46e323c600 "cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com", f=<optimized out>, nextf=nextf@entry=0x0, range=range@entry=0, err=err@entry=0x7f46a77d855c) at ldap/servers/slapd/back-ldbm/filterindex.c:204
#22 0x00007f46d4e18dea in onelevel_candidates (err=0x7f46a77d855c, lookup_returned_allidsp=0x7f46a77d854c, managedsait=<optimized out>, filter=<optimized out>, e=0x7f46b4087a00, base=0x7f46e323c600 "cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com", be=0x7f46e2c0cf40, pb=0x7f46e3239830) at ldap/servers/slapd/back-ldbm/ldbm_search.c:1120
#23 build_candidate_list (candidates=0x7f46a77d8588, lookup_returned_allidsp=0x7f46a77d854c, scope=<optimized out>, base=0x7f46e323c600 "cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com", e=<optimized out>, be=0x7f46e2c0cf40, pb=0x7f46e3239830) at ldap/servers/slapd/back-ldbm/ldbm_search.c:987
#24 ldbm_back_search (pb=0x7f46e3239830) at ldap/servers/slapd/back-ldbm/ldbm_search.c:661
#25 0x00007f46e0533040 in op_shared_search (pb=pb@entry=0x7f46e3239830, send_result=send_result@entry=1) at ldap/servers/slapd/opshared.c:803
#26 0x00007f46e054097e in search_internal_callback_pb (pb=0x7f46e3239830, callback_data=<optimized out>, prc=0x0, psec=0x7f46d3063080 <backend_shr_note_entry_sdn_cb>, prec=0x0) at ldap/servers/slapd/plugin_internal_op.c:812
#27 0x00007f46d3064c01 in backend_shr_update_references_cb () from /usr/lib64/dirsrv/plugins/schemacompat-plugin.so
#28 0x00007f46d307217f in map_data_foreach_map () from /usr/lib64/dirsrv/plugins/schemacompat-plugin.so
#29 0x00007f46d3062a2b in backend_shr_update_references () from /usr/lib64/dirsrv/plugins/schemacompat-plugin.so
#30 0x00007f46d3063b86 in backend_shr_add_cb.part.13 () from /usr/lib64/dirsrv/plugins/schemacompat-plugin.so
#31 0x00007f46d3063ce1 in backend_shr_betxn_post_add_cb () from /usr/lib64/dirsrv/plugins/schemacompat-plugin.so
#32 0x00007f46e053c515 in plugin_call_func (list=0x7f46e2bef5a0, operation=operation@entry=560, pb=pb@entry=0x7f46e326c960, call_one=call_one@entry=0) at ldap/servers/slapd/plugin.c:1489
#33 0x00007f46e053c6c8 in plugin_call_list (pb=0x7f46e326c960, operation=560, list=<optimized out>) at ldap/servers/slapd/plugin.c:1451
#34 plugin_call_plugins (pb=pb@entry=0x7f46e326c960, whichfunction=whichfunction@entry=560) at ldap/servers/slapd/plugin.c:413
#35 0x00007f46d4df9b10 in ldbm_back_add (pb=0x7f46e326c960) at ldap/servers/slapd/back-ldbm/ldbm_add.c:1035
#36 0x00007f46e04e757a in op_shared_add (pb=pb@entry=0x7f46e326c960) at ldap/servers/slapd/add.c:735
#37 0x00007f46e04e7dd3 in add_internal_pb (pb=pb@entry=0x7f46e326c960) at ldap/servers/slapd/add.c:434
#38 0x00007f46e04e8aa3 in slapi_add_internal_pb (pb=pb@entry=0x7f46e326c960) at ldap/servers/slapd/add.c:356
#39 0x00007f46d3691c81 in write_replog_db (newsuperior=0x0, modrdn_mods=0x0, newrdn=0x0, log_e=0x0, curtime=1399376225, flag=0, log_m=0x7f46e30364c0, dn=0x7f46e32227c0 "ou=ca,ou=requests,o=ipaca", optype=<optimized out>, pb=<optimized out>) at ldap/servers/plugins/retrocl/retrocl_po.c:371
#40 retrocl_postob (pb=<optimized out>, optype=<optimized out>) at ldap/servers/plugins/retrocl/retrocl_po.c:668
#41 0x00007f46e053c515 in plugin_call_func (list=0x7f46e2be44b0, operation=operation@entry=561, pb=pb@entry=0x7f46a77e5ae0, call_one=call_one@entry=0) at ldap/servers/slapd/plugin.c:1489
#42 0x00007f46e053c6c8 in plugin_call_list (pb=0x7f46a77e5ae0, operation=561, list=<optimized out>) at ldap/servers/slapd/plugin.c:1451
#43 plugin_call_plugins (pb=pb@entry=0x7f46a77e5ae0, whichfunction=whichfunction@entry=561) at ldap/servers/slapd/plugin.c:413
#44 0x00007f46d4e1289e in ldbm_back_modify (pb=<optimized out>) at ldap/servers/slapd/back-ldbm/ldbm_modify.c:823
#45 0x00007f46e052c601 in op_shared_modify (pb=pb@entry=0x7f46a77e5ae0, pw_change=pw_change@entry=0, old_pw=0x0) at ldap/servers/slapd/modify.c:1081
#46 0x00007f46e052d953 in do_modify (pb=pb@entry=0x7f46a77e5ae0) at ldap/servers/slapd/modify.c:419
#47 0x00007f46e0a02dd1 in connection_dispatch_operation (pb=0x7f46a77e5ae0, op=0x7f46e3213700, conn=0x7f46cc7c3950) at ldap/servers/slapd/connection.c:660
#48 connection_threadmain () at ldap/servers/slapd/connection.c:2534
#49 0x00007f46de911e1b in _pt_root () from /lib64/libnspr4.so
#50 0x00007f46de2b1f33 in start_thread () from /lib64/libpthread.so.0
#51 0x00007f46ddfdfded in clone () from /lib64/libc.so.6
}}}

The test case is described in https://fedorahosted.org/freeipa/ticket/4279#comment:7
The hanging instance is the F20 replica.

Ah, so retrocl can be called recursively. You can't use a simple PRLock because it is not re-entrant. You will need to use a PRMonitor or a PRRWLock. I suggest PRMonitor unless you require separate read/write locking.

Hello Rich,

I am not sure it is called recursively. Here Thread 5 called retrocl_postob in the MOD ('ipaca' backend) post op; then, while the entry was being added to the changelog, it is the schemacompat plugin that performed an internal search on 'userRoot'.

Forgot to mention: if the DB transactions are done with DB_TXN_NOWAIT (the default behavior), Thread 5 loops on cursor->c_get() returning DB_DEADLOCK and CPU usage is high. If the transaction is begun with flags=0 (wait), then Thread 5 hangs on the lock and CPU usage is low.

In thread 4 - who is holding this lock?
{{{
#4 0x00007f46d3691734 in write_replog_db (newsuperior=0x0, modrdn_mods=0x0, newrdn=0x0, log_e=0x0, curtime=1399376225, flag=0, log_m=0x0, dn=0x7f46c42da470 "fqdn=ipatestcert.idm.lab.bos.redhat.com,cn=computers,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com", optype=3, pb=0x7f46a6fe4ae0) at ldap/servers/plugins/retrocl/retrocl_po.c:187
}}}

This is Thread 5, which is already adding an entry to the changelog. Thread 5 holds retrocl_internal_lock:

{{{
Thread 5
...
#36 0x00007f46e04e757a in op_shared_add (pb=pb@entry=0x7f46e326c960) at ldap/servers/slapd/add.c:735
#37 0x00007f46e04e7dd3 in add_internal_pb (pb=pb@entry=0x7f46e326c960) at ldap/servers/slapd/add.c:434
#38 0x00007f46e04e8aa3 in slapi_add_internal_pb (pb=pb@entry=0x7f46e326c960) at ldap/servers/slapd/add.c:356
#39 0x00007f46d3691c81 in write_replog_db (newsuperior=0x0, modrdn_mods=0x0, newrdn=0x0, log_e=0x0, curtime=1399376225, flag=0, log_m=0x7f46e30364c0, dn=0x7f46e32227c0 "ou=ca,ou=requests,o=ipaca", optype=<optimized out>, pb=<optimized out>) at ldap/servers/plugins/retrocl/retrocl_po.c:371
#40 retrocl_postob (pb=<optimized out>, optype=<optimized out>) at ldap/servers/plugins/retrocl/retrocl_po.c:668
...

}}}

Testing a fix suggested by Rich (catch DB_DEADLOCK, release the cursor, abort the txn, then return a failure and let the caller retry).

The fix looks successful in preventing the hang. The hang used to occur about once every 3-4 runs of the freeipa unit tests. With the fix I was unable to reproduce the hang in 10 runs.

Note that this fix makes add_internal_pb fail in the case of DB_DEADLOCK. This reveals a problem (https://fedorahosted.org/389/ticket/47802) if the caller of add_internal_pb does not check for the failure.

The fix looks good to me.
As you can see in these declarations, _entryrdn_get_elem has a sibling, _entryrdn_get_tombstone_elem. Do we want to apply a similar change to that function, too?
{{{
/* before */
static int _entryrdn_get_elem(DBC *cursor, DBT *key, DBT *data, const char *comp_key, rdn_elem **elem);
/* after */
static int _entryrdn_get_elem(DBC *cursor, DBT *key, DBT *data, const char *comp_key, rdn_elem **elem, DB_TXN *db_txn);

static int _entryrdn_get_tombstone_elem(DBC *cursor, Slapi_RDN *srdn, DBT *key, const char *comp_key, rdn_elem **elem);
}}}
And which 389-ds-base versions need this fix? Probably 1.3.2?

Hi Noriko, thanks for looking at the fix. Yes, you are correct: fixing _entryrdn_get_tombstone_elem as well is a good idea and does not look complex. Note that there is no test case for _entryrdn_get_tombstone_elem; the freeipa unit tests only reproduce the hang with normal nodes (not tombstones).

I will rework the fix.

Oops, I missed the second part of your update.

It looks like the bug exists at least in master, 1.3.2, 1.3.1, and 1.2.11.

I reproduced and tested the fix in 1.3.2, so yes, the bug exists in 1.3.2 and I believe it is worth backporting into that version.
For earlier versions, we may backport on demand. Since the signature of the hang is very distinctive, it will be easy to determine that we have hit this bug.

Minor nitpick: there are indentation issues in your patch. Tabs were used in places where the surrounding code uses 4-space indentation.

'''Fix in master'''

git merge ticket_47797
Updating 708a56b..0dd8b68
Fast-forward
ldap/servers/slapd/back-ldbm/ldbm_entryrdn.c | 263 +++++++++++++++++++++++++++++++++++++++++++++++++------------------------------
1 file changed, 163 insertions(+), 100 deletions(-)

git push origin master
Counting objects: 13, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 2.42 KiB, done.
Total 7 (delta 5), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
708a56b..85016a8 master -> master

commit 85016a8
Author: Thierry bordaz (tbordaz) tbordaz@redhat.com
Date: Thu Jul 10 14:05:16 2014 +0200

'''Fix in 389-ds-base-1.3.2'''

git push origin 389-ds-base-1.3.2
Counting objects: 13, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 2.31 KiB, done.
Total 7 (delta 5), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
513788c..a722b2e 389-ds-base-1.3.2 -> 389-ds-base-1.3.2

commit a722b2e
Author: Thierry bordaz (tbordaz) tbordaz@redhat.com
Date: Thu Jul 10 14:05:16 2014 +0200

'''Fix in master (indentation of the fix)'''

git merge ticket_47797
Updating 85016a8..0bfe1ee
Fast-forward
ldap/servers/slapd/back-ldbm/ldbm_entryrdn.c | 38 +++++++++++++++++++-------------------
1 file changed, 19 insertions(+), 19 deletions(-)

git push origin master
Counting objects: 13, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 929 bytes, done.
Total 7 (delta 5), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
85016a8..0bfe1ee master -> master

commit 0bfe1ee
Author: Thierry bordaz (tbordaz) tbordaz@redhat.com
Date: Fri Jul 11 15:24:53 2014 +0200

'''Fix in 389-ds-base-1.3.2 (indentation of the fix)'''

git push origin 389-ds-base-1.3.2
Counting objects: 13, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 911 bytes, done.
Total 7 (delta 5), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
a722b2e..107722f 389-ds-base-1.3.2 -> 389-ds-base-1.3.2

commit 107722f
Author: Thierry bordaz (tbordaz) tbordaz@redhat.com
Date: Fri Jul 11 15:24:53 2014 +0200

Metadata Update from @tbordaz:
- Issue assigned to tbordaz
- Issue set to the milestone: 1.3.2.20

7 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/1128


Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: Fixed)

3 years ago
