Ticket was cloned from Red Hat Bugzilla (product Red Hat Enterprise Linux 6): Bug 1402012
Please note that this Bug is private and may not be accessible as it contains confidential Red Hat customer information.
Description of problem:
When importing a big LDIF file with duplicated DNs, "unable to flush: No such file or directory" errors are thrown in the error log. I encountered this issue while verifying https://bugzilla.redhat.com/show_bug.cgi?id=1368209.

Version-Release number of selected component (if applicable):
389-ds-base-1.2.11.15-85

How reproducible:
Consistently, with big LDIF files that contain duplicate DNs.

Steps to Reproduce:
1. Install the latest 389-ds-base.
2. Create an instance and a few entries under the suffix "dc=importest,dc=com".
3. Import the LDIF file (attached in the bz) using the ldif2db.pl script:
   /usr/lib64/dirsrv/slapd-Inst1/ldif2db.pl -D "cn=Directory Manager" -w Secret123 -n importest1121 -s "dc=importest,dc=com" -i /var/lib/dirsrv/slapd-Inst1/ldif/MyNew02_01.ldif
4. Check the error log while the online import is running:
   tail -f /var/log/dirsrv/slapd-Inst1/errors

DB errors observed in the error log:
libdb: importest1121/uid.db4: unable to flush: No such file or directory
libdb: importest1121/sn.db4: unable to flush: No such file or directory

Actual results:
[06/Dec/2016:08:32:56 -0500] - Bringing importest1121 offline...
[06/Dec/2016:08:32:56 -0500] - WARNING: Import is running with nsslapd-db-private-import-mem on; No other process is allowed to access the database
[06/Dec/2016:08:32:56 -0500] - import importest1121: Beginning import job...
[06/Dec/2016:08:32:56 -0500] - import importest1121: Index buffering enabled with bucket size 19
[06/Dec/2016:08:32:56 -0500] - import importest1121: Processing file "/var/lib/dirsrv/slapd-Inst1/ldif/MyNew02_01.ldif"
[06/Dec/2016:08:33:16 -0500] - import importest1121: Processed 40800 entries -- average rate 2040.0/sec, recent rate 2040.0/sec, hit ratio 0%
[06/Dec/2016:08:33:37 -0500] - import importest1121: Processed 81353 entries -- average rate 2033.8/sec, recent rate 2033.8/sec, hit ratio 100%
[06/Dec/2016:08:33:47 -0500] entryrdn-index - _entryrdn_insert_key: Same DN (dn: ou=Netscape Servers,dc=importest,dc=com) is already in the entryrdn file with different ID 160. Expected ID is 100315.
[06/Dec/2016:08:33:47 -0500] - import importest1121: Duplicated DN detected: "ou=Netscape Servers,dc=importest,dc=com": Entry ID: (100315)
[06/Dec/2016:08:33:47 -0500] - import importest1121: Aborting all Import threads...
[06/Dec/2016:08:33:52 -0500] - import importest1121: Import threads aborted.
[06/Dec/2016:08:33:52 -0500] - import importest1121: Closing files...
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/nsuniqueid.db4: unable to flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/parentid.db4: unable to flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/cn.db4: unable to flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/givenName.db4: unable to flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/entryrdn.db4: unable to flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/uid.db4: unable to flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/telephoneNumber.db4: unable to flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/mail.db4: unable to flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/sn.db4: unable to flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/id2entry.db4: unable to flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/seeAlso.db4: unable to flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/objectclass.db4: unable to flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - import importest1121: Import failed.

Expected results:
More meaningful error messages.

Additional info:
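For anyone who does not have the attached MyNew02_01.ldif at hand, a small LDIF with the same duplicate-DN condition can be generated with a short script. This is only an illustrative sketch: the suffix and the duplicated "ou=Netscape Servers" entry follow the report, but the entry count and output path are made up and a file this small will of course not reproduce the timing of a "big" import.

```python
import os
import tempfile

# Suffix and duplicated DN taken from the report; everything else is illustrative.
suffix = "dc=importest,dc=com"
dup_dn = "ou=Netscape Servers,%s" % suffix

entries = []
# A handful of unique user entries...
for i in range(5):
    entries.append(
        "dn: uid=user%d,%s\n"
        "objectClass: inetOrgPerson\n"
        "uid: user%d\n"
        "cn: user %d\n"
        "sn: user\n" % (i, suffix, i, i)
    )

# ...plus the same organizational unit twice -- the duplicate DN that
# triggers the _entryrdn_insert_key error during import.
ou_entry = ("dn: %s\n"
            "objectClass: organizationalUnit\n"
            "ou: Netscape Servers\n" % dup_dn)
entries.append(ou_entry)
entries.append(ou_entry)

ldif_file = os.path.join(tempfile.gettempdir(), "dup-dn.ldif")
with open(ldif_file, "w") as fd:
    fd.write("\n".join(entries))

print(ldif_file)
```

The resulting file can then be fed to ldif2db.pl with -i as in step 3 above.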
attachment 0001-Ticket-49071-Import-with-duplicate-DNs-throws-unexpe.patch
c3a940c..64b1ebf  master -> master

commit 64b1ebf
Author: Mark Reynolds <mreynolds@redhat.com>
Date:   Mon Dec 19 12:26:59 2016 -0500
a4a3453..8ab4be5 389-ds-base-1.3.5 -> 389-ds-base-1.3.5 commit 8ab4be5
9f9dcdd..d2f46f5 389-ds-base-1.3.4 -> 389-ds-base-1.3.4 commit d2f46f5
f4b2a54..934c560 389-ds-base-1.2.11 -> 389-ds-base-1.2.11 commit 934c560
Linked to Bugzilla bug: https://bugzilla.redhat.com/show_bug.cgi?id=1406101
Metadata Update from @nhosoi: - Issue set to the milestone: 1.2.11.33
Test Case Added -
attachment 0001-Description-Added-test-case-to-test-ticket-49071.patch
Thank you, very good test case. A few small issues that could improve it:
- the commit message, as we discussed on the mailing list (I know you know it, I am writing it just to be consistent);
- this part:
ldif_file = ldif_dir + '/data.ldif'
fd = open(ldif_file, "w")
fd.write(l)
fd.close()
Can be improved to:
ldif_file = os.path.join(ldif_dir, 'data.ldif')
with open(ldif_file, "w") as fd:
    fd.write(l)
Metadata Update from @spichugi: - Issue close_status updated to: None (was: Fixed)
Great review, @spichugi; I agree with your comments. Especially the with open() syntax, because it makes resource cleanup so much nicer.
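To illustrate the resource-cleanup point: with the context-manager form, the file handle is closed as soon as the block exits, even if the write raises, so no explicit fd.close() (or try/finally) is needed. A minimal sketch, with an illustrative temp-file path rather than anything from the test case:

```python
import os
import tempfile

# Illustrative path, not from the actual test case.
path = os.path.join(tempfile.gettempdir(), "demo.ldif")

# `with` closes the handle on block exit, including on exceptions.
with open(path, "w") as fd:
    fd.write("dn: dc=example,dc=com\n")

# Outside the block the handle is already closed -- no explicit fd.close().
print(fd.closed)  # -> True
```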
Thanks!
Updated patch with comments, thanks @spichugi
attachment 0001-Issue-49071-Test-Case-to-verify-bug-1406101.patch
attachment 0001-Issue-49071-Add-test-case-to-tickets.patch
Please consider the last patch. Thanks.
Merged, thanks!
To ssh://pagure.io/389-ds-base.git
   0762e39..97b15b9  master -> master

commit 97b15b9
Author: Amita Sharma <amsharma@redhat.com>
Date:   Fri May 5 18:08:37 2017 +0530
Commit f5714c1 relates to this ticket
389-ds-base is moving from Pagure to GitHub. This means that new issues and pull requests will be accepted only in 389-ds-base's GitHub repository.
This issue has been cloned to GitHub and is available here: - https://github.com/389ds/389-ds-base/issues/2130
If you want to receive further updates on the issue, please navigate to the GitHub issue and click on the subscribe button.
Thank you for understanding. We apologize for any inconvenience.
Metadata Update from @spichugi: - Issue close_status updated to: wontfix