#48921 CI Tests - add stress/replication tests
Closed: wontfix. Opened 7 years ago by mreynolds.

There are several scripts that have been created to stress replication and connection load. These scripts need to be finalized and added to the source tree.


Happy to ack, but with one disclaimer: in Python, the python-ldap module has a global lock, so your threads never actually run in parallel. They only run serially, taking turns on the lock around LDAP operations.

So we should re-examine this in the future to make the test able to "stress" the server more.
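For the record, one way a future revision could get genuine parallelism is to run one process per worker instead of threads, since each process gets its own interpreter and its own copy of the lock. A minimal sketch, assuming a local server at ldap://localhost:389 with placeholder Directory Manager credentials (none of this is the current test's code):

{{{
# Sketch: per-process workers side-step the python-ldap lock, because
# each process has its own interpreter. URL/credentials are placeholders.
import multiprocessing
import ldap

def bind_and_search(url):
    # Each worker opens its own connection in its own process, so the
    # binds and searches genuinely overlap on the server side.
    conn = ldap.initialize(url)
    conn.simple_bind_s('cn=Directory Manager', 'password')
    conn.search_s('dc=example,dc=com', ldap.SCOPE_SUBTREE, '(objectClass=*)')
    conn.unbind_s()

if __name__ == '__main__':
    workers = [multiprocessing.Process(target=bind_and_search,
                                       args=('ldap://localhost:389',))
               for _ in range(10)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
}}}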

Build: '''389-ds-base-1.3.5.11-20160731192447.fc24.x86_64'''

These tests PASS and the code looks good:
'''dirsrvtests/tests/stress/replication/mmr_01_4m-2h-4c_test.py
dirsrvtests/tests/stress/replication/mmr_01_4m_test.py'''
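For reference, each can be run directly with the same py.test invocation used for the test below, e.g.:

{{{
py.test -s -v dirsrvtests/tests/stress/replication/mmr_01_4m_test.py
}}}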

But another one just hangs for half an hour with the following output:
'''dirsrvtests/tests/stress/reliabilty/reliab_conn_test.py'''
{{{
[root@dell-per210-01 ds]# py.test -s -v dirsrvtests/tests/stress/reliabilty/reliab_conn_test.py
============================================================================================================= test session starts =============================================================================================================
platform linux2 -- Python 2.7.12, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- /usr/bin/python2
cachedir: dirsrvtests/tests/stress/reliabilty/.cache
rootdir: /export/ds/dirsrvtests/tests/stress/reliabilty, inifile:
collected 1 items

dirsrvtests/tests/stress/reliabilty/reliab_conn_test.py::test_connection_load OK group dirsrv exists
OK user dirsrv exists
INFO:stress.reliabilty.reliab_conn_test:Initializing setup...
INFO:stress.reliabilty.reliab_conn_test:Launching Bind-Only Connection threads...
INFO:stress.reliabilty.reliab_conn_test:Launching Idle Connection threads...
INFO:stress.reliabilty.reliab_conn_test:Launching Long Connection threads...
INFO:stress.reliabilty.reliab_conn_test:Waiting for threads to finish...
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
ERROR:stress.reliabilty.reliab_conn_test:IdleConn exiting thread: {'desc': "Can't contact LDAP server"}
}}}

Is it okay? Should these threads hang for so long? What was your plan for the execution time of this test case?

And these errors... Anyway, the audit log contains a lot of lines (successful searches), so maybe this is okay and how it is supposed to be, and we just need to wait for more time.

Replying to [comment:3 spichugi]:
> Is it okay? Should these threads hang for so long? What was your plan for the execution time of this test case?

It's not hanging as far as I know; it's just waiting for all the threads to finally finish. The test is supposed to run for hours. In fact, there are probably 100 threads running - some seem to hit this error for an unknown reason and then exit. It's not a FD issue, it just seems to happen. If you watch the access log, you can see the test continues to run.
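For context, the idle-connection workers follow roughly this pattern: bind, sit idle on the connection, and log the error and exit if the server becomes unreachable, which is what produces the "IdleConn exiting thread" messages above. A minimal sketch with illustrative names and parameters, not the test's actual code:

{{{
# Illustrative idle-connection worker: binds, idles, and exits with a
# logged error on an LDAP failure such as SERVER_DOWN. All names and
# values here are hypothetical.
import logging
import threading
import time
import ldap

log = logging.getLogger('stress.reliabilty.reliab_conn_test')

class IdleConn(threading.Thread):
    def __init__(self, url, idle_secs=3600):
        threading.Thread.__init__(self)
        self.url = url
        self.idle_secs = idle_secs

    def run(self):
        try:
            conn = ldap.initialize(self.url)
            conn.simple_bind_s('cn=Directory Manager', 'password')
            time.sleep(self.idle_secs)  # hold the connection open, idle
            conn.search_s('dc=example,dc=com', ldap.SCOPE_BASE,
                          '(objectClass=*)')
            conn.unbind_s()
        except ldap.LDAPError as e:
            # This path produces the errors shown in the log above.
            log.error('IdleConn exiting thread: %s' % str(e))
}}}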

Also, this test was just designed to put some load on nunc-stans. I just wanted to get it into the code base, and it can continue to be developed. Really, I just want to start building up the stress tests, and while this test is perhaps in its infancy, it still moves the tests in the right direction.

{{{
3e3dff8..8d4000b master -> master
commit 8d4000b
Author: Mark Reynolds <mreynolds@redhat.com>
Date: Mon Aug 1 09:52:49 2016 -0400
}}}

{{{
cd55c8f..480d79d master -> master
commit 480d79d
Author: Mark Reynolds <mreynolds@redhat.com>
Date: Fri Sep 9 10:18:52 2016 -0400
}}}

{{{
fc1310e..6e6e6d7 389-ds-base-1.3.5 -> 389-ds-base-1.3.5
commit 6e6e6d7
}}}

Metadata Update from @mreynolds:
- Issue assigned to mreynolds
- Issue set to the milestone: 0.0 NEEDS_TRIAGE

7 years ago

Metadata Update from @vashirov:
- Issue set to the milestone: None (was: 0.0 NEEDS_TRIAGE)

4 years ago

389-ds-base is moving from Pagure to GitHub. This means that new issues and pull requests
will be accepted only in 389-ds-base's GitHub repository.

This issue has been cloned to GitHub and is available here:
- https://github.com/389ds/389-ds-base/issues/1980

If you want to receive further updates on the issue, please navigate to the GitHub issue
and click the Subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: Fixed)

3 years ago
