#48085 CI test - replication
Closed: wontfix 4 years ago by mreynolds. Opened 9 years ago by nhosoi.

No Description Provided


Replication suites:
singlemaster_test.py - I plan to start from that
acceptance_test.py
chainonupdate_test.py
changelog5_test.py
cleanallruv_test.py
changelog_encr_test.py
fourwaymmr_test.py
fractional_test.py
ruvstore_test.py
state_test.py
vlv_test.py

I'm not sure what this test is testing, but the python looks okay to me.

Replying to [comment:3 firstyear]:

I'm not sure what this test is testing, but the python looks okay to me.

Hi, William. It is part of the TET to lib389 porting. I've put the info about the test cases into the docstrings. The first test suite was developed in 2008 and tries to trigger a crash. The second one checks agreement values.

To ssh://git.fedorahosted.org/git/389/ds.git
c5a67ce..17f3bef master -> master
commit 17f3bef
Author: Simon Pichugin spichugi@redhat.com
Date: Mon Jan 23 11:32:01 2017 +0100

Ok... I've found an issue. Both test cases have the same name. I will fix it with my next patch.
I am starting to port the MMR test suite.

To ssh://git.fedorahosted.org/git/389/ds.git
61953fc..5cd0a03 master -> master
commit 5cd0a03
Author: Simon Pichugin spichugi@redhat.com
Date: Wed Feb 1 14:43:59 2017 +0100

Metadata Update from @spichugi:
- Issue assigned to spichugi
- Issue set to the milestone: CI test 1.0

7 years ago

Metadata Update from @spichugi:
- Custom field reviewstatus adjusted to new
- Issue close_status updated to: None

7 years ago

Metadata Update from @spichugi:
- Issue close_status updated to: None

7 years ago

Metadata Update from @spichugi:
- Custom field reviewstatus adjusted to review (was: new)

7 years ago

Metadata Update from @firstyear:
- Custom field reviewstatus adjusted to ack (was: review)

7 years ago

To ssh://pagure.io/389-ds-base.git
7dc991b..bdbf4ef master -> master
commit bdbf4ef
Author: Simon Pichugin spichugi@redhat.com
Date: Sun Mar 19 22:02:29 2017 +0100

Metadata Update from @sramling:
- Custom field reviewstatus adjusted to review (was: ack)

6 years ago

@sramling I don't get what you are doing in 127 +def _check_changelog_ldif(topo, change_log):. You define attr_list, then recreate and modify it. It seems quite confusing. I think you are trying to build a parser that keeps a set of attrs per ldif entry.

You could consider doing something more like:

content = None
with open('cldump.ldif', 'r') as f:
    content = f.read()

for entry_ldif in content.split('\n\n'):
    # Now use the python-ldap ldif module to parse each entry and check the attrs

This could be cleaner and easier to understand.

@firstyear, sorry about that. There could be multiple occurrences of add, modify or delete. However, I wanted to print only the unique set of values found.

Please, use '()' for tuples.

42 +CHANGE_TYPES = {'add', 'modify', 'modrdn', 'delete'}
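For example, the same constant written as a tuple:

CHANGE_TYPES = ('add', 'modify', 'modrdn', 'delete')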

There is a lot of info about the replicas.enable/disable() functions in the guidelines. Please use them.

55 +def replica_changelog(topo, request):

It is too general. If you see this function in the code, you won't be able to tell what it does. At least name it 'perform_ldap_operation', and specify the rest in the docstring.

76 +def ldap_operations(topo):

But in the function you not only add, but also rename, replace attrs, delete, etc.
And there is no need to mention UserAccounts; it's our default way.

77 +    """Add test users using UserAccounts"""

An empty list in Python is falsy. No need for 'len() != 0'.

108 +    if len(os.listdir(changelog_dir)) != 0:
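For example, the check could simply be:

if os.listdir(changelog_dir):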

Instead of the 'change_log' variable name, 'changelog_ldif' would make more sense.

I agree with William that it is not very readable. But building a new parser class (which is how you use LDIFParser) just to check the modify operation may be overkill.
I'd say, Sankar, you can simplify your logic with William's example. Just check each entry for the exact thing and that's it.

I will review test_verify_changelog_offline_backup and test_verify_changelog_online_backup when they pass (or when we have a Bugzilla bug filed).

Ok, it looks better. And we can improve it more.

Lines 37 and 42 (which is not indented properly) are over 100 chars. Please split them in two. For _constants you can use: from lib389._constants import *.

Don't forget to use PyCharm features to make the formatting for you.

You can make it one line:

139 +    if not os.stat(changelog_ldif).st_size > 0:
140 +        log.fatal('Changelog file-{} has no contents'.format(changelog_ldif))
141 +        assert False

# It will print the content of all variables that are mentioned there
assert os.stat(changelog_ldif).st_size > 0, 'Changelog file has no contents'

The next part is readable, but the 'elif' stuff can be better. As I understand it, the first 'elif' is checked, and then the second is checked on the next iteration. Please add a comment explaining each 'elif' branch.

Also, you don't set change_type_flag back to False after line 158.

And here, you just log, but I think we need to have something here. Maybe 'count += 1'?

157 +                if attr in ATTRIBUTES:
158 +                    log.info('Expected attribute present-{}'.format(attr))

And test_verify_changelog_offline_backup and test_verify_changelog_online_backup do still fail.

The tests test_verify_changelog_offline_backup and test_verify_changelog_online_backup are passing. Initially, the flow of the tests was not correct. I have yet to send a patch to the developers to check if that scenario is also valid; if yes, I will include it in the patch later.

I used PyCharm for formatting my test scripts, but I am not sure why there was still a problem with the line lengths.

First, about changelog5 patch.

After spending some time reading your patch, I've started thinking: why don't you use topology_m2? It will be a more realistic replication topology, and you won't need to create the replica yourself; it will already be created.

Imports should be grouped in the following order:

 # 1. standard library imports
 # 2. related third party imports
 # 3. local application/library specific imports
 32 +from lib389._constants import *

As I've mentioned before, split line 41 into two lines with proper indentation.

Ok, you've made _check_changelog_ldif() even more difficult to read with the new patch...
You have this ldif. It is not standard ldif:

changetype: modify
replgen: 59ba2063000000010000
csn: 59ba2063000200010000
nsuniqueid: 3a53740f-991511e7-a8cdb350-ab2f4af1
dn: uid=replusr,ou=People,dc=example,dc=com
change:: cmVwbG

changetype: modrdn
replgen: 59ba2063000000010000
csn: 59ba2063000300010000
nsuniqueid: 3a53740f-991511e7-a8cdb350-ab2f4af1
dn: uid=replusr,ou=People,dc=example,dc=com
newrdn: uid=cl5usr
deleteoldrdn: false
change:: cmV

So what you can do is to use William's code example and then take the first line from each entry and just check it (probably, you can also check the DN). They should be in exact order, so there is no need for this complex 'elif' magic. Just check them using lists.
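A minimal sketch of that idea (the file name and the expected order are illustrative assumptions, not taken from the patch):

EXPECTED_ORDER = ['changetype: add', 'changetype: modify',
                  'changetype: modrdn', 'changetype: delete']

with open('cldump.ldif') as f:
    content = f.read()

# the first line of every entry in this dump is its changetype
found = [entry.split('\n')[0] for entry in content.split('\n\n') if entry.strip()]
assert found == EXPECTED_ORDER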

You can use three arguments in os.path.join. No need for str.format():

228 +    backup_checkdir = os.path.join(backup_dir, '.repl_changelog_backup/{}'.format(DEFAULT_CHANGELOG_DB))
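i.e., something along these lines:

backup_checkdir = os.path.join(backup_dir, '.repl_changelog_backup', DEFAULT_CHANGELOG_DB)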

You have the same first docstring line in every test case. The first line should describe each test in more detail.

Now, about the ruvstore patch.

The same issue with imports and the topology_m2 function.

You can get the replica DN from the Replica object:

40 +DN_REPLICA = '{},cn="{}",{}'.format(RDN_REPLICA, DEFAULT_SUFFIX, DN_MAPPING_TREE)

def _compare_memoryruv_and_databaseruv() - You can get all of these objects from Replicas. Please use the guidelines and the documentation.
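For illustration, a sketch of that kind of usage (assuming the lib389 Replicas/Replica API available at the time; names are illustrative):

from lib389.replica import Replicas
from lib389._constants import DEFAULT_SUFFIX

replicas = Replicas(topo.ms["master1"])
replica = replicas.get(DEFAULT_SUFFIX)   # look up the replica for our suffix
replica_dn = replica.dn                  # no need to hand-build the DN string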

Should be one blank line

110 +
111 +

I think, in "def test_ruv_entry_backup()", it will be better to check for the exact entry and not for the hard coded list of string values.
What I mean is to get the entry in string format from the live instance and assert it in the exported ldif file.

You can use one global dict for this.

86 +    user_properties = {
167 +    user_properties = {

The actual test case (cl5) was supposed to create replication for a standalone instance and check that the replication-like features work well on that instance. I hope I am making it clear. If not, please add a comment about whether it should be topology_m2 or topology_st before I make the next patch. @vashirov please add your opinion as well.

For changelog_test.py:

Please group imports as I've said before. It is bad practice to put all imports on one line (one module per line).

 # 1. standard library imports
 # 2. related third party imports
 # 3. local application/library specific imports

Minor issue, but still: it looks a bit redundant. The CHANGETYPES name is enough to understand that these are changetypes; there is no need to put it in every string (and in the code, you can concatenate the strings):

 42 +CHANGETYPES = ['changetype: add', 'changetype: modify', 'changetype: modrdn',
 43 +               'changetype: delete']

Regarding _check_changelog_ldif: as I've mentioned before, we'd better check the order of operations as well. The Python 'list' object is great for that purpose. I've described the logic in the previous message.

The purpose of extracting the content like this is to avoid keeping the file open for too long. So there is no need for any indentation after line 124 (and I think the variable should be named 'content'):

123 +    with open(changelog_ldif, 'r') as fh:
124 +        contents = fh.read()

As I've mentioned in the previous message, please, fix docstrings:

149 +    """Check if changelog is logging ldap operations
172 +    """Check if changelog is logging ldap operations
221 +    """Check if changelog is logging ldap operations

We don't need ':feature:' anymore, so you can remove it.

To avoid misunderstanding, please, name it as 'replicas'

83 +    replica = Replicas(topo.ms["master1"])

You can get it directly using Replica object methods (get_attr_val_utf8 probably). Please, check the guide or code for lib389.replica.Replica.get_attr_val_utf8:

89 +    entry = topo.ms["master1"].search_s(replica_dn, ldap.SCOPE_BASE, 'objectClass=*', ['nsds50ruv'])

There is no change or comment regarding the thing I've mentioned in the previous comment:

I think, in "def test_ruv_entry_backup()", it will be better to check for the exact entry and not for the hard coded list of string values.
What I mean is to get the entry in string format from the live instance and assert it in the exported ldif file.

Minor issue, but still: it looks a bit redundant. The CHANGETYPES name is enough to understand that these are changetypes; there is no need to put it in every string (and in the code, you can concatenate the strings):
42 +CHANGETYPES = ['changetype: add', 'changetype: modify', 'changetype: modrdn',
43 + 'changetype: delete']

In fact, the ldif module contains the list of valid operations:
ldif.valid_changetype_dict.keys()

>>> from ldif import CHANGE_TYPES
>>> CHANGE_TYPES
['add', 'delete', 'modify', 'modrdn'] 

So I think it's better to stick to it. Another point is that you do this comparison:

127 +        for changetype in CHANGETYPES:
128 +            if changetype in contents:
129 +                log.info('Found ldap operation-{}'.format(changetype))
130 +            else:
131 +                log.fatal('Missing ldap operation-{}'.format(changetype))
132 +                assert False

But if you have
description: changetype: add
it will also be counted as an operation.

So I'd like to propose the following (see the sketch after this list):
1. Read the file.
2. Split the content by '\n\n' (to separate each ldif entry).
3. For each line in an entry (split by '\n'), check if it starts with 'changetype' and, if so, add it to a set of operations. If the operation already exists, the set won't change.
4. After the loop, compare two sets of operations: the one found in the file and the set of valid operations.
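A minimal sketch of that proposal (changelog_ldif is an illustrative name; ldif.CHANGE_TYPES comes from python-ldap as shown above):

from ldif import CHANGE_TYPES  # ['add', 'delete', 'modify', 'modrdn']

with open(changelog_ldif) as f:
    content = f.read()

found_ops = set()
for entry in content.split('\n\n'):
    for line in entry.split('\n'):
        if line.startswith('changetype: '):
            found_ops.add(line.split(': ', 1)[1])

assert found_ops == set(CHANGE_TYPES)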

Regarding _check_changelog_ldif: as I've mentioned before, we'd better check the order of operations as well. The Python 'list' object is great for that purpose. I've described the logic in the previous message.

I don't think we need to do that here. The point is to check the size of the file and if it contains the right set of operations. If the order is not correct, it won't be imported by the server.

There is no change or comment regarding the thing I've mentioned in the previous comment:

I think, in "def test_ruv_entry_backup()", it will be better to check for the exact entry and not for the hard coded list of string values.
What I mean is to get the entry in string format from the live instance and assert it in the exported ldif file.

I'm looking at the original test and it does a similar thing: it extracts the RUV from the dump and then compares parts of the expected RUV entry with the extracted RUV entry. I think it's ok not to construct an entry, but the matching should be done better. Currently the code checks just the mere presence of the list element in the entry, while it should at least check that a line starts with the same string as the element contains.
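Roughly something like this (RUV_LIST and entry_ldif are the names from the patch under discussion; the exact check is an assumption):

for expected in RUV_LIST:
    # require a line that starts with the expected element, not a mere substring match
    assert any(line.startswith(expected) for line in entry_ldif.split('\n'))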

@vashirov thanks for the comments! I completely agree with you. Sure, let's skip the 'check for order' part.

I don't think we need to do that here. The point is to check the size of the file and if it contains the right set of operations. If the order is not correct, it won't be imported by the server.

To avoid misunderstanding, please, name it as 'replicas'
83 + replica = Replicas(topo.ms["master1"])

You can get it directly using Replica object methods (get_attr_val_utf8 probably). Please, check the guide or code for lib389.replica.Replica.get_attr_val_utf8:
89 + entry = topo.ms["master1"].search_s(replica_dn, ldap.SCOPE_BASE, 'objectClass=*', ['nsds50ruv'])

There is no change or comment regarding the thing I've mentioned in the previous comment:

I am not sure how to use this method??? There are no examples given, as far as I checked.

I think, in "def test_ruv_entry_backup()", it will be better to check for the exact entry and not for the hard coded list of string values.
What I mean is to get the entry in string format from the live instance and assert it in the exported ldif file.

To avoid misunderstanding, please, name it as 'replicas'
83 + replica = Replicas(topo.ms["master1"])
You can get it directly using Replica object methods (get_attr_val_utf8 probably). Please, check the guide or code for lib389.replica.Replica.get_attr_val_utf8:
89 + entry = topo.ms["master1"].search_s(replica_dn, ldap.SCOPE_BASE, 'objectClass=*', ['nsds50ruv'])
There is no change or comment regarding the thing I've mentioned in the previous comment:

I am not sure how to use this method??? There are no examples given, as far as I checked.

We still have our downstream guidelines and they contain an example. Also, I think the documentation is pretty understandable even without an example. And there is a link to the source, so you can check the logic behind it (it is pretty simple and, basically, it's a wrapper around your code):

get_attr_val_utf8(key)
Get a single attribute value from the entry in utf8 type

Parameters: key (str) – An attribute name
Returns:    A single bytes value
Raises:     ValueError - if instance is offline
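A possible way to replace the raw search_s call from line 89 (a sketch, assuming the lib389 Replicas/Replica API and that nsds50ruv can be read from the replica entry this way; nsds50ruv is multi-valued, hence the plural variant):

from lib389.replica import Replicas
from lib389._constants import DEFAULT_SUFFIX

replicas = Replicas(topo.ms["master1"])
replica = replicas.get(DEFAULT_SUFFIX)
ruv_values = replica.get_attr_vals_utf8('nsds50ruv')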

Ack for changelog_test.py
This _check_changelog_ldif implementation is much smoother. :) Thanks!

commit 2bfafc2
Author: Sankar Ramalingam sramling@redhat.com
Date: Mon Sep 18 22:30:13 2017 +0530

Unlike cl5 tests, where changelog ldif wasn't standard and we couldn't use ldif module, here we should. Currently you're going through each line in each entry. In the old test we were looking for nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff and then parsing only this entry. I suggest you to use LDIFParser class from ldif module and construct your own handle() method that would do the required check on entry content. There is an example in [1].

[1] https://www.python-ldap.org/doc/html/ldif.html
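A minimal sketch of that approach, based on the python-ldap ldif documentation (the file name and the check inside handle() are illustrative assumptions):

from ldif import LDIFParser

class RUVCheckParser(LDIFParser):
    """Parse an exported ldif and inspect only the RUV tombstone entry."""

    def handle(self, dn, entry):
        # called once per parsed entry
        if dn.startswith('nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff'):
            assert 'nsds50ruv' in entry

with open('standalone1.ldif', 'rb') as f:
    RUVCheckParser(f).parse()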

@vashirov, thanks for your review. I wrote this line of code since you suggested I compare whether the line starts with the expected string. In fact, I had an email conversation and an x-chat discussion to check if this is correct. In the example, I used re.search to check if the line starts with the expected dn and then proceed with checking the entry for the expected attributes. However, you asked me to use line.startswith(), which won't work unless you read line by line. My Python knowledge is very limited.


import re

with open('standalone1.ldif') as fh:
    content = fh.read()

for entry_ldif in content.split('\n\n'):
    if re.search(r'^' + RUV_LIST[0], entry_ldif, re.M):
        for line in entry_ldif.split('\n'):
            print(line)

Now, I need to dig through the LDIFParser example to check how this can be implemented.

@spichugi , when you get time, please review the patch for ruvstore tests.

You had a conversation with @vashirov in the previous comments. If he is okay with your changes, I'll merge it. I am okay with your code.

@spichugi which comment? I fixed the code where @vashirov asked me to use LDIFParser. Otherwise, I guess no other changes are required. I also see that you have deleted your previous comment about "check attributes for validity".

@vashirov clarified in the IRC chat that the changes you proposed should be done as unit testing, I mean checking the attribute values for each attribute in the RUV entry.

Metadata Update from @vashirov:
- Custom field reviewstatus adjusted to ack (was: review)

6 years ago

commit 6d9e91f
Author: Sankar Ramalingam sramling@redhat.com
Date: Wed Sep 20 17:39:00 2017 +0530

Metadata Update from @spichugi:
- Custom field reviewstatus adjusted to review (was: ack)

6 years ago

Currently, MMR SSL setup doesn't work properly. More info can be found here - https://pagure.io/lib389/issue/84#comment-468230
If you have some other tasks, I'd recommend switching to them and coming back to this task later.

This patch is not reviewed, since there was a blocker for SSL setup from lib389 - https://pagure.io/389-ds-base/issue/48085#comment-468244

We have a method for this and it's documented
- https://pagure.io/lib389/blob/master/f/lib389/_mapped_object.py#_280
So there is no need to write your function - _perform_ldap_operation.

Will you use def _check_user_oper_attrs_exist in other tests? If you won't, then I don't see a reason to encapsulate it.

TEST_USER_PROPERTIES doesn't have a 'description' attr now, but that could change one day, and then your test would be wrong.

In the docstring:
- Add/replace/delete = modify

I'll check the rest of the test logic when you upload the patch that you'd like to push (or maybe @vashirov will add something here).

For the encrypted changelog:
The functions enable_changelog_encryption_m1 and enable_changelog_encryption_m2 don't need to be fixtures and can be combined into one function that takes an instance and an encryption algorithm as parameters, as in the sketch below.
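A rough sketch of such a combined helper (the cn=changelog5,cn=config target and the nsslapd-encryptionalgorithm attribute are assumptions about the changelog5 configuration, not taken from the patch):

from lib389.dseldif import DSEldif

def _enable_changelog_encryption(instance, encrypt_algorithm):
    """Enable changelog encryption with the given algorithm on the given instance."""
    instance.stop()
    dse_ldif = DSEldif(instance)
    dse_ldif.replace('cn=changelog5,cn=config', 'nsslapd-encryptionalgorithm', encrypt_algorithm)
    instance.start()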

156 +    for files in os.listdir(changelog_dbdir):

I think it should be file, not files.

Encryption changelog tests.

The tests for changelog (and attribute) encryption should also cover the certificate renewal scenario

Encryption changelog tests.
The tests for changelog (and attribute) encryption should also cover the certificate renewal scenario

Currently facing issues with SSL setup from lib389. I tried backing up changelog file when changelog encryption is configured, but it failed - https://bugzilla.redhat.com/show_bug.cgi?id=1496224

 48 +BINVALUE1 = bin(0xdeadbeef1)

bin() returns a string, not a binary representation. If you want the latter, see https://docs.python.org/3/library/binascii.html#binascii.unhexlify
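A quick interpreter illustration of the difference (Python 3 assumed):

>>> bin(0xdeadbeef1)                      # a text string of the bits
'0b110111101010110110111110111011110001'
>>> from binascii import unhexlify
>>> unhexlify('deadbeef')                 # actual binary data (hex string must have even length)
b'\xde\xad\xbe\xef'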

 69 +    if oper_attr:

You always pass oper_attr as a parameter to this function, so this condition is always true. Is it needed?

48 +BINVALUE1 = bin(0xdeadbeef1)

I changed it to a string as per the discussion we had

bin() returns a string, not a binary representation. If you want the latter, see https://docs.python.org/3/library/binascii.html#binascii.unhexlify
69 + if oper_attr:

You always pass oper_attr as a parameter to this function, so this condition is always true. Is it needed?

No, some of the operations are expected to fail, and a failed operation won't have any operational attribute. For example, a 'must' attribute cannot be deleted; it will throw an objectClass violation error. The same goes for a single-valued attribute, which won't let you add two values.

For Encryption changelog tests:

You can leave the line 84 comment for now (so SSL will be enabled, but the connection will go through ldap://), but you need to change line 85:

84 +        #master.sslport = secure_port
85 +        master.restart()

Line 139 - typo in 'modify'

Test case test_encrypt_m1_unhashed_user_passw_m2user wouldn't work properly without test_encrypt_m1_unhashed_user_passw_m1user, because you call _enable_changelog_encryption in the latter (and not in test_encrypt_m1_unhashed_user_passw_m2user). Each test case should be able to work standalone. (The same is true for the other pair of test cases.)

Also, I think all four test cases can be parametrized. The only difference I see is the changelog_encryption value and order of masters.

@spichugi I fixed the above changes and am adding a new patch. I understand this can be parametrized into a single test case, but we will end up complicating the tests. Also, there are varying combinations we need to test, so I doubt it will be straightforward.

0001-Encryption-changelog-tests.patch

For mmr state tests:

In the _check_user_oper_attrs function: as I've proposed in IRC, you can use the get_attr_vals_utf8 method here. Something like this:

somelist = tuser.get_attr_vals_utf8('nscpentrywsi')

In each test function, you have the same attr_name parameter. Why parametrize it?

Also, I think it makes more sense to remove the parametrization and iterate over (attr_value, oper_type, exp_values, oper_attr) in a 'for' loop, because:

  • You can't run each test separately. With your code, it will fail:

    py.test -v dirsrvtests/tests/suites/replication/state_test.py::test_check_desc_attr_state[description-Test1usr4-ldap.MOD_DELETE-exp_values4-vdcsn]

  • The docstrings will look confusing in Polarion.
    The test_check_desc_attr_state[description-Test1usr4-ldap.MOD_DELETE-exp_values4-vdcsn] docstring lists steps 1-16, while the actual execution covers only steps 14-16.

Why do you have BINVALUE as a constant, while the other attr values are just strings in the parametrization? I think it is okay to have BINVALUE as a string too (unless, of course, you decide to make it really binary).

@spichugi I fixed the above changes and am adding a new patch. I understand this can be parametrized into a single test case, but we will end up complicating the tests. Also, there are varying combinations we need to test, so I doubt it will be straightforward.

It is just (encryption_algorithm, master_add, master_modify) lists. What complications do you mean?

It helps to remove a lot of redundant code though.
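A sketch of what such parametrization might look like (test and fixture names, algorithms, and master pairs are illustrative assumptions):

import pytest

@pytest.mark.parametrize("encryption_algorithm, master_add, master_check", [
    ('AES', 'master1', 'master2'),
    ('AES', 'master2', 'master1'),
    ('3DES', 'master1', 'master2'),
    ('3DES', 'master2', 'master1'),
])
def test_encrypt_unhashed_user_passw(topo_m2, encryption_algorithm, master_add, master_check):
    # enable changelog encryption with the given algorithm, add/modify a user on master_add,
    # then verify the unhashed password does not leak into the changelog on master_check
    ...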

With respect to the mmrepl state tests.
I don't see an implementation for 'nscpentrywsi' in the lib389 code.

  somelist = tuser.get_attr_vals_utf8('nscpentrywsi')

dirsrvtests/tests/suites/replication/state_test.py:117:


self = <lib389.idm.user.UserAccount object at 0x7fd727d9cf90>, key = 'nscpentrywsi'

def get_attr_vals_utf8(self, key):
  return ensure_list_str(self.get_attrs_val(key))

E TypeError: 'NoneType' object is not callable

These are all the attributes present for the user.

INFO:dirsrvtests.tests.suites.replication.state_test:Add user: state1test
INFO:dirsrvtests.tests.suites.replication.state_test:Check if list of description attrs present for: state1test
['top', 'account', 'posixaccount', 'inetOrgPerson', 'organizationalPerson', 'inetUser', 'person']
{'description': ['Test1usr1'], 'cn': ['state1test'], 'objectclass': ['top', 'account', 'posixaccount', 'inetOrgPerson', 'organizationalPerson', 'inetUser', 'person'], 'userpassword': ['{SSHA512}s6Oe0fOwzWMDh8iM59eNZNMRdsU0JlCreFr8Wi09DjwPmLYkFcYLBITQPBIMhy2PaeLk87TXt2BZ87qL4auoBks7d2XH+8QJ'], 'homedirectory': ['/home/testuser'], 'uidnumber': ['1001'], 'nsuniqueid': ['2bd38490-a40a11e7-a70ae7bd-416c27d1'], 'modifytimestamp': ['20170928050223Z'], 'createtimestamp': ['20170928050223Z'], 'gidnumber': ['2001'], 'modifiersname': ['cn=directory manager'], 'entryid': ['12'], 'sn': ['state1usr'], 'parentid': ['4'], 'entrydn': ['uid=state1test,ou=people,dc=example,dc=com'], 'creatorsname': ['cn=directory manager'], 'uid': ['state1test']}

I also see that it's implemented with "+, *" as the ldapsearch attributes. This won't return the list of attributes including nscpentrywsi:

tuser.get_attr_vals_utf8('nscpentrywsi')

I've checked the get_attr_vals_utf8 method with your code and it all passes.
get_attr_vals_utf8 is just a wrapper around the search. It returns a list of values like this:

['dn: uid=testJpeg1usr,ou=People,dc=example,dc=com',
 'modifyTimestamp;adcsn-59cdd441001d00010002;vucsn-59cdd441001d00010002: 20170929050400Z',
 'modifiersName;adcsn-59cdd441001d00010001;vucsn-59cdd441001d00010001: cn=directory manager',
 'uid;vucsn-59cdd441001700010000;mdcsn-59cdd441001700010000: testJpeg1usr',
 'objectClass;vucsn-59cdd441001700010000: top',
 'objectClass;vucsn-59cdd441001700010000: account',
 'objectClass;vucsn-59cdd441001700010000: posixaccount',
 'objectClass;vucsn-59cdd441001700010000: inetOrgPerson',
 'objectClass;vucsn-59cdd441001700010000: organizationalPerson',
 'objectClass;vucsn-59cdd441001700010000: inetUser',
 'objectClass;vucsn-59cdd441001700010000: person',
 'uidNumber;vucsn-59cdd441001700010000: 1001',
 'gidNumber;vucsn-59cdd441001700010000: 2001',
 'sn;vucsn-59cdd441001700010000: state1usr',
 ......
 'jpegPhoto;adcsn-59cdd441001b00010000;vucsn-59cdd441001b00010000;vdcsn-59cdd441001d00010000;deletedattribute;deleted: thedeadbeef2']

The docstrings will look confusing in Polarion.
The test_check_desc_attr_state[description-Test1usr4-ldap.MOD_DELETE-exp_values4-vdcsn] docstring lists steps 1-16, while the actual execution covers only steps 14-16.

Execution happens for every 3 to 4 steps.

Execution happens for every 3 to 4 steps.

So, we have two issues with these parametrized test cases. I've described them before:

  • You can't run each test separately. With your code, it will fail:

    py.test -v dirsrvtests/tests/suites/replication/state_test.py::test_check_desc_attr_state[description-Test1usr4-ldap.MOD_DELETE-exp_values4-vdcsn]
    
  • And as per the discussion in the "Docstrings for Polarion - format for parametrization" mail thread, we need to combine steps if they are parametrized. I've described your case there as an example, so you should have a clear idea of how it should look. Please check it and ping me if you have questions.

Mmrepl state tests: @spichugi, please check if this looks fine.
I changed the docstring format and parametrized the tests into one test.

Do I understand right that it is not the final set of tests? (because, for example, there is no test for 'mdcsn')

Looking at your commit message, you are missing some things that were there before:

Adding replication state tests to check attribute value
and operational attributes after each operation. Each set of tests
will check for adding and deleting attribute values. The tests cover
multi-valued, single-valued, binary-type and sub-type attributes.

Your tests cover multi-valued and single-valued attributes only.
So you either need to fix the commit message, or add the tests for binary-type and sub-type attributes.

A couple of other things:

82 +            2. Make modify operation and try add new cn, sn attribute values.
83 +               Check if two cn, sn values exist and operational attribute vucsn exist

These are two operations, and in the code you have them on separate lines. I think the docstring should follow the same pattern.

107 +    tuser.set(attr_name, attr_values[2], ldap.MOD_ADD)
111 +    tuser.set(attr_name, attr_values[3], ldap.MOD_REPLACE)
116 +        tuser.set(attr_name, attr_values[3:], ldap.MOD_DELETE)

As you know, we have corresponding methods documented for DSLdapObject (you've already used them); please use tuser.add, tuser.set and tuser.remove accordingly.
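i.e., roughly (assuming the DSLdapObject add/set/remove semantics, where set defaults to replace):

tuser.add(attr_name, attr_values[2])       # instead of tuser.set(attr_name, attr_values[2], ldap.MOD_ADD)
tuser.set(attr_name, attr_values[3])       # instead of tuser.set(attr_name, attr_values[3], ldap.MOD_REPLACE)
tuser.remove(attr_name, attr_values[3])    # instead of tuser.set(..., ldap.MOD_DELETE)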

This looks good to me :) ack

commit e62a443
Author: Simon Pichugin spichugi@redhat.com
Date: Tue Feb 6 16:33:59 2018 +0100

Metadata Update from @spichugi:
- Assignee reset

4 years ago

Metadata Update from @mreynolds:
- Issue close_status updated to: fixed
- Issue status updated to: Closed (was: Open)

4 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/1416


Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: fixed)

3 years ago

