Readability:

1. Please add module-level doc strings. Use them to explain what a particular test module covers and, where possible, give specific examples. Please raise an issue during review if proper code comments/doc strings are not present. This way, it is easy to generate and use documentation for the code with known documentation-generation tools.
'''
@Dates: Add dates, viz. module creation date and modified date
@Version: Add a version variable
@Author: Add author variables. Mention NA if not to be used
@Copyright: Proper copyright strings. Mention NA if not to be used
@Dependencies: Dependencies etc.
@Desc:
@Example:
'''

2. Add doc strings to each test/setup/teardown describing the steps/workflow. EX: Under the "setUpClass" class method, add doc strings as below (generic example). Currently, the class-level setup does a lot of operations, and it is confusing to follow them without a workflow like this:
'''
Step 1: Creating DB object.
'''
'''
Step 2: Creating ssh object.
'''
'''
Step 3: Creating the workflow.
'''
Add these steps as part of the doc strings for each step. The above will enhance readability and maintainability as well. A fuller sketch follows.
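A minimal sketch of how setUpClass could carry these step doc strings (the class name and entities here are only illustrative; the getClsTestClient/getApiClient calls follow the usual marvin pattern):

from marvin.cloudstackTestCase import cloudstackTestCase

class TestVPCNetworkRules(cloudstackTestCase):

    @classmethod
    def setUpClass(cls):
        """
        Workflow of the class-level setup:
        Step 1: Creating DB/API client objects.
        Step 2: Creating ssh object.
        Step 3: Creating the workflow entities (zone, offerings, VPC, ...).
        """
        # Step 1: Creating DB/API client objects.
        cls.api_client = super(TestVPCNetworkRules, cls).getClsTestClient().getApiClient()
        # Step 2: Creating ssh object.
        # ... create the ssh handle here ...
        # Step 3: Creating the workflow entities.
        # ... create zone/offering/VPC objects here ...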

3. Remove the Services class from the test module altogether. Currently, code and data are mixed together, and this code is repeated in each available test module. Instead, separate the data into a config file and use self.api_client to retrieve this configuration information. Pass the config file path to the getClsTestClient call to get a config object dynamically; if needed, change the lib/API for getClsTestClient. This removes a lot of repetitive code across the various test modules. A rough sketch follows.
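The sketch below only illustrates the idea; the config_file parameter to getClsTestClient and the getConfigParser accessor are hypothetical and would need the lib/API change mentioned above:

# test_feature.cfg (hypothetical) carries the data that the Services class holds today,
# e.g. as JSON:
# {
#     "vpc_offering": {"name": "VPC off", "supportedservices": "Dhcp,Dns,SourceNat"},
#     "vpc":          {"name": "TestVPC", "cidr": "10.0.0.1/24"}
# }

class TestFeature(cloudstackTestCase):

    @classmethod
    def setUpClass(cls):
        # Hypothetical extended call: pass the config path, get a config object back.
        cls.test_client = super(TestFeature, cls).getClsTestClient(
            config_file="test_feature.cfg")
        cls.api_client = cls.test_client.getApiClient()
        cls.services = cls.test_client.getConfigParser()  # hypothetical accessor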

4. Initialization: Initialize all class and instance variables beforehand to None or to the required values, and provide adequate doc strings describing what each variable signifies. Currently, there are a lot of places where class and instance variables are used before being properly defined. Even though we are using a dynamic language, initialize them to None and check later whether they are None or not.
EX: cls.domain, cls.zone, cls.vpc_off, cls.net_off etc. If these variables are initialized up front, they can later be used to check that the corresponding objects were actually created. A sketch follows below.

Similarly for all instance variables. This is a good programming practice.
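For example (a sketch; the class name and comments are illustrative, and the delete call in tearDownClass assumes the marvin wrapper objects expose delete(apiclient)):

class TestVPCOffering(cloudstackTestCase):
    # Class-level handles, documented and set to None until setUpClass creates them.
    domain = None     # domain the test account lives in
    zone = None       # zone selected for deployment
    vpc_off = None    # VPC offering created during setup
    net_off = None    # network offering created during setup

    @classmethod
    def tearDownClass(cls):
        # Because everything starts as None, clean-up can check what was
        # really created before trying to delete it.
        if cls.vpc_off is not None:
            cls.vpc_off.delete(cls.api_client)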

5. Why do we need to change the value as part of setup, as in the code below?
cls.services["vpc"]["cidr"] = '10.1.1.1/16'
Can't we keep this as part of the config file, as mentioned above? What is the need for assigning this value dynamically here with a static value?

6. In a lot of places there are create calls. EX: the code below has separate create calls for both "Ingress" and "Egress". Can we generalize them into a simple API call, EX: Create(EntityType, params)? Pass all the parameters to it, with EntityType as "NetworkACL", "PublicIpAddress" etc.; params signifies the necessary parameters for that entity creation. With this, we avoid repetitive code and make maintenance easier. I believe we do not need separate calls for the creation of PublicIpAddress, NetworkACL etc. Use a simple "factory" design to create a given type, make it a library, and abstract it away from the feature code. This will not add a lot of code to the feature test. See the sketch after the code below.
cls.nwacl_internet_1 = NetworkACL.create(
                                 cls.api_client,
                                 networkid=cls.network_1.id,
                                 services=cls.services["icmp_rule"],
                                 traffictype='Egress'
                                 )
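A minimal factory sketch, assuming the existing marvin wrapper classes keep their create(apiclient, ...) class methods; the Create helper and its registry are only a suggestion, and the import path may differ between marvin versions:

from marvin.integration.lib.base import NetworkACL, PublicIPAddress

# Registry mapping an entity-type name to its marvin wrapper class.
_ENTITY_TYPES = {
    "NetworkACL": NetworkACL,
    "PublicIpAddress": PublicIPAddress,
}

def Create(entity_type, apiclient, **params):
    """Generic factory: create any registered entity type with one call."""
    try:
        wrapper = _ENTITY_TYPES[entity_type]
    except KeyError:
        raise ValueError("Unknown entity type: %s" % entity_type)
    return wrapper.create(apiclient, **params)

# Feature code then reads the same way for every entity:
# cls.nwacl_internet_1 = Create("NetworkACL", cls.api_client,
#                               networkid=cls.network_1.id,
#                               services=cls.services["icmp_rule"],
#                               traffictype='Egress')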

7. Though a lot of entities are created (EX: NetworkACL, ServiceOffering, IP etc.), the clean-up covers only a few of them. Any thoughts?

8. If we look at the code below, for example, there are two asserts: one for whether the type is a list, then a length and parameter check. Why can't we club this validation into one single call that checks whether the type is a list and verifies whether the entry we are looking for is there or not? Abstract it from the feature code as a library call.
With this, the code becomes two to three lines instead of multiple lines of repetitive assertions. Generally, an API is best when we have repetitive tasks like this, and it will enhance maintainability. Add a library call, Validate(entitytype, params), using entitytype as the list and params as the values to verify. These assertions appear in a lot of places currently and seem repetitive; they just bloat the code unnecessarily, which could have been avoided. I believe the assertion pattern of checking whether the type is a list, then its length, and then using the first element is too repetitive and can be better fine-tuned. What say? See the sketch after the code below.

vpc_offs = VpcOffering.list(
                            self.apiclient,
                            id=vpc_offering.id
                            )
self.assertEqual(
                 isinstance(vpc_offs, list),
                 True,
                 "List VPC offerings should return a valid list"
                 )
self.assertEqual(
                 vpc_offering.name,
                 vpc_offs[0].name,
                 "Name of the VPC offering should match with listVPCOff data"
                 )
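A minimal sketch of such a library call (the validate_list name and behaviour are only a suggestion):

def validate_list(inp, err_msg="API did not return a valid, non-empty list"):
    """Assert that inp is a non-empty list and return its first element."""
    assert isinstance(inp, list) and len(inp) > 0, err_msg
    return inp[0]

# The feature code above then shrinks to:
# vpc_offs = VpcOffering.list(self.apiclient, id=vpc_offering.id)
# self.assertEqual(vpc_offering.name, validate_list(vpc_offs).name,
#                  "Name of the VPC offering should match with listVPCOff data")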

9. Please use uniform return codes/status codes/global constants across all test features. Maintain these codes as part of a common library or utils. See the status-code example below.
'''
@Desc: Status codes to be used across various test features.
'''
class CSStatusCodes:
    SUCCESS = 1
    FAILED = 0
    INGRESS = X
    EGRESS = Y
    NETWORKACL = Z

(or) keep all the codes as part of the _builtins_ module; this way, we can use those codes directly in other features.

Use the above status codes in the feature code for better readability. EX:

from _builtins_ import *
obj = Create_ssh(params)
if obj is not None:
    if obj.execute() != SUCCESS:
        return FAILED

10. Please verify the output of method calls, object creation etc. Check the code below, for example: we are not verifying the ssh_2 handle, whether it is None, some other return type, or whether the client call was successful. We go straight to the execute call. What if the ssh_2 object is None or the call was not successful? If the library call does not return None on failure, change the library call accordingly for possible failures, or raise an issue for a library change.

As a new user, how do we know whether ssh_2 is successful or not? A minimal check is sketched after the code below.

ssh_2 = self.vm_1.get_ssh_client(
                        ipaddress=self.public_ip_2.ipaddress.ipaddress,
                        reconnect=True)
self.debug("SSH into VM is successfully")

11. Check the code below: there is no code to check whether "execute" was successful or not; we went directly to str(res).
res = ssh_2.execute("ping -c 1 www.google.com")
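For example, a hedged check on the output (assuming execute returns the command output as a list of lines):

res = ssh_2.execute("ping -c 1 www.google.com")
if not res or not any("1 received" in line or "1 packets received" in line for line in res):
    self.fail("Ping from VM did not succeed, output: %s" % str(res))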

12. In relation to the log messages in the same code above, logging was a little haphazard; it is better to follow a format like the one below for a given test run. Use a decorator for logging if possible. How and where can the user set the error log? Separate debug logs from error logs; if a library enhancement is needed, raise an issue so that the concerned person, or whoever is responsible, can add it. Logging after a sample test run should be more readable, like below. A decorator sketch follows the sample.

:::::::::::::: Test module "A" started ::::::::::::::

:::::::: Testcase 1 Started ::::::::
Test case flow... Step 1, Step 2, Step 3 etc.
Test case result... PASS/FAIL
Any exceptions...
:::::::: Testcase 1 Ended ::::::::
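A minimal decorator sketch that would produce banners like the ones above (the log_test_case name is illustrative, not existing library code):

import functools
import logging

def log_test_case(func):
    """Wrap a test method with start/end banners and exception logging."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        logging.info(":::::::: Testcase %s Started ::::::::", func.__name__)
        try:
            result = func(self, *args, **kwargs)
            logging.info("Test case result... PASS")
            return result
        except Exception as exc:
            logging.error("Test case result... FAIL, exception: %s", exc)
            raise
        finally:
            logging.info(":::::::: Testcase %s Ended ::::::::", func.__name__)
    return wrapper

# Usage:
# @log_test_case
# def test_01_create_vpc(self):
#     ...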

13. Check the code below, for example: I did not see any code that checks whether the VM creation or start/stop really succeeded. Where did we check that the VMs started successfully? What if a code exception, as opposed to a functional exception, is raised? Are we assuming that raising an exception is synonymous with the VM not starting, and if so, why? Why can't we check the return status of the start operation explicitly? If the library call "start" does not return a proper status, raise an issue; otherwise somebody will have to rewrite the code again to do proper verification. A possible check is sketched after the code below.

self.vm_1.start(self.apiclient)
self.vm_2.start(self.apiclient)
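For example, an explicit check using the VirtualMachine.list wrapper from the marvin base library (assuming the VM state string is "Running", as used elsewhere in the tests):

self.vm_1.start(self.apiclient)
vms = VirtualMachine.list(self.apiclient, id=self.vm_1.id, listall=True)
self.assertEqual(isinstance(vms, list), True,
                 "listVirtualMachines should return a valid list")
self.assertEqual(vms[0].state, "Running",
                 "VM %s should be in Running state after start" % self.vm_1.name)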

14. Check the code below, for example: sometimes we delete the VM and then go on to check the NAT rule list. How do we know that the delete was successful? Sometimes we add a random delay. Why a random delay? Write a simple poll API to check whether the API operation succeeded or not. The current approach is not proper and, going forward, I believe it may lead to a lot of maintenance issues. A polling sketch follows the code below.
self.vm_1.delete(self.apiclient)
self.vm_2.delete(self.apiclient)
except Exception as e:
    self.fail("Failed to stop the virtual instances, %s" % e)

# Check if the network rules still exists after Vm stop
self.debug("Checking if NAT rules ")
nat_rules = NATRule.list(
                         self.apiclient,
                         id=self.nat_rule.id,
                         listall=True
                         )
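A simple poll helper sketch (the wait_until name, interval and retry counts are illustrative):

import time

def wait_until(check, retries=10, interval=5):
    """Poll check() until it returns True or the retries are exhausted."""
    for _ in range(retries):
        if check():
            return True
        time.sleep(interval)
    return False

# Usage after the delete, instead of a random delay (assuming the list call
# returns None / an empty list once the VM is gone):
# gone = wait_until(
#     lambda: not VirtualMachine.list(self.apiclient, id=self.vm_1.id, listall=True))
# self.assertTrue(gone, "VM %s was not removed within the timeout" % self.vm_1.id)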

15. As mentioned earlier, write your test case code as a flow: step 1, step 2, step 3, etc. Though there are some doc strings, they do not map to the flow of the code. Can we please make sure they are added this way, so that a new user can see what steps are followed for a test and what checks are done at each step?

16. Check the code below, for example: it is not clear exactly what we are verifying. I can see that the code just executes some list API and that's it; thereafter what?
with self.assertRaises(Exception):
    nat_rules = NATRule.list(
                             self.apiclient,
                             id=self.nat_rule.id,
                             listall=True
                             )

lb_rules = LoadBalancerRule.list(
                                 self.apiclient,
                                 id=self.lb_rule.id,
                                 listall=True
                                 )

17. In a lot of places start, reboot and delete of VMs are called, but nowhere do we verify whether the operation succeeded. Does the CloudStack API not provide this facility, or is our library missing this information? If so, can we enhance it or add this information?

I believe it is the test code that is missing this check.

18. Please add try/except blocks wherever possible around the major code flow of test cases: start the try at the beginning and place the except just before the return. Catch at least a generic exception and add it to the log. That way we will know at the end which test cases are throwing exceptions. A generic shape for this is sketched below.
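A sketch of that shape (the test name and steps are illustrative only):

def test_02_network_rules_after_stop(self):
    """
    Step 1: Stop the VMs.
    Step 2: Verify the network rules still exist.
    """
    try:
        # ... major test flow: stop VMs, list and verify rules ...
        pass
    except Exception as exc:
        self.debug("Exception in test_02_network_rules_after_stop: %s" % exc)
        self.fail("Test failed with exception: %s" % exc)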

19. Make sure to clean up the resources even if there is an exception. Can we follow a proper clean-up pattern that runs even when an exception occurs? One possible pattern is sketched below.
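A sketch using the standard unittest addCleanup hook, which runs even when the test body raises; the VPC.create arguments follow the usual marvin signature but are shown only as an example:

def test_03_create_vpc(self):
    vpc = VPC.create(self.apiclient,
                     self.services["vpc"],
                     vpcofferingid=self.vpc_off.id,
                     zoneid=self.zone.id)
    # Registered immediately after creation, so the VPC is removed even if
    # an assertion or exception below aborts the test early.
    self.addCleanup(vpc.delete, self.apiclient)
    # ... rest of the test ...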

20. Componentize repetitive code into calls (EX: assertions, create, delete, list calls) to reduce the number of lines.

21. Also, attach run logs with the review. This helps the reviewer map the logs to the code and provides sufficient detail that the test case steps are working.

22. Run the test code through pylint, tabnanny and other code-checker tools before committing. Follow PEP 8 and the known coding standards below, both for naming conventions and other standards.
http://www.python.org/dev/peps/pep-0008/
http://google-styleguide.googlecode.com/svn/trunk/pyguide.html
23. Raise all exceptions, including generic ones, through a CS-related exception module. This way all exceptions can be logged or tracked for debugging purposes through one single module; do not write separate exception classes for each test.
