...

The notes below were documented while reviewing the issue linked here. Please follow these guidelines to raise issues when reviewing Marvin code or any test feature code under CS.

https://reviews.apache.org/r/14058/

1. Please add module-level doc strings. Use these doc strings to explain what a particular test module covers and, if possible, give specific examples. While doing a review, please raise an issue if proper code comments/doc strings are not present. Adding proper comments and doc strings makes it easy to generate and use documentation for the code with known documentation generation tools.
'''
@Dates: Module creation date and last modified date
@Version: Version variable
@Author: Author variables (mention NA if not used)
@Copyright: Proper copyright strings (mention NA if not used)
@Dependencies: Module dependencies, if any
@Desc: Description of what the module covers
@Example: Usage example
'''
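
For example, a filled-in module docstring following this template might look like the sketch below (the module details, dates and run command are illustrative, not taken from an actual module):

'''
@Dates: Created 2013-09-10, last modified 2013-09-25
@Version: 1.0
@Author: NA
@Copyright: NA
@Dependencies: marvin, nose
@Desc: Covers VM life cycle tests: deploy, attach volumes, reboot, stop and start of a virtual machine.
@Example: nosetests --with-marvin --marvin-config=config.cfg test_vm_life_cycle.py
'''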

...

22. Use pylint, tabnanny and other code checker tools on the test code before committing. Follow PEP 8 and other known coding standards, both for naming conventions and general style:
http://www.python.org/dev/peps/pep-0008/
http://google-styleguide.googlecode.com/svn/trunk/pyguide.html
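
As an illustration, the checks can be run from the command line before committing, and PEP 8 naming roughly looks like the sketch below (file and identifier names are hypothetical):

# Typical pre-commit checks (file name is hypothetical):
#   pylint test_vm_life_cycle.py
#   python -m tabnanny test_vm_life_cycle.py
#   pep8 test_vm_life_cycle.py

# PEP 8 naming at a glance:
MAX_ATTACH_RETRIES = 3                 # constants in UPPER_CASE

class TestVolumeAttach(object):        # classes in CapWords
    def attach_data_disk(self, vm):    # functions/methods in lower_case_with_underscores
        disk_count = 5                 # local variables in lower_case_with_underscores
        return disk_count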

23. Raise all exceptions, including generic ones, through the CS-related exception module; that way all exceptions can be logged or tracked for debugging through one single module. Don't write separate exception classes for each test.
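
A minimal sketch of the idea, assuming a hypothetical shared exception module (the module and class names below are illustrative, not the actual CS module):

# Hypothetical shared exception module, e.g. cs_exceptions.py
class CloudStackTestException(Exception):
    """Single exception type raised by test code so that all failures
    can be logged and tracked in one place."""
    def __init__(self, message, original=None):
        super(CloudStackTestException, self).__init__(message)
        self.original = original   # keep the underlying exception for debugging

# In test or library code, wrap generic exceptions instead of defining new classes:
def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except IOError as e:
        raise CloudStackTestException("Failed to read config %s" % path, original=e)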

...

We just need to import verifyElementInList from utils before using it, as in the sketch below.
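
A hedged usage sketch; the exact utils module path and the [status, reason] return format are assumptions and may differ between Marvin versions:

from marvin.lib.utils import verifyElementInList   # path may differ by Marvin version
from marvin.codes import PASS

states = ["Running", "Stopped"]
result = verifyElementInList(states, "Running")
# verifyElementInList is assumed to return a [status, reason] pair
assert result[0] == PASS, "Expected element not found: %s" % result[1]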

25. If you are a reviewer, please raise a review issue when there are not enough document strings to understand the test feature code, or wherever relevant documentation is missing. Module-level and class-level document strings with specific usages or examples are ideal.

...

30. For any new method added, make sure it has proper return codes wherever possible. We are currently using exceptions to signal the failure case; exception handling has its place and is good to have, but please don't treat a raised exception as a function's or method's return value for noting failure. Instead, provide explicit return values for new code wherever possible, check the return codes, and then raise an assertion or exception accordingly.
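
A hedged sketch of the intent (the helper, the API call and the status constants are illustrative):

from marvin.codes import PASS, FAIL   # assumed status constants

def start_vm(apiclient, vm_id):
    """Library helper: report failure via a return code, not by raising."""
    try:
        vm = apiclient.startVirtualMachine(vm_id)   # hypothetical API call
        return PASS, vm
    except Exception as e:
        return FAIL, str(e)

# In the test case, check the return code and only then assert:
status, result = start_vm(self.apiclient, vm.id)
self.assertEqual(status, PASS, "Start VM failed: %s" % result)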

31. Whenever we pick up a feature for automation, add a new field "Automation" to your test case document, wherever it lives (Excel, a TMS like Quality Center, etc.). This helps, during reviews and afterwards, to know which test cases are still pending automation for that feature even though the feature itself may be automated.

32. Don't add asserts in library functions or non-test functions. Add asserts/failures only in your test cases.
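
For example, a sketch of the split (names are illustrative): the library function reports a status, and the assert lives in the test case.

# Library/utility function: no asserts, just report status to the caller.
def is_vm_running(vm):
    return getattr(vm, "state", None) == "Running"

# Test case: the assert (and therefore the test failure) lives here.
def test_vm_is_running(self):
    self.assertTrue(is_vm_running(self.virtual_machine),
                    "Deployed VM is not in Running state")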

33. Add doc strings rather than comments to a test case, as below. This makes it easy to extract the test case doc strings automatically for documentation or other purposes.

Instead of comments as below:

# Validate the following
# 1. Deploy a vm and create 5 data disk
# 2. Attach all the created Volume to the vm.
# 3. Reboot the VM. VM should be successfully rebooted
# 4. Stop the VM. Stop VM should be successful
# 5. Start The VM. Start VM should be successful

Use DocStrings:

'''
@Desc: test_abc_functionality
@steps:
1. Deploy a vm and create 5 data disk
2. Attach all the created Volume to the vm.
3. Reboot the VM. VM should be successfully rebooted
4. Stop the VM. Stop VM should be successful
5. Start the VM. Start VM should be successful
'''

34. Catch exceptions in setup as well. If setup throws an exception, don't run the tests; in many cases a setup failure will not trigger teardown, so the resources we created in setup won't be cleaned up. We are seeing many cleanup failures. We need to do cleanup regardless of whether the tests ran or setup failed, provided setup ran at least once.
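
A hedged sketch of the pattern with unittest-style setup (the deploy helper and the resource delete call are hypothetical; only the structure matters):

import unittest

class TestVMLifeCycle(unittest.TestCase):
    def setUp(self):
        self.cleanup = []
        try:
            self.vm = deploy_virtual_machine(self.apiclient)   # hypothetical helper
            self.cleanup.append(self.vm)
        except Exception as e:
            # setUp failed: clean up whatever was created, then stop before tests run,
            # because tearDown would not be called automatically in this case.
            self.tearDown()
            self.skipTest("setUp failed: %s" % e)

    def tearDown(self):
        # Always attempt cleanup of resources created in setUp.
        for resource in reversed(self.cleanup):
            try:
                resource.delete(self.apiclient)   # hypothetical delete call
            except Exception:
                pass
        self.cleanup = []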

35. We added runCommand to sshClient.py; instead of using the current execute command, please use runCommand. The current execute command's return value mixes output and error and does not differentiate between them. The new command provides the status of the executed command along with the corresponding output or error.
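
A hedged usage sketch; the SshClient constructor arguments and the keys of the returned result ("status", "stdout", "stderr") are assumptions and may differ between Marvin versions:

from marvin.sshClient import SshClient
from marvin.codes import SUCCESS

ssh = SshClient(host="10.0.0.5", port=22, user="root", passwd="password")
result = ssh.runCommand("service cloudstack-agent status")
if result["status"] == SUCCESS:
    output = result["stdout"]    # command output, separate from errors
else:
    error = result["stderr"]     # error text, now distinguishable from output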