...
The original version of this document is available on github with syntax highlighting
Marvin is the Python testing framework for Apache CloudStack. Tests written using the framework use the unittest module under the hood. The unittest module is the Python version of the unit-testing framework originally developed by Kent Beck et al. and will be familiar to Java developers in the form of JUnit. The following document is a tutorial introduction for those interested in testing CloudStack with Python. It does not cover the Python language itself; the reader is instead pointed to the many tutorials that cover the topic more thoroughly. Basic Python scripting knowledge is assumed throughout. The reader is encouraged to walk through the steps after setting up and configuring a test environment.
If you are a developer, the CloudStack development environment is sufficient to get started:

```
mvn -P developer -pl :cloud-apidoc,:cloud-marvin
```

```
pip install tools/marvin/dist/Marvin-0.1.0.tar.gz
```
You are encouraged to read this tutorial in full and explore the existing tests in the repository before beginning to write tests.
Marvin requires Python 2.6 for installation, but tests written using Marvin take full advantage of Python 2.7. You should follow the test environment setup instructions here before proceeding further.
The developer profile compiles and packages Marvin. It does NOT install Marvin into your default PYTHONPATH.

```
$ mvn -P developer -pl :cloud-marvin
```
Alternatively, you can fetch Marvin from Jenkins as well: packages are built every hour here.
```
$ pip install tools/marvin/dist/Marvin-0.1.0.tar.gz
```
If you are a QA engineer, you won't need the entire codebase to build Marvin:

```
easy_install tools/marvin/dist/Marvin-0.1.0.tar.gz
```
```
root@cloud:~/cloudstack-oss/tools/marvin/dist# python
Python 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import marvin
>>> from marvin.cloudstackAPI import *
```
In our first steps we will build a simple API call and fire it against a CloudStack management server that is already deployed, configured and ready to accept API calls.
To upgrade an existing Marvin installation and sync it with the latest APIs on the management server, run this command:

```
$ pip install --upgrade tools/marvin/dist/Marvin-*.tar.gz
```
First Things First
You will need a CloudStack management server that is already deployed, configured and ready to accept API calls. You can pick any management server in your lab that has a few VMs running on it, or you can use DevCloud or the simulator environment deployed for a checkin test #checkin. Create a sample json config file demo.cfg telling Marvin where your management server and database server are. The demo.cfg json config looks as shown below:
```
{
    "dbSvr": {
        "dbSvr": "marvin.testlab.com",
        "passwd": "cloud",
        "db": "cloud",
        "port": 3306,
        "user": "cloud"
    },
    "logger": {
        "LogFolderPath": "/tmp/"
    },
    "mgtSvr": [
        {
            "mgtSvrIp": "marvin.testlab.com",
            "port": 8096,
            "user": "root",
            "passwd": "password",
            "hypervisor": "XenServer"
        }
    ]
}
```
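Before handing demo.cfg to Marvin it can help to check that the file parses as valid JSON. Here is a minimal sketch using only the standard library; the embedded config mirrors the sample above and the values are illustrative:

```python
import json

# Minimal sanity check: parse a demo.cfg-style config and pull out the
# endpoints Marvin will connect to. Values mirror the sample above.
cfg_text = """
{
    "dbSvr": {"dbSvr": "marvin.testlab.com", "user": "cloud",
              "passwd": "cloud", "db": "cloud", "port": 3306},
    "logger": {"LogFolderPath": "/tmp/"},
    "mgtSvr": [{"mgtSvrIp": "marvin.testlab.com", "port": 8096}]
}
"""
cfg = json.loads(cfg_text)
mgmt = cfg["mgtSvr"][0]
print("management server: %s:%d" % (mgmt["mgtSvrIp"], mgmt["port"]))
print("mysql server: %s:%d" % (cfg["dbSvr"]["dbSvr"], cfg["dbSvr"]["port"]))
```

In practice you would read the file itself (for example `json.load(open('demo.cfg'))`); a malformed file raises a ValueError pointing at the offending position.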
- Note: dbSvr is the location where the MySQL server is running and passwd is the password for the user cloud
- Open up the integration API port on your management server as the root user:
```
$ sudo iptables -I INPUT -p tcp --dport 8096 -j ACCEPT
```
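To verify from a client machine that the integration port is actually reachable, you can use a small sketch built on Python's standard socket module (the host and port below are illustrative; substitute your management server):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True
    except (socket.error, socket.timeout):
        return False
    finally:
        s.close()

# Illustrative endpoint; substitute your management server's address.
print(port_open("127.0.0.1", 8096))
```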
```
In [1]: import marvin
In [2]: from marvin.cloudstackTestCase import *
```
```
In [2]: import marvin.deployDataCenter
```
```
In [3]: config = marvin.deployDataCenter.deployDataCenters('demo/demo.cfg')
In [4]: config.loadCfg()
```
```
In [5]: apiClient = config.testClient.getApiClient()
```
```
In [6]: listconfig = listConfigurations.listConfigurationsCmd()
```
The same pattern applies to every API: to deploy a VM, for example, you would build the deployVirtualMachineCmd object from the deployVirtualMachine module. Simple, ain't it?

```
In [7]: listconfig.name = 'expunge'
```
```
In [8]: listconfigresponse = apiClient.listConfigurations(listconfig)
```
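The command/response round trip can be sketched with stand-in classes. These are not the real generated cloudstackAPI modules, just an illustration of the pattern: build a *Cmd object, set its attributes, and pass it to the identically named client method.

```python
# Illustration of the pattern the generated API bindings follow.
# FakeApiClient stands in for the real connection to the server.
class listConfigurationsCmd(object):
    def __init__(self):
        self.name = None

class FakeApiClient(object):
    def listConfigurations(self, cmd):
        # A real client serializes cmd into an HTTP request; here we
        # just echo a canned response filtered by the requested name.
        canned = [{"name": "expunge.delay", "value": "60"},
                  {"name": "expunge.interval", "value": "60"}]
        return [c for c in canned if c["name"].startswith(cmd.name or "")]

cmd = listConfigurationsCmd()
cmd.name = "expunge"
client = FakeApiClient()
print(client.listConfigurations(cmd))
```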
```
In [9]: print listconfigresponse
[ {category : u'Advanced', name : u'expunge.delay', value : u'60', description : u'Determines how long (in seconds) to wait before actually expunging destroyed vm. The default value = the default value of expunge.interval'},
 {category : u'Advanced', name : u'expunge.interval', value : u'60', description : u'The interval (in seconds) to wait before running the expunge thread.'},
 {category : u'Advanced', name : u'expunge.workers', value : u'3', description : u'Number of workers performing expunge '}]
```
The response is presented to us the way our UI receives it, as a JSON object. The object comprises a list of configurations, each configuration showing the detailed (key, value) pairs of that config setting.
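Since the response is just a list of Python objects, it can be post-processed like any other data. Here is a sketch using a namedtuple as a stand-in for a marvin response entry (the real client deserializes the server's JSON into objects with matching attribute names):

```python
from collections import namedtuple

# Stand-in for a marvin response entry; attribute names mirror the
# JSON keys seen in the output above.
Configuration = namedtuple("Configuration", "category name value description")

listconfigresponse = [
    Configuration("Advanced", "expunge.delay", "60", "wait before expunging"),
    Configuration("Advanced", "expunge.interval", "60", "expunge thread interval"),
    Configuration("Advanced", "expunge.workers", "3", "number of expunge workers"),
]

# Index the settings by name for easy lookup
settings = dict((cfg.name, cfg.value) for cfg in listconfigresponse)
print(settings["expunge.workers"])
```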
Listing stuff is all fine and dandy, you might say - but how do I launch VMs using Python? And do I use the shell each time I have to do this? Clearly not: we can compress all the steps into a python script. This example will show such a script, which will:
Without much ado, here's the script:
- Change the global setting integration.api.port on the CloudStack GUI to 8096 and restart the management server
Without much ado, let us jump into test case writing. Following is a working scenario we will test using Marvin.
You are encouraged to write your tests in a PyDev + Eclipse environment as it features auto-completion for the test that follows. We will explain how to run the tests in Eclipse later.
Here is our test_deploy_vm.py module:
```python
#All tests inherit from cloudstackTestCase
from marvin.cloudstackTestCase import cloudstackTestCase

#Import Integration Libraries

#base - contains all resources as entities and defines create, delete, list operations on them
from marvin.integration.lib.base import Account, VirtualMachine, ServiceOffering

#utils - utility classes for common cleanup, external library wrappers etc
from marvin.integration.lib.utils import cleanup_resources

#common - commonly used methods for all tests are listed here
from marvin.integration.lib.common import get_zone, get_domain, get_template


class TestDeployVM(cloudstackTestCase):
    """Test deploy a VM into a user account
    """

    @classmethod
    def setUpClass(cls):
        super(TestDeployVM, cls).setUpClass()

    def setUp(self):
        self.apiclient = self.testClient.getApiClient()
        self.testdata = self.testClient.getParsedTestDataConfig()

        # Get Zone, Domain and Default Built-in template
        self.domain = get_domain(self.apiclient)
        self.zone = get_zone(self.apiclient, self.testClient.getZoneForTests())
        self.testdata["mode"] = self.zone.networktype
        self.template = get_template(self.apiclient, self.zone.id, self.testdata["ostype"])

        #create a user account
        self.account = Account.create(
            self.apiclient,
            self.testdata["account"],
            domainid=self.domain.id
        )
        #create a service offering
        small_service_offering = self.testdata["service_offerings"]["small"]
        small_service_offering['storagetype'] = 'local'
        self.service_offering = ServiceOffering.create(
            self.apiclient,
            small_service_offering
        )
        #build cleanup list
        self.cleanup = [
            self.service_offering,
            self.account
        ]

    def tearDown(self):
        try:
            cleanup_resources(self.apiclient, self.cleanup)
        except Exception as e:
            self.debug("Warning! Exception in tearDown: %s" % e)

    def test_deploy_vm(self):
        """Test Deploy Virtual Machine
        """
        # Validate the following:
        # - listVirtualMachines returns accurate information
        self.virtual_machine = VirtualMachine.create(
            self.apiclient,
            self.testdata["virtual_machine"],
            accountid=self.account.name,
            zoneid=self.zone.id,
            domainid=self.account.domainid,
            serviceofferingid=self.service_offering.id,
            templateid=self.template.id
        )

        list_vms = VirtualMachine.list(self.apiclient, id=self.virtual_machine.id)

        self.debug(
            "Verify listVirtualMachines response for virtual machine: %s"
            % self.virtual_machine.id
        )
        self.assertEqual(
            isinstance(list_vms, list),
            True,
            "List VM response was not a valid list"
        )
        self.assertNotEqual(
            len(list_vms),
            0,
            "List VM response was empty"
        )

        vm = list_vms[0]
        self.assertEqual(
            vm.id,
            self.virtual_machine.id,
            "Virtual Machine ids do not match"
        )
        self.assertEqual(
            vm.name,
            self.virtual_machine.name,
            "Virtual Machine names do not match"
        )
        self.assertEqual(
            vm.state,
            "Running",
            msg="VM is not in Running state"
        )
```
To run the test we've written, we'll place our class file into our demo directory. The test framework will "discover" the tests inside any directory it is pointed to and run them against the specified deployment. Our configuration file demo.cfg is also in the same directory.
The usage for deployAndRun
is as follows:
option | purpose
---|---
-c | points to the configuration file defining our deployment
-r | test results log where the summary report is written to
-t | testcase log where all the logs we wrote in our tests are output for debugging purposes
-d | directory containing all the test suites
-l | only load the configuration, do not deploy the environment
-f | run tests in the given file
On our shell environment we launch deployAndRun
module as follows and at the end of the run the summary of test results is also shown.
The test data class carries information in a dictionary object. The (key, value) pairs in this class need to be externally supplied to satisfy an API call. For example, in order to create a VM one needs to give a display name and the VM name; these are externally supplied data. It is not mandatory to use the test data class in your tests: you can always pass the right arguments directly to the Resource.operation() method of your resource without using test data dictionaries. The advantage of test data is keeping all configurable data in a single place.
Mention all test data related to a test suite under tools/marvin/marvin/config/test_data.py. In our case we have identified that we need an account (firstname, lastname etc.), a virtual_machine (with name and displayname) and a service_offering (with cpu: 128 and some memory) as test data. Please refer to the current test suites under smoke for specific examples.
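As a sketch of what such test data might look like, here is an illustrative dictionary in the style of test_data.py. The keys below are examples chosen to match the resources named above, not the authoritative schema:

```python
# Illustrative test data dictionary (keys are examples, in the style of
# tools/marvin/marvin/config/test_data.py, not the authoritative schema).
test_data = {
    "account": {
        "email": "test@test.com",
        "firstname": "Test",
        "lastname": "User",
        "username": "test-account",
        "password": "password",
    },
    "virtual_machine": {
        "name": "testvm",
        "displayname": "Test VM",
    },
    "service_offering": {
        "name": "Tiny Instance",
        "displaytext": "Tiny Instance",
        "cpunumber": 1,
        "cpuspeed": 128,
        "memory": 128,
    },
}
print(test_data["virtual_machine"]["displayname"])
```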
- Every test class name carries the Test prefix. Ideally only one test class is contained in every module
- setUp() - the setup method is run before every test method in the class and is used to initialize any common data, clients and resources required in our tests. In our case we have initialized our test clients - apiclient and dbclient - identified the zone, domain and template we will need for the VM, and created the user account into which the VM shall be deployed
- self.cleanup = [] - the cleanup list contains the list of objects which should be destroyed at the end of our test run in the tearDown method
- tearDown() - the teardown method simply calls the cleanup (delete) associated with every resource, thereby garbage collecting the resources of the test
- test_deploy_vm - our test scenario. All test methods must begin with the test_ prefix
prefixMarvin supports test categories which enables you to run specific tests for product areas. For example if you have made a change in the accounts product area, there's a way to trigger all accounts related tests in both smoke and component tests directories. Just put a tag 'accounts' and point the directories/files
Example:
```
nosetests --with-marvin --marvin-config=[config] --hypervisor=xenserver -a tags=accounts [file(s)]
```
More info on this wiki page: Categories
In PyDev you will have to setup the default test runner to be nose. For this:
Now create a Debug Configuration with the project set to the one in which you are writing your tests, and the main module set to the test_deploy_vm.py
script we defined earlier. Hit Debug and you should see your test run within the Eclipse environment and report failures in the Debug Window. You will also be able to set breakpoints, inspect values, evaluate expressions while debugging like you do with Java code in Eclipse.
Create a new python module test_deploy_vm
of type unittest, copy the above test class and save the file using your favorite editor. On your shell environment you can run the tests as follows:
```
tsp@cloud:~/cloudstack# nosetests --with-marvin --marvin-config=demo.cfg test_deploy_vm.py
Test Deploy Virtual Machine ok
----
Ran 1 test in 10.396s
```
```
root@cloud:~/cloudstack-oss# python -m marvin.deployAndRun -c demo/demo.cfg -t /tmp/testcase.log -r /tmp/results.log -f demo/TestDeployVm.py -l
root@cloud:~/cloudstack-oss# cat /tmp/results.log
test_DeployVm (testDeployVM.TestDeployVm) ... ok
----------------------------------------------------------------------
Ran 1 test in 100.511s
OK
```
Congratulations, your test has passed!
We do not know for sure that the CentOS VM deployed earlier actually started up on the hypervisor host. The API tells us it did - so CloudStack assumes the VM is up and running, but did the hypervisor successfully spin up the VM? In this example we will log in to the CentOS VM that we deployed earlier using a simple ssh client that is exposed by the test framework. The example assumes that you have an Advanced Zone deployment of CloudStack running. The test case is further simplified if you have a Basic Zone deployment; it is left as an exercise to the reader to refactor the following test to work for a basic zone.
Let's get started. We will take the earlier test as is and extend it by:
NOTE: This test has been written for the 3.0 CloudStack. On 2.2.y we do not explicitly create a firewall rule.
Running from the CLI you can also experiment with the various plugins provided by Nose described in a later section.
An astute reader would by now have found that the following pattern has been used in the test examples shown so far and in most of the suites in the test/integration directory: create an account in setUp, deploy and verify resources within that account in the test, and delete the account in tearDown.

This pattern is useful to contain the entire test into one atomic piece. It helps prevent tests from becoming entangled in each other, i.e. failures are localized to one account and should not affect the other tests. Advanced examples in our basic verification suite are written using this pattern. Those writing tests are encouraged to follow the examples in the test/integration/smoke
directory.
Many more advanced tests have been written in the test/integration/
directory of the codebase. Suites are available for many features in cloudstack. You should explore these tests for reference.
Sometimes it is not favourable for tests to have setUp and tearDown run around each test. You may want the setup to run only once per module and remain commonly available for all tests in the suite. In such cases you will use the setUpClass classmethod defined by python unittest (since 2.7). For example:
```python
class TestDeployVM(cloudstackTestCase):

    @classmethod
    def setUpClass(cls):
        cls.apiclient = super(TestDeployVM, cls).getClsTestClient().getApiClient()
```
Note that the testclient is available from the superclass using getClsTestClient in this case.
The agent simulator and Marvin are integrated into maven build phases to help you run basic tests before pushing a commit. These tests are integration tests that test the CloudStack system as a whole: the Management Server runs during the tests with the Simulator Agent responding to hypervisor commands. For running the checkin tests, your developer environment needs to have Marvin installed and working with the latest CloudStack APIs. These tests are lightweight and should ensure that your commit doesn't break critical functionality for others working with the master branch. The checkin tests utilize Marvin, and a one-time installation of Marvin will be done so as to fetch all the related dependencies. Further updates to Marvin can be done by using the sync mechanism described later in this section. In order for these tests to run on the simulator, we need to add an attribute required_hardware="false" to these test cases.
These build steps are similar to the regular build, deploydb and run of the management server. Only some extra switches are required to run the tests and should be easy to recall and run anytime:
Build with the -Dsimulator switch to enable simulator hypervisors
```
$ mvn -Pdeveloper -Dsimulator clean install
```
In addition to the regular deploydb you will be deploying the simulator database where all the agent information is stored for the mockvms, mockvolumes etc.
```
$ mvn -Pdeveloper -pl developer -Ddeploydb
$ mvn -Pdeveloper -pl developer -Ddeploydb-simulator
```
Same as regular jetty:run.
```
$ mvn -pl client jetty:run -Dsimulator
```
To enable the debug ports before the run:

```
export MAVEN_OPTS="-XX:MaxPermSize=512m -Xmx2g -Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n"
```
Marvin also provides the sync facility which contacts the API discovery plugin on a running cloudstack server to rebuild its API classes and integration libraries:
You can install/upgrade marvin using the sync mechanism as follows.
```
$ sudo mvn -Pdeveloper,marvin.sync -Dendpoint=localhost -pl :cloud-marvin
```
Once the simulator is up, in a separate session you can use the following two commands to bring up a zone (the example below creates an advanced zone) and run tests.
command 1: Below command deploys a datacenter.
```
$ python tools/marvin/marvin/deployDataCenter.py -i setup/dev/advanced.cfg
```
Example configs are available in setup/dev/advanced.cfg and setup/dev/basic.cfg
command 2: Below command runs the test.
```
$ export MARVIN_CONFIG=setup/dev/advanced.cfg
$ export TEST_SUITE=test/integration/smoke
$ export ZONE_NAME=Sandbox-simulator
$ nosetests-2.7 \
    --with-marvin \
    --marvin-config=${MARVIN_CONFIG} \
    -w ${TEST_SUITE} \
    --with-xunit \
    --xunit-file=/tmp/bvt_selfservice_cases.xml \
    --zone=${ZONE_NAME} \
    --hypervisor=simulator \
    -a tags=advanced,required_hardware=false
```
The --zone
argument should match the name of the zone defined in the config file (currently Sandbox-simulator for basic.cfg and advanced.cfg).
Check-In tests are the same as any other tests written using Marvin. The only additional step you need to do is ensure that your test is driven entirely by the API only. This makes it possible to run the test on a simulator. Once you have your test, you need to tag it to run on the simulator so the marvin test runner can pick it up during the checkin-test run. Then place your test module in the test/integration/smoke
folder and it will become part of the checkin test run.
For eg:
```python
@attr(tags=["advanced", "smoke"], required_hardware="false")
def test_deploy_virtualmachine(self):
    """Tests deployment of VirtualMachine
    """
```
The sample simulator configurations for advanced and basic zones are available in the setup/dev/ directory. The default configuration setup/dev/advanced.cfg deploys an advanced zone with two simulator hypervisors in a single cluster in a single pod, two primary NFS storage pools and a secondary storage NFS store. If your test requires any extra hypervisors, storage pools, additional IP allocations, VLANs etc., you should adjust the configuration accordingly and ensure that you have run all the checkin tests in the new configuration. For this you can directly edit the JSON file or generate a new configuration file. The setup/dev/advanced.cfg was generated as follows:
```
$ cd tools/marvin/marvin/sandbox/advanced
$ python advanced_env.py -i setup.properties -o advanced.cfg
```

These configurations are generated using the marvin configGenerator module. You can write your own configuration by following the examples shown in the configGenerator module.
A more detailed explanation of how the JSON configuration works is shown in later sections of this tutorial.
Tests with more backend verification and complete integration suites for network, snapshots, templates etc. can be found in the test/integration/smoke and test/integration/component directories. Almost all of these test suites use common library wrappers written around the test framework to simplify writing tests. These libraries are part of the marvin.lib package. Ensure that you have gone through the existing tests related to your feature before writing your own.
The integration library takes advantage of the fact that every resource - VirtualMachine, ISO, Template, PublicIp etc. - follows the pattern of create, delete and list operations.
Marvin can auto-generate these resource classes using API discovery. The auto-generation ability is being added as part of this refactor
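The create/delete/list pattern can be sketched as follows. This is an illustrative stand-in, not the real marvin.lib base classes; an in-memory list stands in for the state held on the management server:

```python
# Illustrative sketch of the create/list/delete resource pattern the
# integration library follows; real classes wrap API client calls.
class Resource(object):
    _store = []  # stand-in for the management server's state

    def __init__(self, name):
        self.name = name

    @classmethod
    def create(cls, name):
        obj = cls(name)
        cls._store.append(obj)
        return obj

    @classmethod
    def list(cls):
        return list(cls._store)

    def delete(self):
        type(self)._store.remove(self)

class VirtualMachine(Resource):
    _store = []

vm = VirtualMachine.create("testvm")
print(len(VirtualMachine.list()))
vm.delete()
print(len(VirtualMachine.list()))
```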
This is our so-called BVT - basic verification tests. Tests here check the basic sanity of CloudStack features. Include only simple tests for your feature here. If you are writing a check-in test, this is where the test module should be put.
More in-depth tests drilling down the entire breadth of a feature can be found here. These are used for regression testing.
Some tests have been tagged to run only for devcloud environment. In order to run these tests you can use the following command after you have setup your management server and the devcloud vm is running with tools/devcloud/devcloud.cfg
as its deployment configuration.
```
tsp@cloud:~/cloudstack# nosetests --with-marvin --marvin-config=tools/devcloud/devcloud.cfg -a tags='devcloud' test/integration/smoke
Test Deploy Virtual Machine ok
Test Stop Virtual Machine ok
Test Start Virtual Machine ok
Test Reboot Virtual Machine ok
Test destroy Virtual Machine ok
Test recover Virtual Machine ok
Test destroy(expunge) Virtual Machine ok
----
Ran 7 tests in 10.001s
OK
```
The test framework by default runs all its tests under 'admin' mode which means you have admin access and visibility to resources in cloudstack. In order to run the tests as a regular user/domain-admin - you can apply the @user decorator which takes the arguments (account, domain, accounttype) at the head of your test class. The decorator will create the account and domain if they do not exist.
If the decorator is not suitable for you - for instance, when you have to make some API calls as an admin and others as a user - you can get two api clients in your test: one for admin-accessible operations and another for user access only. The getUserApiClient() method can be used to obtain a user apiclient instead of the default admin getApiClient().
The logs from the test client, detailing the requests sent by it and the responses fetched back from the management server, can be found under /tmp/testclient.log. By default all logging is in INFO mode. In addition, you may provide your own set of DEBUG log messages in the tests you write. Each cloudstackTestCase inherits the debug logger, which can be used to output useful messages that help troubleshoot the test case while it is running.
nosetests will capture all the logs from running a single test and show them as part of a failure. This allows isolating the sequence of calls made that caused the failure of the test. Successful tests do not show any logs.
Marvin can be used to configure a deployed CloudStack installation with Zones, Pods and Hosts automatically, into Advanced or Basic network types. This is done by describing the required deployment in a hierarchical json configuration file. But writing and maintaining such a configuration is cumbersome and error prone. Marvin's configGenerator is designed for this purpose: a simple hand-written python description passed to the configGenerator will generate the compact json configuration of our deployment.
Examples of how to write the configuration for various zone models is within the configGenerator.py module in your marvin source directory. Look for methods describe_setup_in_advanced_mode()/ describe_setup_in_basic_mode()
.
Shown below is such an example describing a simple one host deployment:
```
{
    "zones": [
        {
            "name": "Sandbox-XenServer",
            "guestcidraddress": "10.1.1.0/24",
            "physical_networks": [
                {
                    "broadcastdomainrange": "Zone",
                    "name": "test-network",
                    "traffictypes": [
                        {
                            "typ": "Guest"
                        },
                        {
                            "typ": "Management"
                        },
                        {
                            "typ": "Public"
                        }
                    ]
                }
            ]
        }
    ]
}
```
```python
#!/usr/bin/env python
import marvin
from marvin import cloudstackTestCase
from marvin.cloudstackTestCase import *
from marvin.remoteSSHClient import remoteSSHClient
import unittest
import hashlib
import random
import string

class TestSshDeployVm(cloudstackTestCase):
    """
    This test deploys a virtual machine into a user account
    using the small service offering and builtin template
    """
    @classmethod
    def setUpClass(cls):
        """
        CloudStack internally saves its passwords in md5 form and that is how we
        specify it in the API. Python's hashlib library helps us to quickly hash
        strings as follows
        """
        mdf = hashlib.md5()
        mdf.update('password')
        mdf_pass = mdf.hexdigest()
        acctName = 'bugs-' + ''.join(random.choice(string.ascii_uppercase + string.digits) for x in range(6)) #randomly generated account

        cls.apiClient = super(TestSshDeployVm, cls).getClsTestClient().getApiClient()

        cls.acct = createAccount.createAccountCmd() #The createAccount command
        cls.acct.accounttype = 0                    #We need a regular user. admins have accounttype=1
        cls.acct.firstname = 'bugs'
        cls.acct.lastname = 'bunny'                 #What's up doc?
        cls.acct.password = mdf_pass                #The md5 hashed password string
        cls.acct.username = acctName
        cls.acct.email = 'bugs@rabbithole.com'
        cls.acct.account = acctName
        cls.acct.domainid = 1                       #The default ROOT domain
        cls.acctResponse = cls.apiClient.createAccount(cls.acct)

    def setUpNAT(self, virtualmachineid):
        listSourceNat = listPublicIpAddresses.listPublicIpAddressesCmd()
        listSourceNat.account = self.acct.account
        listSourceNat.domainid = self.acct.domainid
        listSourceNat.issourcenat = True

        listsnatresponse = self.apiClient.listPublicIpAddresses(listSourceNat)
        self.assertNotEqual(len(listsnatresponse), 0,
                            "Found a source NAT for the acct %s" % self.acct.account)

        snatid = listsnatresponse[0].id
        snatip = listsnatresponse[0].ipaddress

        try:
            createFwRule = createFirewallRule.createFirewallRuleCmd()
            createFwRule.cidrlist = "0.0.0.0/0"
            createFwRule.startport = 22
            createFwRule.endport = 22
            createFwRule.ipaddressid = snatid
            createFwRule.protocol = "tcp"
            createfwresponse = self.apiClient.createFirewallRule(createFwRule)

            createPfRule = createPortForwardingRule.createPortForwardingRuleCmd()
            createPfRule.privateport = 22
            createPfRule.publicport = 22
            createPfRule.virtualmachineid = virtualmachineid
            createPfRule.ipaddressid = snatid
            createPfRule.protocol = "tcp"
            createpfresponse = self.apiClient.createPortForwardingRule(createPfRule)
        except Exception as e:
            self.debug("Failed to create PF rule in account %s due to %s" % (self.acct.account, e))
            raise e
        finally:
            return snatip

    def test_SshDeployVm(self):
        """
        Let's start by defining the attributes of our VM that we will be
        deploying on CloudStack. We will be assuming a single zone is available
        and is configured and all templates are Ready

        The hardcoded values are used only for brevity.
        """
        deployVmCmd = deployVirtualMachine.deployVirtualMachineCmd()
        deployVmCmd.zoneid = 1
        deployVmCmd.account = self.acct.account
        deployVmCmd.domainid = self.acct.domainid
        deployVmCmd.templateid = 5 #CentOS 5.6 builtin
        deployVmCmd.serviceofferingid = 1

        deployVmResponse = self.apiClient.deployVirtualMachine(deployVmCmd)
        self.debug("VM %s was deployed in the job %s" % (deployVmResponse.id, deployVmResponse.jobid))

        # At this point our VM is expected to be Running. Let's find out what
        # listVirtualMachines tells us about VMs in this account
        listVmCmd = listVirtualMachines.listVirtualMachinesCmd()
        listVmCmd.id = deployVmResponse.id
        listVmResponse = self.apiClient.listVirtualMachines(listVmCmd)

        self.assertNotEqual(len(listVmResponse), 0, "Check if the list API \
                            returns a non-empty response")

        vm = listVmResponse[0]
        hostname = vm.name
        nattedip = self.setUpNAT(vm.id)

        self.assertEqual(vm.id, deployVmResponse.id, "Check if the VM returned \
                         is the same as the one we deployed")

        self.assertEqual(vm.state, "Running", "Check if VM has reached \
                         a state of running")

        # SSH login and compare hostname
        ssh_client = remoteSSHClient(nattedip, 22, "root", "password")
        stdout = ssh_client.execute("hostname")

        self.assertEqual(hostname, stdout[0], "cloudstack VM name and hostname match")

    @classmethod
    def tearDownClass(cls):
        """
        And finally let us cleanup the resources we created by deleting the
        account. All good unittests are atomic and rerunnable this way
        """
        deleteAcct = deleteAccount.deleteAccountCmd()
        deleteAcct.id = cls.acctResponse.account.id
        cls.apiClient.deleteAccount(deleteAcct)
```
Observe that unlike the previous test class - TestDeployVM
- we do not have methods setUp
and tearDown
. Instead, we have the methods setUpClass
and tearDownClass
. We do not want the initialization (and cleanup) code in setup (and teardown) to run after every test in the suite which is what setUp
and tearDown
will do. Instead we will have the initialization code (creation of account etc) done once for the entire lifetime of the class. This is accomplished using the setUpClass
and tearDownClass
classmethods. Since the API client is only visible to instances of cloudstackTestCase
we expose the API client at the class level using the getClsTestClient()
method. So to get the API client we call the parent class (super(TestSshDeployVm
, cls)) ie cloudstackTestCase
and ask for a class level API client.
An astute reader will by now have noticed the pattern used in the tutorial's test examples: each test class creates the account it works in and deletes that account when it is done.
This pattern is useful to contain the entire test in one atomic piece. It keeps failures localized to one account so they do not affect other tests. Advanced examples in our basic verification suite are written using this pattern. Test engineers are encouraged to follow it unless there is good reason not to.
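Stripped of CloudStack specifics, the pattern reduces to the skeleton below. This is a minimal sketch using plain unittest with a dict as a stand-in account; the real suites derive from cloudstackTestCase and create a CloudStack account instead:

```python
import unittest

class TestFeatureInOwnAccount(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Create the account (and any shared fixtures) exactly once for
        # the whole class; failures stay localized to this account.
        cls.account = {"name": "test-feature-acct", "resources": []}

    def test_create_resource(self):
        # Every test in the class works inside the same account.
        self.account["resources"].append("vm-1")
        self.assertIn("vm-1", self.account["resources"])

    def test_account_is_shared(self):
        self.assertEqual(self.account["name"], "test-feature-acct")

    @classmethod
    def tearDownClass(cls):
        # Deleting the account removes everything the tests created,
        # keeping the class atomic and rerunnable.
        cls.account = None
```

Because the fixture lives at class level, both tests see the same account, and cleanup happens once regardless of how many tests ran.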
The test framework by default runs all its tests in 'admin' mode, which means you have admin access and visibility to resources in cloudstack. In order to run the tests as a regular user or domain-admin, you can apply the @UserName decorator, which takes the arguments (account, domain, accounttype), at the head of your test class. The decorator will create the account and domain if they do not exist. Do NOT apply the decorator to a test method.
An example can be found at: cloudstack-oss/tools/testClient/testcase/test_userDecorator.py
Using the PyDev plugin / pdb and the testClient logs
The logs from the test client detailing the requests sent by it and the responses fetched back from the management server can be found under /var/log/testclient.log
. By default all logging is in INFO
mode. In addition, you may provide your own set of DEBUG
log messages in tests you write. Each cloudstackTestCase
inherits the debug logger, which can be used to output useful messages that help troubleshoot the test case while it is running. These logs will be found in the location you specified with the -t
option when launching the tests.
eg:
Code Block |
---|
list_zones_response = self.apiclient.listZones(listzonesample)
self.debug("Number of zones: %s" % len(list_zones_response)) #This shows us how many zones were found in the deployment
|
The result log specified by the -r
option will show the detailed summary of the entire run of all the suites. It will show you how many tests failed, passed and how many had errors in them.
While debugging with the PyDev plugin you can also place breakpoints in Eclipse for a more interactive debugging session.
Marvin can be used to automatically configure a deployed CloudStack installation with Zones, Pods and Hosts, in either the Advanced or Basic network type. This is done by describing the required deployment in a hierarchical JSON configuration file. But writing and maintaining such a configuration by hand is cumbersome and error-prone. Marvin's configGenerator is designed for this purpose: a simple hand-written Python description passed to the configGenerator will generate the compact JSON configuration of our deployment.
Examples of how to write the configuration for various zone models is within the configGenerator.py module in your marvin source directory. Look for methods describe_setup_in_advanced_mode()/ describe_setup_in_basic_mode()
.
Below is such an example describing a simple one host deployment:
Code Block |
---|
{
    "zones": [
        {
            "name": "Sandbox-XenServer",
            "guestcidraddress": "10.1.1.0/24",
            "physical_networks": [
                {
                    "broadcastdomainrange": "Zone",
                    "name": "test-network",
                    "traffictypes": [
                        { "typ": "Guest" },
                        { "typ": "Management" },
                        { "typ": "Public" }
                    ],
                    "providers": [
                        {
                            "broadcastdomainrange": "ZONE",
                            "name": "VirtualRouter"
                        }
                    ]
                }
            ],
            "dns1": "10.147.28.6",
            "ipranges": [
                {
                    "startip": "10.147.31.150",
                    "endip": "10.147.31.159",
                    "netmask": "255.255.255.0",
                    "vlan": "31",
                    "gateway": "10.147.31.1"
                }
            ],
            "networktype": "Advanced",
            "pods": [
                {
                    "name": "POD0",
                    "startip": "10.147.29.150",
                    "endip": "10.147.29.159",
                    "netmask": "255.255.255.0",
                    "gateway": "10.147.29.1",
                    "clusters": [
                        {
                            "clustername": "C0",
                            "hypervisor": "XenServer",
                            "clustertype": "CloudManaged",
                            "hosts": [
                                {
                                    "username": "root",
                                    "url": "http://10.147.29.58",
                                    "password": "password"
                                }
                            ],
                            "primaryStorages": [
                                {
                                    "name": "PS0",
                                    "url": "nfs://10.147.28.6:/export/home/sandbox/primary"
                                }
                            ]
                        }
                    ]
                }
            ],
            "internaldns1": "10.147.28.6",
            "secondaryStorages": [
                {
                    "url": "nfs://10.147.28.6:/export/home/sandbox/secondary"
                }
            ]
        }
    ],
    "dbSvr": {
        "dbSvr": "10.147.29.111",
        "user": "cloud",
        "passwd": "cloud",
        "db": "cloud",
        "port": 3306
    },
    "logger": {
        "LogFolderPath": "/tmp/"
    },
    "globalConfig": [
        {
            "name": "storage.cleanup.interval",
            "value": "300"
        },
        {
            "name": "account.cleanup.interval",
            "value": "600"
        }
    ],
    "mgtSvr": [
        {
            "mgtSvrIp": "10.147.29.111",
            "port": 8096
        }
    ]
}
|
What you saw at the beginning of the tutorial was a condensed form of this complete configuration file. If you are familiar with the CloudStack installation you will recognize that most of these are settings you give in the install wizards as part of configuration. What is different from the simplified configuration file are the sections "zones" and "globalConfig". The globalConfig section is nothing but a simple listing of (key, value) pairs for the "Global Settings" section of CloudStack.
The "zones
" section defines the hierarchy of our cloud. At the top-level are the availability zones. Each zone has its set of pods, secondary storages, providers and network related configuration. Every pod has a bunch of clusters and every cluster a set of hosts and their associated primary storage pools. These configurations are easy to maintain and deploy by just passing them through marvin.
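Since the configuration is plain JSON, the hierarchy can be sanity-checked with nothing more than the json module before handing it to marvin. A quick sketch, using a pared-down copy of the configuration above:

```python
import json

# A pared-down copy of the deployment configuration shown above.
config_json = '''
{
    "zones": [
        {
            "name": "Sandbox-XenServer",
            "pods": [
                {
                    "name": "POD0",
                    "clusters": [
                        {
                            "clustername": "C0",
                            "hypervisor": "XenServer",
                            "hosts": [
                                {"url": "http://10.147.29.58"}
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
'''

config = json.loads(config_json)
hosts = []
# Walk zone -> pod -> cluster -> host, mirroring the hierarchy Marvin deploys.
for zone in config["zones"]:
    for pod in zone.get("pods", []):
        for cluster in pod.get("clusters", []):
            for host in cluster.get("hosts", []):
                hosts.append((zone["name"], pod["name"],
                              cluster["clustername"], host["url"]))

for entry in hosts:
    print("%s / %s / %s -> %s" % entry)
```

A walk like this is a cheap way to catch a mis-nested pod or cluster before a lengthy deployment run fails halfway through.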
Code Block |
---|
tsp@cloud:~/cloudstack# nosetests --with-marvin --marvin-config=advanced.cfg --deploy -w /tmp #Empty directory where there are no tests to be discovered |
Here we have pointed to a likely empty directory so as to only deploy and configure the zone. Had the directory contained tests, Marvin would have deployed the cloud configuration and then run those tests against it.
The above one-host configuration was described as follows:
Code Block |
---|
#!/usr/bin/env python

import random
import marvin
from marvin.configGenerator import *

def describeResources():
    zs = cloudstackConfiguration()

    z = zone()
    z.dns1 = '10.147.28.6'
    z.internaldns1 = '10.147.28.6'
    z.name = 'Sandbox-XenServer'
    z.networktype = 'Advanced'
    z.guestcidraddress = '10.1.1.0/24'

    pn = physical_network()
    pn.name = "test-network"
    pn.traffictypes = [traffictype("Guest"),
                       traffictype("Management"),
                       traffictype("Public")]
    z.physical_networks.append(pn)

    p = pod()
    p.name = 'POD0'
    p.gateway = '10.147.29.1'
    p.startip = '10.147.29.150'
    p.endip = '10.147.29.159'
    p.netmask = '255.255.255.0'

    v = iprange()
    v.gateway = '10.147.31.1'
    v.startip = '10.147.31.150'
    v.endip = '10.147.31.159'
    v.netmask = '255.255.255.0'
    v.vlan = '31'
    z.ipranges.append(v)

    c = cluster()
    c.clustername = 'C0'
    c.hypervisor = 'XenServer'
    c.clustertype = 'CloudManaged'

    h = host()
    h.username = 'root'
    h.password = 'password'
    h.url = 'http://10.147.29.58'
    c.hosts.append(h)

    ps = primaryStorage()
    ps.name = 'PS0'
    ps.url = 'nfs://10.147.28.6:/export/home/sandbox/primary'
    c.primaryStorages.append(ps)

    p.clusters.append(c)
    z.pods.append(p)

    secondary = secondaryStorage()
    secondary.url = 'nfs://10.147.28.6:/export/home/sandbox/secondary'
    z.secondaryStorages.append(secondary)

    '''Add zone'''
    zs.zones.append(z)

    '''Add mgt server'''
    mgt = managementServer()
    mgt.mgtSvrIp = '10.147.29.111'
    zs.mgtSvr.append(mgt)

    '''Add a database'''
    db = dbServer()
    db.dbSvr = '10.147.29.111'
    db.user = 'cloud'
    db.passwd = 'cloud'
    zs.dbSvr = db

    '''Add some configuration'''
    [zs.globalConfig.append(cfg) for cfg in getGlobalSettings()]

    '''Add the logger'''
    testLogger = logger()
    testLogger.logFolderPath = '/tmp/'
    zs.logger = testLogger

    return zs

def getGlobalSettings():
    globals = {
        "storage.cleanup.interval": "300",
        "account.cleanup.interval": "600",
    }
    for k, v in globals.iteritems():
        cfg = configuration()
        cfg.name = k
        cfg.value = v
        yield cfg

if __name__ == '__main__':
    config = describeResources()
    generate_setup_config(config, 'advanced_cloud.cfg')
|
The zone(), pod(), cluster(), host()
are plain objects that carry just attributes. For instance a zone consists of the attributes - name, dns entries, network type
etc. Within a zone I create pod()s
and append them to my zone object, further down creating cluster()s
in those pods and appending them to the pod and within the clusters finally my host()s
that get appended to my cluster object. Once I have defined all that is necessary to create my cloud I pass on the described configuration to the generate_setup_config()
method which gives me my resultant configuration in JSON format.
You won't always want to describe one-host configurations in Python files, so we've included some common examples in the Marvin tarball under the sandbox
directory. In the sandbox are configurations for a single-host advanced zone and a single-host basic zone that can be tailored to your environment using a simple properties file. The properties file, setup.properties
, contains editable name=value
pairs that you can change to the IPs, hostnames etc. that you have in your environment. The properties file, when passed to the python script, will generate the JSON configuration for you.
Sample setup.properties:
Code Block | ||
---|---|---|
| ||
[globals]
secstorage.allowed.internal.sites=10.147.28.0/24
[environment]
dns=10.147.28.6
mshost=localhost
mysql.host=localhost
mysql.cloud.user=cloud
mysql.cloud.passwd=cloud
[cloudstack]
private.gateway=10.147.29.1
private.pod.startip=10.147.29.150
private.pod.endip=10.147.29.159
|
And generate the JSON config as follows:
Code Block | ||
---|---|---|
| ||
root@cloud:~/incubator-cloudstack/tools/marvin/marvin/sandbox/advanced# python advanced_env.py -i setup.properties -o advanced.cfg
root@cloud:~/incubator-cloudstack/tools/marvin/marvin/sandbox/advanced# head -10 advanced.cfg
{
"zones": [
{
"name": "Sandbox-XenServer",
"guestcidraddress": "10.1.1.0/24",
... <snip/> ...
|
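These property files are plain INI sections, so you can preview what the generator will read using the standard library parser. A sketch, shown with Python 3's configparser (the sandbox scripts themselves target Python 2's ConfigParser), over a trimmed copy of the sample above:

```python
import configparser

# A trimmed copy of the sample setup.properties above.
properties = """
[globals]
secstorage.allowed.internal.sites=10.147.28.0/24

[environment]
dns=10.147.28.6
mshost=localhost

[cloudstack]
private.gateway=10.147.29.1
"""

parser = configparser.ConfigParser()
parser.read_string(properties)

# Each section is a set of name=value pairs that the generator
# substitutes into the zone description.
env = dict(parser.items("environment"))
print(env["dns"])                                   # 10.147.28.6
print(parser.get("cloudstack", "private.gateway"))  # 10.147.29.1
```

Editing the properties file and regenerating is usually much less error-prone than hand-editing the resulting JSON.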
Nose extends unittest to make testing easier. Nose comes with plugins that help integrate your regular unittests with external build systems, coverage, profiling etc. Marvin ships with its own nose plugin so you can use nose to drive CloudStack tests. The plugin can be installed by simply running setuptools in your marvin source directory. Running nosetests -p will show whether the plugin registered successfully.
Code Block | ||
---|---|---|
| ||
$ cd /usr/local/lib/python2.7/site-packages/marvin
$ easy_install .
Processing .
Running setup.py -q bdist_egg --dist-dir
Installed /usr/local/lib/python2.7/dist-packages/marvin_nose-0.1.0-py2.7.egg
Processing dependencies for marvin-nose==0.1.0
Finished processing dependencies for marvin-nose==0.1.0
$ nosetests -p
Plugin xunit
Plugin multiprocess
Plugin capture
Plugin logcapture
Plugin coverage
Plugin attributeselector
Plugin doctest
Plugin profile
Plugin collect-only
Plugin isolation
Plugin pdb
Plugin marvin
# Usage and running tests
$ nosetests --with-marvin --marvin-config=/path/to/basic_zone.cfg --load /path/to/tests
|
The smoke tests and component tests contain attributes that can be used to filter the tests you would like to run against your deployment. You would use nose's attrib plugin for this.
Some tests have been tagged to run only in a devcloud environment. To run these tests, use the following command after you have set up your management server against a running devcloud host, with devcloud.cfg as the deployment configuration. This assumes you have the marvin-nose plugin installed as described above.
~/workspace/cloudstack/incubator-cloudstack(branch:master*) » nosetests --with-marvin --marvin-config=tools/devcloud/devcloud.cfg --load --collect-only -a tags='devcloud' test/integration/smoke
Test Deploy Virtual Machine ... ok
Test Stop Virtual Machine ... ok
Test Start Virtual Machine ... ok
Test Reboot Virtual Machine ... ok
Test destroy Virtual Machine ... ok
Test recover Virtual Machine ... ok
Test destroy(expunge) Virtual Machine ... ok
----------------------------------------------------------------------
Ran 7 tests in 0.001s
OK
There is a useful plugin named nose-timer which reports the execution time of each individual test: https://github.com/mahmoudimus/nose-timer
Enable it by running nosetests with the --with-timer flag.
If you execute the marvin tests through Maven, add the flag to cloud-marvin/pom.xml like this:
Code Block |
---|
<executable>nosetests</executable>
<arguments>
<argument>--with-marvin</argument>
<argument>--marvin-config</argument>
<argument>${resolved.user.dir}/${resolved.marvin.config}</argument>
<argument>-a</argument>
<argument>tags=${tag}</argument>
<argument>${resolved.user.dir}/${test}</argument>
<argument>-v</argument>
<argument>--with-timer</argument>
</arguments>
|
Code Block |
---|
bash
$ nosetests --with-marvin --marvin-config=/path/to/basic_zone.cfg /path/to/tests
|
The smoke tests and component tests contain attributes that can be used to filter the tests you would like to run against your deployment. You would use nose's attrib plugin for this. The following tags are available for filtering:
Code Block |
---|
bash
$ nosetests --with-marvin --marvin-config=/path/to/config.cfg -w <test_directory> -a tags=advanced # run tests tagged to run on an advanced zone
# Use the options below to run all test cases under the smoke directory that are tagged for an advanced zone and are provisioning cases, i.e. they require hardware to run. Use the "hypervisor" option to pick which hypervisor to run them against, provided your zone and cluster have hosts of multiple hypervisor types.
$ nosetests-2.7 --with-marvin --marvin-config=/home/abc/softwares/cs_4_4_forward/setup/dev/advanced.cfg -w /home/abc/softwares/cs_4_4_forward/test/integration/smoke/ --with-xunit --xunit-file=/tmp/bvt_provision_cases.xml --zone=<zone_in_cfg> --hypervisor=<xenserver\kvm\vmware> -a tags=advanced,required_hardware=true
# Use the options below to run all test cases under the smoke directory that are tagged for an advanced zone and are selfservice cases, i.e. they do not require hardware and can be run against the simulator.
$ nosetests-2.7 --with-marvin --marvin-config=/home/abc/softwares/cs_4_4_forward/setup/dev/advanced.cfg -w /home/abc/softwares/cs_4_4_forward/test/integration/smoke/ --with-xunit --xunit-file=/tmp/bvt_selfservice_cases.xml --zone=<zone_in_cfg> --hypervisor=simulator -a tags=advanced,required_hardware=false
# Same as above, but the "--deploy" option additionally takes care of deploying the datacenter first.
$ nosetests-2.7 --with-marvin --marvin-config=/home/abc/softwares/cs_4_4_forward/setup/dev/advanced.cfg -w /home/abc/softwares/cs_4_4_forward/test/integration/smoke/ --with-xunit --xunit-file=/tmp/bvt_provision_cases.xml --zone=<zone_in_cfg> --hypervisor=simulator -a tags=advanced,required_hardware=false --deploy |
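On the test side, these tags are ordinary attributes attached to the test function by nose's attr decorator. Conceptually the decorator does little more than the stand-in sketched below (the real tests import attr from nose.plugins.attrib; this re-implementation is for illustration only):

```python
def attr(*args, **kwargs):
    """A minimal stand-in for nose.plugins.attrib.attr: it stores the
    given tags as attributes on the test function, which is what the
    -a/--attr selection later matches against."""
    def wrap(func):
        for name in args:
            setattr(func, name, True)
        for name, value in kwargs.items():
            setattr(func, name, value)
        return func
    return wrap

@attr(tags=["advanced", "smoke"], required_hardware="false")
def test_deploy_vm():
    pass

print(test_deploy_vm.tags)               # ['advanced', 'smoke']
print(test_deploy_vm.required_hardware)  # false
```

So `-a tags=advanced,required_hardware=false` simply selects the tests whose functions carry matching attribute values.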
There are a few do's and don'ts in choosing the automated scenario for an integration test. These are mostly for the system to blend well with the continuous test infrastructure and to keep environments pure and clean without affecting other tests.
iptables -L INPUT # to list the INPUT chain of iptables
service iptables stop; service iptables start # to stop and start iptables
ssh <target-backend-machine> "<your script>"
...
...
...
...
...
...
...
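A backend check like the iptables one above usually boils down to running a command on the host and asserting on its output, and the parsing half can be kept pure and easily testable. A hedged sketch (the sample listing and the rule format are assumptions for illustration, not captured output):

```python
def port_open_in_iptables(listing, port):
    """Return True if an ACCEPT rule in an `iptables -L INPUT -n`
    style listing mentions the given destination port."""
    needle = "dpt:%d" % port
    for line in listing.splitlines():
        if line.startswith("ACCEPT") and needle in line:
            return True
    return False

# Sample output roughly as `iptables -L INPUT -n` might print it
# (illustrative only).
sample = """Chain INPUT (policy ACCEPT)
target     prot opt source     destination
ACCEPT     tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:22
ACCEPT     tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:8080
"""

print(port_open_in_iptables(sample, 22))    # True
print(port_open_in_iptables(sample, 443))   # False
```

In a real test you would feed this helper the stdout gathered over SSH from the target backend machine and wrap the result in an assert.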
Examples of tests with more backend verification and complete integration of suites for network, snapshots, templates etc can be found in the test/integration/smoke
directory. Almost all of these test suites use common library wrappers written around the test framework to simplify writing tests. These libraries are part of marvin.integration
. You may start using these libraries at your convenience but there's no better way than to write the complete API call yourself to understand its behaviour.
The libraries take advantage of the fact that every resource - VirtualMachine, ISO, Template, PublicIp etc follows the pattern of
...
...
For any feedback or typo corrections please email the dev@cloudstack.apache.org list.