Introduction
Marvin, our automation framework, is a Python module that leverages the abilities of Python and its multitude of libraries. Tests written using our framework use the unittest module under the hood. The unittest module is the Python version of the unit-testing framework originally developed by Kent Beck et al. and will be familiar to Java developers in the form of JUnit. The following document acts as a tutorial introduction for those interested in testing CloudStack with Python.
This document does not cover the Python language itself; we instead point the reader to tutorials that treat the topic more thoroughly. In what follows we assume basic Python scripting knowledge. The reader is encouraged to walk through the steps after setting up and configuring their environment.
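Since Marvin tests are ultimately unittest test cases, it helps to recall the basic shape of one. The sketch below is plain unittest with no Marvin or CloudStack dependencies; the class and method names are illustrative only.

```python
import unittest

class TestArithmetic(unittest.TestCase):
    """A bare unittest.TestCase: Marvin's cloudstackTestCase follows the
    same setUp / test_* / tearDown lifecycle."""

    def setUp(self):
        # Runs before every test method; Marvin tests create accounts here
        self.numbers = [1, 2, 3]

    def test_sum(self):
        self.assertEqual(sum(self.numbers), 6)

    def tearDown(self):
        # Runs after every test method; Marvin tests clean up resources here
        self.numbers = None
```

Marvin's cloudstackTestCase builds on exactly this shape, adding an API client and a debug logger on top.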
If you are a developer the cloudstack development environment is sufficient to get started
```
mvn -P developer -pl :cloud-apidoc,:cloud-marvin
# then install the resulting tarball with pip or easy_install:
pip install tools/marvin/dist/Marvin-0.1.0.tar.gz
easy_install tools/marvin/dist/Marvin-0.1.0.tar.gz
```
If you are a QA engineer, you won't need the entire codebase to build Marvin; installing the tarball is enough.

```
easy_install tools/marvin/dist/Marvin-0.1.0.tar.gz
```
```
root@cloud:~/cloudstack-oss/tools/marvin/dist# python
Python 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import marvin
>>> from marvin.cloudstackAPI import *
```
In our first steps we will build a simple API call and fire it against a CloudStack management server that is already deployed, configured and ready to accept API calls. You can pick any management server in your lab that has a few VMs running on it. Create a sample JSON config file telling us where your management server and database server are. Here's a sample:
```
prasanna@cloud:~cloudstack-oss# cat demo/demo.cfg
{
    "dbSvr": {
        "dbSvr": "automation.lab.vmops.com",
        "passwd": "cloud",
        "db": "cloud",
        "port": 3306,
        "user": "cloud"
    },
    "logger": [
        { "name": "TestClient", "file": "/var/log/testclient.log" },
        { "name": "TestCase", "file": "/var/log/testcase.log" }
    ],
    "mgtSvr": [
        { "mgtSvrIp": "automation.lab.vmops.com", "port": 8096 }
    ]
}
```
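Since demo.cfg is plain JSON, you can sanity-check it with Python's json module before handing it to Marvin. A quick sketch, using an inline copy of the config rather than the file path:

```python
import json

# An inline copy of the demo.cfg shown above
raw = """
{
    "dbSvr": {"dbSvr": "automation.lab.vmops.com", "passwd": "cloud",
              "db": "cloud", "port": 3306, "user": "cloud"},
    "logger": [{"name": "TestClient", "file": "/var/log/testclient.log"},
               {"name": "TestCase", "file": "/var/log/testcase.log"}],
    "mgtSvr": [{"mgtSvrIp": "automation.lab.vmops.com", "port": 8096}]
}
"""
cfg = json.loads(raw)  # raises ValueError if the JSON is malformed
assert cfg["dbSvr"]["port"] == 3306
print("config OK: management server at %s:%d" %
      (cfg["mgtSvr"][0]["mgtSvrIp"], cfg["mgtSvr"][0]["port"]))
```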
dbSvr is the location where the MySQL server is running and passwd is the password for the user cloud.

```python
In [1]: import marvin

In [2]: from marvin.cloudstackTestCase import *

In [3]: import marvin.deployDataCenter

In [4]: config = marvin.deployDataCenter.deployDataCenters('demo/demo.cfg')

In [5]: config.loadCfg()

In [6]: apiClient = config.testClient.getApiClient()

In [7]: listconfig = listConfigurations.listConfigurationsCmd()
```
Every API call follows this pattern: for example, to deploy a virtual machine you would use the deployVirtualMachineCmd method inside the deployVirtualMachine object. Simple, ain't it?

```python
In [8]: listconfig.name = 'expunge'

In [9]: listconfigresponse = apiClient.listConfigurations(listconfig)

In [10]: print listconfigresponse
[{category : u'Advanced', name : u'expunge.delay', value : u'60',
  description : u'Determines how long (in seconds) to wait before actually expunging destroyed vm. The default value = the default value of expunge.interval'},
 {category : u'Advanced', name : u'expunge.interval', value : u'60',
  description : u'The interval (in seconds) to wait before running the expunge thread.'},
 {category : u'Advanced', name : u'expunge.workers', value : u'3',
  description : u'Number of workers performing expunge '}]
```
The response is presented to us the way our UI receives it, as a JSON object. The object comprises a list of configurations, each configuration showing the detailed (key, value) pairs of each config setting.
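Because the response is just a list of objects with attribute-style fields, it is easy to post-process. A small sketch, using plain dicts to stand in for the response objects shown above:

```python
# Stand-in for the listConfigurations response shown above
response = [
    {"category": "Advanced", "name": "expunge.delay",    "value": "60"},
    {"category": "Advanced", "name": "expunge.interval", "value": "60"},
    {"category": "Advanced", "name": "expunge.workers",  "value": "3"},
]

# Turn the list into a simple name -> value lookup table
settings = dict((c["name"], c["value"]) for c in response)
assert settings["expunge.workers"] == "3"
```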
Listing stuff is all fine and dandy, you might say, but how do I launch VMs using Python? And do I have to use the shell each time? Clearly not: we can compress all the steps into a Python script. This example shows such a script, which will create a user account, deploy a VM into it, verify the deployment, and clean up afterwards.
Without further ado, here's the script:
```python
#!/usr/bin/env python
import marvin
from marvin import cloudstackTestCase
from marvin.cloudstackTestCase import *
import unittest
import hashlib
import random

class TestDeployVm(cloudstackTestCase):
    """
    This test deploys a virtual machine into a user account
    using the small service offering and builtin template
    """
    def setUp(self):
        """
        CloudStack internally saves its passwords in md5 form and that is how
        we specify it in the API. Python's hashlib library helps us to quickly
        hash strings as follows
        """
        mdf = hashlib.md5()
        mdf.update('password')
        mdf_pass = mdf.hexdigest()

        self.apiClient = self.testClient.getApiClient() #Get ourselves an API client

        self.acct = createAccount.createAccountCmd() #The createAccount command
        self.acct.accounttype = 0       #We need a regular user. admins have accounttype=1
        self.acct.firstname = 'bugs'
        self.acct.lastname = 'bunny'    #What's up doc?
        self.acct.password = mdf_pass   #The md5 hashed password string
        self.acct.username = 'bugs'
        self.acct.email = 'bugs@rabbithole.com'
        self.acct.account = 'bugs'
        self.acct.domainid = 1          #The default ROOT domain
        self.acctResponse = self.apiClient.createAccount(self.acct)

        # And upon successful creation we'll log a helpful message in our logs
        # using the default debug logger of the test framework
        self.debug("successfully created account: %s, user: %s, id: %s" %
                   (self.acctResponse.account.account,
                    self.acctResponse.account.username,
                    self.acctResponse.account.id))

    def test_DeployVm(self):
        """
        Let's start by defining the attributes of our VM that we will be
        deploying on CloudStack. We will be assuming a single zone is available
        and is configured and all templates are Ready.
        The hardcoded values are used only for brevity.
        """
        deployVmCmd = deployVirtualMachine.deployVirtualMachineCmd()
        deployVmCmd.zoneid = 1
        deployVmCmd.account = self.acct.account
        deployVmCmd.domainid = self.acct.domainid
        deployVmCmd.templateid = 5        #For default template - CentOS 5.6 (64-bit)
        deployVmCmd.serviceofferingid = 1

        deployVmResponse = self.apiClient.deployVirtualMachine(deployVmCmd)
        self.debug("VM %s was deployed in the job %s" %
                   (deployVmResponse.id, deployVmResponse.jobid))

        # At this point our VM is expected to be Running. Let's find out what
        # listVirtualMachines tells us about VMs in this account
        listVmCmd = listVirtualMachines.listVirtualMachinesCmd()
        listVmCmd.id = deployVmResponse.id
        listVmResponse = self.apiClient.listVirtualMachines(listVmCmd)

        self.assertNotEqual(len(listVmResponse), 0,
                            "Check if the list API returns a non-empty response")
        vm = listVmResponse[0]
        self.assertEqual(vm.id, deployVmResponse.id,
                         "Check if the VM returned is the same as the one we deployed")
        self.assertEqual(vm.state, "Running",
                         "Check if VM has reached a state of running")

    def tearDown(self):
        """
        And finally let us cleanup the resources we created by deleting the
        account. All good unittests are atomic and rerunnable this way
        """
        # Teardown deletes the account, and with it the VM, once the VM
        # reaches the "Running" state
        deleteAcct = deleteAccount.deleteAccountCmd()
        deleteAcct.id = self.acctResponse.account.id
        self.apiClient.deleteAccount(deleteAcct)
```
To run the test we've written, we place our class file into the demo directory. The test framework will "discover" the tests inside any directory it is pointed to and run them against the specified deployment. Our configuration file demo.cfg is also in the same directory.

The usage for deployAndRun is as follows:
| option | purpose |
|---|---|
| -c | points to the configuration file defining our deployment |
| -r | test results log where the summary report is written |
| -t | testcase log where all the logs written in our tests are output, for debugging |
| -d | directory containing all the test suites |
| -l | only load the configuration, do not deploy the environment |
| -f | run tests in the given file |
From our shell we launch the deployAndRun module as follows; at the end of the run a summary of the test results is shown.
```
root@cloud:~/cloudstack-oss# python -m marvin.deployAndRun -c demo/demo.cfg \
    -t /tmp/testcase.log -r /tmp/results.log -f demo/TestDeployVm.py -l
root@cloud:~/cloudstack-oss# cat /tmp/results.log
test_DeployVm (testDeployVM.TestDeployVm) ... ok

----------------------------------------------------------------------
Ran 1 test in 100.511s

OK
```
Congratulations, your test has passed!
We do not know for sure that the CentOS VM deployed earlier actually started up on the hypervisor host. The API tells us it did, so CloudStack assumes the VM is up and running, but did the hypervisor successfully spin up the VM? In this example we will log in to the CentOS VM that we deployed earlier using a simple ssh client that is exposed by the test framework. The example assumes that you have an Advanced Zone deployment of CloudStack running. The test case is further simplified if you have a Basic Zone deployment; it is left as an exercise to the reader to refactor the following test to work for a basic zone.
Let's get started. We will take the earlier test as-is and extend it to set up NAT on the account's source NAT IP and then SSH into the deployed VM.

NOTE: This test has been written for CloudStack 3.0. On 2.2.y we do not explicitly create a firewall rule.
```python
#!/usr/bin/env python
import marvin
from marvin import cloudstackTestCase
from marvin.cloudstackTestCase import *
from marvin.remoteSSHClient import remoteSSHClient
import unittest
import hashlib
import random
import string

class TestSshDeployVm(cloudstackTestCase):
    """
    This test deploys a virtual machine into a user account
    using the small service offering and builtin template
    """
    @classmethod
    def setUpClass(cls):
        """
        CloudStack internally saves its passwords in md5 form and that is how
        we specify it in the API. Python's hashlib library helps us to quickly
        hash strings as follows
        """
        mdf = hashlib.md5()
        mdf.update('password')
        mdf_pass = mdf.hexdigest()
        acctName = 'bugs-' + ''.join(random.choice(string.ascii_uppercase +
                       string.digits) for x in range(6)) #randomly generated account

        cls.apiClient = super(TestSshDeployVm, cls).getClsTestClient().getApiClient()

        cls.acct = createAccount.createAccountCmd() #The createAccount command
        cls.acct.accounttype = 0        #We need a regular user. admins have accounttype=1
        cls.acct.firstname = 'bugs'
        cls.acct.lastname = 'bunny'     #What's up doc?
        cls.acct.password = mdf_pass    #The md5 hashed password string
        cls.acct.username = acctName
        cls.acct.email = 'bugs@rabbithole.com'
        cls.acct.account = acctName
        cls.acct.domainid = 1           #The default ROOT domain
        cls.acctResponse = cls.apiClient.createAccount(cls.acct)

    def setUpNAT(self, virtualmachineid):
        listSourceNat = listPublicIpAddresses.listPublicIpAddressesCmd()
        listSourceNat.account = self.acct.account
        listSourceNat.domainid = self.acct.domainid
        listSourceNat.issourcenat = True

        listsnatresponse = self.apiClient.listPublicIpAddresses(listSourceNat)
        self.assertNotEqual(len(listsnatresponse), 0,
                            "Found a source NAT for the acct %s" % self.acct.account)

        snatid = listsnatresponse[0].id
        snatip = listsnatresponse[0].ipaddress

        try:
            createFwRule = createFirewallRule.createFirewallRuleCmd()
            createFwRule.cidrlist = "0.0.0.0/0"
            createFwRule.startport = 22
            createFwRule.endport = 22
            createFwRule.ipaddressid = snatid
            createFwRule.protocol = "tcp"
            createfwresponse = self.apiClient.createFirewallRule(createFwRule)

            createPfRule = createPortForwardingRule.createPortForwardingRuleCmd()
            createPfRule.privateport = 22
            createPfRule.publicport = 22
            createPfRule.virtualmachineid = virtualmachineid
            createPfRule.ipaddressid = snatid
            createPfRule.protocol = "tcp"
            createpfresponse = self.apiClient.createPortForwardingRule(createPfRule)
        except Exception as e:
            self.debug("Failed to create PF rule in account %s due to %s" %
                       (self.acct.account, e))
            raise
        # Return outside the try so a failure above is not swallowed
        return snatip

    def test_SshDeployVm(self):
        """
        Let's start by defining the attributes of our VM that we will be
        deploying on CloudStack. We will be assuming a single zone is available
        and is configured and all templates are Ready.
        The hardcoded values are used only for brevity.
        """
        deployVmCmd = deployVirtualMachine.deployVirtualMachineCmd()
        deployVmCmd.zoneid = 1
        deployVmCmd.account = self.acct.account
        deployVmCmd.domainid = self.acct.domainid
        deployVmCmd.templateid = 5        #CentOS 5.6 builtin
        deployVmCmd.serviceofferingid = 1

        deployVmResponse = self.apiClient.deployVirtualMachine(deployVmCmd)
        self.debug("VM %s was deployed in the job %s" %
                   (deployVmResponse.id, deployVmResponse.jobid))

        # At this point our VM is expected to be Running. Let's find out what
        # listVirtualMachines tells us about VMs in this account
        listVmCmd = listVirtualMachines.listVirtualMachinesCmd()
        listVmCmd.id = deployVmResponse.id
        listVmResponse = self.apiClient.listVirtualMachines(listVmCmd)

        self.assertNotEqual(len(listVmResponse), 0,
                            "Check if the list API returns a non-empty response")
        vm = listVmResponse[0]
        hostname = vm.name
        nattedip = self.setUpNAT(vm.id)

        self.assertEqual(vm.id, deployVmResponse.id,
                         "Check if the VM returned is the same as the one we deployed")
        self.assertEqual(vm.state, "Running",
                         "Check if VM has reached a state of running")

        # SSH login and compare hostname
        ssh_client = remoteSSHClient(nattedip, 22, "root", "password")
        stdout = ssh_client.execute("hostname")
        self.assertEqual(hostname, stdout[0],
                         "cloudstack VM name and hostname match")

    @classmethod
    def tearDownClass(cls):
        """
        And finally let us cleanup the resources we created by deleting the
        account. All good unittests are atomic and rerunnable this way
        """
        deleteAcct = deleteAccount.deleteAccountCmd()
        deleteAcct.id = cls.acctResponse.account.id
        cls.apiClient.deleteAccount(deleteAcct)
```
Observe that unlike the previous test class, TestDeployVm, we do not have the methods setUp and tearDown. Instead, we have the methods setUpClass and tearDownClass. We do not want the initialization (and cleanup) code to run around every test in the suite, which is what setUp and tearDown would do. Instead we have the initialization code (creation of the account etc.) done once for the entire lifetime of the class. This is accomplished using the setUpClass and tearDownClass classmethods. Since the API client is only visible to instances of cloudstackTestCase, we expose it at the class level using the getClsTestClient() method. So to get the API client we call the parent class (super(TestSshDeployVm, cls)), i.e. cloudstackTestCase, and ask for a class-level API client.
An astute reader will by now have noticed that the following pattern has been used in the tutorial's test examples:
This pattern is useful to contain the entire test in one atomic piece. It helps prevent tests from becoming entangled in each other, i.e. failures are localized to one account and do not affect other tests. Advanced examples in our basic verification suite are written using this pattern. Test engineers are encouraged to follow it unless there is good reason not to.
The test framework by default runs all its tests in 'admin' mode, which means you have admin access and visibility to resources in CloudStack. In order to run the tests as a regular user or domain-admin, you can apply the @UserName decorator, which takes the arguments (account, domain, accounttype), at the head of your test class. The decorator will create the account and domain if they do not exist. Do NOT apply the decorator to a test method.
An example can be found at: cloudstack-oss/tools/testClient/testcase/test_userDecorator.py
Debugging using the PyDev plugin/pdb and the testClient logs
The logs from the test client, detailing the requests it sends and the responses fetched back from the management server, can be found under /var/log/testclient.log. By default all logging is at the INFO level. In addition, you may provide your own DEBUG log messages in the tests you write. Each cloudstackTestCase inherits the debug logger, which can be used to output useful messages that help troubleshoot the test case while it is running. These logs are found in the location specified by the -t option when launching the tests.
e.g.:

```python
list_zones_response = self.apiclient.listZones(listzonesample)
self.debug("Number of zones: %s" % len(list_zones_response)) #This shows us how many zones were found in the deployment
```
The result log specified by the -r option shows a detailed summary of the entire run of all the suites: how many tests passed, how many failed, and how many had errors.
While debugging with the PyDev plugin you can also place breakpoints in Eclipse for a more interactive debugging session.
Marvin can be used to configure a deployed CloudStack installation with zones, pods and hosts automatically, in either the Advanced or Basic network type. This is done by describing the required deployment in a hierarchical JSON configuration file. But writing and maintaining such a configuration by hand is cumbersome and error-prone. Marvin's configGenerator is designed for this purpose: a simple hand-written Python description passed to the configGenerator will generate the compact JSON configuration of our deployment.
Examples of how to write the configuration for various zone models are within the configGenerator.py module in your Marvin source directory. Look for the methods describe_setup_in_advanced_mode()/describe_setup_in_basic_mode().
Below is such an example describing a simple one host deployment:
{ "zones": [ { "name": "Sandbox-XenServer", "guestcidraddress": "10.1.1.0/24", "physical_networks": [ { "broadcastdomainrange": "Zone", "name": "test-network", "traffictypes": [ { "typ": "Guest" }, { "typ": "Management" }, { "typ": "Public" } ], "providers": [ { "broadcastdomainrange": "ZONE", "name": "VirtualRouter" } ] } ], "dns1": "10.147.28.6", "ipranges": [ { "startip": "10.147.31.150", "endip": "10.147.31.159", "netmask": "255.255.255.0", "vlan": "31", "gateway": "10.147.31.1" } ], "networktype": "Advanced", "pods": [ { "endip": "10.147.29.159", "name": "POD0", "startip": "10.147.29.150", "netmask": "255.255.255.0", "clusters": [ { "clustername": "C0", "hypervisor": "XenServer", "hosts": [ { "username": "root", "url": "http://10.147.29.58", "password": "password" } ], "clustertype": "CloudManaged", "primaryStorages": [ { "url": "nfs://10.147.28.6:/export/home/sandbox/primary", "name": "PS0" } ] } ], "gateway": "10.147.29.1" } ], "internaldns1": "10.147.28.6", "secondaryStorages": [ { "url": "nfs://10.147.28.6:/export/home/sandbox/secondary" } ] } ], "dbSvr": { "dbSvr": "10.147.29.111", "passwd": "cloud", "db": "cloud", "port": 3306, "user": "cloud" }, "logger": [ { "name": "TestClient", "file": "/var/log/testclient.log" }, { "name": "TestCase", "file": "/var/log/testcase.log" } ], "globalConfig": [ { "name": "storage.cleanup.interval", "value": "300" }, { "name": "account.cleanup.interval", "value": "600" } ], "mgtSvr": [ { "mgtSvrIp": "10.147.29.111", "port": 8096 } ] }
What you saw earlier was a condensed form of this complete configuration file. If you're familiar with the CloudStack installation you will recognize that most of these are settings you provide in the install wizard as part of configuration. What is different from the simplified configuration file are the sections "zones" and "globalConfig". The globalConfig section is simply a listing of (key, value) pairs for the "Global Settings" section of CloudStack.
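Since globalConfig is just (key, value) pairs, generating that section from a plain dict is a one-liner. A sketch (the helper name is made up for illustration):

```python
def to_global_config(settings):
    """Render a dict of global settings as the globalConfig JSON section."""
    return [{"name": k, "value": v} for k, v in sorted(settings.items())]

section = to_global_config({
    "storage.cleanup.interval": "300",
    "account.cleanup.interval": "600",
})
print(section)
# [{'name': 'account.cleanup.interval', 'value': '600'},
#  {'name': 'storage.cleanup.interval', 'value': '300'}]
```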
The "zones" section defines the hierarchy of our cloud. At the top-level are the availability zones. Each zone has its set of pods, secondary storages, providers and network related configuration. Every pod has a bunch of clusters and every cluster a set of hosts and their associated primary storage pools. These configurations are easy to maintain and deploy by just passing them through marvin.
```
root@cloud:~/cloudstack-oss# python -m marvin.deployAndRun -c advanced_zone.cfg \
    -t /tmp/t.log -r /tmp/r.log -d tests/
```
Notice that we didn't pass the -l option to deployAndRun. The reason is that we don't want to just load the configuration but also deploy it. This is the default behaviour of Marvin, wherein the cloud configuration is deployed and the tests in the directory tests/ are run against it.
The above one-host configuration was described as follows:
```python
#!/usr/bin/env python
import random
import marvin
from marvin.configGenerator import *

def describeResources():
    zs = cloudstackConfiguration()

    z = zone()
    z.dns1 = '10.147.28.6'
    z.internaldns1 = '10.147.28.6'
    z.name = 'Sandbox-XenServer'
    z.networktype = 'Advanced'
    z.guestcidraddress = '10.1.1.0/24'

    pn = physical_network()
    pn.name = "test-network"
    pn.traffictypes = [traffictype("Guest"), traffictype("Management"),
                       traffictype("Public")]
    z.physical_networks.append(pn)

    p = pod()
    p.name = 'POD0'
    p.gateway = '10.147.29.1'
    p.startip = '10.147.29.150'
    p.endip = '10.147.29.159'
    p.netmask = '255.255.255.0'

    v = iprange()
    v.gateway = '10.147.31.1'
    v.startip = '10.147.31.150'
    v.endip = '10.147.31.159'
    v.netmask = '255.255.255.0'
    v.vlan = '31'
    z.ipranges.append(v)

    c = cluster()
    c.clustername = 'C0'
    c.hypervisor = 'XenServer'
    c.clustertype = 'CloudManaged'

    h = host()
    h.username = 'root'
    h.password = 'password'
    h.url = 'http://10.147.29.58'
    c.hosts.append(h)

    ps = primaryStorage()
    ps.name = 'PS0'
    ps.url = 'nfs://10.147.28.6:/export/home/sandbox/primary'
    c.primaryStorages.append(ps)

    p.clusters.append(c)
    z.pods.append(p)

    secondary = secondaryStorage()
    secondary.url = 'nfs://10.147.28.6:/export/home/sandbox/secondary'
    z.secondaryStorages.append(secondary)

    '''Add zone'''
    zs.zones.append(z)

    '''Add mgt server'''
    mgt = managementServer()
    mgt.mgtSvrIp = '10.147.29.111'
    zs.mgtSvr.append(mgt)

    '''Add a database'''
    db = dbServer()
    db.dbSvr = '10.147.29.111'
    db.user = 'cloud'
    db.passwd = 'cloud'
    zs.dbSvr = db

    '''Add some configuration'''
    [zs.globalConfig.append(cfg) for cfg in getGlobalSettings()]

    '''Add loggers'''
    testClientLogger = logger()
    testClientLogger.name = 'TestClient'
    testClientLogger.file = '/var/log/testclient.log'

    testCaseLogger = logger()
    testCaseLogger.name = 'TestCase'
    testCaseLogger.file = '/var/log/testcase.log'

    zs.logger.append(testClientLogger)
    zs.logger.append(testCaseLogger)
    return zs

def getGlobalSettings():
    globals = {
        "storage.cleanup.interval": "300",
        "account.cleanup.interval": "60",
    }
    for k, v in globals.iteritems():
        cfg = configuration()
        cfg.name = k
        cfg.value = v
        yield cfg

if __name__ == '__main__':
    config = describeResources()
    generate_setup_config(config, 'advanced_cloud.cfg')
```
zone(), pod(), cluster() and host() are plain objects that carry just attributes. For instance a zone consists of the attributes name, DNS entries, network type etc. Within a zone I create pod()s and append them to my zone object, further down creating cluster()s in those pods and appending them to the pod, and within the clusters finally my host()s that get appended to my cluster object. Once I have defined all that is necessary to create my cloud, I pass the described configuration to the generate_setup_config() method, which gives me my resultant configuration in JSON format.
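The "plain objects with attributes" approach means serialization can be done generically: json.dumps can walk any such object via its __dict__. A simplified sketch (these tiny classes are illustrative stand-ins, not Marvin's real ones):

```python
import json

class Host(object):
    def __init__(self, url):
        self.url = url

class Cluster(object):
    def __init__(self, name):
        self.clustername = name
        self.hosts = []

class Zone(object):
    def __init__(self, name):
        self.name = name
        self.clusters = []

# Build a tiny hierarchy the same way describeResources() does
z = Zone('Sandbox-XenServer')
c = Cluster('C0')
c.hosts.append(Host('http://10.147.29.58'))
z.clusters.append(c)

# Any plain-attribute object can be serialized through its __dict__
as_json = json.dumps(z, default=lambda o: o.__dict__, sort_keys=True)
print(as_json)
# {"clusters": [{"clustername": "C0", "hosts": [{"url": "http://10.147.29.58"}]}], "name": "Sandbox-XenServer"}
```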
You don't always want to describe host configurations in Python files, so we've included some common examples in the Marvin tarball under the sandbox directory. In the sandbox are configurations of a single-host advanced zone and a single-host basic zone that can be tailored to your environment using a simple properties file. The properties file, setup.properties, contains editable name=value pairs that you can change to the IPs, hostnames etc. that you have in your environment. The properties file, when passed to the Python script, will generate the JSON configuration for you.
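The setup.properties format is standard INI, so Python's configparser can read it. A sketch parsing an inline copy of such a file (Python 3's configparser is used here; on Python 2 the module is named ConfigParser and uses readfp() — the sandbox scripts do the equivalent):

```python
import io
from configparser import ConfigParser

# An inline fragment in the same shape as setup.properties
props = u"""
[globals]
secstorage.allowed.internal.sites=10.147.28.0/24

[environment]
dns=10.147.28.6
mshost=localhost
"""

parser = ConfigParser()
parser.read_file(io.StringIO(props))
print(parser.get('environment', 'dns'))  # → 10.147.28.6
```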
Sample setup.properties:
```
[globals]
secstorage.allowed.internal.sites=10.147.28.0/24

[environment]
dns=10.147.28.6
mshost=localhost
mysql.host=localhost
mysql.cloud.user=cloud
mysql.cloud.passwd=cloud

[cloudstack]
private.gateway=10.147.29.1
private.pod.startip=10.147.29.150
private.pod.endip=10.147.29.159
```
And generate the JSON config as follows:
```
root@cloud:~/incubator-cloudstack/tools/marvin/marvin/sandbox/advanced# python advanced_env.py -i setup.properties -o advanced.cfg
root@cloud:~/incubator-cloudstack/tools/marvin/marvin/sandbox/advanced# head -10 advanced.cfg
{
    "zones": [
        {
            "name": "Sandbox-XenServer",
            "guestcidraddress": "10.1.1.0/24",
... <snip/> ...
```
Nose extends unittest to make testing easier. Nose comes with plugins that help integrate your regular unittests with external build systems, coverage, profiling etc. Marvin comes with its own nose plugin so you can use nose to drive CloudStack tests. The plugin can be installed by simply running setuptools in your Marvin source directory. Running nosetests -p will show whether the plugin registered successfully.
```
$ cd /usr/local/lib/python2.7/site-packages/marvin
$ easy_install .
Processing .
Running setup.py -q bdist_egg --dist-dir
Installed /usr/local/lib/python2.7/dist-packages/marvin_nose-0.1.0-py2.7.egg
Processing dependencies for marvin-nose==0.1.0
Finished processing dependencies for marvin-nose==0.1.0

$ nosetests -p
Plugin xunit
Plugin multiprocess
Plugin capture
Plugin logcapture
Plugin coverage
Plugin attributeselector
Plugin doctest
Plugin profile
Plugin collect-only
Plugin isolation
Plugin pdb
Plugin marvin

# Usage and running tests
$ nosetests --with-marvin --marvin-config=/path/to/basic_zone.cfg --load /path/to/tests
```
The smoke tests and component tests contain attributes that can be used to filter the tests that you would like to run against your deployment. You would use nose's attrib plugin for this. Currently zone models are
Some tests have been tagged to run only in a devcloud environment. To run these, use the following command after you've set up your management server and the host-only devcloud VM, with devcloud.cfg as the deployment configuration. This assumes you have the marvin-nose plugin installed as described above.
```
~/workspace/cloudstack/incubator-cloudstack(branch:master*) » nosetests --with-marvin --marvin-config=tools/devcloud/devcloud.cfg --load -a tags='devcloud' test/integration/smoke
Test Deploy Virtual Machine ... ok
Test Stop Virtual Machine ... ok
Test Start Virtual Machine ... ok
Test Reboot Virtual Machine ... ok
Test destroy Virtual Machine ... ok
Test recover Virtual Machine ... ok
Test destroy(expunge) Virtual Machine ... ok

----------------------------------------------------------------------
Ran 7 tests in 0.001s

OK
```
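The -a tags='devcloud' selection relies on nose's attrib plugin, which reads attributes stamped onto the test callables. The idea can be illustrated without nose at all; the attr decorator below is a simplified stand-in for nose.plugins.attrib.attr, not the real implementation:

```python
def attr(**kwargs):
    """Hypothetical stand-in for nose.plugins.attrib.attr: it just
    stamps the given attributes onto the test function."""
    def wrap(func):
        for k, v in kwargs.items():
            setattr(func, k, v)
        return func
    return wrap

@attr(tags='devcloud')
def test_deploy_vm():
    pass

@attr(tags='advanced')
def test_network_acl():
    pass

# Select only the tests tagged for a devcloud run,
# much like: nosetests -a tags='devcloud'
tests = [test_deploy_vm, test_network_acl]
selected = [t for t in tests if getattr(t, 'tags', None) == 'devcloud']
print([t.__name__ for t in selected])  # → ['test_deploy_vm']
```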
There are a few do's and don'ts in choosing the automated scenario for an integration test. These are mostly for the system to blend well with the continuous test infrastructure and to keep environments pure and clean without affecting other tests.
User > DomainAdmin > Admin
At the end of the test we delete this account so as to keep tests atomic and contained within a tenant's users space.
Examples of tests with more backend verification, and complete integration suites for network, snapshots, templates etc., can be found in the test/integration/smoke directory. Almost all of these test suites use common library wrappers written around the test framework to simplify writing tests. These libraries are part of marvin.integration. You may start using these libraries at your convenience, but there's no better way to understand an API call's behaviour than to write the complete call yourself.
The libraries take advantage of the fact that every resource - VirtualMachine, ISO, Template, PublicIp etc follows the pattern of
For any feedback or typo corrections, please email the -dev lists.