...
...
The developer profile compiles and packages Marvin. It does NOT install Marvin into your default PYTHONPATH.

```bash
$ mvn -P developer -pl :cloud-marvin
```
Alternatively, you can fetch Marvin from Jenkins, where packages are built every hour. Install the packaged tarball with pip:

```bash
$ pip install tools/marvin/dist/Marvin-0.1.0*.tar.gz
```
To upgrade an existing Marvin installation and sync it with the latest APIs on the management server, run:

```bash
$ pip install --upgrade tools/marvin/dist/Marvin-*.tar.gz
```
First Things First
You will need a CloudStack management server that is already deployed, configured, and ready to accept API calls. You can pick any management server in your lab that has a few VMs running on it, or you can use DevCloud or the simulator environment deployed for a checkin test. Create a sample JSON config file demo.cfg telling Marvin where your management server and database server are. The demo.cfg config looks as shown below:
```json
{
    "dbSvr": {
        "dbSvr": "marvin.testlab.com",
        "passwd": "cloud",
        "db": "cloud",
        "port": 3306,
        "user": "cloud"
    },
    "logger": {
        "LogFolderPath": "/tmp/"
    },
    "mgtSvr": [
        {
            "mgtSvrIp": "marvin.testlab.com",
            "port": 8096,
            "user": "root",
            "passwd": "password",
            "hypervisor": "XenServer"
        }
    ]
}
```
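A stray comma or mismatched brace in demo.cfg will make Marvin fail at startup, so it can be worth validating the file before running any tests. This is a minimal sketch (not part of Marvin itself) that checks the file parses and contains the three sections shown above:

```python
import json


def validate_marvin_config(path):
    """Parse a Marvin JSON config and check for the sections Marvin expects."""
    with open(path) as f:
        cfg = json.load(f)  # raises ValueError/JSONDecodeError on malformed JSON
    missing = [k for k in ("dbSvr", "logger", "mgtSvr") if k not in cfg]
    if missing:
        raise KeyError("config is missing sections: %s" % ", ".join(missing))
    return cfg
```

Run it against demo.cfg before kicking off a long test run; a clear KeyError beats a cryptic failure deep inside the test client.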
...
- Open up the integration API port (8096) in iptables on your management server as the root user:

```bash
$ sudo iptables -I INPUT -p tcp --dport 8096 -j ACCEPT
```
- Change the global setting integration.api.port in the CloudStack GUI to 8096 and restart the management server.
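Requests to the integration port need no API key or signature, which is why it should only be opened on trusted lab networks. As a rough sketch of what Marvin sends on that port (the host name is a placeholder and `listZones` is just one example command), an unauthenticated API URL looks like this:

```python
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode        # Python 2


def integration_api_url(host, command, port=8096, **params):
    """Build an unauthenticated API URL for the integration.api.port.

    Unlike the regular authenticated port, no apiKey/signature pair is
    required; the command and its parameters go straight into the query.
    """
    query = dict(params, command=command, response="json")
    return "http://%s:%d/client/api?%s" % (host, port, urlencode(sorted(query.items())))


url = integration_api_url("marvin.testlab.com", "listZones")
# the URL can then be fetched with urllib on a live management server
```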
...
Here is our test_deploy_vm.py module:

```python
#All tests inherit from cloudstackTestCase
from marvin.cloudstackTestCase import cloudstackTestCase

#Import Integration Libraries

#base - contains all resources as entities and defines create, delete, list operations on them
from marvin.integration.lib.base import Account, VirtualMachine, ServiceOffering

#utils - utility classes for common cleanup, external library wrappers etc
from marvin.integration.lib.utils import cleanup_resources

#common - commonly used methods for all tests are listed here
from marvin.integration.lib.common import get_zone, get_domain, get_template


class TestDeployVM(cloudstackTestCase):
    """Test deploy a VM into a user account
    """

    def setUp(self):
        self.apiclient = self.testClient.getApiClient()
        self.testdata = self.testClient.getParsedTestDataConfig()

        # Get Zone, Domain and Default Built-in template
        self.domain = get_domain(self.apiclient, self.testdata)
        self.zone = get_zone(self.apiclient, self.testClient.getZoneForTests())
        self.testdata["mode"] = self.zone.networktype
        self.template = get_template(self.apiclient, self.zone.id, self.testdata["ostype"])

        #create a user account
        self.account = Account.create(
            self.apiclient,
            self.testdata["account"],
            domainid=self.domain.id
        )
        #create a service offering
        self.service_offering = ServiceOffering.create(
            self.apiclient,
            self.testdata["service_offerings"]["small"]
        )
        #build cleanup list
        self.cleanup = [
            self.service_offering,
            self.account
        ]

    def test_deploy_vm(self):
        """Test Deploy Virtual Machine

        # Validate the following:
        # 1. Virtual Machine is accessible via SSH
        # 2. listVirtualMachines returns accurate information
        """
        self.virtual_machine = VirtualMachine.create(
            self.apiclient,
            self.testdata["virtual_machine"],
            accountid=self.account.name,
            zoneid=self.zone.id,
            domainid=self.account.domainid,
            serviceofferingid=self.service_offering.id,
            templateid=self.template.id
        )

        list_vms = VirtualMachine.list(self.apiclient, id=self.virtual_machine.id)

        self.debug(
            "Verify listVirtualMachines response for virtual machine: %s" \
            % self.virtual_machine.id
        )

        self.assertEqual(
            isinstance(list_vms, list),
            True,
            "List VM response was not a valid list"
        )
        self.assertNotEqual(
            len(list_vms),
            0,
            "List VM response was empty"
        )

        vm = list_vms[0]
        self.assertEqual(
            vm.id,
            self.virtual_machine.id,
            "Virtual Machine ids do not match"
        )
        self.assertEqual(
            vm.name,
            self.virtual_machine.name,
            "Virtual Machine names do not match"
        )
        self.assertEqual(
            vm.state,
            "Running",
            msg="VM is not in Running state"
        )

    def tearDown(self):
        try:
            cleanup_resources(self.apiclient, self.cleanup)
        except Exception as e:
            self.debug("Warning! Exception in tearDown: %s" % e)
```
...
Marvin supports test categories, which let you run the tests for a specific product area. For example, if you have made a change in the accounts area, you can trigger all accounts-related tests in both the smoke and component test directories: tag the tests with 'accounts' and point nose at the relevant directories/files.

Example:

```bash
nosetests --with-marvin --marvin-config=[config] --hypervisor=xenserver -a tags=accounts [file(s)]
```
More info on this wiki page: Categories
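Under the hood, the attrib plugin simply matches the `-a tags=accounts` expression against attributes that the `@attr` decorator stores on each test method. A rough pure-Python sketch of that selection logic (the decorator here is a minimal stand-in for `nose.plugins.attrib.attr`, and the test names are made up):

```python
def attr(**kwargs):
    """Minimal stand-in for nose.plugins.attrib.attr: stores tags on the test."""
    def decorate(func):
        for name, value in kwargs.items():
            setattr(func, name, value)
        return func
    return decorate


def select_by_tag(tests, tag):
    """Mimic 'nosetests -a tags=<tag>': keep tests whose tags include <tag>."""
    return [t for t in tests if tag in getattr(t, "tags", [])]


@attr(tags=["accounts", "smoke"])
def test_create_account():
    pass


@attr(tags=["volumes"])
def test_attach_volume():
    pass


selected = select_by_tag([test_create_account, test_attach_volume], "accounts")
# selected contains only test_create_account
```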
...
Note that the testclient is available from the superclass using getClsTestClient in this case.
The agent simulator and Marvin are integrated into the maven build phases to help you run basic tests before pushing a commit. These tests are integration tests that exercise the CloudStack system as a whole: the management server runs during the tests, with the simulator agent responding to hypervisor commands.

To run the checkin tests, your developer environment needs Marvin installed and working with the latest CloudStack APIs. The tests are lightweight and should ensure that your commit doesn't break critical functionality for others working with the master branch. The checkin tests utilize Marvin; a one-time installation of Marvin fetches all the related dependencies, and further updates can be done using the sync mechanism described later in this section. In order for these tests to run on the simulator, each test case needs the attribute required_hardware="false".
...
command 1: The command below deploys a datacenter.

```bash
$ python <cs_code_directory>/tools/marvin/marvin/deployDataCenter.py -i <cs_code_directory>/setup/dev/advanced.cfg
```

Example configs are available in setup/dev/advanced.cfg and setup/dev/basic.cfg.
...
command 2: The command below runs the tests.

```bash
$ export MARVIN_CONFIG=setup/dev/advanced.cfg
$ export TEST_SUITE=test/integration/smoke
$ export ZONE_NAME=Sandbox-simulator
$ nosetests-2.7 \
    --with-marvin \
    --marvin-config=${MARVIN_CONFIG} \
    -w ${TEST_SUITE} \
    --with-xunit \
    --xunit-file=/tmp/bvt_selfservice_cases.xml \
    --zone=${ZONE_NAME} \
    --hypervisor=simulator \
    -a tags=advanced,required_hardware=false
```
The --zone argument should match the name of the zone defined in the config file (currently Sandbox-simulator for basic.cfg and advanced.cfg).
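If you are unsure what to pass to --zone, the zone names can be read straight out of the deployment config, since it is plain JSON with a top-level "zones" list (that key name is an assumption based on the generated configs; verify against your file). A minimal sketch:

```python
import json


def zone_names(config_path):
    """Return the zone names defined in a Marvin deployment config file."""
    with open(config_path) as f:
        cfg = json.load(f)
    return [z["name"] for z in cfg.get("zones", [])]
```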
Check-in tests are written just like any other Marvin test. The only additional requirement is that your test be driven entirely by the API, which makes it possible to run it on the simulator. Once you have your test, tag it to run on the simulator so the Marvin test runner picks it up during the checkin-test run, then place your test module in the test/integration/smoke folder and it will become part of the checkin test run.
For example:
```python
@attr(tags=["advanced", "smoke"], required_hardware="false")
def test_deploy_virtualmachine(self):
    """Tests deployment of VirtualMachine
    """
```
The sample simulator configurations for advanced and basic zones are available in the setup/dev/ directory. The default configuration, setup/dev/advanced.cfg, deploys an advanced zone with two simulator hypervisors in a single cluster in a single pod, two primary NFS storage pools, and a secondary storage NFS store. If your test requires extra hypervisors, storage pools, additional IP allocations, VLANs, etc., you should adjust the configuration accordingly and ensure that you have run all the checkin tests in the new configuration. You can either edit the JSON file directly or generate a new configuration file. setup/dev/advanced.cfg was generated as follows:
...
```
tsp@cloud:~/cloudstack# nosetests --with-marvin --marvin-config=tools/devcloud/devcloud.cfg -a tags='devcloud' test/integration/smoke
Test Deploy Virtual Machine ... ok
Test Stop Virtual Machine ... ok
Test Start Virtual Machine ... ok
Test Reboot Virtual Machine ... ok
Test destroy Virtual Machine ... ok
Test recover Virtual Machine ... ok
Test destroy(expunge) Virtual Machine ... ok
----------------------------------------------------------------------
Ran 7 tests in 10.001s

OK
```
...
```python
import random
import marvin
from marvin.configGenerator import *


def describeResources():
    zs = cloudstackConfiguration()

    z = zone()
    z.dns1 = '10.147.28.6'
    z.internaldns1 = '10.147.28.6'
    z.name = 'Sandbox-XenServer'
    z.networktype = 'Advanced'
    z.guestcidraddress = '10.1.1.0/24'

    pn = physicalNetwork()
    pn.name = "test-network"
    pn.traffictypes = [traffictype("Guest"), traffictype("Management"), traffictype("Public")]
    z.physical_networks.append(pn)

    p = pod()
    p.name = 'POD0'
    p.gateway = '10.147.29.1'
    p.startip = '10.147.29.150'
    p.endip = '10.147.29.159'
    p.netmask = '255.255.255.0'

    v = iprange()
    v.gateway = '10.147.31.1'
    v.startip = '10.147.31.150'
    v.endip = '10.147.31.159'
    v.netmask = '255.255.255.0'
    v.vlan = '31'
    z.ipranges.append(v)

    c = cluster()
    c.clustername = 'C0'
    c.hypervisor = 'XenServer'
    c.clustertype = 'CloudManaged'

    h = host()
    h.username = 'root'
    h.password = 'password'
    h.url = 'http://10.147.29.58'
    c.hosts.append(h)

    ps = primaryStorage()
    ps.name = 'PS0'
    ps.url = 'nfs://10.147.28.6:/export/home/sandbox/primary'
    c.primaryStorages.append(ps)

    p.clusters.append(c)
    z.pods.append(p)

    secondary = secondaryStorage()
    secondary.url = 'nfs://10.147.28.6:/export/home/sandbox/secondary'
    z.secondaryStorages.append(secondary)

    '''Add zone'''
    zs.zones.append(z)

    '''Add mgt server'''
    mgt = managementServer()
    mgt.mgtSvrIp = '10.147.29.111'
    zs.mgtSvr.append(mgt)

    '''Add a database'''
    db = dbServer()
    db.dbSvr = '10.147.29.111'
    db.user = 'cloud'
    db.passwd = 'cloud'
    zs.dbSvr = db

    '''Add some configuration'''
    [zs.globalConfig.append(cfg) for cfg in getGlobalSettings()]

    '''Add loggers'''
    testLogger = logger()
    testLogger.logFolderPath = '/tmp/'
    zs.logger = testLogger

    return zs


def getGlobalSettings():
    globals = {
        "storage.cleanup.interval": "300",
        "account.cleanup.interval": "60",
    }
    for k, v in globals.iteritems():
        cfg = configuration()
        cfg.name = k
        cfg.value = v
        yield cfg


if __name__ == '__main__':
    config = describeResources()
    generate_setup_config(config, 'advanced_cloud.cfg')
```
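For small tweaks, editing the generated JSON directly is often quicker than regenerating it. This sketch adds a second host to the first cluster of the first pod; the key names ("zones", "pods", "clusters", "hosts") mirror the attribute names used by the generator above, and the file paths and host URL are placeholders:

```python
import json


def add_host(config_path, host_url, out_path):
    """Append an extra host entry to the first cluster of a Marvin config."""
    with open(config_path) as f:
        cfg = json.load(f)
    cluster = cfg["zones"][0]["pods"][0]["clusters"][0]
    cluster["hosts"].append({
        "username": "root",
        "password": "password",
        "url": host_url,
    })
    with open(out_path, "w") as f:
        json.dump(cfg, f, indent=2)
```

Remember to rerun all the checkin tests against the modified configuration, as noted above.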
...