Introduction: As part of the Quality initiative we have implemented a fully automated continuous integration (CI) system for CloudStack. Using this system we can schedule automated builds capable of deploying CloudStack, executing tests, and publishing reports. The system uses various open source tools such as Cobbler, Puppet, and Jenkins. Here we present the system design and the workflows.



Design:

 

 



(Diagram: components of the CI system and their interactions)

The diagram above shows the various components of the system and how they interact with each other.

 

Component Description.

  • Jenkins: This is an open source application that schedules and monitors jobs.

 

  • Driver VM: This is the core component of the CI system. This VM handles creating the management server, refreshing the hosts, deploying the data center, keeping track of resources, and executing tests.

 

  • Infra XenServer cluster: A set of XenServer hosts that are part of the CI system, used for hosting management servers and running other required services such as NFS servers, secondary DNS servers, etc.

 

  • Server farm: The set of machines used as hypervisors on which the CloudStack data center is created.

Design description.

  • The automated system is broken down into three jobs: the testbed creation job, the test execution job, and the reporting job. For every automation run these jobs execute in sequence, in the order listed above.

  • The testbed creation job is exposed to the CI admin via Jenkins. We do this by creating a parameterized Jenkins job which internally calls the testbed creation job to initiate the test run. This job can be scheduled to run periodically or be triggered based on some criteria (see the trigger sketch after this list).

  • The Driver VM is added as a Jenkins slave and executes the testbed creation jobs.

  • The test execution job is a Jenkins matrix job that runs nosetests commands, executing tests based on the arguments passed by the testbed creation job.

  • The report generator job is another Jenkins job; it uses the JUnit plugin and a custom Jelly script to generate the results of a particular test run.

  • All test results are archived on an NFS server for later analysis.

  • We treat the set of resources (hosts, IPs, VLANs) specified by a configuration file as one unit of resource, instead of treating each IP or VLAN as a separate resource. This reduces the complexity of resource allocation and management. A configuration that is in use cannot be claimed by another run at the same time (a sketch of such a configuration appears after this list).

  • We use the Jenkins concurrent build throttling plugin to enable queuing in cases where jobs are scheduled but no resource unit is free.

  • We use Cobbler to PXE boot machines and Puppet to install packages and configure the VMs (see the host refresh sketch after this list).

  • The CloudStack management server is installed from source for every test run instead of from previously built packages. Each test run can therefore be tracked by the commit hash from which the management server was built.

  • Multiple test runs can execute concurrently; to isolate the test execution environments we use Python virtual environments, which give each run a separate execution environment (see the run setup sketch after this list).

  • Each version of CloudStack is associated with its own version of the test cases and the test framework. We fetch the test cases and the Marvin test framework packages from the management server once it is built and install them into the corresponding virtual environment.

  • The links to the system VM templates corresponding to each version are maintained in a database. For every run we fetch the system VM templates from these URLs and seed them into the corresponding secondary storage. The path to the secondary storage is read from the configuration file.

  • To reduce the time taken to get the built-in templates, we pre-seed them just like the system VM templates.

  • For hypervisors like KVM we need to install CloudStack agents. We generate the agent packages as part of the management server build and push them to the required hosts; a Puppet recipe then configures the repositories and installs the agent from the packages copied earlier.

  • To reduce setup and execution time, the tests are categorized into simulator tests and hardware-specific tests; for simulator test runs only the simulator tests need to be executed. We use the information from the configuration files and the appropriate tags to select the correct set of test cases to execute.

  • For CloudStack data center creation we rely on the Marvin test framework's deployDataCenter script (used in the run setup sketch after this list).

  • The test cases require some data to run, such as the storage to use or the portable IPs available. This data is maintained in the test_data.py file and is environment specific: we edit the file based on the available IPs and the setup of the local environment in which the tests are run (see the configuration sketch after this list).
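The trigger sketch referenced above: a minimal illustration of starting the parameterized testbed creation job through the Jenkins remote build API. The Jenkins URL, job name, credentials, and parameter names are all hypothetical; the actual names depend on how the job is configured.

    import requests

    JENKINS = "http://jenkins.example.com:8080"  # hypothetical Jenkins URL

    # Start the parameterized testbed creation job. The parameter names
    # (GIT_COMMIT, CONFIG_NAME, HYPERVISOR) are illustrative only.
    resp = requests.post(
        "%s/job/testbed-creation/buildWithParameters" % JENKINS,
        auth=("ci-admin", "api-token"),
        data={
            "GIT_COMMIT": "abc1234",          # commit to build the management server from
            "CONFIG_NAME": "xen-basic-zone",  # resource unit to claim for this run
            "HYPERVISOR": "simulator",
        },
    )
    resp.raise_for_status()

The same POST can be issued from a scheduler or from another Jenkins job to start runs periodically or on some other criteria.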

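The run setup sketch referenced above: roughly how the per-run environment (virtual environment, Marvin installation, data center deployment, tag-based test selection) could be wired together. All paths, the config file name, the Marvin tarball location, and the tag value are assumptions for illustration; only the general flow mirrors the design.

    import subprocess

    RUN_ID = "run-42"                            # hypothetical run identifier
    VENV = "/opt/ci/envs/%s" % RUN_ID            # one virtualenv per run
    CFG = "/opt/ci/configs/xen-basic-zone.cfg"   # resource unit / Marvin config

    # A separate virtualenv isolates this run from concurrent runs.
    subprocess.check_call(["virtualenv", VENV])

    # Install the Marvin package fetched from the freshly built
    # management server (the tarball path is hypothetical).
    subprocess.check_call([VENV + "/bin/pip", "install",
                           "/mnt/builds/%s/Marvin-0.1.0.tar.gz" % RUN_ID])

    # Deploy the data center with Marvin's deployDataCenter script
    # (script path abbreviated; it ships with Marvin).
    subprocess.check_call([VENV + "/bin/python",
                           "deployDataCenter.py", "-i", CFG])

    # Execute only tests carrying the tag for this kind of run; in the
    # real system the tag value comes from the configuration file.
    subprocess.check_call([VENV + "/bin/nosetests",
                           "--with-marvin", "--marvin-config=%s" % CFG,
                           "-a", "tags=simulator",
                           "test/integration/smoke"])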
 
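The host refresh sketch referenced above: one way the Driver VM might refresh a hypervisor using Cobbler, assuming PXE re-imaging plus a power cycle, with Puppet configuring the host on first boot. Host names, IPMI addressing, and credentials are invented for illustration.

    import subprocess

    def refresh_host(host):
        # Re-image a hypervisor: re-enable netboot in Cobbler, sync the
        # PXE configuration, then power-cycle the machine over IPMI.
        subprocess.check_call(["cobbler", "system", "edit",
                               "--name=%s" % host, "--netboot-enabled=true"])
        subprocess.check_call(["cobbler", "sync"])
        subprocess.check_call(["ipmitool", "-H", "%s-ipmi" % host,
                               "-U", "admin", "-P", "secret",
                               "chassis", "power", "reset"])

    refresh_host("kvm-host-01")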

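The configuration sketch referenced above: the two kinds of environment-specific data mentioned in the list, namely a resource unit configuration that is claimed as a whole, and the style of entries edited in test_data.py. All names, addresses, and ranges here are invented for illustration.

    # Hypothetical resource unit: claimed and released as a single unit,
    # so individual IPs and VLANs never need separate bookkeeping.
    RESOURCE_UNIT = {
        "name": "xen-basic-zone",
        "hosts": ["10.1.1.11", "10.1.1.12"],   # hypervisors from the server farm
        "public_ip_range": "10.1.2.100-10.1.2.150",
        "vlan_range": "500-550",
        "secondary_storage": "nfs://10.1.1.2/export/secondary",
    }

    # test_data.py style entries, edited per environment before a run.
    test_data = {
        "nfs_storage": "nfs://10.1.1.2/export/primary",
        "portable_ip_range": {
            "startip": "10.1.3.10",
            "endip": "10.1.3.20",
            "vlan": "560",
        },
    }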
Workflows.

 

The CI system is designed to support two workflows: the self service workflow and the continuous integration workflow.

 

Self service workflow.

 

The intent of this workflow is to give developers a way to run sanity checks on a feature branch using the simulator before merging the code to the mainline repo. The diagram below illustrates the steps involved in this workflow. The workflow can also be extended to run tests on actual hardware, but exposing such a service requires a large amount of hardware to be available at any given time.

 

(Diagram: self service workflow)




Continuous Integration Workflow:

The CI workflow enables running tests at regular intervals and generating and publishing the reports.

We create jobs in Jenkins that are triggered at the required intervals; these jobs in turn trigger the test runs and generate results.
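As a sketch, such a periodic trigger can be a small script run from a scheduler (or a Jenkins timer trigger) that kicks off the testbed creation job; the names are hypothetical, as in the earlier trigger sketch.

    # Illustrative crontab entry driving a nightly run at 02:00:
    #   0 2 * * *  /usr/bin/python /opt/ci/nightly_trigger.py

    import requests

    # Kick off the nightly CI run; job and parameter names are hypothetical.
    requests.post(
        "http://jenkins.example.com:8080/job/testbed-creation/buildWithParameters",
        auth=("ci-admin", "api-token"),
        data={"GIT_COMMIT": "master",
              "CONFIG_NAME": "xen-basic-zone",
              "HYPERVISOR": "simulator"},
    ).raise_for_status()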


Automatic bug logger:

The automatic bug logger can be used to log bugs whenever we see a failure in the CI runs. This way every CI failure will be accounted for, which will help in catching bugs and delegating issues.

 

Test result analyser.
