The project uses a Jenkins server for continuous integration testing. An extensive set of tests is run on a daily basis. Each pull request is also subject to a subset of the test suites, called "check" tests.

The daily tests run all the automated suites on each of the supported Hadoop distros.

The check tests run three types of jobs:

  1. Static tests: These do a quick scan of the code for common problems, including binary files, merge conflict markers, and missing license headers (checked with Apache RAT). A minimal sketch of such a scan follows this list.
  2. Build jobs: These build release and debug versions of the code and post tar files back to Jenkins for the test jobs to use. Some post-build checks are also done to ensure that .gitignore files are kept up to date and that proper version information is present in the built files (via the sqvers command).
  3. Test jobs: These run in parallel with the build jobs in order to prep the machine, checking the Hadoop configuration and cleaning up any HBase data left over from previous tests. They then wait for the build jobs to complete so they can pick up the tar files, use the installer to install Trafodion, and run the particular test suite.
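
The following is the sketch referenced in item 1 above: a minimal illustration, in Python, of the kind of scan a static-check job might run over a checkout. It is not the project's actual check script; the binary-file heuristic, the conflict-marker list, and the exit-code convention are assumptions made for the example.

    #!/usr/bin/env python3
    """Illustrative sketch of a static-check scan: flag binary files and
    leftover merge conflict markers. Heuristics here are simplified."""
    import os
    import sys

    # Only the unambiguous git conflict markers are checked here; a real
    # scan would likely handle more cases and exclude known file types.
    CONFLICT_MARKERS = ("<<<<<<< ", ">>>>>>> ")


    def is_binary(path, sample_size=8192):
        """Heuristic: treat a file whose first bytes contain NUL as binary."""
        with open(path, "rb") as f:
            return b"\0" in f.read(sample_size)


    def has_conflict_markers(path):
        """Return True if any line starts with a merge conflict marker."""
        with open(path, "r", errors="replace") as f:
            return any(line.startswith(CONFLICT_MARKERS) for line in f)


    def main(root="."):
        problems = []
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames[:] = [d for d in dirnames if d != ".git"]  # skip git metadata
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if is_binary(path):
                        problems.append("binary file: " + path)
                    elif has_conflict_markers(path):
                        problems.append("conflict markers: " + path)
                except OSError:
                    pass  # unreadable file, broken symlink, etc.
        for p in problems:
            print(p)
        return 1 if problems else 0  # non-zero exit marks the job as failed


    if __name__ == "__main__":
        sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "."))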

Static checks run directly on the Jenkins (master) server.

Builds run on a dedicated build machine, which is larger than the other machines so that it can run multiple build jobs concurrently.

Test jobs run on machines set up as single-node Hadoop clusters. Each one only runs a single job at a time.

Each job archives key log files to a log server so they can be inspected as needed. Log files are compressed after a week, and deleted after two weeks.
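
To make the retention policy concrete, here is a minimal sketch of a cleanup pass that compresses archived logs older than a week and deletes anything older than two weeks. The directory path, the use of gzip, and the idea of a periodic cleanup script are assumptions made for the example; the actual mechanism on the log server may differ.

    #!/usr/bin/env python3
    """Illustrative sketch of the log retention policy: compress after a
    week, delete after two weeks."""
    import gzip
    import os
    import shutil
    import time

    LOG_ROOT = "/var/log/job-archives"  # hypothetical archive location
    DAY = 86400  # seconds


    def age_days(path):
        """Age of a file in days, based on its modification time."""
        return (time.time() - os.path.getmtime(path)) / DAY


    def clean(root=LOG_ROOT):
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if age_days(path) > 14:
                    os.remove(path)  # older than two weeks: delete
                elif age_days(path) > 7 and not name.endswith(".gz"):
                    # older than a week: compress in place, drop the original
                    with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
                        shutil.copyfileobj(src, dst)
                    os.remove(path)


    if __name__ == "__main__":
        clean()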

The testing environments are maintained with Puppet. The configuration is kept in https://github.com/trafodion/infra.

 
