
...

These tests can also be run with a newer feature (pending BIGTOP-1388), cluster failure tests, which are explained at the end of this page.

Note:  As of BIGTOP-1222, there is now a simple way to run smoke tests for your hadoop cluster in bigtop which doesn't require jar files or maven. To run smoke tests that validate your cluster, see the README in the bigtop-tests/smoke-tests directory.

...

There are three levels of testing that bigtop supports.

  1. smoke-tests :  Basic hadoop ecosystem interoperability tests.  These are gradle based, run from the bigtop-tests/smoke-tests directory, and are easy to modify at run time (no jar file is built; the tests run from groovy source and are compiled on the fly at runtime).
  2. integration tests :  Advanced, maven based tests which are built into jar files and run from the bigtop-tests/test-execution directory.
  3. integration tests with cluster failures :  The same as (2), but also featuring cluster failures during the tests, to test resiliency.

Since many of the integration tests are simply smoke tests, we hope to see much of (2) converge into (1) over time.

Running smoke-tests 

If you are looking to simply test that your basic ecosystem components are working, the smoke-tests will most likely suit your needs.

Also, note that the smoke tests can quite easily call the tests from the integration tests directory and run them from source, without needing an intermediate jar file.

To see examples of how to do this, check out the mapreduce/ and mahout/ tests, which both reference groovy files in bigtop-tests/test-artifacts.

To avoid redundant documentation, you can read about how to run these tests in the README file under bigtop-tests/smoke-tests/.

These tests are particularly easy to modify: they require no precompiled jars and run directly from groovy scripts.

Running bigtop's integration tests

To test binary compatibility with particular versions of hadoop ecosystem components, or for other integration testing, you can use the original bigtop tests, which live in the bigtop-tests/test-artifacts directory.

This maven project contains a battery of junit based tests which are built as jar files, compiled against a specific hadoop version, and executed using the bigtop-tests/test-execution project.

These tests can be a good way to do fine-grained integration testing of your bigtop based hadoop installation.
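These jar-based junit tests commonly follow a simple pattern: shell out to a cluster command and assert on its exit code. The sketch below is a hypothetical illustration of that pattern, not the actual Bigtop test code; the class and method names are invented, and "true" stands in for a real cluster command so the sketch runs anywhere.

```java
import java.io.IOException;

// Hypothetical sketch of the jar-based test pattern (not actual Bigtop code):
// run a shell command against the cluster and assert on its exit code.
public class ShellSmokeSketch {

    // Run a command and return its exit code.
    static int runShell(String... cmd) {
        try {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            return p.waitFor();
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // A real test would run something like "hadoop fs -ls /" against a
        // live cluster; "true" stands in here so the sketch is runnable anywhere.
        int exitCode = runShell("true");
        if (exitCode != 0) {
            throw new AssertionError("cluster command failed: " + exitCode);
        }
        System.out.println("smoke command exit code: " + exitCode);
    }
}
```

In the real battery, the command under test exercises a specific ecosystem component, and maven's verify phase collects the pass/fail results.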

  • Make sure that you have a recent Maven installed (3.0.4 or newer)
  • Make sure that you have the following defined in your environment:

    No Format
    export JAVA_HOME=/usr/java/latest
    export HADOOP_HOME=/usr/lib/hadoop
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    export HBASE_HOME=/usr/lib/hbase
    export HBASE_CONF_DIR=/etc/hbase/conf
    export ZOOKEEPER_HOME=/usr/lib/zookeeper
    export HIVE_HOME=/usr/lib/hive
    export PIG_HOME=/usr/lib/pig
    export FLUME_HOME=/usr/lib/flume
    export SQOOP_HOME=/usr/lib/sqoop
    export HCAT_HOME=/usr/lib/hcatalog
    export OOZIE_URL=http://localhost:11000/oozie
    export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
    
  • Given the ongoing issues with Apache Jenkins builds, you might need to build and install everything locally:

    No Format
    mvn -f bigtop-test-framework/pom.xml -DskipTests install
    mvn -f bigtop-tests/test-execution/conf/pom.xml install
    mvn -f bigtop-tests/test-execution/common/pom.xml install 
    mvn -f bigtop-tests/test-artifacts/pom.xml install
    
  • Start test execution:

    No Format
    cd bigtop-tests/test-execution/smokes/<subsystem>
    mvn verify
    

Cluster Failure Tests

The purpose of these tests is to check whether mapreduce jobs complete while nodes of the cluster performing the job are being failed. When these cluster failures are applied, the mapreduce job should still complete with no issues. If mapreduce jobs fail as a result of any of the cluster failure tests, the user may not have a functional cluster or mapreduce implementation.


Cluster failures are handled by three classes - ServiceKilledFailure.groovy, ServiceRestartFailure.groovy, and NetworkShutdownFailure.groovy. We will call the functionality of these classes "cluster failures." Each cluster failure extends an abstract class called AbstractFailure.groovy, which implements Runnable. Each of these runnable classes executes specific shell commands that purposely fail a cluster.

When a cluster failure is executed, it calls a function populateCommandsList(), which fills the data structures failCommands and restoreCommands with values pertaining to that cluster failure. The values include a shell template such as "sudo pkill -9 -f %s" and a specified host to run the command on. From these, shell commands are generated and executed. Note: the host can be specified when instantiating the cluster failure or configured in /resources/vars.properties
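The structure described above can be sketched roughly as follows. This is an illustrative Java analogue, not the actual Groovy classes: the class shape (a Runnable base class, a populateCommandsList() hook, and paired failCommands/restoreCommands lists) mirrors the description, while the restore command shown is an assumption for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of the failure-class pattern described above; names mirror
// the real Groovy classes, but this is not the actual Bigtop code.
public class FailureSketchDemo {

    // Analogue of AbstractFailure.groovy: a Runnable that builds paired
    // "fail" and "restore" shell command strings for a target host.
    abstract static class AbstractFailureSketch implements Runnable {
        final List<String> failCommands = new ArrayList<>();
        final List<String> restoreCommands = new ArrayList<>();
        final String host;

        AbstractFailureSketch(String host) { this.host = host; }

        // Analogue of populateCommandsList(): fill both command lists.
        abstract void populateCommandsList();

        @Override
        public void run() {
            populateCommandsList();
            for (String cmd : failCommands) {
                // The real classes would execute this on `host` (e.g. over ssh);
                // here we just print the command that would run.
                System.out.println(host + " <- " + cmd);
            }
        }
    }

    // Analogue of ServiceKilledFailure.groovy: kill a service's processes.
    static class ServiceKilledSketch extends AbstractFailureSketch {
        final String service;

        ServiceKilledSketch(String host, String service) {
            super(host);
            this.service = service;
        }

        @Override
        void populateCommandsList() {
            // Shell template quoted in the text above, with the service name
            // substituted in before execution.
            failCommands.add(String.format("sudo pkill -9 -f %s", service));
            // The restore command here is an assumption for illustration.
            restoreCommands.add(String.format("sudo service %s start", service));
        }
    }

    public static void main(String[] args) {
        new ServiceKilledSketch("datanode-1.example.com", "datanode").run();
        // prints: datanode-1.example.com <- sudo pkill -9 -f datanode
    }
}
```

The runnable design lets a test harness launch a failure on its own thread while the mapreduce job runs, then use the restoreCommands to bring the cluster back afterward.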

...