...
- Make sure that you have the latest Maven installed (3.3.3+)
- Make sure that you have the following defined in your environment:
```
export JAVA_HOME=/usr/lib/jvm/java-openjdk
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HBASE_HOME=/usr/lib/hbase
export HBASE_CONF_DIR=/etc/hbase/conf
export ZOOKEEPER_HOME=/usr/lib/zookeeper
export HIVE_HOME=/usr/lib/hive
export PIG_HOME=/usr/lib/pig
export FLUME_HOME=/usr/lib/flume
export SQOOP_HOME=/usr/lib/sqoop
export HCAT_HOME=/usr/lib/hcatalog
export OOZIE_URL=http://localhost:11000/oozie
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
export SPARK_HOME=/usr/lib/spark
export SPARK_MASTER=spark://localhost:7077
```
Given the ongoing issues with Apache Jenkins builds, you might need to deploy everything locally:
```
# Under bigtop home dir
mvn install
mvn -f bigtop-test-framework/pom.xml -DskipTests install
mvn -f bigtop-tests/test-execution/conf/pom.xml install
mvn -f bigtop-tests/test-execution/common/pom.xml install
mvn -f bigtop-tests/test-artifacts/pom.xml install
```
Start test execution:
```
cd bigtop-tests/test-execution/smokes/<subsystem>
mvn verify
```
If you want to run a specific test class:
```
cd bigtop-tests/test-execution/smokes/<subsystem>
mvn failsafe:integration-test -Dit.test=TestWebHDFS
```
If you want to run a specific test in a class:
```
cd bigtop-tests/test-execution/smokes/<subsystem>
mvn failsafe:integration-test -Dit.test=TestWebHDFS#testGetFileChecksum
```
Cluster Failure Tests
The purpose of these tests is to check whether MapReduce jobs complete successfully while nodes of the cluster running the job are being failed. Under each of these cluster failures, the MapReduce job should still complete without issues. If a MapReduce job fails as a result of any of the cluster failure tests, the cluster or its MapReduce implementation may not be functioning correctly.
Cluster failures are handled by three classes: ServiceKilledFailure.groovy, ServiceRestartFailure.groovy, and NetworkShutdownFailure.groovy.
We will call the functionality of these classes "cluster failures." Each cluster failure extends an abstract class, AbstractFailure.groovy, which implements Runnable. Each of these runnable classes executes specific shell commands that deliberately fail part of the cluster. When a cluster failure is executed, it calls populateCommandsList(), which fills the data structures failCommands and restoreCommands with values describing that failure: a shell command template such as "sudo pkill -9 -f %s" and the host to run the command on. From these, shell commands are generated and executed. Note: the host can be specified when instantiating the cluster failure, or configured in /resources/vars.properties.
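The pattern described above can be sketched as follows. This is a simplified illustration in Java, not the actual Bigtop Groovy classes; everything beyond the names AbstractFailure, ServiceKilledFailure, populateCommandsList, failCommands, restoreCommands, and the "sudo pkill -9 -f %s" template is an assumption made for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the cluster-failure pattern: an abstract Runnable
// whose subclasses fill failCommands/restoreCommands with shell strings.
abstract class AbstractFailureSketch implements Runnable {
    protected final String host;
    protected final List<String> failCommands = new ArrayList<>();
    protected final List<String> restoreCommands = new ArrayList<>();

    AbstractFailureSketch(String host) {
        this.host = host;
    }

    // Subclasses populate failCommands/restoreCommands for their failure type.
    protected abstract void populateCommandsList();

    public List<String> getFailCommands() {
        if (failCommands.isEmpty()) {
            populateCommandsList();
        }
        return failCommands;
    }

    @Override
    public void run() {
        for (String cmd : getFailCommands()) {
            // The real framework executes the command on `host` (e.g. remotely);
            // this sketch only prints what would be run.
            System.out.println("[" + host + "] " + cmd);
        }
    }
}

class ServiceKilledFailureSketch extends AbstractFailureSketch {
    private final String service;

    ServiceKilledFailureSketch(String host, String service) {
        super(host);
        this.service = service;
    }

    @Override
    protected void populateCommandsList() {
        // "sudo pkill -9 -f %s" is the template quoted above; the restore
        // command here is an assumed counterpart for illustration only.
        failCommands.add(String.format("sudo pkill -9 -f %s", service));
        restoreCommands.add(String.format("sudo service %s start", service));
    }
}

public class FailureDemo {
    public static void main(String[] args) {
        new ServiceKilledFailureSketch("localhost", "datanode").run();
        // prints: [localhost] sudo pkill -9 -f datanode
    }
}
```

Because each failure is a Runnable, a test can start it on its own thread while the MapReduce job runs, then use the restore commands to bring the service back afterward.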
...