...

    • ServiceKilledFailure will execute commands that kill, and subsequently restart, a specified service.

      Code Block
        private static final String KILL_SERVICE_TEMPLATE = "sudo pkill -9 -f %s"
        private static final String START_SERVICE_TEMPLATE = "sudo service %s start"
    • ServiceRestartFailure will execute commands that stop and then start a service.

      Code Block
        private static final String STOP_SERVICE_TEMPLATE = "sudo service %s stop"
        private static final String START_SERVICE_TEMPLATE = "sudo service %s start"
    • NetworkShutdownFailure will execute a series of commands that restart the network.

      Code Block
        private static final String DROP_INPUT_CONNECTIONS = "sudo iptables -A INPUT -s %s -j DROP"
        private static final String DROP_OUTPUT_CONNECTIONS = "sudo iptables -A OUTPUT -d %s -j DROP"
        private static final String RESTORE_INPUT_CONNECTIONS = "sudo iptables -D INPUT -s %s -j DROP"
        private static final String RESTORE_OUTPUT_CONNECTIONS = "sudo iptables -D OUTPUT -d %s -j DROP"
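      These templates are plain format strings whose %s placeholder is filled with a service name or host before the command is issued. A minimal sketch in Java of how such an expansion might look (the template constants are copied from above; the service name and host are made-up examples):

```java
public class FailureCommandDemo {
    // Format strings mirroring the templates shown above.
    private static final String KILL_SERVICE_TEMPLATE = "sudo pkill -9 -f %s";
    private static final String DROP_INPUT_CONNECTIONS = "sudo iptables -A INPUT -s %s -j DROP";

    public static void main(String[] args) {
        // Each failure fills the %s placeholder before running the command;
        // the service name and host below are made-up examples.
        String killCmd = String.format(KILL_SERVICE_TEMPLATE, "hadoop-hdfs-datanode");
        String dropCmd = String.format(DROP_INPUT_CONNECTIONS, "10.0.0.12");
        System.out.println(killCmd); // sudo pkill -9 -f hadoop-hdfs-datanode
        System.out.println(dropCmd); // sudo iptables -A INPUT -s 10.0.0.12 -j DROP
    }
}
```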
    Two other classes worth mentioning are FailureVars.groovy and FailureExecutor.groovy. FailureVars, when instantiated, loads its configuration from /resources/vars.properties in preparation for failing the cluster. The configuration dictates which cluster failures will be executed, along with a variety of timing options; more information can be found in "How to run cluster failure tests" below. FailureExecutor is the main driver that creates and runs the cluster failure threads (the threads run in parallel to Hadoop and MapReduce jobs). The sequence of execution is as follows:

    • FailureVars will configure all of the variables necessary for cluster failures.
    • For the configuration of FailureVars, see the properties file associated with it (in FailureVars.groovy).
    • FailureExecutor will then spawn and execute the cluster failure threads.
    • The threads will then run their respective shell commands on the hosts specified by the user.
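
    The sequence above can be sketched as follows (a simplified Java stand-in, not the actual FailureExecutor implementation):

```java
public class FailureSequenceDemo {
    // Illustrative stand-in for a cluster failure thread; in Bigtop the
    // run() method would issue shell commands against user-specified hosts.
    static class SimulatedFailure implements Runnable {
        private final String host;
        SimulatedFailure(String host) { this.host = host; }
        @Override
        public void run() {
            System.out.println("simulating failure on " + host);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Spawn the failure thread; it runs in parallel to the main workload.
        Thread failureThread = new Thread(new SimulatedFailure("host1"), "failure");
        failureThread.start();
        // ... the Hadoop/MapReduce job would run here ...
        failureThread.join(); // wait for the failure run to complete
    }
}
```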

  • How to run cluster failure tests:

    Since the cluster failures are all Runnable, the user just has to instantiate the objects and execute them in the tests they are running. If the user wishes to run cluster failures in parallel to Hadoop and MapReduce jobs to test for job completion, the user must use FailureVars and FailureExecutor. Suppose we want to run cluster failure tests while a MapReduce test such as TestDFSIO is running:
    • The first step is to create a FailureVars object, before the test is run, inside TestDFSIO.groovy.

      Code Block
        @Before
        void configureVars() {
          def failureVars = new FailureVars();
        }
    • The next step is to insert code to spawn and start a FailureExecutor thread inside the test body of TestDFSIO:

      Code Block
        @Test
        public void testDFSIO() {
          FailureExecutor failureExecutor = new FailureExecutor();
          Thread failureThread = new Thread(failureExecutor, "DFSIO");
          failureThread.start();

          //the test
          ...
          ...
        }
    • Now the user just has to execute the test. When the test is run, the cluster failures will run in parallel to the MapReduce test.
    • To configure the hosts as well as various timing options, open /resources/vars.properties. There, you can specify hosts, which cluster failures to run, and when the cluster failures start. You can also specify the time in between cluster failures and how long services can be killed before being brought back up. Refer to the /bigtop/bigtop-test-framework/README for more information on vars.properties.
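
    FailureVars can read these settings with the standard java.util.Properties mechanism. A minimal sketch of loading such a file (the key names here are hypothetical; see the README mentioned above for the real ones):

```java
import java.io.StringReader;
import java.util.Properties;

public class VarsDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for /resources/vars.properties; the key names are
        // hypothetical examples, not the framework's actual keys.
        String sample = "failure.host=host1.example.com\nfailure.startdelay=120000\n";
        Properties props = new Properties();
        props.load(new StringReader(sample));
        System.out.println(props.getProperty("failure.host"));       // host1.example.com
        System.out.println(props.getProperty("failure.startdelay")); // 120000
    }
}
```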

Package tests

There is a special kind of test designed to validate packages and find bugs in them before they are deployed. The source code of these tests can be found in bigtop-tests/test-artifacts/package. Before you can run the tests you have to specify the test suite that you want to use. You can pick from the following list:

      • TestPackagesBasicsWithRM
      • TestPackagesBasics
      • TestPackagesPseudoDistributedServices
      • TestPackagesPseudoDistributedDependency
      • TestPackagesPseudoDistributedFileContents
      • TestPackagesPseudoDistributedWithRM
        (you can open up the corresponding class implementations to see how they differ from each other).

As the first step, pick TestPackagesBasicsWithRM. With that in mind, your final command line is going to look something like:

$ mvn clean verify -f bigtop-tests/test-execution/package/pom.xml -Dorg.apache.bigtop.itest.log4j.level=TRACE -Dlog4j.debug=true -Dorg.apache.maven-failsafe-plugin.testInclude="**/TestPackagesBasicsWithRM.*" -Dbigtop.repo.file.url=http://xxxxxxx

The last two -D settings specify the name of the test suite and the URL of the repo file describing the repository that contains your packages.

Things to keep in mind

  • If you want to select a subset of tests, you can use -Dorg.apache.maven-failsafe-plugin.testInclude='**/Mask*', e.g., mvn verify -Dorg.apache.maven-failsafe-plugin.testInclude='**/TestHDFSBalancer*'
  • It is helpful to add -Dorg.apache.bigtop.itest.log4j.level=TRACE to your mvn verify command
  • These tests are not currently executed as part of our smoke tests; they remain a separate testing package.