...

# For the back-end tests, the Impala cluster must not be running:
./bin/start-impala-cluster.py --kill

./bin/run-backend-tests.sh
# or
ctest
 
# To run just one test (and show what broke):
ctest --output-on-failure -R expr-test
 
# You can also run a single test binary directly. If it hangs, make sure
# you have run set-classpath.sh beforehand.
# Command-line parameters let you run a subset of its tests.
# To get help with the parameters:
be/build/latest/runtime/mem-pool-test --help
 
# To run just one test within a test binary:
be/build/latest/runtime/mem-pool-test --gtest_filter=MemPoolTest.TryAllocateAligned
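The --gtest_filter flag also accepts wildcard and negative patterns; a few more invocations, using the same mem-pool-test binary as above (--gtest_list_tests, wildcards, and '-'-prefixed exclusions are standard GoogleTest features):

```shell
# List the tests a binary contains:
be/build/latest/runtime/mem-pool-test --gtest_list_tests

# Run every test in one fixture, using a wildcard:
be/build/latest/runtime/mem-pool-test --gtest_filter='MemPoolTest.*'

# Run everything except tests matching a pattern (note the leading '-'):
be/build/latest/runtime/mem-pool-test --gtest_filter='-*Aligned*'
```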

Run most end-to-end tests (but not custom cluster tests)

# For the end-to-end tests, Impala must be running:
./bin/start-impala-cluster.py

./tests/run-tests.py

# To run the tests in just one directory:
./tests/run-tests.py metadata
 
# To run the tests in that directory with a matching name: 
./tests/run-tests.py metadata -k test_partition_metadata_compatibility
 
# To run the tests in a file with a matching name:
./tests/run-tests.py query_test/test_insert.py -k test_limit

# To run the tests that don't match a name:
./tests/run-tests.py query_test/test_insert.py -k "not test_limit"

# To run the tests with more restrictive name-matching:
./bin/impala-py.test tests/shell/test_shell_commandline.py::TestImpalaShell::test_multiple_queries
 
# To run tests only against specific file format(s):
./tests/run-tests.py --file_format_filter="seq,text"
 
# To change the impalad instance to connect to (default is localhost:21000):
./tests/run-tests.py --impalad=hostname:port

# To use a different exploration strategy (default is core):
./tests/run-tests.py --exploration_strategy=exhaustive

# To run a particular .test file, e.g. testdata/workloads/functional-query/queries/QueryTest/exprs.test,
# search for the .py test file that refers to it (try 'grep -r QueryTest/exprs tests'),
# then run that test file (in this case tests/query_test/test_exprs.py) using run-tests.py:
./tests/run-tests.py query_test/test_exprs.py

# To update the results of tests (the new test files will be located in ${IMPALA_EE_TEST_LOGS_DIR}/test_file_name.test):
./tests/run-tests.py --update_results
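The options above compose. As an illustrative (untested) combination, using only flags documented above and the placeholder host name from the --impalad example:

```shell
# Run the metadata tests exhaustively, against text files only,
# connecting to a non-default impalad:
./tests/run-tests.py metadata \
    --exploration_strategy=exhaustive \
    --file_format_filter=text \
    --impalad=hostname:21000
```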

...

  1. Look for minidump crashes. These look like: ./Impala/logs_static/logs/ee_tests/minidumps/impalad/71bae77a-07d2-4b6e-0885eca5-adc7ee36.dmp. Because some tests run in parallel (especially in the "ee" test suite), subsequent test results are meaningless if one of the Impala daemons crashed, so start with the crash. If you see a crash dump, there may be a stack trace in the logs:
    # ag is a recursive grep tool; see https://github.com/ggreer/the_silver_searcher
    $ ag 'Check failure stack trace:'
    Impala/logs_static/logs/ee_tests/impalad.ip-172-31-24-168.ubuntu.log.ERROR.20171023-180534.38031
    101743:*** Check failure stack trace: ***
    101746:*** Check failure stack trace: ***
    Impala/logs_static/logs/custom_cluster_tests/impalad.ip-172-31-24-168.ubuntu.log.ERROR.20171023-193700.44118
    12:*** Check failure stack trace: ***
    Impala/logs_static/logs/custom_cluster_tests/impalad.ip-172-31-24-168.ubuntu.log.ERROR.20171023-193658.43993
    12:*** Check failure stack trace: ***
    Impala/logs_static/logs/custom_cluster_tests/impalad.ip-172-31-24-168.ubuntu.log.ERROR.20171023-193659.44062
    12:*** Check failure stack trace: ***

  2. Look for "FAILED" (case insensitive) in the log output of the Jenkins job:

    $ cat console | egrep -o -i 'FAILED +[^ ]+ +[^ ]+'
    FAILED query_test/test_udfs.py::TestUdfExecution::test_ir_functions[exec_option: {'disable_codegen_rows_threshold':
    FAILED query_test/test_spilling.py::TestSpillingDebugActionDimensions::test_spilling_large_rows[exec_option: {'debug_action':
    FAILED query_test/test_udfs.py::TestUdfExecution::test_native_functions[exec_option: {'disable_codegen_rows_threshold':
    failed to connect:
    failed in 12443.43
    failed in 4077.50
  3. The tests produce xunit XML files; you can look for failures in these like so:
    # Search for and extract 'failure message="..."' in all XML files; uses
    # a multiline RE.
    $ grep -Pzir -o 'failure message=\"[^"]*' $(find . -name '*.xml') | tr '\0' ' ' | head
    ./Impala/logs_static/logs/ee_tests/results/TEST-impala-parallel.xml:failure message="query_test/test_nested_types.py:70: in test_subplan
    self.run_test_case('QueryTest/nested-types-subplan', vector)
    common/impala_test_suite.py:391: in run_test_case
    result = self.__execute_query(target_impalad_client, query, user=user)
    common/impala_test_suite.py:600: in __execute_query
    return impalad_client.execute(query, user=user)
    common/impala_connection.py:160: in execute
    return self.__beeswax_client.execute(sql_stmt, user=user)
    beeswax/impala_beeswax.py:173: in execute
    handle = self.__execute_query(query_string.strip(), user=user)
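Steps 1-3 above can be sketched as a single helper. This is a hypothetical function, not a script that ships with Impala: the name triage_logs and the assumption that minidumps, *.ERROR logs, and the xunit XML results all live under one logs directory are mine.

```shell
#!/usr/bin/env bash
# triage_logs: hypothetical helper combining the three checks above.
# Usage: triage_logs <logs-dir> [console-log]
triage_logs() {
    local logs_dir="$1" console="${2:-}"

    echo "== Minidumps (crashes) =="
    find "$logs_dir" -name '*.dmp' 2>/dev/null

    echo "== Check failure stack traces =="
    grep -rn 'Check failure stack trace:' "$logs_dir" 2>/dev/null

    if [ -n "$console" ]; then
        echo "== FAILED lines in the console log =="
        grep -E -o -i 'FAILED +[^ ]+ +[^ ]+' "$console"
    fi

    echo "== Failures in xunit XML results =="
    find "$logs_dir" -name '*.xml' -exec \
        grep -Pzi -o 'failure message="[^"]*' {} + 2>/dev/null | tr '\0' '\n'
}
```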

...

To make things even more complicated, there's bin/run-all-tests.sh, which invokes run-tests.py (but also "mvn test" and the C++ tests), and buildall.sh, which invokes run-all-tests.sh.