...

Code Block
languagebash
./buildall.sh -noclean -testdata

Note: Sometimes you may get a messed-up terminal after data loading, e.g. it stops displaying typed characters or carriage returns. In this case, press Ctrl+C, then type "stty sane" and press Enter to recover.
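The recovery sequence from the note, as commands; the 2>/dev/null guard is only there so the snippet is safe to run non-interactively, and is not needed when typing it by hand:

```shell
# Press Ctrl+C first to get a fresh prompt, then type this blind and press Enter:
stty sane 2>/dev/null || true  # restores echo and carriage-return handling; guard for non-tty shells
```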

Run all tests:

Code Block
languagebash
MAX_PYTEST_FAILURES=12345678 ./bin/run-all-tests.sh

...

Code Block
languagebash
# For the front-end tests, the minicluster and Impala cluster must be running:
./testdata/bin/run-all.sh
./bin/start-impala-cluster.py
 
(pushd fe && mvn -fae test)
 
# To run one group of tests:
(pushd fe && mvn -fae test -Dtest=AnalyzeStmtsTest)
 
# To run a single test:
(pushd fe && mvn -fae test -Dtest=AnalyzeStmtsTest#TestStar)

# To run a specific parameterized test:
(pushd fe && mvn test -Dtest=AuthorizationStmtTest#testPrivilegeRequests[0])
# or
(pushd fe && mvn test -Dtest=AuthorizationStmtTest#testPrivilegeRequests[*])

Run just back-end tests

Code Block
languagebash
 # For the back-end tests, the Impala cluster must not be running:
./bin/start-impala-cluster.py --kill

./bin/run-backend-tests.sh
# or
ctest
 
# To run just one test and show what broke (make sure to source ./bin/set-classpath.sh first):
ctest --output-on-failure -R expr-test
 
# You can also run just a single test binary at a time. If this hangs or fails,
# be sure that you've run set-classpath.sh beforehand.
# You can then use command-line parameters to run a subset of tests:
# To get help with parameters:
be/build/latest/runtime/mem-pool-test --help
 
# To run just one test within a test:
be/build/latest/runtime/mem-pool-test --gtest_filter=MemPoolTest.TryAllocateAligned

Run just JavaScript tests (webUI pages)

Code Block
languagebash
# To run JavaScript tests using JEST for the webUI pages
./tests/run-js-tests.sh

Run most end-to-end tests (but not custom cluster tests)

Code Block
languagebash
# For the end-to-end tests, Impala must be running:
./bin/start-impala-cluster.py

./tests/run-tests.py

# To run the tests in just one directory:
./tests/run-tests.py metadata
 
# To run the tests in that directory with a matching name: 
./tests/run-tests.py metadata -k test_partition_metadata_compatibility
 
# To run the tests in a file with a matching name:
./tests/run-tests.py query_test/test_insert.py -k test_limit

# To run the tests that don't match a name:
./tests/run-tests.py query_test/test_insert.py -k "-test_limit"

# To run the tests with more restrictive name-matching
./bin/impala-py.test tests/shell/test_shell_commandline.py::TestImpalaShell::test_multiple_queries
 
# To run tests only against specific table formats:
./tests/run-tests.py --table_formats="seq/snap/block,text/none"
 
# To change the impalad instance to connect to (default is localhost:21000):
./tests/run-tests.py --impalad=hostname:port

# To use a different exploration strategy (default is core):
./tests/run-tests.py --exploration_strategy=exhaustive

# To run a particular .test file, e.g. testdata/workloads/functional-query/queries/QueryTest/exprs.test
# search for a .py test file which refers to the test, (try 'grep -r QueryTest/exprs tests')
# then run that test file (in this case tests/query_test/test_exprs.py) using run-tests.py
./tests/run-tests.py query_test/test_exprs.py

# To update the results of tests (the new test files will be located in ${IMPALA_EE_TEST_LOGS_DIR}/test_file_name.test):
./tests/run-tests.py --update_results

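run-tests.py forwards -k to pytest-style keyword matching. As a self-contained illustration of how keyword selection narrows a run, here is the same idea using Python's stdlib unittest, which supports a similar -k flag; the module and test names below are made up for the demo:

```shell
# Create a throwaway test module with two tests.
cat > /tmp/demo_ktest.py <<'EOF'
import unittest

class TestDemo(unittest.TestCase):
    def test_limit(self):
        self.assertTrue(True)

    def test_other(self):
        self.assertTrue(True)
EOF

# Select only tests whose name matches "limit" -- one of the two tests runs.
(cd /tmp && python3 -m unittest -k limit demo_ktest -v)
```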
...

  1. Look for Minidump crashes. These look like: ./Impala/logs_static/logs/ee_tests/minidumps/impalad/71bae77a-07d2-4b6e-0885eca5-adc7ee36.dmp. Because some tests run in parallel (especially in the "ee" test suite), if one of the Impala daemons crashed, subsequent test results are meaningless. Start with the crash. If you see a crash dump, it's possible that there's a stack trace in the logs:
    # ag is a recursive grep tool; see https://github.com/ggreer/the_silver_searcher
    $ag 'Check failure stack trace:'
    Impala/logs_static/logs/ee_tests/impalad.ip-172-31-24-168.ubuntu.log.ERROR.20171023-180534.38031
    101743:*** Check failure stack trace: ***
    101746:*** Check failure stack trace: ***
    Impala/logs_static/logs/custom_cluster_tests/impalad.ip-172-31-24-168.ubuntu.log.ERROR.20171023-193700.44118
    12:*** Check failure stack trace: ***
    Impala/logs_static/logs/custom_cluster_tests/impalad.ip-172-31-24-168.ubuntu.log.ERROR.20171023-193658.43993
    12:*** Check failure stack trace: ***
    Impala/logs_static/logs/custom_cluster_tests/impalad.ip-172-31-24-168.ubuntu.log.ERROR.20171023-193659.44062
    12:*** Check failure stack trace: ***

  2. Look for "FAILED" (case insensitive) in the log output of the Jenkins job:

    $cat console | egrep -o -i 'FAILED +[^ ]+ +[^ ]+'
    FAILED query_test/test_udfs.py::TestUdfExecution::test_ir_functions[exec_option: {'disable_codegen_rows_threshold':
    FAILED query_test/test_spilling.py::TestSpillingDebugActionDimensions::test_spilling_large_rows[exec_option: {'debug_action':
    FAILED query_test/test_udfs.py::TestUdfExecution::test_native_functions[exec_option: {'disable_codegen_rows_threshold':
    failed to connect:
    failed in 12443.43
    failed in 4077.50
  3. The tests produce xunit XML files; you can look for failures in these like so:
    # Search for and extract 'failure message="..."' in all XML files; uses
    # a multiline RE.
    $grep -Pzir -o 'failure message=\"[^"]*' $(find . -name '*.xml') | tr '\0' ' ' | head
    ./Impala/logs_static/logs/ee_tests/results/TEST-impala-parallel.xml:failure message="query_test/test_nested_types.py:70: in test_subplan
    self.run_test_case('QueryTest/nested-types-subplan', vector)
    common/impala_test_suite.py:391: in run_test_case
    result = self.__execute_query(target_impalad_client, query, user=user)
    common/impala_test_suite.py:600: in __execute_query
    return impalad_client.execute(query, user=user)
    common/impala_connection.py:160: in execute
    return self.__beeswax_client.execute(sql_stmt, user=user)
    beeswax/impala_beeswax.py:173: in execute
    handle = self.__execute_query(query_string.strip(), user=user)

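The search commands from steps 1–3 can be tried out against synthetic files; everything under /tmp/demo_logs below (paths, file names, contents) is made up for illustration:

```shell
# Steps 1 and 2: recursive search for crash markers, and FAILED extraction
# from a console log.
mkdir -p /tmp/demo_logs
printf '*** Check failure stack trace: ***\n' > /tmp/demo_logs/impalad.log.ERROR
printf 'FAILED query_test/test_demo.py::TestDemo::test_x [exec]\nok\n' > /tmp/demo_logs/console
grep -rn 'Check failure stack trace:' /tmp/demo_logs
egrep -o -i 'FAILED +[^ ]+ +[^ ]+' /tmp/demo_logs/console

# Step 3: pull failure messages out of an xunit XML file, matching across
# line breaks with a multiline (-Pz) regex.
cat > /tmp/demo_logs/TEST-demo.xml <<'EOF'
<testsuite><testcase name="t1">
<failure message="query_test/test_demo.py:10: in test_x
assert False">trace here</failure>
</testcase></testsuite>
EOF
grep -Pzo 'failure message="[^"]*' /tmp/demo_logs/TEST-demo.xml | tr '\0' ' '
```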
...

To make things even more complicated, there's bin/run-all-tests.sh, which invokes run-tests.py (but also "mvn test" and the C++ tests), and buildall.sh, which invokes run-all-tests.sh.

Troubleshooting

1. Test failures due to missing libTestUdfs.so

Code Block
E    MESSAGE: AnalysisException: Could not load binary: /test-warehouse/libTestUdfs.so
E   Failed to get file info hdfs://localhost:20500/test-warehouse/libTestUdfs.so
E   Error(2): No such file or directory

That file is created and uploaded by testdata/bin/copy-udfs-udas.sh. You can try:

Code Block
testdata/bin/copy-udfs-udas.sh -build