
This page describes how to test Trafodion using the Trafodion test libraries and automated build tests.



Component Tests

Trafodion comes with several component-specific testing libraries.

SQL Core

 

The SQL core components are written in a combination of C++ and Java.

 

Ensure that the current set of regression tests passes each time you add or modify a SQL feature.

 

If you add a new feature, check that it is covered by an existing regression test, or add a new test to an existing test suite.
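
For example, after changing an optimizer-related feature you might re-run the compGeneral and core suites (a sketch only; the full runallsb procedure is described under Run Full Test Suite below, and this assumes you have already sourced the environment with . ./sqenv.sh in $MY_SQROOT):

cd $MY_SQROOT/../sql/regress
tools/runallsb compGeneral
tools/runallsb core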

Test Suites

Location: core/sql/regress

Directory     Usage
charsets      Tests character sets.
compGeneral   Compiler test suite; tests optimizer-specific features.
core          Tests a subset/sample of all features from all the test suites.
executor      Tests the SQL Executor.
fullstack2    Similar to core but a very limited subset.
hive          Tests HDFS access to Hive tables.
newregr       Unused/saved repository for some unpublished features; these tests are not run.
qat           Tests basic DDL and DML syntax.
privs1        Privilege tests, part 1: authorization setup, utilities, miscellaneous.
privs2        Privilege tests, part 2: grants and revokes.
seabase       Tests the JNI interface to HBase.
tools         Regression driver scripts and general regression scripts.
udr           Tests User Defined Routines (UDR) and TMUDF functionality.
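
To confirm which suites are present in your source tree, you can simply list the regression directory (assuming the standard source layout used throughout this page):

ls $MY_SQROOT/../sql/regress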

Check Test Results

On completion, the test run prints out a test summary. All tests should pass, or pass with known differences.

Test results are written to the runregr-sb.log file in each suite's directory under $MY_SQROOT/rundir, so you can check the results after the fact as follows:

cd $MY_SQROOT/rundir
grep FAIL */runregr-sb.log

A successful test run shows no failures.

Run Full Test Suite

 

This suite tests:

  • SQL Compiler
  • SQL Executor
  • Transactions
  • Foundation

 

Do the following:

cd $MY_SQROOT
. ./sqenv.sh
cd $MY_SQROOT/../sql/regress
tools/runallsb
Example: Run an Individual Test Suite
cd $MY_SQROOT
. ./sqenv.sh
cd $MY_SQROOT/../sql/regress
tools/runallsb charsets

Running an Individual Test

If You’ve Already Run the Test Suite

If you have already run the suite once, then you will have all your directories set up and you can run one test as follows: 

 

cd $MY_SQROOT/../sql/regress/<suite>

# You can add the following two exports to .bashrc or .profile for convenience
export rundir=$MY_SQROOT/rundir
export scriptsdir=$MY_SQROOT/../sql/regress

# run the test
cd $rundir/<suite>
$scriptsdir/<suite>/runregr -sb <test>
Example: Run an Individual Test
cd $rundir/executor
$scriptsdir/executor/runregr -sb TEST130

 

If You’ve Not Run the Test Suite

If you have not run any regression suites so far, then you will not have the required subdirectories set up. You must create them manually for each suite you want to run.

 

cd $MY_SQROOT/../sql/regress/<suite>

# You can add the following two exports to .bashrc or .profile for convenience
export rundir=$MY_SQROOT/rundir
export scriptsdir=$MY_SQROOT/../sql/regress
mkdir $rundir
cd $rundir

# <suite> should match the name of a directory in $scriptsdir
mkdir <suite>

# run the test
cd $rundir/<suite>
$scriptsdir/<suite>/runregr -sb <test>
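
For example, a first-time run of a single executor test might look like the following (a sketch, assuming TEST130 exists in the executor suite and the environment has already been sourced):

export rundir=$MY_SQROOT/rundir
export scriptsdir=$MY_SQROOT/../sql/regress
mkdir -p $rundir/executor
cd $rundir/executor
$scriptsdir/executor/runregr -sb TEST130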

 

Detecting Failures

 

If you see failures in any of your tests, try running that suite or test individually, as detailed above.

 

Open the DIFF file and correlate it with the LOG and EXPECTED files (an example follows the list below):

  • DIFF files are in $rundir/<suite name>.
  • LOG files are in $rundir/<suite name>.
  • EXPECTED files are in $scriptsdir/<suite name>.
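
For example, for TEST130 in the executor suite, the files to compare would be something like the following (a sketch, assuming the EXPECTED/LOG/DIFF naming convention described under Modify an Existing Test):

less $rundir/executor/DIFF130           # differences between actual and expected output
less $rundir/executor/LOG130            # actual output from the test run
less $scriptsdir/executor/EXPECTED130   # expected output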

 

To narrow down the failure, open the test file (for example, TEST130) in $scriptsdir/executor.

 

Recreate the problem with a smaller set of SQL commands and create a script that you can run from sqlci. If the issue can be recreated only by running the whole suite, add a line to the test, just before the command that fails, containing a wait or a sleep. For example, sh sleep 60 makes the test pause and gives you time to attach the sqlci process to the debugger. (You can find the PID of the sqlci process by running sqps on the command line.)

Introducing a wait in the test pauses it indefinitely until you enter a character. This is another way to make the test pause so that you can attach the debugger to the sqlci process.
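
A minimal sketch of what this might look like inside a test file; the SELECT statement is a placeholder for whatever command is actually failing:

-- pause the test just before the failing statement so you can attach a debugger
sh sleep 60;
select * from t1;   -- placeholder for the failing statement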

Modify an Existing Test

If you would like to add coverage for your new change, you can modify an existing test.

 

Run the test after your modifications. If you are satisfied with the results, modify the EXPECTED<test number> file to reflect your change. The standard way to do this is to copy the LOG<test number> file to the EXPECTED<test number> file. See Modify Tests for more information.
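
For example, to accept the new output of TEST130 in the executor suite (paths as set up earlier; adjust the suite and test number for your case):

cp $rundir/executor/LOG130 $scriptsdir/executor/EXPECTED130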

Database Connectivity Services (DCS)

The DCS test suite is organized per the Maven standard.

JDBC T4 Tests

 

The code is written in Java, and is built and unit tested using Maven. The test suite organization and use follow Maven standards.

 

Instructions for setting up and running the tests can be found in the source tree at dcs/src/test/jdbc_test.

If using the local_hadoop environment, use the swjdbc script to run the tests.
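
Because the suite follows Maven conventions, a manual run typically boils down to something like the following (a sketch only; the connection parameters and any required profiles are documented in dcs/src/test/jdbc_test):

cd dcs/src/test/jdbc_test
mvn test   # supply the connection settings described in the instructions in this directory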

ODBC Tests

The code is written for the Python 2.7 unittest framework.

 

It is run via testr and tox.

cd dcs/src/test/pytests
./config.sh -d <host>:<port> -t <tar-file>   # <tar-file>: location of your Linux ODBC driver tar file
tox -e py27

Further instructions for setting up and running the tests can be found in the source tree at dcs/src/test/pytests.

If using the local_hadoop environment, use the swpyodbc script to run the tests.

Functional Tests

Phoenix

 

The Phoenix tests provide basic functional testing for Trafodion. These tests were originally adapted from their counterpart at salesforce.com.

 

The tests are executed using Maven with a Python wrapper. You can run them on your own workstation instance the same way that Jenkins runs them. Do the following:

  1. Prior to running the Phoenix tests, bring up your Trafodion instance and DCS. Configure DCS with at least two servers (mxosrvrs), because the tests make two connections at any given time. We recommend configuring four mxosrvrs, since with only two, an mxosrvr is sometimes not released in time for the next connection.
  2. Run the Phoenix tests from source tree.

    cd tests/phx
    phoenix_test.py --target=<host>:<port> --user=dontcare --pw=dontcare --targettype=TR --javahome=<jdk> --jdbccp=<jdir>/jdbcT4.jar
      • <host>: Your workstation name or IP address.

      • <port>: your DCS master port number.

      • <jdk>: the directory containing jdk1.7.0_21_64 or a later version of the JDK.

      • <jdir>: the directory containing your JDBC T4 jar file. (export/lib if you downloaded a Trafodion binary package.)

     

    The source code can be found in phoenix_test/src/test/java/com/trafodion/phoenix/end2end. These are JDBC tests written in Java.

     

  3. Analyze the results. The test results can be found in phoenix_test/target/surefire-reports. Any failures are reported with file names and line numbers; see the sketch after this list for one way to scan the reports.
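
A quick way to scan the Surefire reports for failures (a sketch; the report file names depend on which test classes were run):

cd phoenix_test/target/surefire-reports
grep -l FAILURE *.txt   # list the plain-text reports that contain failures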

If using the local_hadoop environment, use the swphoenix script to run the tests.


Automated Tests

 

Automated tests take several hours to complete once your pull request has been approved by a committer or updated with a new commit.

 

Normally, the Traf-Jenkins user posts a message in the pull request with a link to the results. You can also check the Jenkins server to see the status even before the tests are finished. Look in the Build History table for the build/test job that matches your pull request. For example, the master branch tests are located at: https://jenkins.esgyn.com/job/Check-PR-master/

Reviewing Logs

There are two approaches to reviewing logs.

 

Approach 1

  • The first two columns in the build-job table are links to the specific sub-jobs. Click a link to drill down.
  • The console log of each job has a link to the log file directories (near the top). Look for Detailed logs.

 

Approach 2

 

 

More Information

The check tests do not include all of the automated daily tests. If you (or another contributor) want, you can run additional tests on the pull request. Refer to Automated Test Setup below for more information.

Automated Test Setup

To be done

 

 
