
...

Example yum install script:

<TBD - should we include this yum command?  It makes things easier when setting up a new VM>

    yum install alsa-lib-devel ant ant-nodeps boost-devel device-mapper-multipath dhcp gcc-c++ gd glibc-devel glibc-devel.i686 \
        graphviz graphviz-perl java-1.7.0-openjdk-devel libaio-devel libibcm.i686 libibumad-devel libibumad-devel.i686 \
        libiodbc libiodbc-devel librdmacm-devel librdmacm-devel.i686 log4cxx log4cxx-devel lua-devel lzo-minilzo net-snmp-devel \
        net-snmp-perl openldap-clients openldap-devel.i686 openmotif openssl-devel openssl-devel.i686 openssl-static \
        perl perl-Config-IniFiles perl-DBD-SQLite perl-Config-Tiny perl-Expect perl-IO-Tty perl-Math-Calc-Units \
        perl-Params-Validate perl-Parse-RecDescent perl-TermReadKey perl-Time-HiRes protobuf-compiler \
        protobuf-devel python-qpid python-qpid-qmf qpid-cpp-client qpid-cpp-client-devel qpid-cpp-client-ssl \
        qpid-cpp-server qpid-cpp-server-ssl qpid-qmf qpid-tools readline-devel saslwrapper sqlite-devel \
        unixODBC unixODBC-devel uuid-perl xinetd xerces-c-devel \
        openldap openldap-devel flex libcurl-devel libxml2-devel cmake

...

Additional development tools are required before building Trafodion, as described in Additional Build Tools.  A convenience script exists that downloads, installs, and builds all of these tools in a common directory.  If this convenience script is not used, or if any of these additional build tools are not found in the expected location, then the Trafodion configuration file needs to be updated.  The Trafodion configuration file template is located in <download directory>/apache-trafodion-1.2.0-incubating/core/sqf/LocalSettingsTemplate.sh.  To change values, copy this file to your home directory and rename it to .trafodion.  Edit the .trafodion file and update it according to the instructions in the file.  Be sure to change the location of TOOLSDIR to your <tools installation directory>.

   cp <traf download directory>/apache-trafodion-1.2.0-incubating/core/sqf/LocalSettingsTemplate.sh ~/.trafodion

 For now, don't change locations for HADOOP_PREFIX, HBASE_HOME, and HIVE_HOME.  There is a later step that describes this process.
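For reference, the relevant line in ~/.trafodion might end up looking something like the sketch below, assuming a hypothetical tools directory of /opt/traftools:

    # hypothetical path - replace with your <tools installation directory>
    export TOOLSDIR=/opt/traftools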

Build Trafodion

You can use a single-node standalone Apache installation, OR use a Trafodion-supplied solution that encapsulates the building and testing in an isolated environment.  If you plan to use an existing standalone Apache installation, please follow the instructions in "Build using standalone Apache installation".  If you want to use the Trafodion-supplied solution, please follow the instructions in "Build using local Hadoop".

Build using standalone Apache installation

If you wish to build Trafodion using an existing standalone Apache installation, follow the instructions in this section.  With this option, you can only build Trafodion; to test Trafodion, you will need to create binary tar files and follow the installation instructions described here (Installation).  Note: the Trafodion installer only supports Hortonworks and Cloudera distributions at this time.

Much of Trafodion is written in C++ and it uses libraries that are built as part of the Hadoop native libraries.  Therefore, your Hadoop distribution must have these libraries available.  To check this:

    cd <hadoop installation directory>

    ls lib/native

The listing of lib/native should include libhdfs.so.
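As a quick check, the two steps above can be combined into a single command (using the same path placeholder as above):

    ls <hadoop installation directory>/lib/native | grep libhdfs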

If you have not already done so, copy the Trafodion configuration file into your home directory.

   cp <download directory>/apache-trafodion-1.2.0-incubating/core/sqf/LocalSettingsTemplate.sh ~/.trafodion

Edit .trafodion and change the locations for HADOOP_PREFIX, HBASE_HOME, and HIVE_HOME to point to your installed environment. Be sure to update the TOOLSDIR variable in this file to point to the location of the components installed via Additional Build Tools.
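A minimal sketch of what those entries might look like in ~/.trafodion, assuming the components were installed under the user's home directory (all paths below are hypothetical):

    # hypothetical paths - point these at your own installation
    export HADOOP_PREFIX=$HOME/apache/hadoop-2.5.2
    export HBASE_HOME=$HOME/apache/hbase-0.98.6-hadoop2
    export HIVE_HOME=$HOME/apache/apache-hive-0.13.1-bin
    export TOOLSDIR=$HOME/traftools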

Note: Trafodion has only been tested with Hadoop 2.5.?, HBase 0.98.6, and Hive 0.13.1.  If you have different versions, you might run into unexpected build errors.

Setup build environment

    Start a new ssh session

    Set the following environment variables; these are needed to create correct class paths when setting up the build environment.

        export HADOOP_PREFIX=<location of Hadoop 2.5.?>

        export HBASE_HOME=<location of hbase-0.98.6>

        export HIVE_HOME=<location of apache-hive-0.13.1>

        export THRIFT_INC_DIR=<location of thrift-0.9.0/include>

        export THRIFT_LIB_DIR=<location of thrift-0.9.0/lib>

    Make sure that  TOOLSDIR is not set.

        unset TOOLSDIR

Do the build

...

Start a new ssh session

    cd <download directory>/apache-trafodion-1.2.0-incubating
 

...

   source ./env.sh

   Build using one of the following options:

        make all          (Build Trafodion, DCS, REST)  OR
        make package      (Build Trafodion, DCS, REST, Client drivers)  OR
        make package-all  (Build Trafodion, DCS, REST, Client drivers and Tests for all components)

    Verify the build by executing the following script ($MY_SQROOT/sql/scripts/sqvers); it should report over 90 components:

    sqvers
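If you want a quick count of the components reported, something like the following can help, assuming sqvers prints one component per line (an assumption, not documented behavior):

    sqvers | wc -l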

Build using local Hadoop

This section describes the steps to use the Trafodion installation script called 'install_local_hadoop' that encapsulates building the product and starting the Hadoop eco-system.  This script uses a Cloudera distribution.

...

Install Hadoop eco-system

    Start a new ssh session and set environment

 

...

            export TOOLSDIR=<tools installation directory>

...

   cd <download directory>/apache-trafodion-1.2.0-incubating

...

    source ./env.sh

    cd $MY_SQROOT/sql/scripts

...

    install_local_hadoop

...

Verify that the install completed by running the following command - it should report: 6 java servers and 2 mysqld processes are running

...

swstatus

Note: 

The 'install_local_hadoop' script downloads Hadoop, HBase, Hive, and MySql jar files from the internet. To avoid this overhead, you can download the required files into a separate directory and set the environment variable MY_LOCAL_SW_DIST to point to this directory.
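A rough sketch of that setup, with a hypothetical directory name:

    # hypothetical location for pre-downloaded Hadoop/HBase/Hive/MySql files
    mkdir -p $HOME/local_sw_dist
    export MY_LOCAL_SW_DIST=$HOME/local_sw_dist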

The following options are available with 'install_local_hadoop'. Use the -p option if the default Hadoop ports are already in use on your machine:

'install_local_hadoop' - will use default port numbers for all services

'install_local_hadoop -p fromDisplay' - will start Hadoop with a port number range determined from the DISPLAY environment variable

'install_local_hadoop -p rand' - will start with a random port number range between 9000 and 49000

'install_local_hadoop -p <specify a port #>' - will start with the port number specified
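For example, a hypothetical invocation that picks an explicit base port (the port number here is purely illustrative):

    install_local_hadoop -p 26000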

When this script completes, Hadoop, HBase, Hive, and MySql (used as Hive's metadata repository) have been installed and are started. 

The 'install_local_hadoop' script also creates several helper scripts starting with "sw" in the $MY_SQROOT/sql/scripts directory including:

...

To start, stop, or check the Hadoop environment using the Trafodion-supplied scripts, execute 'swstartall', 'swstopall', or 'swstatus'; if you need to remove the installation, execute 'swuninstall_local_hadoop'.
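For instance, a typical check-and-restart sequence with those helper scripts might look like this (the sequence itself is just an illustration):

    cd $MY_SQROOT/sql/scripts
    swstatus      # check what is running
    swstopall     # stop the local Hadoop environment
    swstartall    # start it again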

Do the build

    Start a new ssh session and set environment

 

...

            export TOOLSDIR=<tools installation directory>

...

   cd <download directory>/apache-trafodion-1.2.0-incubating

...

  

...

  source ./env.sh

     Build using one of the following options: 

        make all          (Build Trafodion, DCS, REST)  OR
        make package      (Build Trafodion, DCS, REST, Client drivers)  OR
        make package-all  (Build Trafodion, DCS, REST, Client drivers and Tests for all components)

    Verify the build by executing the following script ($MY_SQROOT/sql/scripts/sqvers); it should report over 90 components:

    sqvers

Install Trafodion components

 

...

   install_traf_components

...


Test Trafodion build

If you built Trafodion using "Build using standalone Apache installation", follow the instructions in "Test using standalone Apache installation".  If you built Trafodion using "Build using local Hadoop", follow the instructions in "Test using local Hadoop".

...

    core file size          (blocks, -c) 1000000
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 515196
    max locked memory       (kbytes, -l) 49595556
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 32000
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 10240
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 267263
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
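To compare these values with the current limits on your machine, you can run the standard Linux command below; raising limits is typically done in /etc/security/limits.conf, but the exact procedure depends on your system, so treat this as a general pointer rather than a Trafodion-specific step:

    ulimit -a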

Start Trafodion

    Start a new ssh session and set the environment, then perform a one-time step to set up the run environment:

    cd <download directory>/apache-trafodion-1.2.0-incubating

...

    source ./env.sh
    cd $MY_SQROOT/sql/scripts

...


sqgen 

Important: After sqgen, exit all shells used for Trafodion, and source the env.sh file in a new shell. <TBD, add more details on when this is required and rm ms.env>
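In other words, after sqgen completes the environment has to be re-established before continuing, using the same commands as above in the fresh shell:

    # in a new shell, after sqgen has completed
    cd <download directory>/apache-trafodion-1.2.0-incubating
    source ./env.sh
    cd $MY_SQROOT/sql/scripts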

    Sometimes HBase does not get started correctly and needs to be restarted:

        cd $MY_SQROOT/sql/scripts

        swstatus

    If HMaster is not running, restart HBase:

        swstophbase
        swstarthbase

Finally, start the Trafodion processes:

    sqstart

Note: In case of any issues, if you need to stop and restart a specific Trafodion component, you can use the component-based start/stop scripts.

...

Several health check scripts are available that report the status of Trafodion:

sqcheck   (For all of Trafodion)
dcscheck  (For Database Connectivity Service)
rmscheck  (For RMS Server)
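A quick way to exercise them after startup (these are the same scripts listed above, run from the Trafodion scripts directory):

    cd $MY_SQROOT/sql/scripts
    sqcheck     # overall Trafodion status
    dcscheck    # Database Connectivity Service status
    rmscheck    # RMS server status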

Creating Trafodion metadata

Trafodion is now up and running; you can start a SQL command-line interface and initialize Trafodion.

    Start a new ssh session, set the environment, and run the sqlci tool:

    cd <download directory>/apache-trafodion-1.2.0-incubating
  

...

  source ./env.sh

    cd $MY_SQROOT/sql/scripts

    sqlci 

Perform the following statements:

    initialize trafodion;
    exit;

Test your setup using 'sqlci' (connects directly to the SQL engine) or 'trafci' (uses DCS to connect to the SQL engine), and perform the following statements:

...

    get schemas;
    create table table1 (a int);
    invoke table1;
    insert into table1 values (1), (2), (3), (4);
    select * from table1;
    exit;

You are done and ready to go! <TBD: add reference to documentation>

...