
This document describes the steps required to build Apache Trafodion software.

Supported Platforms

  • Red Hat 6.4 or CentOS 6.4 are supported as development and production platforms.

Required Software

  1. Install the Cloudera or Hortonworks Hadoop distribution. If you do not already have a Hadoop distribution available, you can use the install_local_hadoop script as described in the section "Set up Hadoop distribution" below.
  2. Java 1.7.x or greater must be installed. Ensure that the JAVA_HOME environment variable exists and is set to your JDK installation.

  3. Download, build, and install additional development tools via Additional Build Tools
  4. Install the following packages via yum install <package>:
alsa-lib-devel
ant
ant-nodeps
boost-devel
device-mapper-multipath
dhcp
gcc-c++
gd
glibc-devel.i686
graphviz-perl
gzip
java-1.7.0-openjdk-devel
java-1.6.0-openjdk-devel
libaio-devel
libibcm.i686
libibumad-devel
libibumad-devel.i686
libiodbc
libiodbc-devel
librdmacm-devel
librdmacm-devel.i686
log4cxx
log4cxx-devel
lua-devel
lzo-minilzo
net-snmp-devel
net-snmp-perl
openldap-clients
openldap-devel.i686
openmotif
openssl-devel.i686
openssl-static
perl-Config-IniFiles
perl-DBD-SQLite
perl-Config-Tiny
perl-Expect
perl-IO-Tty
perl-Math-Calc-Units
perl-Params-Validate
perl-Parse-RecDescent
perl-TermReadKey
perl-Time-HiRes
protobuf-compiler
protobuf-devel
python-qpid
python-qpid-qmf
qpid-cpp-client
qpid-cpp-client-devel
qpid-cpp-client-ssl
qpid-cpp-server
qpid-cpp-server-ssl
qpid-qmf
qpid-tools
readline-devel
saslwrapper
sqlite-devel
tog-pegasus
libXext-devel 
libX11-devel  
libXau-devel
unixODBC
unixODBC-devel
uuid-perl
xinetd
xerces-c-devel
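The whole list can be installed in a single yum transaction. A minimal sketch, assembling the command but leaving the actual run to you (only a few package names from the list above are shown; extend as needed):

```shell
# Build one yum command covering the build prerequisites.
# Extend $packages with the rest of the list above.
packages="alsa-lib-devel ant boost-devel gcc-c++ protobuf-devel xerces-c-devel"
cmd="sudo yum install -y $packages"
echo "$cmd"   # inspect first, then run it with: eval "$cmd"
```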
Note:
  1. The qpid-cpp-client-devel package is not in the latest CentOS distribution; you may need to enable an earlier repo using the following command:

                    yum --enablerepo=C6.3-updates install qpid-cpp-client-devel

  2. Not all packages come standard with RHEL/CentOS; the EPEL repo will need to be downloaded and installed using wget:

                     wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

                     sudo rpm -Uvh epel-release-6-8.noarch.rpm

Set up Hadoop distribution (Install Hadoop, Hbase, Hive to local workspace)

You can use a single-node standalone Apache install, or use the Trafodion-supplied ‘install_local_hadoop’ script, which is pre-configured to install the Cloudera distribution on a single node. When using the Trafodion-supplied Hadoop install script, do the following:

  1. Make sure you have set up passwordless ssh authentication; you should be able to "ssh localhost" without entering a password.
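A sketch of setting that up, assuming OpenSSH with the default RSA key path (the keygen is guarded so an existing key is kept):

```shell
# Create ~/.ssh and, if no key exists yet, generate one and authorize it
# for localhost. Does nothing if ssh-keygen is not installed.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
if command -v ssh-keygen >/dev/null 2>&1; then
  [ -f ~/.ssh/id_rsa ] || ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  chmod 600 ~/.ssh/authorized_keys
fi
echo "setup done; verify with: ssh localhost true"
```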

  2. Download the latest Apache Trafodion source from Apache Incubator: https://github.com/apache/incubator-trafodion
  3. Using ssh, set the Trafodion environment:
    1. cd incubator-trafodion; . ./env.sh

    2. cd $MY_SQROOT/sql/scripts

    3. Execute the script ‘install_local_hadoop’

Note: 

This script downloads Hadoop and HBase jar files from the internet. To avoid this overhead in future runs of the script, you can save the downloaded files into a separate directory and set the environment variable MY_LOCAL_SW_DIST to point to that directory. The files to save are $MY_SQROOT/sql/local_hadoop/*.tar.gz and $MY_SQROOT/sql/local_hadoop/tpcds/tpcds_kit.zip.
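The caching step above can be sketched as follows ($HOME/trafodion_sw_cache is a hypothetical location; the copy lines assume a prior successful run and are left commented out):

```shell
# Cache the downloaded tarballs so future runs of install_local_hadoop
# can reuse them instead of downloading again.
cache_dir="$HOME/trafodion_sw_cache"      # hypothetical cache location
mkdir -p "$cache_dir"
# After a successful run, copy the downloads into the cache:
# cp $MY_SQROOT/sql/local_hadoop/*.tar.gz "$cache_dir/"
# cp $MY_SQROOT/sql/local_hadoop/tpcds/tpcds_kit.zip "$cache_dir/"
export MY_LOCAL_SW_DIST="$cache_dir"
echo "MY_LOCAL_SW_DIST=$MY_LOCAL_SW_DIST"
```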

install_local_hadoop -p rand — will start with a random port number between 9000 and 49000

OR

install_local_hadoop -p <port #> — will start with the port number specified

OR

install_local_hadoop — will use default port numbers for all services
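The ‘-p rand’ option amounts to picking a base port in the documented range. A sketch of the same idea (the script's actual selection logic may differ):

```shell
# Pick a random base port in the documented 9000-49000 range.
port=$(shuf -i 9000-49000 -n 1)
echo "would run: install_local_hadoop -p $port"
```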

To start, stop, or check the Hadoop environment when using the Trafodion-supplied Hadoop install script, execute ‘swstartall’, ‘swstopall’, or ‘swstatus’.

For Hadoop installs that did not use the Trafodion-supplied Hadoop install script, please update the HBase configuration as shown below and restart HBase.

For hbase-site.xml:

  <property>
    <name>hbase.client.scanner.caching</name>
    <value>100</value>
  </property>
  <property>
    <name>hbase.client.scanner.timeout.period</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.coprocessor.region.classes</name>
    <value>
      org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionObserver,
      org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint,
      org.apache.hadoop.hbase.coprocessor.AggregateImplementation
    </value>
  </property>
  <property>
    <name>hbase.hregion.impl</name>
    <value>org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion</value>
  </property>

For hbase-env.sh:

   export HBASE_CLASSPATH=${HBASE_TRXDIR}/${HBASE_TRX_JAR}

To compile and configure Trafodion and its components

  1. Set your TOOLSDIR environment variable to the location of the components installed via  Additional Build Tools  

  2. If you have not already downloaded the Apache Trafodion source, you can download it from https://github.com/apache/incubator-trafodion

    a. Using a new ssh session, set the Trafodion environment and build:

       cd incubator-trafodion

       . ./env.sh

       make all (builds Trafodion, DCS, and REST)    OR

       make package (builds Trafodion, DCS, REST, and client drivers)    OR

       make package-all (builds Trafodion, DCS, REST, client drivers, and tests for all components)

    b. cd $MY_SQROOT/sql/scripts

    c. Execute the script ‘install_traf_components’. Based on the tar files available in the distribution folder, this script installs the Trafodion components.

Note: All tar files are created in the ‘distribution’ folder located at the very top level (incubator-trafodion).

To install a custom Trafodion component, you can set environment variables to override the default tar files found in the distribution folder.

Environment variables supported by the install_traf_components script are:

DCS_TAR — the fully qualified path of the DCS tar file

REST_TAR — the fully qualified path of the REST tar file

PHX_TAR — the fully qualified path of the Phoenix test tar file

CLIENT_TAR — the fully qualified path of the Trafodion client tar file

DCSTEST_TAR — the fully qualified path of the DCS tests tar file
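For example, to point the installer at a custom DCS build (the path here is hypothetical):

```shell
# Override the default DCS tarball picked up from the distribution folder.
export DCS_TAR="$HOME/builds/dcs-custom.tar.gz"   # hypothetical path
echo "DCS_TAR=$DCS_TAR"
# then run: ./install_traf_components
```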

Starting Trafodion and its components

Using a new ssh session,

  1. cd incubator-trafodion; . ./env.sh

  2. cd $MY_SQROOT/sql/scripts

  3. Execute the script ‘sqgen’, then start Trafodion using the script ‘sqstart’

Note: If there is a need to stop and restart a specific Trafodion component, you can use the component-based start/stop scripts:

Component                                   Start script    Stop script
For all of Trafodion                        sqstart         sqstop
For DCS (Database Connectivity Service)     dcsstart        dcsstop
For REST server                             reststart       reststop
For LOB server                              lobstart        lobstop
For RMS server                              rmsstart        rmsstop

Checking the status of Trafodion and its components

There are several health-check scripts available that provide the status of Trafodion:

sqcheck (for all of Trafodion)

dcscheck (for DCS, the Database Connectivity Service)

rmscheck (for the RMS server)

Creating Trafodion metadata

Using a new ssh session,

    1. cd incubator-trafodion; . ./env.sh

    2. cd $MY_SQROOT/sql/scripts

    3. Use sqlci (connects directly to the SQL engine) or trafci (uses DCS to connect to the SQL engine)

    4. Execute the SQL initialization command ‘initialize trafodion’ via sqlci or trafci

Testing Trafodion

There are several helper scripts provided to run the tests for Trafodion components in your workspace. These scripts are generated based on the tar files made available during execution of the install_traf_components script:

swphoenix {t4 | t2} — runs the Phoenix tests using the JDBC Type 4 or JDBC Type 2 driver

swjdbc — runs the JDBC Type 4 tests

swpyodbc — installs the Linux ODBC driver and runs the ODBC tests using it

 
