
This document describes the steps required to build Trafodion software.

Supported Platforms

  • Red Hat 6.4 and CentOS 6.4 are supported as development and production platforms.

Required Software

  1. Install the Cloudera or Hortonworks Hadoop distribution.
  2. Install Java 1.7.x or greater. Ensure the JAVA_HOME environment variable is set and points to your JDK installation.
  3. Install the maven build tool.
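
As a quick sanity check before continuing, verify the JDK and maven setup. A minimal sketch (the JAVA_HOME path is an assumption for a typical OpenJDK install; adjust it to your system):

     export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64   # assumed path; adjust as needed
     $JAVA_HOME/bin/java -version   # should report 1.7.x or greater
     mvn -version                   # confirms maven is on the PATH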

Additional requirements for Trafodion include:

  1. Install the following packages via yum install <package>:

alsa-lib-devel
ant
ant-nodeps
boost-devel
device-mapper-multipath
dhcp
gcc-c++
gd
glibc-devel.i686
graphviz-perl
gzip
java-1.7.0-openjdk-devel
java-1.6.0-openjdk-devel
libaio-devel
libibcm.i686
libibumad-devel
libibumad-devel.i686
libiodbc
libiodbc-devel
librdmacm-devel
librdmacm-devel.i686
log4cxx
log4cxx-devel
lua-devel
lzo-minilzo
net-snmp-devel
net-snmp-perl
openldap-clients
openldap-devel.i686
openmotif
openssl-devel.i686
openssl-static
perl-Config-IniFiles
perl-DBD-SQLite
perl-Config-Tiny
perl-Expect
perl-IO-Tty
perl-Math-Calc-Units
perl-Params-Validate
perl-Parse-RecDescent
perl-TermReadKey
perl-Time-HiRes
protobuf-compiler
protobuf-devel
python-qpid
python-qpid-qmf
qpid-cpp-client
qpid-cpp-client-devel
qpid-cpp-client-ssl
qpid-cpp-server
qpid-cpp-server-ssl
qpid-qmf
qpid-tools
readline-devel
saslwrapper
sqlite-devel
tog-Pegasus
unixODBC
unixODBC-devel
uuid-perl
xinetd
xerces-c-devel

  2. Download, build, and install the non-standard development tools via Additional Build Tools.

Notes:
  1. The qpid-cpp-client-devel package is not in the latest CentOS distribution, so you may need to enable an earlier repo using the following command:

                    yum --enablerepo=C6.3-updates install qpid-cpp-client-devel

  2. Not all packages come standard with RHEL/CentOS; the EPEL repo needs to be downloaded and installed using the following commands:

                    wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
                    sudo rpm -Uvh epel-release-6-8.noarch.rpm
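
With EPEL enabled, the whole package list above can be installed in one pass. A sketch, assuming you have copied the list into a file named packages.txt (a hypothetical helper file, one package name per line):

                    sudo yum install -y $(cat packages.txt)   # packages.txt is assumed to hold the list above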

Database Connectivity Services (DCS)

Prerequisites

Build

  1. Download the sources: https://github.com/trafodion/dcs
  2. Run a Maven clean site package command:
     mvn clean site package
    
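If the build succeeds, Maven writes its output under the standard target/ directory. A quick check (the artifact name below is an assumption based on Maven's usual <artifactId>-<version> naming; adjust to what your build produces):

     ls target/dcs-*.tar.gz   # assumed artifact name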

Core Trafodion

Prerequisites


Hadoop Components

Install Hadoop, HBase, and Hive dependencies in a common location of your choice ($TOOLSDIR). Dependencies for release 0.9.x:

mkdir -p $TOOLSDIR
wget http://archive.apache.org/dist/hive/hive-0.13.1/apache-hive-0.13.1-bin.tar.gz
tar xzf apache-hive-0.13.1-bin.tar.gz -C $TOOLSDIR    # extracts to $TOOLSDIR/apache-hive-0.13.1-bin
wget http://archive-primary.cloudera.com/cdh5/cdh/5/hbase-0.98.1-cdh5.1.0.tar.gz
tar xzf hbase-0.98.1-cdh5.1.0.tar.gz -C $TOOLSDIR     # extracts to $TOOLSDIR/hbase-0.98.1-cdh5.1.0

NOTE: The Hadoop release tar file contains 32-bit native libraries. You must build Hadoop from source for a 64-bit architecture rather than just downloading the release tar file. See: http://wiki.apache.org/hadoop/HowToContribute

wget http://archive.apache.org/dist/hadoop/common/hadoop-2.4.0/hadoop-2.4.0-src.tar.gz
tar xzf hadoop-2.4.0-src.tar.gz 
cd hadoop-2.4.0-src
export JAVA_HOME=...           # path to 1.7.x JDK
export HADOOP_PROTOC_PATH=...  # path to protobufs 2.5.0 protoc command
mvn clean install package -Pdist -Pnative -Dtar -DskipTests \
   -Dtomcat.download.url=http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.36/bin/apache-tomcat-6.0.36.tar.gz
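
The 64-bit build output lands in the distribution module (hadoop-dist/target is the usual Hadoop source-build layout; treat the exact path as an assumption for your version). A sketch for unpacking it into $TOOLSDIR:

tar xzf hadoop-dist/target/hadoop-2.4.0.tar.gz -C $TOOLSDIR   # assumed output path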

Before building Trafodion, be sure to set and export the TOOLSDIR environment variable to the directory containing these components and the additional build tools from the section above.
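
For example, a minimal sketch (the directory is an assumption; use whatever location you chose above):

export TOOLSDIR=$HOME/trafodion-tools   # assumed location
ls $TOOLSDIR                            # should show apache-hive-0.13.1-bin,
                                        # hbase-0.98.1-cdh5.1.0, hadoop-2.4.0,
                                        # and the additional build tools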

Alternatively, use the install_local_hadoop script; see step 1 under the "Install Hadoop and Start Trafodion" heading below.

Custom Tool Settings

The location of build dependencies can be customized. See the source code file https://github.com/trafodion/core/blob/master/sqf/LocalSettingsTemplate.sh.

Build

  1. Get a clone of the git repository (https://github.com/trafodion/core)
  2. Set up shell environment
     cd sqf
     source ./sqenv.sh 
    
  3. Build the software
     cd $MY_SQROOT/..
     make all
    

Install Hadoop and Start Trafodion

  1. (OPTIONAL) Create a sand-boxed installation of Hadoop, HBase, Hive, MySQL to be used for building and testing. If these tools are not installed on your development system, you can install them locally to your workspace.
     install_local_hadoop
    

    Note: This script downloads Hadoop and HBase jar files from the internet. To avoid this overhead on future runs of the script, save the downloaded files into a separate directory and set the environment variable MY_LOCAL_SW_DIST to point to that directory. The files to save are $MY_SQROOT/sql/local_hadoop/*.tar.gz and $MY_SQROOT/sql/local_hadoop/tpcds/tpcds_kit.zip.
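
    A sketch of caching those downloads (the cache directory name is an assumption):

     mkdir -p $HOME/sw_cache                                    # assumed cache location
     cp $MY_SQROOT/sql/local_hadoop/*.tar.gz $HOME/sw_cache/
     cp $MY_SQROOT/sql/local_hadoop/tpcds/tpcds_kit.zip $HOME/sw_cache/
     export MY_LOCAL_SW_DIST=$HOME/sw_cache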

  2. Generate files necessary to start Trafodion
     cd $MY_SQROOT
     source ./sqenv.sh
     sqgen
    
  3. Exit your shell and get a new clean shell (note that sqgen edited sqenv.sh)
     cd $MY_SQROOT
     source ./sqenv.sh  
    
  4. Make sure you can do "ssh localhost" without having to enter a password
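     If a password prompt appears, one common way to set up passwordless ssh to localhost is the standard OpenSSH key setup (a sketch):

      ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa        # key with no passphrase
      cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
      chmod 600 ~/.ssh/authorized_keys
      ssh localhost                                   # should log in without prompting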
  5. (Sandbox Hadoop option): Bring up your Hadoop/HBase instance if it is not already up.
     swstartall
    
  6. (Pre-installed Hadoop/HBase option): Update the HBase configuration and restart HBase.
     hbase-site.xml:
      <property>
        <name>hbase.client.scanner.caching</name>
        <value>100</value>
      </property>
      <property>
        <name>hbase.client.scanner.timeout.period</name>
        <value>60000</value>
      </property>
      <property>
        <name>hbase.coprocessor.region.classes</name>
          <value>
               org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionObserver,
               org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint,
               org.apache.hadoop.hbase.coprocessor.AggregateImplementation
          </value>
      </property>
      <property>
        <name>hbase.hregion.impl</name>
        <value>org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion</value>
      </property>
    
     hbase-env.sh:
       export HBASE_CLASSPATH=${HBASE_TRXDIR}/${HBASE_TRX_JAR}
    
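     Then restart HBase so the coprocessor and classpath changes take effect. On a plain Apache-style install the stock scripts look like this (the HBASE_HOME path is an assumption for your installation; managed distributions restart HBase through their cluster manager instead):

      $HBASE_HOME/bin/stop-hbase.sh    # assumed HBASE_HOME
      $HBASE_HOME/bin/start-hbase.sh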
  7. Start Trafodion
     sqstart
     sqlci
     > initialize trafodion;
    
  8. Perform a quick sanity test of the install
     sqlci
     > set schema trafodion.usr;
     > create table t(a integer not null primary key);
     > get tables;
     > insert into t values (1);
     > select * from t;
    

Notes

  • The $MY_SQROOT/sqenv.sh file sources in the file sqenvcom.sh, where most of the Trafodion environment is set up: PATH, CLASSPATH, LD_LIBRARY_PATH, and so on.
  • The sqgen command takes CLASSPATH and other environment variables and makes sure that they are used when starting Trafodion processes across the cluster. Therefore, it's very important that the correct CLASSPATH is set up before calling sqgen. Trafodion processes actually use the CLASSPATH that's defined in $MY_SQROOT/etc/ms.env, which should match what you get after sourcing sqenv.sh.
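    One way to spot-check this, using the ms.env path from the note above:

     grep '^CLASSPATH' $MY_SQROOT/etc/ms.env   # CLASSPATH Trafodion processes will use
     echo $CLASSPATH                           # CLASSPATH in the current shell; should match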
  • The install_local_hadoop script copies jar files and executables for a single-node Hadoop install into your source tree: $MY_SQROOT/sql/local_hadoop. If you already have Hadoop running on the system and also want a sandbox version, install the sand-boxed Hadoop on non-standard ports:
    install_local_hadoop -p <start-port>
    
    <start-port> ... <start-port>+199 should be a range of unused ports.
    
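    For example (23400 is an arbitrary choice; any free 200-port range works):

     install_local_hadoop -p 23400   # sandbox uses ports 23400-23599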
  • To run the software you built on a cluster, use the "package" make target instead of the "all" target above, then install the resulting tar files on the cluster. Most developers run on a single-node cluster, since deploying to a multi-node cluster requires more complex steps. Here is how to modify the software and run the modified objects on the local node (note that there is no "make install"):
     sqstop
     # edit source files
     cd $MY_SQROOT/..
     make all
     sqstart
    
  • Shutting down: stop Trafodion first, then stop the sand-boxed Hadoop it uses. The sw* commands apply only if you are using the sandbox Hadoop (install_local_hadoop).
     sqstop
     swstopall
    

    To start it up later, use the following commands:

     swstartall
     sqstart
    

    To check on the status, use these commands:

     sqcheck
     swstatus
    
  • If you remove the entire source tree, the local Hadoop install is lost with it. Before removing these files, make sure to stop Hadoop; the easiest way is the swstopall or swuninstall_local_hadoop script (these are generated scripts in your PATH).
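
    A sketch using the generated scripts named above:

     swstopall                  # stop the sand-boxed Hadoop first
     swuninstall_local_hadoop   # or remove the local install entirely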
