Introduction

Installing Bigtop Hadoop distribution artifacts gives you an up-and-running Hadoop cluster, complete with
various Hadoop ecosystem projects, in just a few minutes. Be it a single-node pseudo-distributed
configuration or a fully distributed cluster, just install the packages, install the JDK,
format the namenode and have fun! If Bigtop is not supported on your OS, you can install one of the supported 64-bit OSes
on a virtual machine. There are known issues with 32-bit OSes.


Getting the packages onto your box

CentOS 5, CentOS 6, Fedora 15, RHEL5, RHEL6

  1. Make sure to grab the repo file:

    No Format
    
    wget -O /etc/yum.repos.d/bigtop.repo http://www.apache.org/dist/incubator/bigtop/stable/repos/[centos5|centos6|fedora15|fedora16]/bigtop.repo
    
  2. This step is optional, but recommended: enable the mirror that is closest to you (uncomment one and only one of the baseurl lines and remove the mirrorlist line). If the downloads are too slow, try another mirror

    No Format
    
    sudo vi /etc/yum.repos.d/bigtop.repo
    
  3. Note: since 0.3.0 is currently available only from the archives, use:

    No Format
    baseurl=http://archive.apache.org/dist/incubator/bigtop/bigtop-0.3.0-incubating/repos/[centos5|centos6|fedora15|fedora16]/
    
  4. Browse through the artifacts

    No Format
    
    yum search hadoop
    yum --disablerepo "*" --enablerepo "bigtop-0.3.0-incubating" list available
    
  5. Install the full Hadoop stack (or parts of it)

    No Format
    
    sudo yum install hadoop\* flume\* mahout\* oozie\* whirr\* hbase\*
    

SLES 11, OpenSUSE

  1. Make sure to grab the repo file:

    No Format
    
    wget -O bigtop.repo http://www.apache.org/dist/incubator/bigtop/stable/repos/suse/bigtop.repo
    mv bigtop.repo /etc/zypp/repos.d/bigtop.repo
    
  2. Enable the mirror that is closest to you (uncomment one and only one of the baseurl lines). If the downloads are too slow, try another mirror

    No Format
    
    sudo vi /etc/zypp/repos.d/bigtop.repo
    
  3. Browse through the artifacts

    No Format
    
    zypper search hadoop
    
  4. Install the full Hadoop stack (or parts of it)

    No Format
    
    sudo zypper install hadoop\* flume\* mahout\* oozie\* whirr\*
    

Ubuntu

  1. Install the Apache Bigtop GPG key

    No Format
    
    wget -O- http://www.apache.org/dist/incubator/bigtop/bigtop-0.23.0-incubating/repos/GPG-KEY-bigtop | sudo apt-key add -
    
  2. Make sure to grab the repo file:

    No Format
    
    sudo wget -O /etc/apt/sources.list.d/bigtop.list http://www.apache.org/dist/incubator/bigtop/bigtop-0.23.0-incubating/repos/ubuntu/bigtop.list
    
  3. Enable the mirror that is closest to you (uncomment one and only one pair of deb/deb-src lines). If the downloads are too slow, try another mirror

    No Format
    
    sudo vi /etc/apt/sources.list.d/bigtop.list
    
  4. Update the apt cache

    No Format
    
    sudo apt-get update
    
  5. Browse through the artifacts

    No Format
    
    apt-cache search hadoop
    
  6. Make sure that you have the latest JDK installed on your system as well. You can either get it from the official Oracle website (http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html) or follow the advice given by your Linux distribution. If your JDK is installed in a non-standard location, make sure to add the line below to the /etc/default/hadoop file

    No Format
    
    export JAVA_HOME=XXXX
    
  7. Install the full Hadoop stack (or parts of it)

    No Format
    
    sudo apt-get install hadoop\* flume\* mahout\* oozie\* whirr\*
    
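If you are unsure where your JDK lives, the snippet below is one way to derive a JAVA_HOME candidate by resolving the java binary on your PATH. This is only a sketch: the resulting path is environment-specific, and you should verify it before writing it into /etc/default/hadoop.

```shell
# Print a JAVA_HOME candidate by resolving the java binary on the PATH.
# A sketch: the resulting path is environment-specific.
find_java_home() {
  if command -v java >/dev/null 2>&1; then
    java_bin=$(readlink -f "$(command -v java)")
    printf '%s\n' "${java_bin%/bin/java}"
  else
    printf 'java not found on PATH\n'
  fi
}
find_java_home
```

The printed path (when java is found) is what you would assign to JAVA_HOME in /etc/default/hadoop.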

Running Hadoop

After installing Hadoop packages onto your Linux box, make sure that:

  1. You have the latest JDK installed on your system as well. You can either get it from the official Oracle website (http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html) or follow the advice given by your Linux distribution (e.g. some Debian based Linux distributions have JDK packaged as part of their extended set of packages). If your JDK is installed in a non-standard location, make sure to add the line below to the /etc/default/hadoop file

    No Format
    
    export JAVA_HOME=XXXX
    
  2. Format the namenode

    No Format
    
    sudo -u hdfs hadoop namenode -format
    
  3. Start the necessary Hadoop services. E.g. for the pseudo distributed Hadoop installation you can simply do:

    No Format
    
    for i in hadoop-namenode hadoop-datanode hadoop-jobtracker hadoop-tasktracker ; do sudo service $i start ; done
    
  4. Once your basic cluster is up and running it is a good idea to create a home directory on the HDFS:

    No Format
    
    sudo -u hdfs hadoop fs -mkdir /user/$USER
    sudo -u hdfs hadoop fs -chown $USER /user/$USER
    
  5. Enjoy your cluster

    No Format
    
    hadoop fs -lsr /
    hadoop jar /usr/lib/hadoop/hadoop-examples.jar pi 10 1000
    

Running Hadoop Components

One of the advantages of Bigtop is the ease of installation of the different Hadoop Components without having to hunt for a specific Hadoop Component distribution and matching it with a specific Hadoop version.

Running Pig

Install Pig

No Format

sudo apt-get install pig

Create a tab-delimited file using a text editor and import it into HDFS. Start the Pig shell and verify that a load and dump work. Make sure you have a space on both sides of the = sign. The using PigStorage('\t') clause tells Pig that the columns in the text file are delimited by tabs.

No Format

$ pig
grunt> A = load '/pigdata/PIGTESTA.txt' using PigStorage('\t');
grunt> dump A;
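For reference, the tab-delimited input file can also be created from the command line rather than a text editor. The file name, directory and contents below are only illustrative:

```shell
# Create a small tab-delimited sample file (name and contents are illustrative)
printf 'a\t1\nb\t2\nc\t3\n' > PIGTESTA.txt

# Sanity-check: every line should have exactly two tab-separated fields
awk -F'\t' 'NF != 2 { bad = 1 } END { exit bad }' PIGTESTA.txt && echo "file looks OK"

# Copy it into HDFS for the Pig session (requires a running cluster):
#   hadoop fs -mkdir /pigdata
#   hadoop fs -put PIGTESTA.txt /pigdata/PIGTESTA.txt
```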

Running HBase

Install HBase

No Format

sudo apt-get install hbase\*

  1. For bigtop-0.2.0, uncomment and set JAVA_HOME in /etc/hbase/conf/hbase-env.sh.
  2. For bigtop-0.3.0 this shouldn't be necessary because JAVA_HOME is auto-detected. Start the HBase master and open the HBase shell:

    No Format
    
    sudo service hbase-master start
    hbase shell
    
  3. Test the HBase shell by creating an HBase table named t1 with three column families f1, f2 and f3, then verify the table exists:

    No Format
    
    create 't1','f1','f2','f3'
    list
    
  4. You should see confirmation from HBase that the table exists; the table name t1 should appear in the output of list.

Running Hive

No Format

# This is for bigtop-0.2.0 where hadoop-hive, hadoop-hive-server, and hadoop-hive-metastore are installed 
# create the HDFS directories Hive needs

hadoop fs -mkdir /tmp
hadoop fs -mkdir /user/hive/warehouse
hadoop fs -chmod g+x /tmp
hadoop fs -chmod g+x /user/hive/warehouse

No Format

# create directory /var/run/hive
# create directory /var/lock/subsys

sudo mkdir /var/run/hive
sudo mkdir /var/lock/subsys
sudo /etc/init.d/hadoop-hive-server start

No Format

# create a table in Hive and verify it is there

$ hive
hive> create table doh(id int);
hive> show tables;

  1. If you are using Amazon AWS, it is important that the IP address in /etc/hostname matches the Private IP Address shown in the AWS Management Console. If the addresses do not match, MapReduce programs will not complete.

    No Format
    ubuntu@ip-10-224-113-68:~$ cat /etc/hostname
    ip-10-224-113-68
    
  2. If the IP address in /etc/hostname does not match, open the hostname file in a text editor, change it, and reboot.
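The comparison described above can be scripted. The snippet below is a quick sanity check; it is Linux-specific and only meaningful when run on the instance itself:

```shell
# Compare the hostname configured in /etc/hostname with what the kernel reports
check_hostname() {
  configured=$(cat /etc/hostname 2>/dev/null)
  actual=$(hostname)
  if [ "$configured" = "$actual" ]; then
    echo "hostname OK: $actual"
  else
    echo "MISMATCH: /etc/hostname says '$configured' but hostname reports '$actual'"
  fi
}
check_hostname
```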

Running Hadoop Components

Here are step-by-step instructions on running Hadoop components:
https://cwiki.apache.org/confluence/display/BIGTOP/Running+various+Bigtop+components

Please visit the link above to run some easy examples from the Bigtop distribution!

Provided at the link above are examples to run Hadoop 1.0.1 and nine other components from the Hadoop ecosystem (Hive, HBase, ZooKeeper, Pig, Sqoop, Oozie, Mahout, Whirr and Flume).

See the Bigtop Make File (https://svn.apache.org/repos/asf/incubator/bigtop/trunk/bigtop.mk) for the list of Hadoop components officially available from the Bigtop distribution.

Where to go from here

It is highly recommended that you read the documentation provided by the Hadoop project itself (https://hadoop.apache.org/common/docs/r1.0.1/ for Bigtop 0.3, or http://hadoop.apache.org/common/docs/r0.20.205.0/ for Bigtop 0.2) and that you browse through the Puppet deployment code that is shipped as part of the Bigtop release (bigtop-deploy/puppet/modules, bigtop-deploy/puppet/manifests).