
Table of Contents

Introduction

Installing the Bigtop Hadoop distribution artifacts gives you an up-and-running Hadoop cluster, complete with
various Hadoop ecosystem projects, in just a few minutes. Be it a single-node pseudo-distributed
configuration or a fully distributed cluster, just install the packages, install the JDK,
format the namenode, and have fun! If Bigtop is not supported on your OS, you can install one of the supported
64-bit OSes on a virtual machine; there are known issues with 32-bit OSes.

...

CentOS 5, CentOS 6, Fedora 17, Fedora 18, RHEL 5, RHEL 6

  1. Make sure to grab the repo file:

    No Format
    
    wget -O /etc/yum.repos.d/bigtop.repo http://archive.apache.org/dist/bigtop/bigtop-0.6.0/repos/[centos5|centos6|fedora17|fedora18]/bigtop.repo
    
  2. Browse through the artifacts

    No Format
    
    yum search mahout
    
  3. Install the full Hadoop stack, or parts of it (a quick sanity check is sketched after this list)

    No Format
    
    sudo yum install hadoop\* flume\* mahout\* oozie\* whirr\* hbase\* hive\* hue\*
    
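After the packages are installed, an optional sanity check can confirm that the repo was registered and the packages landed. A minimal sketch, assuming the bigtop.repo file from step 1 is in place:

No Format

# Optional post-install check for the yum-based systems above
yum repolist | grep -i bigtop        # the Bigtop repository should be listed
rpm -qa | grep hadoop                # lists the Hadoop packages just installed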

SLES 11, OpenSUSE

  1. Make sure to grab the repo file:

    No Format
    
    wget http://archive.apache.org/dist/bigtop/bigtop-0.6.0/repos/[sles11|opensuse12]/bigtop.repo
    mv bigtop.repo /etc/zypp/repos.d/bigtop.repo
    
  2. Refresh zypper to start looking at the newly added repo

    No Format
    
    zypper refresh
    
  3. Browse through the artifacts

    No Format
    
    zypper search mahout
    
  4. Install the full Hadoop stack, or parts of it (a quick sanity check is sketched after this list)

    No Format
    
    zypper install hadoop\* flume\* mahout\* oozie\* whirr\* hive\* hue\*
    
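As with the yum-based systems, an optional check can confirm the repo registration and the installed packages. A minimal sketch, assuming the repo was added in step 1:

No Format

# Optional post-install check for the zypper-based systems above
zypper repos | grep -i bigtop        # the Bigtop repository should be listed
rpm -qa | grep hadoop                # lists the Hadoop packages just installed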

Ubuntu (64-bit: lucid, precise, quantal)

  1. Install the Apache Bigtop GPG key

    No Format
    
    wget -O- http://archive.apache.org/dist/bigtop/bigtop-0.6.0/repos/GPG-KEY-bigtop | sudo apt-key add -
    
  2. Make sure to grab the repo file:

    No Format
    
    sudo wget -O /etc/apt/sources.list.d/bigtop.list http://archive.apache.org/dist/bigtop/bigtop-0.6.0/repos/`lsb_release --codename --short`/bigtop.list
    
  3. Update the apt cache

    No Format
    
    sudo apt-get update
    
  4. Browse through the artifacts

    No Format
    
    apt-cache search mahout
    
  5. Install bigtop-utils

    No Format
    
    sudo apt-get install bigtop-utils
    
  6. Make sure that you also have the latest JDK installed on your system. You can either get it from the official Oracle website (http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html) or follow the advice given by your Linux distribution. If your JDK is installed in a non-standard location, add the line below to the /etc/default/bigtop-utils file:

    No Format
    
    export JAVA_HOME=XXXX
    
  7. Install the full Hadoop stack, or parts of it (a quick sanity check is sketched after this list)

    No Format
    
    sudo apt-get install hadoop\* flume\* mahout\* oozie\* whirr\* hive\* hue\*
    
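On Ubuntu, an optional check can confirm that apt resolves the packages from the Bigtop repo. A minimal sketch, assuming the bigtop.list file from step 2 is in place:

No Format

# Optional post-install check for the apt-based systems above
apt-cache policy hadoop              # the candidate version should come from the Bigtop repo
dpkg -l | grep hadoop                # lists the Hadoop packages just installed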

...

After installing Hadoop packages onto your Linux box, make sure that:

  1. You have the latest JDK installed on your system as well. You can either get it from the official Oracle website (http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html) or follow the advice given by your Linux distribution (e.g. some Debian-based Linux distributions have the JDK packaged as part of their extended set of packages). If your JDK is installed in a non-standard location, add the line below to the /etc/default/bigtop-utils file:

    No Format
    
    export JAVA_HOME=XXXX
    
  2. Format the namenode

    No Format
    
    sudo /etc/init.d/hadoop-hdfs-namenode init
    
  3. Start the necessary Hadoop services, e.g. for a pseudo-distributed Hadoop installation you can simply do:

    No Format
    
    for i in hadoop-hdfs-namenode hadoop-hdfs-datanode ; do sudo service $i start ; done
    
  4. Make sure to create the sub-directory structure in HDFS before starting the YARN daemons:

    No Format
    
    sudo /usr/lib/hadoop/libexec/init-hdfs.sh
    
  5. Now start the YARN daemons (a status check covering all of the daemons is sketched after this list):

    No Format
    
    sudo service hadoop-yarn-resourcemanager start
    sudo service hadoop-yarn-nodemanager start
    
  6. Enjoy your cluster

    No Format
    
    hadoop fs -ls -R /
    hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples*.jar pi 10 1000
    
  7. If you are using Amazon AWS, it is important that the IP address embedded in /etc/hostname matches the Private IP Address in the AWS Management Console. If the addresses do not match, MapReduce programs will not complete.

    No Format
    
    ubuntu@ip-10-224-113-68:~$ cat /etc/hostname
    ip-10-224-113-68
    
  8. If the IP address in /etc/hostname does not match, open the hostname file in a text editor, change the address, and reboot.
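
Once the daemons are up, it can be handy to confirm that all of them are actually running, and to see what a non-standard JAVA_HOME entry might look like. A minimal sketch; the JDK path below is purely hypothetical and will vary by distribution:

No Format

# Check the status of every daemon started above
for i in hadoop-hdfs-namenode hadoop-hdfs-datanode hadoop-yarn-resourcemanager hadoop-yarn-nodemanager ; do sudo service $i status ; done

# Hypothetical example of a non-standard JDK location (see step 1)
echo 'export JAVA_HOME=/usr/lib/jvm/java-6-oracle' | sudo tee -a /etc/default/bigtop-utils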

...

To make both of them work on the same box, you should modify hbase-site.xml:

No Format

<configuration>
 <property>
  <name>hbase.rest.port</name>
  <value>8070</value>
  <description>The HBase REST port. </description>
 </property>
</configuration>

You can choose another port number, but make sure that it is not already used by another Hadoop component. A quick way to verify the new port is sketched below.
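
After restarting the HBase REST server, you can verify that it answers on the newly configured port. A minimal sketch, assuming the REST gateway runs on localhost and uses port 8070 as above:

No Format

# The REST gateway should report its version on the new port
curl http://localhost:8070/version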

 

HTML

...

<h1>Running Hadoop Components </h1>
<h3>
<a href="https://cwiki.apache.org/confluence/display/BIGTOP/Running+various+Bigtop+components" target="_blank">Running Bigtop Hadoop* Components</a>
</h3>

 

One of the advantages of Bigtop is the ease of installing the different Hadoop components, without having to hunt for a specific Hadoop component distribution and match it with a specific Hadoop version.
Please visit the link above to run some easy examples from the Bigtop distribution!
Provided at the link above are examples for running Hadoop 1.0.1 and nine other components from the Hadoop ecosystem (Hive, HBase, ZooKeeper, Pig, Sqoop, Oozie, Mahout, Whirr, and Flume).
See the

HTML
<a href="https://github.com/apache/bigtop/blob/master/bigtop.mk" target="_blank">Bigtop Make File</a>

for the list of Hadoop components officially available from the Bigtop distribution.

 

Where to go from here

It is highly recommended that you read the documentation provided by the Hadoop project itself.

...