
Introduction

Installing the Bigtop Hadoop distribution artifacts gives you an up and running Hadoop cluster, complete with
various Hadoop ecosystem projects, in just a few minutes. Be it a single-node pseudo-distributed
configuration or a fully distributed cluster, just make sure you install the packages, install the JDK,
format the namenode and have fun! If Bigtop is not supported on your OS, you can install one of the supported 64-bit OSes
on a virtual machine. There are known issues with 32-bit OSes.

Getting the packages onto your box

CentOS 5, CentOS 6, Fedora 15, RHEL5, RHEL6

  1. Make sure to grab the repo file, choosing centos5, centos6, or fedora in the URL to match your distribution:
    wget -O /etc/yum.repos.d/bigtop.repo http://www.apache.org/dist/incubator/bigtop/stable/repos/[centos5|centos6|fedora]/bigtop.repo
    
  2. This step is optional, but recommended: enable the mirror that is closest to you (uncomment one and only one of the baseurl lines and remove the mirrorlist line). If the downloads are too slow, try another mirror
    sudo vi /etc/yum.repos.d/bigtop.repo
    
  3. Browse through the artifacts
    yum search hadoop
    
  4. Install the full Hadoop stack (or parts of it)
    sudo yum install hadoop\* flume\* mahout\* oozie\* whirr\*
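    Once the install finishes, you can sanity-check what landed on the box; a minimal sketch using standard rpm and yum queries (the exact package list varies with the Bigtop release):
    # list the Bigtop packages that were just installed
    rpm -qa | grep -iE 'hadoop|flume|mahout|oozie|whirr'
    # show details for the core Hadoop package
    yum info hadoop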
    

SLES 11, OpenSUSE

  1. Make sure to grab the repo file:
sudo wget -O /etc/zypp/repos.d/bigtop.repo http://www.apache.org/dist/incubator/bigtop/stable/repos/suse/bigtop.repo
    
  2. Enable the mirror that is closest to you (uncomment one and only one of the baseurl lines). If the downloads are too slow, try another mirror
sudo vi /etc/zypp/repos.d/bigtop.repo
    
  3. Browse through the artifacts
    zypper search hadoop
    
  4. Install the full Hadoop stack (or parts of it)
    sudo zypper install hadoop\* flume\* mahout\* oozie\* whirr\*
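    To confirm the repository was registered and see what got installed, a quick hedged check (repo name as it appears depends on the bigtop.repo file contents):
    # list configured zypper repositories; the Bigtop repo should appear
    zypper repos
    # list installed Bigtop packages
    rpm -qa | grep -iE 'hadoop|flume|mahout|oozie|whirr'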
    

Ubuntu

  1. Install the Apache Bigtop GPG key
    wget -O- http://www.apache.org/dist/incubator/bigtop/bigtop-0.2.0-incubating/repos/GPG-KEY-bigtop | sudo apt-key add -
    
  2. Make sure to grab the repo file:
    sudo wget -O /etc/apt/sources.list.d/bigtop.list http://www.apache.org/dist/incubator/bigtop/bigtop-0.2.0-incubating/repos/ubuntu/bigtop.list
    
  3. Enable the mirror that is closest to you (uncomment one and only one pair of deb/deb-src lines). If the downloads are too slow, try another mirror
    sudo vi /etc/apt/sources.list.d/bigtop.list
    
  4. Update the apt cache
    sudo apt-get update
    
  5. Browse through the artifacts
    apt-cache search hadoop
    
  6. Make sure that you have the latest JDK installed on your system as well. You can either get it from the official Oracle website (http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html) or follow the advice given by your Linux distribution. If your JDK is installed in a non-standard location, add the line below to the /etc/default/hadoop file (the sketch after this list shows one way to find the right path):
    export JAVA_HOME=XXXX
    
  7. Install the full Hadoop stack (or parts of it)
sudo apt-get install hadoop\* flume\* mahout\* oozie\* whirr\*
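    Regarding step 6: one way to find where your JDK lives before setting JAVA_HOME; a minimal sketch, assuming the java binary is on your PATH (the JVM path shown is illustrative):
    # resolve the real location of the java binary through any symlinks
    readlink -f $(which java)
    # e.g. /usr/lib/jvm/java-6-openjdk/jre/bin/java, so JAVA_HOME would be /usr/lib/jvm/java-6-openjdk
    echo 'export JAVA_HOME=/usr/lib/jvm/java-6-openjdk' | sudo tee -a /etc/default/hadoop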
    

Running Hadoop

After installing Hadoop packages onto your Linux box, make sure that:

  1. You have the latest JDK installed on your system. You can either get it from the official Oracle website (http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html) or follow the advice given by your Linux distribution (e.g. some Debian-based Linux distributions have the JDK packaged as part of their extended set of packages). If your JDK is installed in a non-standard location, add the line below to the /etc/default/hadoop file
    export JAVA_HOME=XXXX
    
  2. Format the namenode
    sudo -u hdfs hadoop namenode -format
    
  3. Start the necessary Hadoop services. E.g. for the pseudo distributed Hadoop installation you can simply do:
    for i in hadoop-namenode hadoop-datanode hadoop-jobtracker hadoop-tasktracker ; do sudo service $i start ; done
    
  4. Once your basic cluster is up and running it is a good idea to create a home directory on the HDFS:
    sudo -u hdfs hadoop fs -mkdir /user/$USER
    sudo -u hdfs hadoop fs -chown $USER /user/$USER
    
  5. Enjoy your cluster
    hadoop fs -lsr /
    hadoop jar /usr/lib/hadoop/hadoop-examples.jar pi 10 1000
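    If something doesn't look right, it can help to confirm that the daemons are actually up and HDFS is healthy; a quick sketch using standard Hadoop 1.x tooling:
    # list running Java daemons; you should see NameNode, DataNode, JobTracker and TaskTracker
    sudo jps
    # print an HDFS usage and health summary, including live datanodes
    sudo -u hdfs hadoop dfsadmin -report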
    

Running Hadoop Components

One of the advantages of Bigtop is the ease of installing the different Hadoop components, without having to hunt for a specific component distribution and match it to a specific Hadoop version.

Running Pig

  1. Install Pig
    sudo apt-get install pig
    
  2. Create a tab-delimited file with a text editor and import it into HDFS (see the sketch after this step for one way to do it). Start the Pig shell and verify that a load and a dump work. Make sure you have a space on both sides of the = sign. The using PigStorage('\t') clause tells Pig the columns in the text file are delimited by tabs.
    $ pig
    grunt> A = load '/pigdata/PIGTESTA.txt' using PigStorage('\t');
    grunt> dump A;
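    Before running the load above, the data file has to exist in HDFS. A minimal sketch, assuming a /pigdata directory (the file contents are illustrative):
    # create a small tab-delimited test file locally
    printf 'a\t1\nb\t2\nc\t3\n' > PIGTESTA.txt
    # make a directory for it in HDFS and upload the file
    hadoop fs -mkdir /pigdata
    hadoop fs -put PIGTESTA.txt /pigdata/PIGTESTA.txt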
    

Running HBase

  1. Install HBase
    sudo apt-get install hbase\*
    
  2. For bigtop-0.2.0, uncomment and set JAVA_HOME in /etc/hbase/conf/hbase-env.sh. For bigtop-0.3.0 this shouldn't be necessary because JAVA_HOME is auto-detected.
  3. Start the HBase master and open the HBase shell
    sudo service hbase-master start
    hbase shell
    
  4. Test the HBase shell by creating an HBase table named t1 with three column families f1, f2 and f3, then verify that the table exists
    create 't1','f1','f2','f3'
    list
    
    HBase should confirm that the table was created, and the table name t1 should appear in the output of list.
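    With the table in place, you can round-trip a value through it to be sure writes work; a minimal sketch in the HBase shell (the row key and value are illustrative):
    # write one cell, read it back, then scan the whole table
    put 't1', 'row1', 'f1:c1', 'hello'
    get 't1', 'row1'
    scan 't1'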

Running Hive

  1. This is for bigtop-0.2.0, where hadoop-hive, hadoop-hive-server, and hadoop-hive-metastore are installed automatically because the Hive package names start with the word hadoop. For bigtop-0.3.0, a sudo apt-get install hadoop* command won't pull in the Hive components, so you have to install them explicitly
    sudo apt-get install hive hive-server hive-metastore
    
    Create the HDFS directories Hive needs.
    The Hive post-install scripts should create the /tmp and /user/hive/warehouse directories; if they don't exist, create them in HDFS. The post-install script can't create them itself because HDFS is not up and running during the deb file installation (JAVA_HOME is buried in hadoop-env.sh, so HDFS can't start to allow these directories to be created).
    hadoop fs -mkdir /tmp
    hadoop fs -mkdir /user/hive/warehouse
    hadoop fs -chmod g+w /tmp
    hadoop fs -chmod g+w /user/hive/warehouse
    
  2. If the post-install scripts didn't create the directories /var/run/hive and /var/lock/subsys, create them
    sudo mkdir /var/run/hive
    sudo mkdir /var/lock/subsys
    
  3. Start the Hive server
    sudo /etc/init.d/hadoop-hive-server start
    
  4. Create a table in Hive and verify it is there
    ubuntu@ip-10-101-53-136:~$ hive 
    WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
    Hive history file=/tmp/ubuntu/hive_job_log_ubuntu_201203202331_281981807.txt
    hive> create table doh(id int);
    OK
    Time taken: 12.458 seconds
    hive> show tables;
    OK
    doh
    Time taken: 0.283 seconds
    hive> 
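    To push some data through the new table, you can load a local file into it; a minimal sketch in the Hive shell, assuming a one-integer-per-line text file (the file name and contents are illustrative):
    # create a local file with one integer per line
    $ printf '1\n2\n3\n' > /tmp/ids.txt
    $ hive
    hive> LOAD DATA LOCAL INPATH '/tmp/ids.txt' INTO TABLE doh;
    hive> select count(1) from doh;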
    

Running Mahout

Running Whirr

Where to go from here

It is highly recommended that you read the documentation provided by the Hadoop project itself (http://hadoop.apache.org/common/docs/r0.20.205.0/ for Bigtop 0.2, or https://hadoop.apache.org/common/docs/r1.0.0/ for Bigtop 0.3) and that you browse through the Puppet deployment code shipped as part of the Bigtop release (bigtop-deploy/puppet/modules, bigtop-deploy/puppet/manifests).
