Introduction
Installing the Bigtop Hadoop distribution artifacts gives you an up-and-running Hadoop cluster, complete with
various Hadoop ecosystem projects, in just a few minutes. Whether it is a single-node pseudo-distributed
configuration or a fully distributed cluster, just install the packages, install the JDK,
format the namenode and have fun! If Bigtop is not supported on your OS, you can install one of the supported 64-bit OSes
on a virtual machine. There are known issues with 32-bit OSes.
Getting the packages onto your box
CentOS 5, CentOS 6, Fedora 15, Fedora 16, RHEL5, RHEL6
Make sure to grab the repo file:
    wget -O /etc/yum.repos.d/bigtop.repo http://www.apache.org/dist/incubator/bigtop/stable/repos/[centos5|centos6|fedora15|fedora16]/bigtop.repo
This step is optional but recommended: enable the mirror that is closest to you (uncomment one and only one of the baseurl lines and remove the mirrorlist line). If the downloads are too slow, try another mirror.
    sudo vi /etc/yum.repos.d/bigtop.repo
Note: since 0.3.0 is currently available only from the archives, use:
    baseurl=http://archive.apache.org/dist/incubator/bigtop/bigtop-0.3.0-incubating/repos/[centos5|centos6|fedora15|fedora16]/
Browse through the artifacts
    yum search hadoop
    yum --disablerepo "*" --enablerepo "bigtop-0.3.0-incubating" list available
Install the full Hadoop stack (or parts of it)
    sudo yum install hadoop\* flume\* mahout\* oozie\* whirr\* hbase\*
SLES 11, OpenSUSE
Make sure to grab the repo file:
    wget http://www.apache.org/dist/incubator/bigtop/stable/repos/suse/bigtop.repo
    sudo mv bigtop.repo /etc/zypp/repos.d/bigtop.repo
Enable the mirror that is closest to you (uncomment one and only one of the baseurl lines). If the downloads are too slow, try another mirror.
    sudo vi /etc/zypp/repos.d/bigtop.repo
Browse through the artifacts
    zypper search hadoop
Install the full Hadoop stack (or parts of it)
    sudo zypper install hadoop\* flume\* mahout\* oozie\* whirr\*
Ubuntu
Install the Apache Bigtop GPG key
    wget -O- http://www.apache.org/dist/incubator/bigtop/bigtop-0.23.0-incubating/repos/GPG-KEY-bigtop | sudo apt-key add -
Make sure to grab the repo file:
    sudo wget -O /etc/apt/sources.list.d/bigtop.list http://www.apache.org/dist/incubator/bigtop/bigtop-0.23.0-incubating/repos/ubuntu/bigtop.list
Enable the mirror that is closest to you (uncomment one and only one pair of deb/deb-src lines). If the downloads are too slow, try another mirror
    sudo vi /etc/apt/sources.list.d/bigtop.list
Update the apt cache
    sudo apt-get update
Browse through the artifacts
    apt-cache search hadoop
Make sure that you have the latest JDK installed on your system as well. You can either get it from the official Oracle website (http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html) or follow the advice given by your Linux distribution. If your JDK is installed in a non-standard location, add the line below to the /etc/default/hadoop file:
    export JAVA_HOME=XXXX
Install the full Hadoop stack (or parts of it)
    sudo apt-get install hadoop\* flume\* mahout\* oozie\* whirr\*
Running Hadoop
After installing Hadoop packages onto your Linux box, make sure that:
You have the latest JDK installed on your system (see the note above; some Debian-based Linux distributions have a JDK packaged as part of their extended set of packages). If your JDK is installed in a non-standard location, add the line below to the /etc/default/hadoop file:
    export JAVA_HOME=XXXX
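The JAVA_HOME value can usually be derived from whatever java is on the PATH. A minimal sketch (derive_java_home is a hypothetical helper, and it assumes the JDK follows the usual <home>/bin/java layout):

```shell
#!/bin/sh
# Hypothetical helper: given a path to a java binary, resolve any symlinks
# and strip the trailing /bin/java to recover the JDK home directory.
derive_java_home() {
  dirname "$(dirname "$(readlink -f "$1")")"
}

# Usage sketch: print the line to add to /etc/default/hadoop, e.g.
#   echo "export JAVA_HOME=$(derive_java_home "$(command -v java)")"
```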
Format the namenode
    sudo -u hdfs hadoop namenode -format
Start the necessary Hadoop services. E.g. for a pseudo-distributed Hadoop installation you can simply do:
    for i in hadoop-namenode hadoop-datanode hadoop-jobtracker hadoop-tasktracker ; do sudo service $i start ; done
Once your basic cluster is up and running, it is a good idea to create a home directory in HDFS:
    sudo -u hdfs hadoop fs -mkdir /user/$USER
    sudo -u hdfs hadoop fs -chown $USER /user/$USER
Enjoy your cluster
    hadoop fs -lsr /
    hadoop jar /usr/lib/hadoop/hadoop-examples.jar pi 10 1000
If you are using Amazon AWS, it is important that the hostname in /etc/hostname matches the Private IP Address shown in the AWS Management Console. If the addresses do not match, MapReduce programs will not complete.
    ubuntu@ip-10-224-113-68:~$ cat /etc/hostname
    ip-10-224-113-68
- If the IP address in /etc/hostname does not match, open the hostname file in a text editor, change it, and reboot.
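The two addresses can also be compared mechanically. A sketch (ip_from_hostname is a hypothetical helper; it assumes the usual EC2 ip-A-B-C-D naming scheme):

```shell
#!/bin/sh
# Hypothetical helper: turn an EC2-style hostname such as ip-10-224-113-68
# into the dotted private IP it encodes (strip "ip-", dashes become dots).
ip_from_hostname() {
  echo "$1" | sed 's/^ip-//; s/-/./g'
}

ip_from_hostname ip-10-224-113-68   # prints 10.224.113.68

# Compare against the Private IP Address in the AWS Management Console:
#   ip_from_hostname "$(cat /etc/hostname)"
```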
Running Hadoop Components
Here are step-by-step instructions on running Hadoop Components: https://cwiki.apache.org/confluence/display/BIGTOP/Running+various+Bigtop+components
One of the advantages of Bigtop is the ease of installation of the different Hadoop Components without having to hunt for a specific Hadoop Component distribution and matching it with a specific Hadoop version.
Running Pig
- Install Pig
    sudo apt-get install pig
- Create a tab-delimited file using a text editor and import it into HDFS. Start the Pig shell and verify that a load and a dump work. Make sure you have a space on both sides of the = sign. The clause using PigStorage('\t') tells Pig the columns in the text file are delimited by tabs.
    $ pig
    grunt> A = load '/pigdata/PIGTESTA.txt' using PigStorage('\t');
    grunt> dump A
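The tab-delimited input can also be generated from the command line instead of a text editor. A sketch (the sample rows are purely illustrative):

```shell
#!/bin/sh
# Write three rows of two tab-separated columns; \t expands to a real tab
# under printf, which matches what PigStorage('\t') expects to parse.
printf '1\talpha\n2\tbeta\n3\tgamma\n' > PIGTESTA.txt

# The file could then be copied into HDFS for the load above, e.g.:
#   hadoop fs -mkdir /pigdata
#   hadoop fs -put PIGTESTA.txt /pigdata/PIGTESTA.txt
```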
Running HBase
- Install HBase
    sudo apt-get install hbase\*
- For bigtop-0.2.0 uncomment and set JAVA_HOME in /etc/hbase/conf/hbase-env.sh
- For bigtop-0.3.0 this shouldn't be necessary because JAVA_HOME is auto detected
    sudo service hbase-master start
    hbase shell
- Test the HBase shell by creating an HBase table named t2 with 3 column families f1, f2 and f3, then verify the table exists in HBase. You should see confirmation from HBase that the table exists: the table name t2 should appear in the output of list.
    hbase(main):001:0> create 't2','f1','f2','f3'
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/usr/lib/hbase/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    0 row(s) in 3.4390 seconds
    hbase(main):002:0> list
    TABLE
    t2
    2 row(s) in 0.0220 seconds
    hbase(main):003:0>
Running Hive
- This applies to bigtop-0.2.0, where hadoop-hive, hadoop-hive-server, and hadoop-hive-metastore are installed automatically because the Hive package names start with the word hadoop. In bigtop-0.3.0 the sudo apt-get install hadoop\* command won't install the Hive components, so you will have to do:
    sudo apt-get install hive hive-server hive-metastore
- Create the HDFS directories Hive needs. The Hive post-install scripts should create the /tmp and /user/hive/warehouse directories; if they don't exist, create them in HDFS. The post-install script can't create these directories because HDFS is not up and running during the deb file installation: JAVA_HOME is buried in hadoop-env.sh, so HDFS can't start to allow the directories to be created.
    hadoop fs -mkdir /tmp
    hadoop fs -mkdir /user/hive/warehouse
    hadoop fs -chmod g+w /tmp
    hadoop fs -chmod g+w /user/hive/warehouse
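Since HDFS must be up before those directory commands can run, one option is to generate the command list first and review it before executing. A small sketch (print_hive_dir_cmds is a hypothetical helper that only prints the commands):

```shell
#!/bin/sh
# Hypothetical helper: print the HDFS setup commands for the Hive
# directories so they can be reviewed, or piped to sh once HDFS is up.
print_hive_dir_cmds() {
  for d in /tmp /user/hive/warehouse ; do
    echo "hadoop fs -mkdir $d"
    echo "hadoop fs -chmod g+w $d"
  done
}

print_hive_dir_cmds
```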
- If the post-install scripts didn't create the directories /var/run/hive and /var/lock/subsys, create them:
    sudo mkdir /var/run/hive
    sudo mkdir /var/lock/subsys
- Start the Hive server:
    sudo /etc/init.d/hadoop-hive-server start
- Create a table in Hive and verify it is there:
    ubuntu@ip-10-101-53-136:~$ hive
    WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
    Hive history file=/tmp/ubuntu/hive_job_log_ubuntu_201203202331_281981807.txt
    hive> create table doh(id int);
    OK
    Time taken: 12.458 seconds
    hive> show tables;
    OK
    doh
    Time taken: 0.283 seconds
    hive>
Running Mahout
Running Whirr
Please visit the link above to run some easy examples from the Bigtop distribution!
Provided at the link above are examples for running Hadoop 1.0.1 and nine other components from the Hadoop ecosystem (Hive, HBase, ZooKeeper, Pig, Sqoop, Oozie, Mahout, Whirr and Flume).
See the Bigtop make file: https://svn.apache.org/repos/asf/incubator/bigtop/trunk/bigtop.mk
Where to go from here
It is highly recommended that you read the documentation provided by the Hadoop project itself (the documentation matching the Hadoop version shipped with your release, for Bigtop 0.3 or Bigtop 0.2), and that you browse through the Puppet deployment code that is shipped as part of the Bigtop release (bigtop-deploy/puppet/modules, bigtop-deploy/puppet/manifests).