h1. Introduction

Installing the Bigtop Hadoop distribution artifacts lets you have an up and running Hadoop cluster, complete with
various Hadoop ecosystem projects, in just a few minutes. Be it a single node pseudo-distributed
configuration or a fully distributed cluster, just make sure you install the packages, install the JDK,
format the namenode and have fun\! If Bigtop is not supported on your OS, you can install one of the supported 64-bit OSes
on a [virtual machine|https://cwiki.apache.org/confluence/display/BIGTOP/VM+installation]. There are known issues with 32-bit OSes.

h1. Getting the packages onto your box

h4. CentOS 5, CentOS 6, Fedora 15, RHEL5, RHEL6

# Make sure to grab the repo file:
{noformat}
wget -O /etc/yum.repos.d/bigtop.repo http://www.apache.org/dist/incubator/bigtop/stable/repos/[centos5|centos6|fedora]/bigtop.repo
{noformat}
# This step is optional, but recommended: enable the mirror that is closest to you (uncomment one and only one of the baseurl lines *and remove* the mirrorlist line). If the downloads are too slow, try another mirror
{noformat}
sudo vi /etc/yum.repos.d/bigtop.repo
{noformat}
# Browse through the artifacts
{noformat}
yum search hadoop
{noformat}
# Install the full Hadoop stack (or parts of it)
{noformat}
sudo yum install hadoop\* flume\* mahout\* oozie\* whirr\*
{noformat}

h4. SLES 11, OpenSUSE

# Make sure to grab the repo file:
{noformat}
wget -O bigtop.repo http://www.apache.org/dist/incubator/bigtop/stable/repos/suse/bigtop.repo
mv bigtop.repo /etc/zypp/repos.d/bigtop.repo
{noformat}
# Enable the mirror that is closest to you (uncomment one and only one of the baseurl lines). If the downloads are too slow, try another mirror
{noformat}
As root:  vi /etc/zypp/repos.d/bigtop.repo
{noformat}
# Browse through the artifacts
{noformat}
zypper search hadoop
{noformat}
# Install the full Hadoop stack (or parts of it)
{noformat}
sudo zypper install hadoop\* flume\* mahout\* oozie\* whirr\*
{noformat}

h4. Ubuntu

# Install the Apache Bigtop GPG key
{noformat}
wget -O- http://www.apache.org/dist/incubator/bigtop/bigtop-0.2.0-incubating/repos/GPG-KEY-bigtop | sudo apt-key add -
{noformat}
# Make sure to grab the repo file:
{noformat}
sudo wget -O /etc/apt/sources.list.d/bigtop.list http://www.apache.org/dist/incubator/bigtop/bigtop-0.2.0-incubating/repos/ubuntu/bigtop.list
{noformat}
# Enable the mirror that is closest to you (uncomment one and only one pair of deb/deb-src lines). If the downloads are too slow, try another mirror
{noformat}
sudo vi /etc/apt/sources.list.d/bigtop.list
{noformat}
# Update the apt cache
{noformat}
sudo apt-get update
{noformat}
# Browse through the artifacts
{noformat}
apt-cache search hadoop
{noformat}
# Make sure that you have the latest JDK installed on your system as well. You can either get it from the official Oracle website ([http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html]) or follow the advice given by your Linux distribution (a sketch of the distribution route appears after this list). If your JDK is installed in a non-standard location, make sure to add the line below to the /etc/default/hadoop file
{noformat}
export JAVA_HOME=XXXX
{noformat}
# Install the full Hadoop stack (or parts of it)
{noformat}
sudo apt-get install hadoop\* flume-* mahout\* oozie\* whirr-*
{noformat}
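If you follow the distribution route for the JDK, a minimal sketch on Ubuntu looks like the following; the package name and the resulting JAVA_HOME path are assumptions that vary by release, so adjust them to what your release actually ships:
{noformat}
# Install a distribution-packaged JDK (package name may differ on your release)
sudo apt-get install openjdk-6-jdk

# If it ends up in a non-standard location, point Hadoop at it
echo 'export JAVA_HOME=/usr/lib/jvm/java-6-openjdk' | sudo tee -a /etc/default/hadoop
{noformat}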

h1. Running Hadoop

After installing Hadoop packages onto your Linux box, make sure that:

# You have the latest JDK installed on your system as well. You can either get it from the official Oracle website ([http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html]) or follow the advice given by your Linux distribution (e.g. some Debian based Linux distributions have JDK packaged as part of their extended set of packages). If your JDK is installed in a non-standard location, make sure to add the line below to the /etc/default/hadoop file
{noformat}
export JAVA_HOME=XXXX
{noformat}
# Format the namenode
{noformat}
sudo -u hdfs hadoop namenode -format
{noformat}
# Start the necessary Hadoop services. E.g. for the pseudo distributed Hadoop installation you can simply do:
{noformat}
for i in hadoop-namenode hadoop-datanode hadoop-jobtracker hadoop-tasktracker ; do sudo service $i start ; done
{noformat}
# Once your basic cluster is up and running it is a good idea to create a home directory on the HDFS:
{noformat}
sudo -u hdfs hadoop fs -mkdir /user/$USER
sudo -u hdfs hadoop fs -chown $USER /user/$USER
{noformat}
# Enjoy your cluster
{noformat}
hadoop fs -lsr /
hadoop jar /usr/lib/hadoop/hadoop-examples.jar pi 10 1000
{noformat}
# If you are using Amazon AWS, it is important that the hostname in /etc/hostname matches the Private IP Address shown in the AWS Management Console (the default EC2 hostname encodes the private IP, e.g. ip-10-224-113-68). If the addresses do not match, MapReduce programs will not complete.
\\  !Screen Shot 2012-03-22 at 12.05.50 AM.png|border=1!\\
{noformat}
ubuntu@ip-10-224-113-68:~$ cat /etc/hostname
ip-10-224-113-68
{noformat}
# If the hostname in /etc/hostname does not match, open the hostname file in a text editor, change it, and reboot, as sketched below.
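The following is a minimal sketch of that fix, assuming the AWS console reports the private IP used in the example above; substitute the value for your own instance:
{noformat}
# Check the current hostname
cat /etc/hostname

# Set it to the value that matches the private IP from the AWS console, then reboot
echo "ip-10-224-113-68" | sudo tee /etc/hostname
sudo reboot
{noformat}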




h1. Running Hadoop Components

One of the advantages of Bigtop is the ease of installing the different Hadoop components without having to hunt for a specific component distribution and match it with a specific Hadoop version.


h1. Running Pig

# Install Pig
{noformat}
sudo apt-get install pig
{noformat}
# Create a tab-delimited file using a text editor and import it into HDFS (a sketch of this step appears after this list). Start the Pig shell and verify that a load and dump work. Make sure you have a space on both sides of the = sign. The using PigStorage('\t') clause tells Pig that the columns in the text file are delimited by tabs.
{noformat}
$ pig
grunt> A = load '/pigdata/PIGTESTA.txt' using PigStorage('\t');
grunt> dump A;
{noformat}
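A minimal sketch of creating and importing the tab-delimited file referenced above; the file contents are made up for illustration, and the HDFS directory is created as the hdfs user to match the earlier permission setup:
{noformat}
# Create a small tab-delimited sample file (printf emits real tab characters)
printf "1\tapple\n2\tbanana\n3\tcherry\n" > PIGTESTA.txt

# Create the HDFS directory, hand it to your user, and import the file
sudo -u hdfs hadoop fs -mkdir /pigdata
sudo -u hdfs hadoop fs -chown $USER /pigdata
hadoop fs -put PIGTESTA.txt /pigdata/PIGTESTA.txt
{noformat}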

h1. Running HBase

# Install HBase
{noformat}
sudo apt-get install hbase\*
{noformat}
# For bigtop-0.2.0, uncomment and set JAVA_HOME in /etc/hbase/conf/hbase-env.sh (a sketch of this edit appears at the end of this section)
# For bigtop-0.3.0 this shouldn't be necessary because JAVA_HOME is auto-detected
# Start the HBase master and open the HBase shell
{noformat}
sudo service hbase-master start
hbase shell
{noformat}
# Test the HBase shell by creating an HBase table named t2 with 3 column families f1, f2 and f3. Verify the table exists in HBase
{noformat}
hbase(main):001:0> create 't2','f1','f2','f3'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hbase/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
0 row(s) in 3.4390 seconds

hbase(main):002:0> list
TABLE
t2
2 row(s) in 0.0220 seconds

hbase(main):003:0>
{noformat}
You should see confirmation from HBase that the table t2 exists: the table name t2 should appear in the output of the list command.
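
For the bigtop-0.2.0 JAVA_HOME step above, here is a minimal sketch of the hbase-env.sh edit; the JDK path is only an assumption, so use the location of the JDK you actually installed:
{noformat}
# In /etc/hbase/conf/hbase-env.sh, uncomment the JAVA_HOME line and point it at your JDK, e.g.
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
{noformat}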


h1. Running Hive

# This is for bigtop-0.2.0, where hadoop-hive, hadoop-hive-server, and hadoop-hive-metastore are installed automatically because the Hive service names start with the word hadoop. For bigtop-0.3.0, the sudo apt-get install hadoop\* command won't install the Hive components because the Hive daemon names changed in Bigtop. For bigtop-0.3.0 you will have to do
{noformat}
sudo apt-get install hive hive-server hive-metastore
{noformat}
# Create the HDFS directories Hive needs.
The Hive post install scripts should create the /tmp and /user/hive/warehouse directories. If they don't exist, create them in HDFS. The post install scripts don't create these directories themselves because HDFS is not up and running during the deb file installation: JAVA_HOME is buried in hadoop-env.sh, so HDFS can't be started to allow these directories to be created.
{noformat}
hadoop fs -mkdir /tmp
hadoop fs -mkdir /user/hive/warehouse
hadoop fs -chmod g+x /tmp
hadoop fs -chmod g+x /user/hive/warehouse
{noformat}
# If the post install scripts didn't create the directories /var/run/hive and /var/lock/subsys, create them:
{noformat}
sudo mkdir /var/run/hive
sudo mkdir /var/lock/subsys
{noformat}
# Start the Hive server
{noformat}
sudo /etc/init.d/hadoop-hive-server start
{noformat}
# Create a table in Hive and verify it is there
{noformat}
ubuntu@ip-10-101-53-136:~$ hive
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Hive history file=/tmp/ubuntu/hive_job_log_ubuntu_201203202331_281981807.txt
hive> create table doh(id int);
OK
Time taken: 12.458 seconds
hive> show tables;
OK
doh
Time taken: 0.283 seconds
hive>
{noformat}
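As an optional end-to-end check you can load a little data into the test table and run a query that exercises MapReduce. This is only a sketch: the local file name and values are made up for illustration, and hive -e simply runs the quoted statement non-interactively:
{noformat}
# Create a tiny local data file, one integer per line
printf "1\n2\n3\n" > /tmp/doh.txt

# Load it into the doh table, then count the rows (the count runs as a MapReduce job)
hive -e "LOAD DATA LOCAL INPATH '/tmp/doh.txt' INTO TABLE doh;"
hive -e "SELECT COUNT(*) FROM doh;"
{noformat}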

h1. Running Mahout


# Set the bash environment variables HADOOP_HOME=/usr/lib/hadoop and HADOOP_CONF_DIR=$HADOOP_HOME/conf
{code}
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/conf
{code}
# Go to /usr/share/doc/mahout/examples/bin and decompress cluster-reuters.sh.gz (e.g. with gunzip)
# Modify the contents of cluster-reuters.sh: replace MAHOUT="../../bin/mahout" with MAHOUT="/usr/lib/mahout/bin/mahout"
# Make sure the Hadoop file system is running
# Running ./cluster-reuters.sh will display a menu selection
{panel}
ubuntu@ip-10-224-109-199:/usr/share/doc/mahout/examples/bin$ ./cluster-reuters.sh
{panel}
{panel}
Please select a number to choose the corresponding clustering algorithm
1. kmeans clustering
2. fuzzykmeans clustering
3. lda clustering
4. dirichlet clustering
5. minhash clustering
Enter your choice : 1
ok. You chose 1 and we'll use kmeans Clustering
creating work directory at /tmp/mahout-work-ubuntu
Downloading Reuters-21578
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed
100 7959k  100 7959k    0     0   346k      0  0:00:22  0:00:22 --:--:-\-  356k
Extracting...
AFTER WAITING 1/2 HR...
Inter-Cluster Density: 0.8080922658756075
Intra-Cluster Density: 0.6978329770855537
CDbw Inter-Cluster Density: 0.0
CDbw Intra-Cluster Density: 89.38857003754612
CDbw Separation: 303.4892272989769
12/03/29 03:42:56 INFO clustering.ClusterDumper: Wrote 19 clusters
12/03/29 03:42:56 INFO driver.MahoutDriver: Program took 261107 ms (Minutes: 4.351783333333334)
{panel}
# Run classify-20newsgroups.sh. First replace each occurrence of ../bin/mahout with /usr/lib/mahout/bin/mahout; there are several instances, so do a find and replace with your favorite editor.
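If you prefer the command line to an editor, a sed one-liner along these lines should do the same replacement, assuming the script references ../bin/mahout exactly as described above (sed -i edits the file in place, and sudo is needed because the examples live under /usr/share/doc):
{noformat}
cd /usr/share/doc/mahout/examples/bin
sudo sed -i 's#\.\./bin/mahout#/usr/lib/mahout/bin/mahout#g' classify-20newsgroups.sh
{noformat}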


h1. Running Whirr

# Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in .bashrc according to the values under your AWS account (a sketch appears at the end of this section). Verify with echo $AWS_ACCESS_KEY_ID that the value is set before proceeding.
# Run the hadoop-ec2 recipe as below.
{panel}
\~/whirr-0.7.1:bin/whirr launch-cluster  --config recipes/hadoop-ec2.properties
{panel}
# If you get an error message like:
{panel}
Unable to start the cluster. Terminating all nodes.
org.apache.whirr.net.DnsException: java.net.ConnectException: Connection refused
	at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:83)
	at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:40)
	at org.apache.whirr.Cluster$Instance.getPublicHostName(Cluster.java:112)
	at org.apache.whirr.Cluster$Instance.getPublicAddress(Cluster.java:94)
	at org.apache.whirr.service.hadoop.HadoopNameNodeClusterActionHandler.doBeforeConfigure(HadoopNameNodeClusterActionHandler.java:58)
	at org.apache.whirr.service.hadoop.HadoopClusterActionHandler.beforeConfigure(HadoopClusterActionHandler.java:87)
	at org.apache.whirr.service.ClusterActionHandlerSupport.beforeAction(ClusterActionHandlerSupport.java:53)
	at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:100)
	at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:109)
	at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:63)
	at org.apache.whirr.cli.Main.run(Main.java:64)
	at org.apache.whirr.cli.Main.main(Main.java:97)

{panel}
apply Whirr patch 459: https://issues.apache.org/jira/browse/WHIRR-459
# When Whirr has finished launching the cluster, you will see an entry under ~/.whirr that you can use to verify the cluster is running
# cat out the hadoop-proxy.sh command to find the EC2 instance address, or cat out the instance file. Either will give you the Hadoop namenode address, even though you started the mahout service using Whirr.
# ssh into the instance to verify you can log in. Note: this login is different from a normal EC2 instance login. The ssh key is id_rsa and there is no user name for the instance address, e.g. from ~/.whirr/mahout: ssh -i ~/.ssh/id_rsa ec2-50-16-85-59.compute-1.amazonaws.com
# Verify you can access the HDFS file system from the instance
{noformat}
dc@ip-10-70-18-203:~$ hadoop fs -ls /
Found 3 items
drwxr-xr-x   - hadoop supergroup          0 2012-03-30 23:44 /hadoop
drwxrwxrwx   - hadoop supergroup          0 2012-03-30 23:44 /tmp
drwxrwxrwx   - hadoop supergroup          0 2012-03-30 23:44 /user
{noformat}
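A minimal sketch of the ~/.bashrc additions from the first step of this section; the values shown are placeholders, not real credentials:
{noformat}
# Append to ~/.bashrc, then start a new shell or run: source ~/.bashrc
export AWS_ACCESS_KEY_ID=<your access key id>
export AWS_SECRET_ACCESS_KEY=<your secret access key>
{noformat}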

h1. Running Oozie
# Stop the Oozie daemons: use ps -ef | grep oozie to find them, then sudo kill <pid> (the pid from the ps -ef output)
# Stopping the Oozie daemons may not remove the oozie.pid file, which tells the system an Oozie process is running. You may have to remove the pid file manually using sudo rm -f /var/run/oozie/oozie.pid
# cd into /usr/lib/oozie and set up the Oozie environment variables by sourcing bin/oozie-env.sh
# Download ext-2.2.zip as described at http://incubator.apache.org/oozie/QuickStart.html
# Install ext-2.2.zip using
{noformat}
bin/oozie-setup.sh -hadoop 1.0.1 ${HADOOP_HOME} -extjs ext-2.2.zip
{noformat}
# If you get an error message, change the above to the highest Hadoop version available, e.g. sudo bin/oozie-setup.sh -hadoop 0.20.200 ${HADOOP_HOME} -extjs ext-2.2.zip
# Start Oozie: sudo bin/oozie-start.sh
# Run Oozie: sudo bin/oozie-run.sh. You will get a lot of error messages; this is OK.
# Go to port 11000 of the public EC2 DNS address; my address looked like: http://ec2-67-202-18-159.compute-1.amazonaws.com:11000/oozie/
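Once Oozie is up, a quick sanity check from the command line is the Oozie admin status call. This assumes the oozie client command is installed and the server is listening on the default port 11000:
{noformat}
oozie admin -oozie http://localhost:11000/oozie -status
# A healthy server reports: System mode: NORMAL
{noformat}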


h1. Running Zookeeper
h1. Running Sqoop
h1. Running Flume/FlumeNG


h1. Where to go from here

It is highly recommended that you read the documentation provided by the Hadoop project itself ([http://hadoop.apache.org/common/docs/r0.20.205.0/] for Bigtop 0.2 or [https://hadoop.apache.org/common/docs/r1.0.0/] for Bigtop 0.3) and that you browse through the Puppet deployment code that is shipped as part of the Bigtop release (bigtop-deploy/puppet/modules, bigtop-deploy/puppet/manifests).