h1. Running Hadoop Components

One of the advantages of Bigtop is the ease of installing the different Hadoop components without having to hunt down a specific component distribution and match it to a specific Hadoop version.


h1. Running Pig

# Install Pig
{noformat}
sudo apt-get install pig
{noformat}
# Create a tab-delimited file using a text editor and import it into HDFS (one way to stage the file is sketched after the shell session below). Start the Pig shell and verify that a load and a dump work. Make sure you have a space on both sides of the = sign. The using PigStorage('\t') clause tells Pig that the columns in the text file are delimited by tabs.
{noformat}
$ pig
grunt> A = load '/pigdata/PIGTESTA.txt' using PigStorage('\t');
grunt> dump A;
{noformat}
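If you need a concrete starting point, the following sketch creates a small tab-delimited sample file and copies it to the HDFS path used above; the file contents are arbitrary placeholders, and only the path /pigdata/PIGTESTA.txt matters for the load statement.
{noformat}
# create a small tab-delimited sample file (placeholder contents)
printf "1\tapple\n2\tbanana\n3\tcherry\n" > PIGTESTA.txt
# create the HDFS directory and copy the file into it
hadoop fs -mkdir /pigdata
hadoop fs -put PIGTESTA.txt /pigdata/PIGTESTA.txt
{noformat}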

h1. Running HBase

# Install HBase
{noformat}
sudo apt-get install hbase\*
{noformat}
# For bigtop-0.2.0, uncomment and set JAVA_HOME in /etc/hbase/conf/hbase-env.sh. For bigtop-0.3.0 this shouldn't be necessary because JAVA_HOME is auto-detected.
# Start the HBase master and open the HBase shell
{noformat}
sudo service hbase-master start
hbase shell
{noformat}
# Test the HBase shell by creating an HBase table named t2 with 3 column families f1, f2 and f3, then verify the table exists in HBase
{noformat}
hbase(main):001:0> create 't2','f1','f2','f3'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hbase/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
0 row(s) in 3.4390 seconds

hbase(main):002:0> list
TABLE
t2
2 row(s) in 0.0220 seconds

hbase(main):003:0>
{noformat}
You should see confirmation from HBase that the table t2 exists: the table name t2 should appear in the output of the list command.
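As an additional sanity check, you can insert a cell into the new table and scan it back from the same shell session; the row key, column, and value below are arbitrary examples.
{noformat}
hbase(main):004:0> put 't2', 'row1', 'f1:c1', 'value1'
hbase(main):005:0> scan 't2'
{noformat}
The scan should return the single row you just inserted.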


h1. Running Hive

# For bigtop-0.2.0, hadoop-hive, hadoop-hive-server, and hadoop-hive-metastore are installed automatically by sudo apt-get install hadoop\* because the Hive package names start with the word hadoop. For bigtop-0.3.0 that command won't install the Hive components, because the Hive daemon names changed in Bigtop. For bigtop-0.3.0 you will have to run
{noformat}
sudo apt-get install hive hive-server hive-metastore
{noformat}
# Create the HDFS directories Hive needs.
The Hive post-install scripts should create the /tmp and /user/hive/warehouse directories. If they don't exist, create them in HDFS. The post-install scripts can't create these directories during the deb file installation because JAVA_HOME is buried in hadoop-env.sh, so HDFS isn't running at that point and the directories can't be created.
{noformat}
hadoop fs -mkdir /tmp
hadoop fs -mkdir /user/hive/warehouse
hadoop fs -chmod g+w /tmp
hadoop fs -chmod g+w /user/hive/warehouse
{noformat}
# If the post-install scripts didn't create the directories /var/run/hive and /var/lock/subsys, create them
{noformat}
sudo mkdir /var/run/hive
sudo mkdir /var/lock/subsys
{noformat}
# Start the Hive server
{noformat}
sudo /etc/init.d/hadoop-hive-server start
{noformat}
# Create a table in Hive and verify it is there
{noformat}
ubuntu@ip-10-101-53-136:~$ hive
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Hive history file=/tmp/ubuntu/hive_job_log_ubuntu_201203202331_281981807.txt
hive> create table doh(id int);
OK
Time taken: 12.458 seconds
hive> show tables;
OK
doh
Time taken: 0.283 seconds
hive>
{noformat}
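To go one step further, you can load a small local file into the table and query it back. The /tmp/ids.txt path below is just an example; it is assumed to contain one integer per line.
{noformat}
hive> LOAD DATA LOCAL INPATH '/tmp/ids.txt' INTO TABLE doh;
hive> SELECT * FROM doh;
{noformat}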

h1. Running Mahout


# Set the bash environment variables HADOOP_HOME=/usr/lib/hadoop and HADOOP_CONF_DIR=$HADOOP_HOME/conf
{code}
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/conf
{code}
# Go to /usr/share/doc/mahout/examples/bin and gunzip cluster-reuters.sh.gz
# Modify cluster-reuters.sh: replace MAHOUT="../../bin/mahout" with MAHOUT="/usr/lib/mahout/bin/mahout"
# Make sure the Hadoop file system is running
# Run ./cluster-reuters.sh; it will display a menu selection
{panel}
ubuntu@ip-10-224-109-199:/usr/share/doc/mahout/examples/bin$ ./cluster-reuters.sh
Please select a number to choose the corresponding clustering algorithm
1. kmeans clustering
2. fuzzykmeans clustering
3. lda clustering
4. dirichlet clustering
5. minhash clustering
Enter your choice : 1
ok. You chose 1 and we'll use kmeans Clustering
creating work directory at /tmp/mahout-work-ubuntu
Downloading Reuters-21578
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed
100 7959k  100 7959k    0     0   346k      0  0:00:22  0:00:22 --:--:--  356k
Extracting...
AFTER WAITING 1/2 HR...
Inter-Cluster Density: 0.8080922658756075
Intra-Cluster Density: 0.6978329770855537
CDbw Inter-Cluster Density: 0.0
CDbw Intra-Cluster Density: 89.38857003754612
CDbw Separation: 303.4892272989769
12/03/29 03:42:56 INFO clustering.ClusterDumper: Wrote 19 clusters
12/03/29 03:42:56 INFO driver.MahoutDriver: Program took 261107 ms (Minutes: 4.351783333333334)
{panel}
# Run classify-20newsgroups.sh. First replace ../bin/mahout with /usr/lib/mahout/bin/mahout; there are several instances, so do a find and replace with your favorite editor (or use the one-liner sketched after this list).
# Run the rest of the examples under this directory, except the Netflix example, because that data set is no longer officially available.
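One possible way to do the replacement from the command line is sketched below; it assumes you are in /usr/share/doc/mahout/examples/bin and may need sudo to edit the file in place. Adjust the relative path in the pattern to match the script you are editing (classify-20newsgroups.sh uses ../bin/mahout, cluster-reuters.sh uses ../../bin/mahout).
{noformat}
sudo sed -i 's|\.\./bin/mahout|/usr/lib/mahout/bin/mahout|g' classify-20newsgroups.sh
{noformat}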


h1. Running Whirr

# Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in .bashrc according to the values under your AWS account. Verify with echo $AWS_ACCESS_KEY_ID that the value is set before proceeding.
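The .bashrc entries look something like the following; the values are placeholders for your own credentials.
{noformat}
export AWS_ACCESS_KEY_ID=<your access key id>
export AWS_SECRET_ACCESS_KEY=<your secret access key>
{noformat}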
# Run the hadoop-ec2 recipe as below.
{panel}
\~/whirr-0.7.1:bin/whirr launch-cluster  --config recipes/hadoop-ec2.properties
{panel}
# if you get an error message like:
{panel}
Unable to start the cluster. Terminating all nodes.
org.apache.whirr.net.DnsException: java.net.ConnectException: Connection refused
at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:83)
at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:40)
at org.apache.whirr.Cluster$Instance.getPublicHostName(Cluster.java:112)
at org.apache.whirr.Cluster$Instance.getPublicAddress(Cluster.java:94)
at org.apache.whirr.service.hadoop.HadoopNameNodeClusterActionHandler.doBeforeConfigure(HadoopNameNodeClusterActionHandler.java:58)
at org.apache.whirr.service.hadoop.HadoopClusterActionHandler.beforeConfigure(HadoopClusterActionHandler.java:87)
at org.apache.whirr.service.ClusterActionHandlerSupport.beforeAction(ClusterActionHandlerSupport.java:53)
at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:100)
at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:109)
at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:63)
at org.apache.whirr.cli.Main.run(Main.java:64)
at org.apache.whirr.cli.Main.main(Main.java:97)

{panel}
apply Whirr patch 459: [https://issues.apache.org/jira/browse/WHIRR-459]
# When Whirr finishes launching the cluster, you will see an entry under \~/.whirr that verifies the cluster is running
# cat out hadoop-proxy.sh to find the EC2 instance address, or cat out the instances file. Both will give you the Hadoop namenode address even though you launched the cluster using Whirr.
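For example (assuming the cluster directory under \~/.whirr is named mahout, as in the next step):
{noformat}
ls ~/.whirr
cat ~/.whirr/mahout/hadoop-proxy.sh
cat ~/.whirr/mahout/instances
{noformat}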
# ssh into the instance to verify you can log in. Note: this login is different from a normal EC2 instance login; the ssh key is id_rsa and there is no user name in front of the instance address. From \~/.whirr/mahout: ssh \-i \~/.ssh/id_rsa ec2-50-16-85-59.compute-1.amazonaws.com
# Verify you can access the HDFS file system from the instance
{noformat}
dc@ip-10-70-18-203:~$ hadoop fs -ls /
Found 3 items
drwxr-xr-x   - hadoop supergroup          0 2012-03-30 23:44 /hadoop
drwxrwxrwx   - hadoop supergroup          0 2012-03-30 23:44 /tmp
drwxrwxrwx   - hadoop supergroup          0 2012-03-30 23:44 /user
{noformat}

h1. Running Oozie

# Stop the Oozie daemons: use ps \-ef \| grep oozie to find them, then sudo kill pid (the pid from the ps \-ef output) for each one
# Stopping the Oozie daemons may not remove the oozie.pid file, which tells the system an Oozie process is running. You may have to remove the pid file manually using sudo rm \-rf /var/run/oozie/oozie.pid
# cd into /usr/lib/oozie and set up the Oozie environment variables using bin/oozie-env.sh
# Download ext-2.2.zip from [http://incubator.apache.org/oozie/QuickStart.html]
# Install ext-2.2.zip using
{noformat}
bin/oozie-setup.sh -hadoop 1.0.1 ${HADOOP_HOME} -extjs ext-2.2.zip
{noformat}
# You will get an error message; change the command above to use the highest Hadoop version available:
{noformat}
sudo bin/oozie-setup.sh -hadoop 0.20.200 ${HADOOP_HOME} -extjs ext-2.2.zip
{noformat}
# Start Oozie: sudo bin/oozie-start.sh
# Run Oozie: sudo bin/oozie-run.sh. You will get a lot of error messages; this is OK.
# Go to the EC2 instance's public DNS address on port 11000 under /oozie; my address looked like: [http://ec2-67-202-18-159.compute-1.amazonaws.com:11000/oozie/]

{div: style=margin:0px 0px 0px 40px}
!https://cwiki.apache.org/confluence/download/attachments/27831258/Screen+Shot+2012-03-31+at+1.19.56+AM.png|border=1!
{div}
# Go to the Apache Oozie page and run the Oozie examples
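You can also check the server status from the command line with the Oozie client; this assumes the server is reachable at the default local URL (substitute the EC2 address above if you are connecting remotely).
{noformat}
oozie admin -oozie http://localhost:11000/oozie -status
{noformat}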

h1. Running Zookeeper

ZooKeeper is installed as part of HBase. A quick way to verify the ZooKeeper server is responding is the echo check shown below.
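This uses ZooKeeper's four-letter-word commands over the client port, assuming ZooKeeper is listening on the default port 2181 on localhost; the server should answer imok.
{noformat}
echo ruok | nc localhost 2181
{noformat}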

h1. Running Sqoop


h1. Running Flume/FlumeNG