Introduction

Installing Bigtop Hadoop distribution artifacts lets you have an up and running Hadoop cluster complete with
various Hadoop ecosystem projects in just a few minutes. Be it a single node pseudo-distributed
configuration, or a fully distributed cluster, just make sure you install the packages, install the JDK,
format the namenode and have fun! If Bigtop is not supported on your OS, you can install one of the supported 64-bit OSes
on a virtual machine. There are known issues with 32-bit OSes.

Getting the packages onto your box

CentOS 5, CentOS 6, Fedora 15, RHEL5, RHEL6

  1. Make sure to grab the repo file:
    No Format
    
    wget -O /etc/yum.repos.d/bigtop.repo http://www.apache.org/dist/incubator/bigtop/stable/repos/[centos5|centos6|fedora]/bigtop.repo
    
  2. This step is optional, but recommended: enable the mirror that is closest to you (uncomment one and only one of the baseurl lines and remove the mirrorlist line). If the downloads are too slow, try another mirror (a sketch of this edit follows this list).
    No Format
    
    sudo vi /etc/yum.repos.d/bigtop.repo
    
  3. Browse through the artifacts
    No Format
    
    yum search hadoop
    
  4. Install the full Hadoop stack (or parts of it)
    No Format
    
    sudo yum install hadoop\* flume\* mahout\* oozie\* whirr\*
    

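For illustration, step 2 can also be scripted. This is a minimal sketch, assuming the repo file has the layout the step describes (a mirrorlist line plus commented-out baseurl lines) and that GNU sed is available; it removes the mirrorlist line and uncomments the first baseurl entry it finds. Review the file afterwards and pick a different mirror if that one turns out to be slow.
    No Format
    
    # Minimal sketch of the step-2 edit (GNU sed assumed): drop the mirrorlist line
    # and uncomment the first baseurl line. Check the result before running yum.
    sudo sed -i -e '/^mirrorlist=/d' -e '0,/^#baseurl=/s|^#baseurl=|baseurl=|' /etc/yum.repos.d/bigtop.repo
    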
SLES 11, OpenSUSE

  1. Make sure to grab the repo file:
    No Format
    
    wget http://www.apache.org/dist/incubator/bigtop/stable/repos/suse/bigtop.repo
    mv bigtop.repo /etc/zypp/repos.d/bigtop.repo
    
  2. Enable the mirror that is closest to you (uncomment one and only one of the baseurl lines). If the downloads are too slow, try another mirror
    No Format
    
    As root:  vi /etc/zypp/repos.d/bigtop.repo
    
  3. Browse through the artifacts
    No Format
    
    zypper search hadoop
    
  4. Install the full Hadoop stack (or parts of it)
    No Format
    
    sudo zypper install hadoop\* flume\* mahout\* oozie\* whirr\*
    

Ubuntu

  1. Install the Apache Bigtop GPG key
    No Format
    
    wget -O- http://www.apache.org/dist/incubator/bigtop/bigtop-0.2.0-incubating/repos/GPG-KEY-bigtop | sudo apt-key add -
    

  2. Make sure to grab the repo file:
    No Format
    
    sudo wget -O /etc/apt/sources.list.d/bigtop.list http://www.apache.org/dist/incubator/bigtop/bigtop-0.2.0-incubating/repos/ubuntu/bigtop.list
    
  3. Enable the mirror that is closest to you (uncomment one and only one pair of deb/deb-src lines). If the downloads are too slow, try another mirror
    No Format
    
    sudo vi /etc/apt/sources.list.d/bigtop.list
    
  4. Update the apt cache
    No Format
    
    sudo apt-get update
    
  5. Browse through the artifacts
    No Format
    
    apt-cache search hadoop
    
  6. Make sure that you have the latest JDK installed on your system as well. You can either get it from the official Oracle website (http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html) or follow the advice given by your Linux distribution. If your JDK is installed in a non-standard location, make sure to add the line below to the /etc/default/hadoop file (an example follows this list)
    No Format
    
    export JAVA_HOME=XXXX
    
  7. Install the full Hadoop stack (or parts of it)
    No Format
    
    sudo apt-get install hadoop\* flume-* mahout\* oozie\* whirr-*
    

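As an illustration of step 6, the path below is only an example of a common OpenJDK location on Ubuntu; point JAVA_HOME at wherever your JDK actually lives.
    No Format
    
    # Example only -- adjust the path to your actual JDK installation.
    echo 'export JAVA_HOME=/usr/lib/jvm/java-6-openjdk' | sudo tee -a /etc/default/hadoop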

Running Hadoop

After installing Hadoop packages onto your Linux box, make sure that:

  1. You have the latest JDK installed on your system as well. You can either get it from the official Oracle website (http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html) or follow the advice given by your Linux distribution (e.g. some Debian based Linux distributions have the JDK packaged as part of their extended set of packages). If your JDK is installed in a non-standard location, make sure to add the line below to the /etc/default/hadoop file
    No Format
    
    export JAVA_HOME=XXXX
    
  2. Format the namenode
    No Format
    
    sudo -u hdfs hadoop namenode -format
    
  3. Start the necessary Hadoop services. E.g. for a pseudo distributed Hadoop installation you can simply do:
    No Format
    
    for i in hadoop-namenode hadoop-datanode hadoop-jobtracker hadoop-tasktracker ; do sudo service $i start ; done
    
  4. Once your basic cluster is up and running it is a good idea to create a home directory on the HDFS:
    No Format
    
    sudo -u hdfs hadoop fs -mkdir /user/$USER
    sudo -u hdfs hadoop fs -chown $USER /user/$USER
    
  5. Enjoy your cluster
    No Format
    
    hadoop fs -lsr /
    hadoop jar /usr/lib/hadoop/hadoop-examples.jar pi 10 1000
    
  6. If you are using Amazon AWS it is important that the IP address in /etc/hostname matches the Private IP Address shown in the AWS Management Console. If the addresses do not match, MapReduce programs will not complete.
    No Format
    
    ubuntu@ip-10-224-113-68:~$ cat /etc/hostname
    ip-10-224-113-68
    
  7. If the IP address in /etc/hostname does not match, open the hostname file in a text editor, change it, and reboot (example commands follow this list).

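A minimal sketch of the step-7 fix. The hostname shown is the one from the example above; substitute the name that matches your instance's Private IP in the AWS Management Console.
    No Format
    
    # Example only -- use the name matching your instance's Private IP from the AWS console.
    sudo sh -c 'echo ip-10-224-113-68 > /etc/hostname'
    sudo reboot
    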
Running Hadoop Components

One of the advantages of Bigtop is the ease of installation of the different Hadoop Components without having to hunt for a specific Hadoop Component distribution and matching it with a specific Hadoop version.

Running Pig

  1. Install Pig
    No Format
    
    sudo apt-get install pig
    

  2. Create a tab delimited file using a text editor and import it into HDFS (a sketch of this appears after this list). Start the pig shell and verify a load and dump work. Make sure you have a space on both sides of the = sign. The statement using PigStorage('\t') tells Pig the columns in the text file are delimited using tabs.
    No Format
    
    $pig
    grunt>A = load '/pigdata/PIGTESTA.txt' using PigStorage('\t');
    grunt>dump A
    

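For step 2, here is a minimal sketch of creating and importing such a file. The sample rows are made up; the PIGTESTA.txt name and the /pigdata directory simply match the load statement above.
    No Format
    
    # Sample tab-delimited rows (made-up data) matching the load statement above.
    printf '1\tapple\n2\tbanana\n3\tcherry\n' > PIGTESTA.txt
    hadoop fs -mkdir /pigdata
    hadoop fs -put PIGTESTA.txt /pigdata/PIGTESTA.txt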

Running HBase

  1. Install HBase
    No Format
    
    sudo apt-get install hbase\*
    

  2. For bigtop-0.2.0, uncomment and set JAVA_HOME in /etc/hbase/conf/hbase-env.sh (an example edit follows this list). For bigtop-0.3.0 this shouldn't be necessary because JAVA_HOME is auto detected. Then start the HBase master and open the HBase shell:
    No Format
    
    sudo service hbase-master start
    hbase shell
    
  3. Test the HBase shell by creating an HBase table named t2 with 3 columns f1, f2 and f3, and verify the table exists in HBase. You should see confirmation from HBase that the table exists: the table name t2 should appear in the output of the list command.
    No Format
    
    hbase(main):001:0> create 't2','f1','f2','f3'
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/usr/lib/hbase/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    0 row(s) in 3.4390 seconds
    
    hbase(main):002:0> list
    TABLE
    t2
    2 row(s) in 0.0220 seconds
    
    hbase(main):003:0>
    

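A sketch of the step-2 edit. The JDK path shown is just an example of a common location, not something prescribed by Bigtop; use wherever your JDK actually lives.
    No Format
    
    # In /etc/hbase/conf/hbase-env.sh, uncomment (or add) the JAVA_HOME line.
    # The path below is an example only.
    export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
    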
Running Hive

  1. This is for bigtop-0.2.0, where hadoop-hive, hadoop-hive-server, and hadoop-hive-metastore are installed automatically because the Hive services start with the word hadoop. For bigtop-0.3.0, if you use the sudo apt-get install hadoop\* command you won't get the Hive components installed because the Hive daemon names changed in Bigtop. For bigtop-0.3.0 you will have to do
    No Format
    
    sudo apt-get install hive hive-server hive-metastore
    
  2. Create the HDFS directories Hive needs. The Hive post install scripts should create the /tmp and /user/hive/warehouse directories. If they don't exist, create them in HDFS. The Hive post install script doesn't create these directories because HDFS is not up and running during the deb file installation (JAVA_HOME is buried in hadoop-env.sh, so HDFS can't start to allow these directories to be created).
    No Format
    
    hadoop fs -mkdir /tmp
    hadoop fs -mkdir /user/hive/warehouse
    hadoop fs -chmod g+x /tmp
    hadoop fs -chmod g+x /user/hive/warehouse
    
  3. If the post install scripts didn't create the directories /var/run/hive and /var/lock/subsys, create them:
    No Format
    
    sudo mkdir /var/run/hive
    sudo mkdir /var/lock/subsys
    
  4. Start the Hive server
    No Format
    
    sudo /etc/init.d/hadoop-hive-server start
    
  5. Create a table in Hive and verify it is there (a follow-up load/query sketch appears after this list)
    No Format
    
    ubuntu@ip-10-101-53-136:~$ hive
    WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
    Hive history file=/tmp/ubuntu/hive_job_log_ubuntu_201203202331_281981807.txt
    hive> create table doh(id int);
    OK
    Time taken: 12.458 seconds
    hive> show tables;
    OK
    doh
    Time taken: 0.283 seconds
    hive>
    

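As a follow-up to step 5, here is a minimal sketch of putting a few rows into the doh table and reading them back; the local file name and its contents are made up for illustration.
    No Format
    
    # Hypothetical follow-up: load a small local file into the doh table and query it.
    printf '1\n2\n3\n' > ids.txt
    hive -e "LOAD DATA LOCAL INPATH 'ids.txt' INTO TABLE doh; SELECT * FROM doh;"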

Running Mahout

  1. Set the bash environment variables HADOOP_HOME=/usr/lib/hadoop and HADOOP_CONF_DIR=$HADOOP_HOME/conf
    Code Block
    
    export HADOOP_HOME=/usr/lib/hadoop
    export HADOOP_CONF_DIR=$HADOOP_HOME/conf
    
  2. Go to /usr/share/doc/mahout/examples/bin and unzip cluster-reuters.sh.gz
  3. Modify the contents of cluster-reuters.sh: replace MAHOUT="../../bin/mahout" with MAHOUT="/usr/lib/mahout/bin/mahout" (a sed sketch follows this list). Make sure the Hadoop file system is running. ./cluster-reuters.sh will display a menu selection
    Panel
    
    ubuntu@ip-10-224-109-199:/usr/share/doc/mahout/examples/bin$ ./cluster-reuters.sh
    Please select a number to choose the corresponding clustering algorithm
    1. kmeans clustering
    2. fuzzykmeans clustering
    3. lda clustering
    4. dirichlet clustering
    5. minhash clustering
    Enter your choice : 1
    ok. You chose 1 and we'll use kmeans Clustering
    creating work directory at /tmp/mahout-work-ubuntu
    Downloading Reuters-21578
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 7959k  100 7959k    0     0   346k      0  0:00:22  0:00:22 --:--:--  356k
    Extracting...
    
    AFTER WAITING 1/2 HR...
    Inter-Cluster Density: 0.8080922658756075
    Intra-Cluster Density: 0.6978329770855537
    CDbw Inter-Cluster Density: 0.0
    CDbw Intra-Cluster Density: 89.38857003754612
    CDbw Separation: 303.4892272989769
    12/03/29 03:42:56 INFO clustering.ClusterDumper: Wrote 19 clusters
    12/03/29 03:42:56 INFO driver.MahoutDriver: Program took 261107 ms (Minutes: 4.351783333333334)
    
  4. Run classify-20newsgroups.sh. First change ../bin/mahout to /usr/lib/mahout/bin/mahout; do a find and replace using your favorite editor. There are several instances of ../bin/mahout which need to be replaced by /usr/lib/mahout/bin/mahout.

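One way to make the path substitutions from steps 3 and 4 without opening an editor; sed (GNU sed assumed) is just a convenience here, and the script names are the ones mentioned above.
    No Format
    
    # Replace the relative mahout paths with the installed location in both example scripts.
    cd /usr/share/doc/mahout/examples/bin
    sudo sed -i 's|\.\./\.\./bin/mahout|/usr/lib/mahout/bin/mahout|g; s|\.\./bin/mahout|/usr/lib/mahout/bin/mahout|g' cluster-reuters.sh classify-20newsgroups.sh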

Running Whirr

  1. Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in .bashrc according to the values under your AWS account (an example follows this list). Verify they are set using echo $AWS_ACCESS_KEY_ID before proceeding.
  2. Run the zookeeper recipe as below.
    Panel
    
    ~/whirr-0.7.1:bin/whirr launch-cluster --config recipes/hadoop-ec2.properties
    
  3. If you get an error message like:
    Panel
    
    Unable to start the cluster. Terminating all nodes.
    org.apache.whirr.net.DnsException: java.net.ConnectException: Connection refused
        at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:83)
        at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:40)
        at org.apache.whirr.Cluster$Instance.getPublicHostName(Cluster.java:112)
        at org.apache.whirr.Cluster$Instance.getPublicAddress(Cluster.java:94)
        at org.apache.whirr.service.hadoop.HadoopNameNodeClusterActionHandler.doBeforeConfigure(HadoopNameNodeClusterActionHandler.java:58)
        at org.apache.whirr.service.hadoop.HadoopClusterActionHandler.beforeConfigure(HadoopClusterActionHandler.java:87)
        at org.apache.whirr.service.ClusterActionHandlerSupport.beforeAction(ClusterActionHandlerSupport.java:53)
        at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:100)
        at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:109)
        at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:63)
        at org.apache.whirr.cli.Main.run(Main.java:64)
        at org.apache.whirr.cli.Main.main(Main.java:97)
    
    apply Whirr patch 459: https://issues.apache.org/jira/browse/WHIRR-459
  4. When whirr is finished launching the cluster, you will see an entry under ~/.whirr verifying the cluster is running. cat out the hadoop-proxy.sh command to find the EC2 instance address, or cat out the instance file; both will give you the Hadoop namenode address even though you started the mahout service using whirr. ssh into the instance to verify you can log in. Note: this login is different from a normal EC2 instance login: the ssh key is id_rsa and there is no user name in front of the instance address.
    Panel
    
    ~/.whirr/mahout:ssh -i ~/.ssh/id_rsa ec2-50-16-85-59.compute-1.amazonaws.com
    
  5. Verify you can access the HDFS file system from the instance
    No Format
    
    dc@ip-10-70-18-203:~$ hadoop fs -ls /
    Found 3 items
    drwxr-xr-x   - hadoop supergroup          0 2012-03-30 23:44 /hadoop
    drwxrwxrwx   - hadoop supergroup          0 2012-03-30 23:44 /tmp
    drwxrwxrwx   - hadoop supergroup          0 2012-03-30 23:44 /user
    

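A sketch of the step-1 additions to ~/.bashrc; the key values are obvious placeholders, so substitute the credentials from your own AWS account.
    No Format
    
    # Placeholders only: substitute the access key pair from your AWS account.
    export AWS_ACCESS_KEY_ID=AKIAxxxxxxxxxxxxxxxx
    export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    # Reload and verify before proceeding:
    source ~/.bashrc
    echo $AWS_ACCESS_KEY_ID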

Running Oozie

  1. Stop the Oozie daemons: use ps -ef | grep oozie to find them, then sudo kill the pid reported by ps (see the sketch after this list).
  2. Stopping the Oozie daemons may not remove the oozie.pid file, which tells the system an oozie process is running. You may have to manually remove the pid file using sudo rm -rf /var/run/oozie/oozie.pid
  3. cd into /usr/lib/oozie and set up the oozie environment variables using bin/oozie-env.sh
  4. Download the ext-2.2 library (ext-2.2.zip) as described at http://incubator.apache.org/oozie/QuickStart.html
  5. Install ext-2.2 using
    No Format
    
    bin/oozie-setup.sh -hadoop 1.0.1 ${HADOOP_HOME} -extjs ext-2.2.zip 
    
  6. If you get an error message, change the above to the highest Hadoop version available:
    No Format
    
    sudo bin/oozie-setup.sh -hadoop 0.20.200 ${HADOOP_HOME} -extjs ext-2.2.zip 
    
  7. Start oozie: sudo bin/oozie-start.sh
  8. Run oozie: sudo bin/oozie-run.sh. You will get a lot of error messages; this is ok.
  9. Go to the public EC2 DNS address on port 11000 under /oozie. My address looked like: http://ec2-67-202-18-159.compute-1.amazonaws.com:11000/oozie/

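A sketch of the stop-and-clean-up sequence from steps 1 and 2; the pid reported by ps will of course differ on your system, so the value below is a placeholder.
    No Format
    
    # Find the running Oozie processes, kill them, and clear the stale pid file if it remains.
    ps -ef | grep [o]ozie              # note the pid(s) reported
    sudo kill <pid>                    # <pid> is a placeholder for the pid from ps
    sudo rm -rf /var/run/oozie/oozie.pid
    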
Running Zookeeper

Running Sqoop

Running Flume/FlumeNG

Where to go from here

It is highly recommended that you read the documentation provided by the Hadoop project itself (http://hadoop.apache.org/common/docs/r0.20.205.0/ for Bigtop 0.2 or https://hadoop.apache.org/common/docs/r1.0.0/ for Bigtop 0.3) and that you browse through the Puppet deployment code that is shipped as part of the Bigtop release (bigtop-deploy/puppet/modules, bigtop-deploy/puppet/manifests).