
...

  1. This is for bigtop-0.2.0, where hadoop-hive, hadoop-hive-server, and hadoop-hive-metastore are installed automatically because the Hive package names start with the word hadoop. In bigtop-0.3.0 the Hive daemon names changed, so the sudo apt-get install hadoop* command no longer pulls in the Hive components; for bigtop-0.3.0 you will have to install them explicitly:
    No Format
    sudo apt-get install hive hive-server hive-metastore
    
    Create the HDFS directories Hive needs
    The Hive post-install scripts should create the /tmp and /user/hive/warehouse directories. If they don't exist, create them in HDFS by hand. The post-install script cannot create them itself: during the deb installation HDFS is not yet running (JAVA_HOME is buried in hadoop-env.sh, so HDFS cannot start), and there is no file system in which to create the directories.
    No Format
    hadoop fs -mkdir /tmp
    hadoop fs -mkdir /user/hive/warehouse
    hadoop fs -chmod g+w /tmp
    hadoop fs -chmod g+w /user/hive/warehouse
    
  2. If the post-install scripts didn't create the directories /var/run/hive and /var/lock/subsys, create them:
    No Format
    sudo mkdir /var/run/hive
    sudo mkdir /var/lock/subsys
    
  3. Start the Hive server
    No Format
    sudo /etc/init.d/hadoop-hive-server start
    
  4. Create a table in Hive and verify it is there
    No Format
    ubuntu@ip-10-101-53-136:~$ hive
    WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
    Hive history file=/tmp/ubuntu/hive_job_log_ubuntu_201203202331_281981807.txt
    hive> create table doh(id int);
    OK
    Time taken: 12.458 seconds
    hive> show tables;
    OK
    doh
    Time taken: 0.283 seconds
    hive>
    

Running Mahout

...

  1. Set the bash environment variables HADOOP_HOME=/usr/lib/hadoop and HADOOP_CONF_DIR=$HADOOP_HOME/conf

...

  1. Go to /usr/share/doc/mahout/examples/bin and gunzip cluster-reuters.sh.gz
    Code Block
    
    export HADOOP_HOME=/usr/lib/hadoop
    export HADOOP_CONF_DIR=$HADOOP_HOME/conf
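    The step above only names the archive; a concrete sequence might look like the following (paths are the ones used in this guide, and sudo is assumed to be needed because /usr/share/doc is root-owned):

    ```shell
    # Unpack the clustering demo script shipped with the Bigtop mahout package.
    # Adjust the path if your package layout differs.
    cd /usr/share/doc/mahout/examples/bin
    sudo gunzip cluster-reuters.sh.gz      # produces cluster-reuters.sh
    sudo chmod +x cluster-reuters.sh       # ensure it is executable (harmless if gunzip already restored the mode)
    ```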
    

...

  1. Modify cluster-reuters.sh, replacing MAHOUT="../../bin/mahout" with MAHOUT="/usr/lib/mahout/bin/mahout"
  2. Make sure the Hadoop file system is running
  3. Run ./cluster-reuters.sh; it will display a menu selection
    ubuntu@ip-10-224-109-199:/usr/share/doc/mahout/examples/bin$ ./cluster-reuters.sh

    Please select a number to choose the corresponding clustering algorithm
    1. kmeans clustering
    2. fuzzykmeans clustering
    3. lda clustering
    4. dirichlet clustering
    5. minhash clustering
    Enter your choice : 1
    ok. You chose 1 and we'll use kmeans Clustering
    creating work directory at /tmp/mahout-work-ubuntu
    Downloading Reuters-21578
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 7959k  100 7959k    0     0   346k      0  0:00:22  0:00:22 --:--:--  356k
    Extracting...
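Steps 1 and 2 of the list above (patching the MAHOUT path and confirming HDFS is running) can also be done non-interactively. A sketch: the two MAHOUT strings are the ones quoted in this guide, and listing / is just one quick way to confirm HDFS answers:

```shell
# Replace the relative launcher path the demo ships with by the packaged
# mahout launcher (both strings are quoted in this guide).
sed -i 's|MAHOUT="\.\./\.\./bin/mahout"|MAHOUT="/usr/lib/mahout/bin/mahout"|' cluster-reuters.sh

# Quick liveness check: listing / only succeeds when HDFS is reachable.
hadoop fs -ls / >/dev/null && echo "HDFS is up"
```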

...

Running Whirr

Where to go from here

...