...
- Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in your ~/.bashrc to the values under your AWS account. Verify that the value is set correctly with echo $AWS_ACCESS_KEY_ID before proceeding.
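The credential setup above can be sketched as below; the key values shown are hypothetical placeholders, not real credentials.

```shell
# Append to ~/.bashrc (the values below are placeholders for illustration only).
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRKiCYEXAMPLEKEY"

# After running `source ~/.bashrc` in your shell, confirm the variable is set:
echo "$AWS_ACCESS_KEY_ID"
```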
- Run the Hadoop recipe as below:

  ~/whirr-0.7.1$ bin/whirr launch-cluster --config recipes/hadoop-ec2.properties
- If you get an error message like the one below, apply Whirr patch 459 (https://issues.apache.org/jira/browse/WHIRR-459):

  Unable to start the cluster. Terminating all nodes.
org.apache.whirr.net.DnsException: java.net.ConnectException: Connection refused
at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:83)
at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:40)
at org.apache.whirr.Cluster$Instance.getPublicHostName(Cluster.java:112)
at org.apache.whirr.Cluster$Instance.getPublicAddress(Cluster.java:94)
at org.apache.whirr.service.hadoop.HadoopNameNodeClusterActionHandler.doBeforeConfigure(HadoopNameNodeClusterActionHandler.java:58)
at org.apache.whirr.service.hadoop.HadoopClusterActionHandler.beforeConfigure(HadoopClusterActionHandler.java:87)
at org.apache.whirr.service.ClusterActionHandlerSupport.beforeAction(ClusterActionHandlerSupport.java:53)
at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:100)
at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:109)
at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:63)
at org.apache.whirr.cli.Main.run(Main.java:64)
at org.apache.whirr.cli.Main.main(Main.java:97)
- When Whirr has finished launching the cluster, you will see an entry under ~/.whirr that verifies the cluster is running.
- cat out the hadoop-proxy.sh script, or the instances file in the same directory, to find the EC2 instance address. Both give you the Hadoop namenode address, even though you started the mahout service using Whirr.
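As a rough illustration of pulling the address out of hadoop-proxy.sh, the sketch below fabricates a sample proxy script (the real one lives under ~/.whirr/&lt;cluster-name&gt;, and its exact contents vary by Whirr version) and greps the EC2 hostname out of it.

```shell
# Fabricate a sample hadoop-proxy.sh purely for illustration; in practice
# you would point this at the script Whirr wrote under ~/.whirr.
tmpdir=$(mktemp -d)
cat > "$tmpdir/hadoop-proxy.sh" <<'EOF'
ssh -i /home/dc/.ssh/id_rsa -D 6666 ec2-50-16-85-59.compute-1.amazonaws.com
EOF

# Extract the EC2 public hostname (the namenode address) from the script.
grep -o 'ec2-[^ ]*\.amazonaws\.com' "$tmpdir/hadoop-proxy.sh"
```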
- ssh into the instance to verify you can log in. Note: this login is different from a normal EC2 instance login: the ssh key is id_rsa and there is no user name in front of the instance address.

  ~/.whirr/mahout$ ssh -i ~/.ssh/id_rsa ec2-50-16-85-59.compute-1.amazonaws.com
- Verify you can access the HDFS file system from the instance:

  dc@ip-10-70-18-203:~$ hadoop fs -ls /
  Found 3 items
  drwxr-xr-x   - hadoop supergroup          0 2012-03-30 23:44 /hadoop
  drwxrwxrwx   - hadoop supergroup          0 2012-03-30 23:44 /tmp
  drwxrwxrwx   - hadoop supergroup          0 2012-03-30 23:44 /user
Running Oozie
Running Zookeeper
Running Sqoop
Running Flume/FlumeNG
Where to go from here
It is highly recommended that you read the documentation provided by the Hadoop project itself (http://hadoop.apache.org/common/docs/r0.20.205.0/ for Bigtop 0.2, or https://hadoop.apache.org/common/docs/r1.0.0/ for Bigtop 0.3), and that you browse through the Puppet deployment code shipped as part of the Bigtop release (bigtop-deploy/puppet/modules, bigtop-deploy/puppet/manifests).