To bring the cluster up for the first time (disclaimer: independent execution of the Puppet recipes on the cluster's nodes will automatically create the HDFS structures and bring up the services if all dependencies are satisfied, e.g. configs are created, packages are installed, etc. If Puppet reports errors, you might need to perform the manual startup described below):

1) As root, run

Code Block
languagebash
# /etc/init.d/hadoop-hdfs-namenode init (omit unless you want to start with nothing in your HDFS)
# /etc/init.d/hadoop-hdfs-namenode start
# /etc/init.d/hadoop-hdfs-datanode start
# /usr/lib/hadoop/libexec/init-hdfs.sh (not needed after the first run)
# /etc/init.d/hadoop-yarn-resourcemanager start
# /etc/init.d/hadoop-yarn-proxyserver start
# /etc/init.d/hadoop-yarn-nodemanager start

on the master node. 
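The master-node sequence above can be sketched as a small script. This is only an illustration, not part of the documented procedure: the DRY_RUN guard, the STEPS record, and the stop-on-first-failure handling are additions so the order can be reviewed safely before running anything as root.

```shell
#!/bin/sh
# Sketch: first-time bring-up sequence on the master node, wrapping the
# init scripts listed above. With DRY_RUN=1 (the default here) the
# commands are only printed, never executed.
DRY_RUN="${DRY_RUN:-1}"
STEPS=""   # record of what was (or would be) run, in order

run() {
  STEPS="$STEPS $1"
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@" || { echo "FAILED: $*" >&2; exit 1; }   # stop on the first failure
  fi
}

# Order matters: HDFS must be up before init-hdfs.sh and the YARN daemons.
run /etc/init.d/hadoop-hdfs-namenode start
run /etc/init.d/hadoop-hdfs-datanode start
run /usr/lib/hadoop/libexec/init-hdfs.sh        # first run only
run /etc/init.d/hadoop-yarn-resourcemanager start
run /etc/init.d/hadoop-yarn-proxyserver start
run /etc/init.d/hadoop-yarn-nodemanager start
```

Run it with `DRY_RUN=0` once the printed order looks right.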

2) On each of the slave nodes, run

Code Block
languagebash
# /etc/init.d/hadoop-hdfs-datanode start
# /etc/init.d/hadoop-yarn-nodemanager start
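If you have many slave nodes, the two commands can be issued over ssh from the master. This is a sketch only: the hostnames in SLAVES and the root-ssh access are hypothetical assumptions, and with DRY_RUN=1 (the default here) nothing is executed remotely.

```shell
#!/bin/sh
# Sketch: start the slave-side daemons on every slave over ssh.
SLAVES="slave1.example.com slave2.example.com"   # hypothetical hostnames
DRY_RUN="${DRY_RUN:-1}"
N=0   # how many commands were issued (or would be)

for host in $SLAVES; do
  for cmd in "/etc/init.d/hadoop-hdfs-datanode start" \
             "/etc/init.d/hadoop-yarn-nodemanager start"; do
    N=$((N + 1))
    if [ "$DRY_RUN" = "1" ]; then
      echo "would run on $host: $cmd"
    else
      ssh "root@$host" "$cmd"
    fi
  done
done
```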


To bring the cluster down cleanly:

1) On each of the slave nodes, run

Code Block
languagebash
# /etc/init.d/hadoop-yarn-nodemanager stop
# /etc/init.d/hadoop-hdfs-datanode stop

2) On the master, run

Code Block
languagebash
# /etc/init.d/hadoop-yarn-nodemanager stop
# /etc/init.d/hadoop-yarn-proxyserver stop
# /etc/init.d/hadoop-yarn-resourcemanager stop
# /etc/init.d/hadoop-hdfs-datanode stop
# /etc/init.d/hadoop-hdfs-namenode stop
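The master-side shutdown reverses the startup order: YARN daemons first, then HDFS, with the namenode stopped last. A hedged sketch of that loop (the DRY_RUN guard and the warning on a non-clean stop are illustrative additions, not part of the documented procedure):

```shell
#!/bin/sh
# Sketch: clean shutdown on the master node. With DRY_RUN=1 (the default
# here) the commands are only printed, never executed.
DRY_RUN="${DRY_RUN:-1}"
STOPPED=""   # record of the services stopped, in order

stop_svc() {
  STOPPED="$STOPPED $1"
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: /etc/init.d/$1 stop"
  else
    "/etc/init.d/$1" stop || echo "warning: $1 did not stop cleanly" >&2
  fi
}

# Reverse of the startup order; the namenode goes down last.
for svc in hadoop-yarn-nodemanager hadoop-yarn-proxyserver \
           hadoop-yarn-resourcemanager hadoop-hdfs-datanode \
           hadoop-hdfs-namenode; do
  stop_svc "$svc"
done
```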