...
```shell
su hdfs -c 'bin/ozone --config /etc/ozone/conf --daemon start datanode'
```
Hadoop Integration
Shut down the Hadoop cluster.
Edit hadoop-env.sh in $HADOOP_CONF_DIR to add the Ozone filesystem jar file to the Hadoop classpath:
```shell
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$OZONE_HOME/share/ozone/lib/hadoop-ozone-filesystem-lib-current-$OZONE_VERSION.jar
```
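To see what the resulting classpath looks like, the export can be expanded with sample values; the paths and version below are hypothetical placeholders, not values from this cluster.

```shell
# Hypothetical values, purely for illustration.
OZONE_HOME=/opt/ozone
OZONE_VERSION=1.4.0
HADOOP_CLASSPATH=/opt/hadoop/etc/hadoop
# Same export as above, now with concrete values substituted.
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$OZONE_HOME/share/ozone/lib/hadoop-ozone-filesystem-lib-current-$OZONE_VERSION.jar
echo "$HADOOP_CLASSPATH"
```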
Edit core-site.xml to include the Ozone configuration:
```xml
<property>
  <name>fs.o3fs.impl</name>
  <value>org.apache.hadoop.fs.ozone.OzoneFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.o3fs.impl</name>
  <value>org.apache.hadoop.fs.ozone.OzFs</value>
</property>
<property>
  <name>fs.defaultFS</name>
  <value>o3fs://bucket.volume</value>
  <final>true</final>
</property>
```
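The short o3fs://bucket.volume form works when clients can resolve the Ozone Manager from ozone-site.xml. The Ozone Manager address can also be embedded directly in the URI authority; the host name and port below are illustrative placeholders, not values from this cluster:

```xml
<property>
  <name>fs.defaultFS</name>
  <value>o3fs://bucket.volume.om-host.example.com:9862</value>
</property>
```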
Copy ozone-site.xml from $OZONE_CONF_DIR to $HADOOP_CONF_DIR
```shell
cp $OZONE_CONF_DIR/ozone-site.xml $HADOOP_CONF_DIR/ozone-site.xml
```
Update mapred-site.xml to include the Ozone filesystem jar file:
```xml
<property>
  <name>mapreduce.application.classpath</name>
  <value>$HADOOP_HOME/share/hadoop/mapreduce/*:$HADOOP_HOME/share/hadoop/mapreduce/lib/*:$OZONE_HOME/share/ozone/lib/hadoop-ozone-filesystem-lib-current-$OZONE_VERSION.jar</value>
</property>
```
Create volumes and buckets
The volume and bucket defined in core-site.xml will be used to store the HDFS data. Use the Ozone CLI to create the corresponding volume and bucket:
```shell
ozone sh volume create /volume
ozone sh bucket create /volume/bucket
```
These commands create a volume named volume and, inside it, a bucket named bucket.
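As a sketch of how o3fs paths map onto this layout: the URI authority names the bucket and volume (bucket first, matching the fs.defaultFS value above), and the path becomes the object key. The URI below is illustrative and ignores the optional Ozone Manager host/port form of the authority.

```shell
# Illustrative decomposition of an o3fs URI into bucket, volume, and key.
uri="o3fs://bucket.volume/user/hdfs/file1"
authority="${uri#o3fs://}"; authority="${authority%%/*}"   # bucket.volume
bucket="${authority%%.*}"                                   # bucket
volume="${authority#*.}"                                    # volume
key="${uri#o3fs://$authority/}"                             # user/hdfs/file1
echo "$bucket $volume $key"
```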
Start YARN Services
Once the volume and bucket have been created, YARN can be started and will write its data to the Ozone file system.
```shell
$HADOOP_HOME/sbin/start-yarn.sh
```
MapReduce and YARN workloads will now run against the Ozone file system, storing data in the /volume/bucket bucket.
Stop Services
Run the following command on any cluster host.
...