The easiest way to start an Ozone cluster is to use the prebuilt Docker images uploaded to Docker Hub.
This method uses third-party Docker images from the flokkr project. Please note that these images are not provided by the Apache project (yet); see the flokkr project for the containers.
The only thing you need is a `docker-compose.yaml` file:
```yaml
version: "3"
services:
  namenode:
    image: flokkr/hadoop:ozone
    hostname: namenode
    ports:
      - 50070:50070
      - 9870:9870
    environment:
      ENSURE_NAMENODE_DIR: /data/namenode
    env_file:
      - ./docker-config
    command: ["/opt/hadoop/bin/hdfs","namenode"]
  datanode:
    image: flokkr/hadoop:ozone
    ports:
      - 9864
    env_file:
      - ./docker-config
    command: ["/opt/hadoop/bin/hdfs","datanode"]
  ksm:
    image: flokkr/hadoop:ozone
    ports:
      - 9874:9874
    env_file:
      - ./docker-config
    command: ["/opt/hadoop/bin/hdfs","ksm"]
  scm:
    image: flokkr/hadoop:ozone
    ports:
      - 9876:9876
    env_file:
      - ./docker-config
    command: ["/opt/hadoop/bin/hdfs","scm"]
```
And the configuration in the `docker-config` file:
```
CORE-SITE.XML_fs.defaultFS=hdfs://namenode:9000
OZONE-SITE.XML_ozone.ksm.address=ksm
OZONE-SITE.XML_ozone.scm.names=scm
OZONE-SITE.XML_ozone.enabled=True
OZONE-SITE.XML_ozone.scm.datanode.id=/data/datanode.id
OZONE-SITE.XML_ozone.scm.block.client.address=scm
OZONE-SITE.XML_ozone.container.metadata.dirs=/data/metadata
OZONE-SITE.XML_ozone.handler.type=distributed
OZONE-SITE.XML_ozone.scm.client.address=scm
HDFS-SITE.XML_dfs.namenode.rpc-address=namenode:9000
HDFS-SITE.XML_dfs.namenode.name.dir=/data/namenode
LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
```
Save both files to a new directory and start the containers with:
```bash
docker-compose up -d
```
You can check the status of the components:
```bash
docker-compose ps
```
You can check the output of the servers with:
```bash
docker-compose logs
```
As the web UI ports are forwarded to the host machine, you can check the web UIs:
* Storage Container Manager: http://localhost:9876/
* Key Space Manager: http://localhost:9874/
* Datanode: please check the ports with `docker ps`, as each datanode gets a different port from the ephemeral port range (to avoid port conflicts when running multiple datanodes)
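For example, you can ask Compose which host port is bound to a datanode's `9864` web UI port and build the URL from it. This is a sketch: the `0.0.0.0:32768` string below is an illustrative output, not a guaranteed value — the actual ephemeral port will differ on your machine.

```shell
# On a running cluster you would capture the real binding with:
#   mapped="$(docker-compose port datanode 9864)"
mapped="0.0.0.0:32768"       # illustrative output of the command above
host_port="${mapped##*:}"    # strip the leading address, keep only the port
echo "Datanode web UI: http://localhost:${host_port}/"
```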
You can start multiple datanodes with:
```bash
docker-compose scale datanode=3
```
You can test the commands from the OzoneShell page after opening a new shell in one of the containers:
```bash
docker-compose exec datanode bash
```
Notes:
* The containers can be configured with environment variables. The env definitions are simply moved out to an external file (`docker-config`) to avoid duplication.
* For a more detailed explanation of the configuration variables, see the OzoneConfiguration page.
* The flokkr base image contains a simple script that converts environment variables to files based on a naming convention. All of the environment variables will be converted to traditional Hadoop config XML files or log4j configuration files.
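As a sketch of that naming convention (an assumption about the mechanism, not the flokkr script itself): the part of the variable name before the first underscore names the target file, the rest is the property key, and everything after `=` is the value. One `docker-config` entry can then be split with plain shell parameter expansion:

```shell
# One entry from docker-config, in NAME=value form:
entry='OZONE-SITE.XML_ozone.enabled=True'

file_part="${entry%%_*}"    # OZONE-SITE.XML  (text before the first '_')
rest="${entry#*_}"          # ozone.enabled=True
key="${rest%%=*}"           # ozone.enabled
value="${rest#*=}"          # True
file="$(printf '%s' "$file_part" | tr '[:upper:]' '[:lower:]')"  # ozone-site.xml

# A converter script would append a <property> element to the target XML file:
printf '<property><name>%s</name><value>%s</value></property>\n' "$key" "$value"
echo "target file: $file"
```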