The easiest way to start an Ozone cluster is to use the pre-built docker images published to Docker Hub.
The only thing you need is a `docker-compose.yaml` file:
```yaml
version: "3"
services:
   datanode:
      image: apache/hadoop-runner
      volumes:
         - ../../ozone:/opt/hadoop
      ports:
         - 9864
      command: ["/opt/hadoop/bin/ozone","datanode"]
      env_file:
         - ./docker-config
   ozoneManager:
      image: apache/hadoop-runner
      volumes:
         - ../../ozone:/opt/hadoop
      ports:
         - 9874:9874
      environment:
         ENSURE_OM_INITIALIZED: /data/metadata/ozoneManager/current/VERSION
      env_file:
         - ./docker-config
      command: ["/opt/hadoop/bin/ozone","om"]
   scm:
      image: apache/hadoop-runner
      volumes:
         - ../../ozone:/opt/hadoop
      ports:
         - 9876:9876
      env_file:
         - ./docker-config
      environment:
         ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
      command: ["/opt/hadoop/bin/ozone","scm"]
```
And the configuration in the `docker-config` file:
```
OZONE-SITE.XML_ozone.om.address=ozoneManager
OZONE-SITE.XML_ozone.scm.names=scm
OZONE-SITE.XML_ozone.enabled=True
OZONE-SITE.XML_ozone.scm.datanode.id=/data/datanode.id
OZONE-SITE.XML_ozone.scm.block.client.address=scm
OZONE-SITE.XML_ozone.metadata.dirs=/data/metadata
OZONE-SITE.XML_ozone.handler.type=distributed
OZONE-SITE.XML_ozone.scm.client.address=scm
OZONE-SITE.XML_ozone.replication=1
HDFS-SITE.XML_rpc.metrics.quantile.enable=true
HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300
LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
```
Save both files to a new directory and start the containers with:
```
docker-compose up -d
```
You can check the status of the components:
```
docker-compose ps
```
You can check the output of the servers with:
```
docker-compose logs
```
As the web UI ports are forwarded to the host machine, you can check the web UIs:
* Storage Container Manager: http://localhost:9876/
* Ozone Manager: http://localhost:9874/
* Datanode: check the ports with `docker ps`, as each datanode publishes its web UI on a different port from the ephemeral port range (to avoid port conflicts when multiple datanodes are running)
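Instead of scanning the `docker ps` output manually, docker-compose can print the mapping directly (a sketch; 9864 is the datanode web UI container port from the compose file above):

```shell
# Print the host port mapped to the datanode's web UI port 9864.
# When several datanodes are running, select one with --index, e.g. --index=2.
docker-compose port datanode 9864
```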
You can start multiple datanodes with:
```
docker-compose scale datanode=3
```
You can test the commands from the OzoneShell page after opening a new shell in one of the containers:
```
docker-compose exec datanode bash
```
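From inside the container you can then try a few basic operations, for example (a sketch only; the exact subcommand syntax depends on your Ozone version, and the file path used for the key is illustrative):

```shell
# Create a volume, a bucket inside it, and upload a local file as a key
# (subcommand syntax assumed from recent Ozone releases).
ozone sh volume create /volume1
ozone sh bucket create /volume1/bucket1
ozone sh key put /volume1/bucket1/key1 /etc/hosts
```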
Please note that:
* The containers can be configured via environment variables. We moved the environment definitions out to an external file only to avoid duplication.
* For a more detailed explanation of the configuration variables, see the OzoneConfiguration page.
* The flokkr base image contains a simple script that converts environment variables to files based on a naming convention: all of the environment variables are converted to traditional Hadoop XML configuration files or log4j configuration files.
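The naming convention can be sketched in plain shell (this is an illustrative helper, not the actual flokkr script): everything before the first underscore names the target file, everything after it is the configuration key.

```shell
#!/bin/sh
# Illustrative sketch of the env-var naming convention:
# "PREFIX.XML_key=value" becomes a <property> entry destined for prefix.xml.
env_to_property() {
  entry="$1"                 # e.g. OZONE-SITE.XML_ozone.enabled=True
  name="${entry%%=*}"        # OZONE-SITE.XML_ozone.enabled
  value="${entry#*=}"        # True
  key="${name#*_}"           # ozone.enabled (strip the file-name prefix)
  printf '<property><name>%s</name><value>%s</value></property>\n' \
    "$key" "$value"
}

env_to_property "OZONE-SITE.XML_ozone.enabled=True"
# prints: <property><name>ozone.enabled</name><value>True</value></property>
```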