Building
To build Ozone and start a cluster, you can follow the Ozone Contributor Guide.
mvn clean install -DskipTests

Now you can check the SCM/OM web UI.
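If you only want a quick sanity check of the fresh build, the dist directory is the same one used by the docker-compose steps below (the version subcommand is an assumption about your build):

cd hadoop-ozone/dist/target/ozone-0.3.0-SNAPSHOT
bin/ozone version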
Running
With docker-compose
This is the easiest way to run the ozone s3 gateway:
cd hadoop-ozone/dist/target/ozone-0.3.0-SNAPSHOT/compose/ozones3
docker-compose up -d
You can check the standard OM/SCM web UI.
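A minimal sketch to verify the endpoints, assuming the compose file publishes the default ports on localhost (9876 for the SCM UI, 9874 for the OM UI; 9878 is the s3 gateway port used later on this page):

curl -I http://localhost:9876
curl -I http://localhost:9874
curl -I http://localhost:9878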
Run from IDE + docker cluster
This is a more advanced setup. It is only needed if you would like to debug something.
Note: This section is based on Linux experience. OSX usage could be different.
First of all, you need a running s3g cluster (see the previous section), but please stop the s3g gateway (docker-compose stop s3g). This daemon will be started from the IDE.
a.) You need a log4j.properties file, as this is usually handled by the IDE
b.) You need to configure Ozone
c.) You need proper DNS to access the cluster
log4j.properties:
Use a simple file:
log4j.rootLogger=INFO,stdout
log4j.threshold=ALL
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
And activate it with a VM option:
-Dlog4j.configuration=file:/home/elek/log4j.properties
b.) ozone-site.xml
You can create an ozone-site.xml and add it to the classpath, but usually it's enough to configure ozone.om.address:
-Dozone.om.address=ozoneManager
Note: This is a program argument, not a VM option.
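For reference, the classpath alternative mentioned above is just a minimal ozone-site.xml with the same setting:

<configuration>
  <property>
    <name>ozone.om.address</name>
    <value>ozoneManager</value>
  </property>
</configuration>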
c.) DNS
You need to access the datanode/ozoneManager with their docker names. You have multiple options. The first is to find the IP addresses of the containers (with docker inspect) and modify your hosts file.
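A sketch of the hosts file option (the container name is only an example; check docker ps for the real one):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ozones3_ozoneManager_1
# add the printed address to /etc/hosts, for example:
# 172.18.0.3  ozoneManager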
The second option is to use a local DNS resolver:
docker run -d --name dns -p 53:53 -p 53:53/udp --network ozones3_default andyshinn/dnsmasq:2.76 -k -d
Note 1: You need to change the DNS server by adding 127.0.0.1 to your resolv.conf.
Note 2: You need to adjust the value of the network argument. This should be the network of the docker-compose setup (usually the directory name + _default).
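Putting the two notes together, a sketch of the required adjustments (the resolv.conf line is the usual local-resolver setup):

# add as the first nameserver line in /etc/resolv.conf:
# nameserver 127.0.0.1
# list the docker networks to find the right --network value:
docker network ls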
Man in the middle proxy
To check the functionality of the existing AWS API you can use the aws cli or s3cmd.
To make it easier to check the original responses, I use mitmproxy.
Install mitmproxy as described on the product page.
Start it in proxy mode:
mitmproxy -p 1212
Set up the HTTP proxy in the command line:
export HTTP_PROXY=http://localhost:1212
export HTTPS_PROXY=http://localhost:1212
Now you can use the aws command with the custom endpoint:
aws s3 --endpoint-url http://localhost:9878/vol1 cp docker-config s3://bucket/qwe/dir1/dir1/file2
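When you are done, unset the variables, otherwise every later command in the same shell will keep going through the proxy:

unset HTTP_PROXY HTTPS_PROXY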
Testing
Testing with aws cli
The aws s3 cli can be used without any modification; just add the --endpoint-url parameter every time. For example:
aws s3api --endpoint-url http://localhost:9878 create-bucket --bucket bucket2
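A few more calls in the same pattern (bucket2 is the bucket created above; the local file name is just an example):

aws s3 --endpoint-url http://localhost:9878 cp README.md s3://bucket2/README.md
aws s3 --endpoint-url http://localhost:9878 ls s3://bucket2/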
Executing s3a unit tests
First of all, you need a running s3g cluster; see the previous section to start it. TL;DR:
cd hadoop-ozone/dist/target/ozone-0.3.0-SNAPSHOT/compose/ozones3
docker-compose up -d
Now you need to configure the unit tests.
Adjust your local credentials in auth-keys.xml (under hadoop-tools/hadoop-aws/src/test/resources):
<configuration>
  <property>
    <name>test.fs.s3a.name</name>
    <value>s3a://buckettest/</value>
  </property>
  <property>
    <name>fs.contract.test.fs.s3a</name>
    <value>${test.fs.s3a.name}</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <description>AWS access key ID. Omit for IAM role-based authentication.</description>
    <value>donotcommitthiskeytoscm</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <description>AWS secret key. Omit for IAM role-based authentication.</description>
    <value>donotcommitthiskeytoscm</value>
  </property>
  <property>
    <name>test.sts.endpoint</name>
    <description>Specific endpoint to use for STS requests.</description>
    <value>sts.amazonaws.com</value>
  </property>
  <property>
    <name>fs.s3a.endpoint</name>
    <value>localhost:9878</value>
  </property>
  <property>
    <name>fs.s3a.connection.ssl.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>fs.s3a.path.style.access</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.s3a.proxy.host</name>
    <value>localhost</value>
  </property>
  <property>
    <name>fs.s3a.proxy.port</name>
    <value>1212</value>
  </property>
</configuration>
Note: Delete the last two configuration parameters if you have no mitm proxy (or start mitmproxy -p 1212 to check the HTTP traffic).
After that, create the bucket defined in the auth-keys.xml, using exactly the same credentials defined there.
export AWS_ACCESS_KEY_ID=DONOTCOMMITTHISKEYTOSCM
export AWS_SECRET_ACCESS_KEY=DONOTCOMMITTHISKEYTOSCM
aws s3api --endpoint-url http://localhost:9878 create-bucket --bucket buckettest
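You can verify with the same credentials that the bucket exists:

aws s3api --endpoint-url http://localhost:9878 head-bucket --bucket buckettest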
Now you can execute the tests:
cd hadoop-tools/hadoop-aws
echo "ITestS3AContract*" > includes
mvn -Pparallel-tests failsafe:integration-test -Dfailsafe.includesFile=./includes
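To run a different subset, just change the includes file. For example, only the mkdir contract tests (ITestS3AContractMkdir is one of the existing contract test classes in hadoop-aws):

echo "ITestS3AContractMkdir*" > includes
mvn -Pparallel-tests failsafe:integration-test -Dfailsafe.includesFile=./includes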