Ozone is a subproject of Apache Hadoop; as a general rule we follow all the practices defined by the Hadoop project. But there are some subproject-specific practices, which are summarized on this page.
Development environment
To build Ozone you need the following tools:
- git
- Java (OpenJDK 8)
- Maven (3.3 or later)
- protoc 2.5
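Before building, you can quickly verify that the required tools are on your PATH; a minimal sketch (it only checks presence, not the exact versions listed above):

```shell
# Check that each required build tool is available on the PATH.
# This does not verify version numbers; see the requirements list above.
for tool in git java mvn protoc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```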
Optional tools:
- docker and docker-compose: to start pseudo cluster locally
- robotframework: to execute acceptance tests and generate the report
- kubernetes tools (kubectl, kubens, flekszible) to deploy to kubernetes
- hugo (to update the documentation)
Build the project
To build the ozone/hdds subprojects, use the pom.ozone.xml from the main hadoop repository.
git clone https://github.com/apache/hadoop.git
cd hadoop
mvn clean install -f pom.ozone.xml -DskipTests
The result can be found under the hadoop-ozone/dist project:
cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT
The easiest way to start a cluster is using docker and docker-compose:
cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone
docker-compose up -d
firefox http://localhost:9876
IntelliJ setup
As the project is built by Maven, you can use any Java IDE with Maven support (which is practically all of them). This section gives some advice to IntelliJ users, but this is only one of the possible choices; IntelliJ is not required to work on Ozone.
Import project
When you import the project into IntelliJ, it's recommended to open pom.ozone.xml instead of pom.xml. pom.ozone.xml is a reduced pom which includes only the Ozone-specific subprojects, so the import is faster and only the ozone/hdds projects will be visible.
Running Ozone from IDE
Ozone can be started directly from IntelliJ. You need some Run Configurations to define the startup of the projects, but example definitions are committed together with a helper script which can copy the configurations to the right place. Please see Run Ozone cluster from IDE for more details.
Troubleshooting
IntelliJ may not pick up protoc-generated classes as they can be very large. If the protoc files can't be compiled, try the following:
- There is a file called idea.properties, generally located where you have installed/downloaded IntelliJ.
- Under macOS this file is /Applications/idea/idea.app/Contents/bin/idea.properties; under Linux it is generally under your home directory.
- Edit this file; it contains a property:
idea.max.intellisense.filesize=2500 -- set this value to 5000
- Restart IntelliJ. This assumes that your machine has enough RAM to process such large files.
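On Linux the property bump can be scripted; a sketch, using a local stand-in file (point `f` at the real idea.properties of your installation; GNU sed syntax shown):

```shell
# Raise IntelliJ's file-size limit for syntax analysis.
# "idea.properties" here is a local stand-in for the real file,
# whose location depends on your installation (see above).
f="idea.properties"
printf 'idea.max.intellisense.filesize=2500\n' > "$f"
sed -i 's/^idea\.max\.intellisense\.filesize=.*/idea.max.intellisense.filesize=5000/' "$f"
cat "$f"   # idea.max.intellisense.filesize=5000
```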
Creating patches
We prefer GitHub pull requests to uploading patches to JIRA. The main contribution workflow is the following:
- Fork apache/hadoop github repository (first time)
- Create a new Jira in HDDS project (eg. HDDS-1234)
- Create a local branch for your contribution (eg. git checkout -b HDDS-1234)
- Create your commits and push your branches to your personal fork.
- Create a pull request on github UI (please include the Jira ID in the summary)
- Set the Jira to "Patch Available" state
- Add ozone label to the PR
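The branch-naming step above can be sketched with git; a throwaway local repository is used here so the commands are self-contained, and HDDS-1234 is a placeholder Jira ID (in a real contribution you would clone your fork of apache/hadoop instead):

```shell
# Demonstrate the Jira-ID branch and commit-message convention
# in a throwaway repo (placeholder identity, placeholder Jira ID).
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial"
git checkout -q -b HDDS-1234                # branch named after the Jira ID
git rev-parse --abbrev-ref HEAD             # prints HDDS-1234
git commit -q --allow-empty -m "HDDS-1234. Short description of the change."
```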
Using github
Pull requests are checked by Yetus and by an external script which executes the different checks. To enable testing by the external script, please add the ozone label to your PR.
If you have no permission to add labels, you can add a simple comment: /label ozone
Pull requests with the ozone label will be picked up by a test script executor; the results will be posted as GitHub checks and saved to a dedicated repository.
If you would like to trigger a new test build, you can push a new commit or add a simple comment:
/retest
Check your contribution
The easiest way to check your contribution is to use the simplified shell scripts under hadoop-ozone/dev-support/checks. Any problems will be printed to the standard output.
For example:
hadoop-ozone/dev-support/checks/rat.sh
hadoop-ozone/dev-support/checks/checkstyle.sh
hadoop-ozone/dev-support/checks/findbugs.sh
Execution of rat and checkstyle is very fast; findbugs is slightly slower. Executing unit.sh takes about 22 minutes.
The same scripts are executed by the github PR checker.
It's always good practice (and fast) to test with the related docker-compose based pseudo clusters:
cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone
./test.sh
(To test S3 use compose/ozones3, to test security use compose/ozonesecure, etc.)
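The compose variants can be exercised in one loop; a sketch, assuming the distribution has been built and using directory names from the snapshot above (adjust the version in the path as needed):

```shell
# Run the acceptance test for each compose-based pseudo cluster
# that exists in the built distribution; skip the rest.
compose_dir="hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose"
for cluster in ozone ozones3 ozonesecure; do
  if [ -x "$compose_dir/$cluster/test.sh" ]; then
    (cd "$compose_dir/$cluster" && ./test.sh)
  else
    echo "skipping $cluster (distribution not built)"
  fi
done
```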
Testing
Ozone has multiple types of tests, including unit tests, integration tests and acceptance tests. Please see the following page for more details: Ozone test tools
- Please prefer pure unit tests to MiniOzoneCluster-based testing
- If you would like to test end-to-end workflows, please consider writing robotframework-based acceptance tests.