...
# Fetch remote pull requests
$ git fetch origin
# Checkout a remote pull request
$ git checkout origin/pr/112
# Create a local branch from a remote pull request
$ git checkout origin/pr/112 -b new-branch
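Note that the origin/pr/* refs shown above are not fetched by a default clone. Assuming origin points at a GitHub repository (GitHub exposes each pull request's head at refs/pull/&lt;ID&gt;/head), a one-time refspec addition like the following is one way to make them available; adjust the remote name to match your setup:

```shell
# Map GitHub pull request heads to local origin/pr/* refs (one-time setup)
$ git config --add remote.origin.fetch '+refs/pull/*/head:refs/remotes/origin/pr/*'
# Subsequent fetches will now pull down pr/* refs
$ git fetch origin
```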
Running Individual Tests
Often it is useful to run individual tests in Maven or SBT.
# sbt
$ build/sbt "test-only org.apache.spark.io.CompressionCodecSuite"
$ build/sbt "test-only org.apache.spark.io.*"
# Maven, run Scala test
$ build/mvn test -DwildcardSuites=org.apache.spark.io.CompressionCodecSuite -Dtest=none
$ build/mvn test -DwildcardSuites=org.apache.spark.io.* -Dtest=none
# The above will work, but will take time to iterate through each project. If you want
# to only run tests in one subproject, first run "install", then use "-pl <project>"
# with the tests
$ build/mvn <options> install
$ build/mvn <other options> -pl org.apache.spark:spark-hive_2.11 test -DwildcardSuites=org.apache.spark.sql.hive.execution.HiveTableScanSuite -Dtest=none
# Maven, run Java test
$ build/mvn test -DwildcardSuites=none -Dtest=org.apache.spark.streaming.JavaAPISuite
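Within a suite, SBT can also filter down to individual test cases: arguments after "--" are passed to the ScalaTest runner, and its -z flag selects tests whose names contain a given substring. The suite and substring below are illustrative only; substitute your own.

```shell
# sbt: run only the tests in a suite whose names contain "compression"
# (-z is ScalaTest's substring filter; suite name here is an example)
$ build/sbt "test-only org.apache.spark.io.CompressionCodecSuite -- -z \"compression\""
```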
Generating Dependency Graphs
...
- "[test-maven]" - signals to test the pull request using maven
- "[test-hadoop1.0]" - signals to test using Spark's Hadoop 1.0 profile (other options include Hadoop 2.0, 2.2, and 2.3)
Running Docker integration tests
...
Organizing Imports
You can use an IntelliJ Imports Organizer from Aaron Davidson to help you organize the imports in your code. It can be configured to match the import ordering from the style guide.
IDE Setup
IntelliJ
While many of the Spark developers use SBT or Maven on the command line, the most common IDE we use is IntelliJ IDEA. You can get the community edition for free (Apache committers can get free IntelliJ Ultimate Edition licenses) and install the JetBrains Scala plugin from Preferences > Plugins.
...