Reducing Build Times

Spark's default build strategy is to assemble a jar including all of its dependencies. This can be cumbersome when doing iterative development. When developing locally, it is possible to create an assembly jar including all of Spark's dependencies and then re-package only Spark itself when making changes.

Code Block (bash): Fast Local Builds
$ sbt/sbt clean assembly # Create a normal assembly
$ ./bin/spark-shell # Use spark with the normal assembly
$ export SPARK_PREPEND_CLASSES=true
$ ./bin/spark-shell # Now it's using compiled classes
# ... do some local development ... #
$ sbt/sbt compile
# ... do some local development ... #
$ sbt/sbt compile
$ unset SPARK_PREPEND_CLASSES
$ ./bin/spark-shell # Back to normal, using Spark classes from the assembly jar
 
# You can also use ~ to let sbt do incremental builds on file changes without running a new sbt session every time
$ sbt/sbt ~compile
Note

In some earlier versions of Spark, fast local builds used an sbt task called assemble-deps. SPARK-1843 removed assemble-deps and introduced the environment variable described above. For those older versions:

Code Block (bash): Fast Local Builds
$ sbt/sbt clean assemble-deps
$ sbt/sbt package
# ... do some local development ... #
$ sbt/sbt package
# ... do some local development ... #
$ sbt/sbt package
# ...
 
# You can also use ~ to let sbt do incremental builds on file changes without running a new sbt session every time
$ sbt/sbt ~package

Checking Out Pull Requests

Git provides a mechanism for fetching remote pull requests into your own local repository. This is useful when reviewing code or testing patches locally. If you haven't yet cloned the Spark Git repository, use the following command:

Code Block
$ git clone https://github.com/apache/spark.git
$ cd spark

To enable this feature you'll need to configure the git remote repository to fetch pull request data. Do this by modifying the .git/config file inside your Spark directory. Note that the remote may not be named "origin" if you have given it a different name:

Code Block (text): .git/config
[remote "origin"]
  url = git@github.com:apache/spark.git
  fetch = +refs/heads/*:refs/remotes/origin/*
  fetch = +refs/pull/*/head:refs/remotes/origin/pr/*   # Add this line
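
If you prefer not to edit .git/config by hand, the same fetch refspec can be added from the command line. A minimal sketch, assuming the remote is named "origin":

Code Block (bash)
# Append the pull-request refspec to the existing remote configuration
$ git config --add remote.origin.fetch '+refs/pull/*/head:refs/remotes/origin/pr/*'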

Once you've done this, you can fetch remote pull requests:

Code Block (bash)
# Fetch remote pull requests
$ git fetch origin
# Checkout a remote pull request
$ git checkout origin/pr/112
# Create a local branch from a remote pull request
$ git checkout origin/pr/112 -b new-branch
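
If you only need a single pull request rather than all of them, you can also fetch its ref directly into a local branch. A minimal sketch, using PR 112 and the branch name pr-112 purely as examples:

Code Block (bash)
# Fetch one pull request into a local branch without fetching every PR
$ git fetch origin pull/112/head:pr-112
$ git checkout pr-112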

Running Individual Tests

Often it is useful to run individual tests in Maven or SBT.

Code Block (bash)
# sbt
$ sbt/sbt "test-only org.apache.spark.io.CompressionCodecSuite"
$ sbt/sbt "test-only org.apache.spark.io.*"

# Maven
$ mvn test -DwildcardSuites=org.apache.spark.io.CompressionCodecSuite
$ mvn test -DwildcardSuites=org.apache.spark.io.*
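
Tests can also be scoped to a single sub-project so that unrelated modules are not compiled or tested. A rough sketch, using the core module and RDDSuite purely as examples; with Maven you may first need a local install (mvn install -DskipTests) so the other modules can be resolved:

Code Block (bash)
# sbt: run a single suite within the core project only (suite name is an example)
$ sbt/sbt "core/test-only org.apache.spark.rdd.RDDSuite"

# Maven: restrict the build to the core module
$ mvn test -pl core -DwildcardSuites=org.apache.spark.rdd.RDDSuite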

Generating Dependency Graphs

Code Block
$ # sbt
$ sbt/sbt dependency-tree

$ # Maven
$ mvn -DskipTests install
$ mvn dependency:tree
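
With Maven, the tree can also be filtered to a single group or artifact, which is handy when tracking down where a particular dependency comes from. A sketch using Guava as an example coordinate:

Code Block (bash)
# Show only the paths that pull in com.google.guava (example coordinate)
$ mvn dependency:tree -Dincludes=com.google.guava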

Running Build Targets For Individual Projects

Code Block
$ # sbt
$ sbt/sbt assembly/assembly
$ # Maven
$ mvn package -DskipTests -pl assembly
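
If the project you are building depends on other modules that have changed locally, Maven's -am (--also-make) flag builds those upstream modules as well. A sketch using the core module as an example:

Code Block (bash)
$ # Maven: build core plus the modules it depends on
$ mvn package -DskipTests -pl core -am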

Building Spark in IntelliJ IDEA

Many Spark developers use IntelliJ for day-to-day development and testing in Spark. Importing Spark into IntelliJ requires a few special steps due to the complexity of the Spark build.

...

  1. Add sources to the following modules: under the "Sources" tab, add a source on the right.
    1. spark-hive: add v0.13.1/src/main/scala
    2. spark-hive-thriftserver: add v0.13.1/src/main/scala
    3. spark-repl: add scala-2.10/src/main/scala and scala-2.10/src/test/scala

...

Moved permanently to http://spark.apache.org/developer-tools.html