
Starting with Spark 1.0.0, the Spark project will follow the semantic versioning guidelines (http://semver.org/) with a few deviations. These small differences account for Spark's nature as a multi-module project.

Spark Versions

Each Spark release will be versioned:
[MAJOR].[FEATURE].[MAINTENANCE]
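The three-part scheme above can be sketched as a small parser; the helper name and error handling here are illustrative assumptions, not part of Spark itself:

```python
import re

def parse_spark_version(version):
    # Hypothetical helper (not shipped with Spark) that splits a version
    # string into its [MAJOR].[FEATURE].[MAINTENANCE] components.
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)", version)
    if m is None:
        raise ValueError("not a [MAJOR].[FEATURE].[MAINTENANCE] version: " + version)
    return tuple(int(part) for part in m.groups())

print(parse_spark_version("1.0.0"))  # → (1, 0, 0)
```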

MAJOR: All releases with the same major version number will have API compatibility, as defined below under "API compatibility". Major version numbers will remain stable over long periods of time. For instance, 1.X.Y may last a year or more.
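The rule that releases sharing a MAJOR number are API-compatible can be expressed as a one-line check; the function name is an assumption for illustration only:

```python
def api_compatible(release_a, release_b):
    # Per the policy above, two releases are API-compatible when they
    # share the same MAJOR version number. Helper name is illustrative.
    return release_a.split(".")[0] == release_b.split(".")[0]

print(api_compatible("1.0.0", "1.3.1"))  # → True
print(api_compatible("1.9.2", "2.0.0"))  # → False
```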

FEATURE: Feature releases will typically contain new features, improvements, and bug fixes. The target frequency for feature releases is every 3-4 months. One change we'd like to make is to announce fixed release dates and merge windows for each release, to facilitate coordination. Each feature release will have a merge window during which new patches can be merged, a QA window during which only fixes can be merged, and then a final period during which voting occurs on release candidates. These windows will be announced immediately after the previous feature release to give people plenty of time, and over time we might make the whole release process more regular (similar to Ubuntu). The current merge window is listed here.

MAINTENANCE: Maintenance releases will occur more frequently and depend on specific patches introduced (e.g. bug fixes) and their urgency. In general these releases are designed to patch bugs. However, higher level libraries may introduce small features, such as a new algorithm, provided they are entirely additive and isolated from existing code paths. Spark core may not introduce any features.

Alpha Components

When new components are added to Spark, they may initially be marked as "alpha". Alpha components do not have to abide by the above guidelines; however, they should follow them to the maximum extent possible. Once they are marked "stable", they must follow these guidelines.

API compatibility

An API is any public class or interface exposed in Spark that is not marked as "developer API" or "experimental". Release A is API compatible with release B if code compiled against release A compiles cleanly against B. Currently, this does not guarantee that a compiled application linked against version A will link cleanly against version B without re-compiling. Link-level compatibility is something we'll try to guarantee in future releases.

Note, however, that even for APIs marked "developer API" or "experimental", we strive to maintain maximum compatibility. Code should not be merged into the project as "experimental" if there is a plan to change the API later, because users expect maximum compatibility from all available APIs.
