This document describes how to release Apache Kafka from trunk.
It is a work in progress and should be refined by the Release Manager (RM) as they come across aspects of the release process not yet documented here.
NOTE: For the purpose of illustration, this document assumes that the version being released is 0.10.0.0 and the following development version will become 0.10.1.0.
Go over the JIRA issues for the release and make sure that blockers are marked as blockers and non-blockers as non-blockers. This JIRA filter may be handy:
project = KAFKA AND fixVersion = 0.10.0.0 AND resolution = Unresolved AND priority = blocker ORDER BY due ASC, priority DESC, created ASC
It is important that between the time the release plan is voted on and the time the release branch is created, no experimental or potentially destabilizing work is checked into trunk. While it is acceptable to introduce major changes, they must be thoroughly reviewed and have good test coverage to ensure that the release branch does not start off unstable. If necessary, the RM can discuss whether certain issues should be fixed on trunk during this time, and if so, what the gating criteria for accepting them are.
Make sure you can access home.apache.org via sftp:

sftp <your-apache-id>@home.apache.org

If you get authentication failures, log in to id.apache.org and add your public SSH key to your Apache account. If you need a new SSH key, generate one with `ssh-keygen -t rsa -b 4096 -C <your-apache-id>@apache.org`, save the key in `~/.ssh/apache_rsa`, add it locally with `ssh-add ~/.ssh/apache_rsa`, add the public key (the contents of `~/.ssh/apache_rsa.pub`) to your account using id.apache.org, and verify you can connect with sftp (account changes may take up to 10 minutes to synchronize). See more detailed instructions.

Install the jira Python library:

easy_install jira==1.0.15

You may also want to point ssh at the key in your `~/.ssh/config`:

Host *.apache.org
  IdentityFile ~/.ssh/<apache-ssh-key>
You will need to provide your Maven credentials and signing credentials to the release script by editing your `~/.gradle/gradle.properties` with:
mavenUrl=https://repository.apache.org/service/local/staging/deploy/maven2
mavenUsername=your-apache-id
mavenPassword=your-apache-passwd
signing.keyId=your-gpgkeyId
signing.password=your-gpg-passphrase
signing.secretKeyRingFile=/Users/your-id/.gnupg/secring.gpg
If you don't already have a secret key ring under ~/.gnupg (which will be the case with GPG 2.1 and beyond), you will need to manually create it with `gpg --export-secret-keys -o ~/.gnupg/secring.gpg`.
Obviously, be careful not to publicly upload your passwords. You should be editing the `gradle.properties` file under your home directory, not the one in Kafka itself.
Make sure your `~/.m2/settings.xml` is configured for PGP signing and uploading to the Apache release Maven repository:
<servers>
  <server>
    <id>apache.releases.https</id>
    <username>your-apache-id</username>
    <password>your-apache-passwd</password>
  </server>
  <server>
    <id>your-gpgkeyId</id>
    <passphrase>your-gpg-passphrase</passphrase>
  </server>
</servers>
<profiles>
  <profile>
    <id>gpg-signing</id>
    <properties>
      <gpg.keyname>your-gpgkeyId</gpg.keyname>
      <gpg.passphraseServerId>your-gpgkeyId</gpg.passphraseServerId>
    </properties>
  </profile>
</profiles>
You may also need to update some gnupg configs:
echo "allow-loopback-pinentry" >> ~/.gnupg/gpg-agent.conf echo "use-agent" >> ~/.gnupg/gpg.conf echo "pinentry-mode loopback" >> ~/.gnupg/gpg.conf echo RELOADAGENT | gpg-connect-agent |
Skip this section if you are releasing a bug fix version (e.g. 2.2.1). Bump the version on trunk to the next development version (0.10.1.0-SNAPSHOT in this example) in the following files:
docs/js/templateData.js
gradle.properties
kafka-merge-pr.py
streams/quickstart/java/pom.xml
streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
streams/quickstart/pom.xml
tests/kafkatest/__init__.py
tests/kafkatest/version.py
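To double-check the bump, it can help to search these files for leftover references to the old version (a sketch using the illustrative version numbers from the note at the top; the exact version format differs per file):

# on trunk after the bump, this should print nothing: every version string should now be 0.10.1.x
grep -rn "0\.10\.0" docs/js/templateData.js gradle.properties kafka-merge-pr.py streams/quickstart tests/kafkatest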
Send email announcing the new branch:
To: dev@kafka.apache.org
Subject: New release branch 0.10.0

Hello Kafka developers and friends,

As promised, we now have a release branch for the 0.10.0 release (with 0.10.0.0 as the version). Trunk has been bumped to 0.10.1.0-SNAPSHOT.

I'll be going over the JIRAs to move every non-blocker from this release to the next release.

From this point, most changes should go to trunk.
* Blockers (existing and new that we discover while testing the release) will be double-committed. Please discuss with your reviewer whether your PR should go to trunk or to trunk+release so they can merge accordingly.
* Please help us test the release!

Thanks!
$RM
Note: Unlike the Kafka sources (kafka repo), the content of the Apache Kafka website kafka.apache.org is backed by a separate git repository (kafka-site repo). Today, any changes to the content and docs must be kept manually in sync between the two repositories.
We should improve the release script to include these steps. In the meantime, for new releases:
The `releaseTarGz` build target generates the Kafka website content including the Kafka documentation (with the exception of a few pages like project-security.html, which are only tracked in the kafka-site repository). This build target also auto-generates the configuration docs of the Kafka broker/producer/consumer/etc. from their respective Java sources. The build output is stored in ./core/build/distributions/kafka_2.13-2.8.0-site-docs.tgz.

Extract the tarball and copy the contents of its site-docs/ folder to the 28/ folder in the kafka-site repository (or, if the latter already exists, replace its contents). That's because the docs for a release are stored in a separate folder (e.g., 27/ for Kafka v2.7 and 28/ for Kafka v2.8), which ensures the Kafka website includes the documentation for the current and all past Kafka releases.
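As a rough sketch of the above, assuming a local kafka-site checkout in a sibling directory (the Scala version in the tarball name will vary by release):

# in the kafka repo: build the site docs
./gradlew releaseTarGz
# unpack the generated site docs and copy their contents into the versioned docs folder of the kafka-site checkout
tar -xzf ./core/build/distributions/kafka_2.13-2.8.0-site-docs.tgz -C /tmp
rm -rf ../kafka-site/28
cp -r /tmp/site-docs ../kafka-site/28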
The `aggregatedJavadoc` build target generates the javadocs, with output under ./build/docs/javadoc/. Copy the javadoc folder to 28/ (i.e., the full path is 28/javadoc/).
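Again as a sketch, assuming the same kafka-site checkout as above:

# in the kafka repo: build the aggregated javadocs
./gradlew aggregatedJavadoc
# copy them into the versioned docs folder so they end up under 28/javadoc/
rm -rf ../kafka-site/28/javadoc
cp -r ./build/docs/javadoc ../kafka-site/28/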
If this is a bug fix release, do this after the vote has passed to avoid showing an unreleased version number in the published javadocs.

It's nice to thank as many people as we can identify. The best I could come up with is this:
# get a list of all committers, contributors, and reviewers
git log 2.7..2.8 | grep 'Author\|Reviewer\|Co-authored-by:' | tr ',' '\n' | sed 's/^.*:\s*//' | sed 's/^\s*//' | sed 's/\s*<.*$//' | sort | uniq > /tmp/contributors

# and then manually clean up the list, removing different versions of people's names and stripping out any fragments of commit messages that made it in
vim /tmp/contributors

# then copy the list (don't forget to drop the trailing comma)
cat /tmp/contributors | tr '\n' ',' | sed 's/,/, /g' | xclip -selection clipboard
Send a vote closing email:
To: dev@kafka.apache.org
Subject: [RESULTS] [VOTE] Release Kafka version 0.10.0.0

This vote passes with 7 +1 votes (3 bindings) and no 0 or -1 votes.

+1 votes
PMC Members:
* $Name
* $Name
* $Name

Committers:
* $Name
* $Name

Community:
* $Name
* $Name

0 votes
* No votes

-1 votes
* No votes

Vote thread: http://markmail.org/message/faioizetvcils2zo

I'll continue with the release process and the release announcement will follow in the next few days.

$RM
Make sure the KEYS file in the svn repo includes the key of the committer who signed the release.
The KEYS must be in https://www.apache.org/dist/kafka/KEYS and not just in http://kafka.apache.org/KEYS.
svn commit -m "Release 0.10.0.0"
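For reference, a sketch of what the svn step can look like; the dist.apache.org URL is the standard Apache release distribution area, but the exact layout of the kafka directory and of the release artifacts is not covered here, so treat this as an outline rather than exact commands:

# check out the release distribution area, add the new artifacts and an up-to-date KEYS file, then commit
svn checkout https://dist.apache.org/repos/dist/release/kafka kafka-dist
cd kafka-dist
# (place the 0.10.0.0/ artifacts here and update KEYS if needed)
svn add 0.10.0.0
svn commit -m "Release 0.10.0.0"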
Check and update the Scala versions if necessary
The Apache Kafka community is pleased to announce the release for Apache Kafka <version>.

This is a bug fix release and it includes fixes and improvements from <number-of-JIRAs> JIRAs, including a few critical bugs.

All of the changes in this release can be found in the release notes:
https://www.apache.org/dist/kafka/<version>/RELEASE_NOTES.html

You can download the source and binary release (Scala 2.11 and Scala 2.12) from:
https://kafka.apache.org/downloads#<version>

---------------------------------------------------------------------------------------------------

Apache Kafka is a distributed streaming platform with four core APIs:

** The Producer API allows an application to publish a stream of records to one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams.

** The Connector API allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table.

With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data between systems or applications.

** Building real-time streaming applications that transform or react to the streams of data.

Apache Kafka is in use at large and small companies worldwide, including Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank, Target, The New York Times, Uber, Yelp, and Zalando, among others.

A big thank you to the following <number-of-contributors> contributors to this release!

<list-of-contributors>

We welcome your help and feedback. For more information on how to report problems, and to get involved, visit the project website at https://kafka.apache.org/

Thank you!

Regards,
$RM