
Status

Current state: Accepted (2.2)

Discussion thread: here

JIRA: KAFKA-5692

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation


Similarly to KIP-179, the kafka-preferred-replica-election.sh tool takes a --zookeeper option, which means users of the tool must have access to the ZooKeeper cluster backing the Kafka cluster. There is no AdminClient API via which the preferred leader can be elected, so this tool is currently the only way to perform this task. This KIP will provide an AdminClient API for electing the preferred leader, add an option to the kafka-preferred-replica-election.sh tool to use this new API, and deprecate the --zookeeper option.

Public Interface

The kafka-preferred-replica-election.sh tool will gain a --bootstrap-server option and the existing --zookeeper option will be deprecated.

The AdminClient will gain a new method:

  • electPreferredLeaders(Collection<TopicPartition> partitions)

A new network protocol will be added:

  • ElectPreferredLeadersRequest and ElectPreferredLeadersResponse

Proposed Changes

kafka-preferred-replica-election.sh

The --zookeeper option will be retained and will:

  1. Cause a deprecation warning to be printed to standard error. The message will say that the --zookeeper option will be removed in a future version and that --bootstrap-server is the replacement option.
  2. Perform the election via ZooKeeper, as it does currently.

A new --bootstrap-server option will be added and will:

  1. Perform the election by calling AdminClient.electPreferredLeaders() on an AdminClient instance bootstrapped from the broker(s) given via --bootstrap-server.

Using both options in the same command line will produce an error message and the tool will exit without doing the intended operation.

It is anticipated that a future version of Kafka would remove support for the --zookeeper option.

When the --bootstrap-server option is used, a further new option will be available (see the example invocation below):

  • admin.config — "Admin client config properties file to pass to the admin client when --bootstrap-server is given."
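
For illustration, an invocation using the new options might look like the following. This is only a sketch: the broker address and properties file path are placeholders, and the option spellings follow the descriptions in this KIP rather than a released tool.

No Format
# Elect the preferred leaders via the AdminClient-based path
bin/kafka-preferred-replica-election.sh --bootstrap-server broker1:9092 \
    --admin.config /path/to/admin-client.properties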

The --help output of the tool will be updated to explain what the preferred replica *is*, because this is currently not discoverable from the command line tool help, only from the documentation on the Kafka website.

The --help output for the tool will be updated to note that the command is not necessary if the broker is configured with auto.leader.rebalance.enable=true.

AdminClient: electPreferredLeaders()

The following methods will be added to AdminClient:

Code Block
/**
 * Elect the preferred replica of the given {@code partitions} as leader, or
 * elect the preferred replica for all partitions as leader if the argument to {@code partitions} is null.
 *
 * This operation is supported by brokers with version 1.0 or higher.
 */
ElectPreferredLeadersResult electPreferredLeaders(Collection<TopicPartition> partitions, ElectPreferredLeadersOptions options)
ElectPreferredLeadersResult electPreferredLeaders(Collection<TopicPartition> partitions)

Where

Code Block
public class ElectPreferredLeadersOptions extends AbstractOptions<ElectPreferredLeadersOptions> {
}
public class ElectPreferredLeadersResult {
    // package access constructor

    /**
     * Get the result of the election for the given TopicPartition.
     * If there was not an election triggered for the given TopicPartition, the
     * returned future will complete with an error.
     */
    public KafkaFuture<Void> partitionResult(TopicPartition partition) { ... }

    /**
     * <p>Get the topic partitions for which a leader election was attempted.
     * The presence of a topic partition in the Collection obtained from 
     * the returned future does not indicate the election was successful: 
     * A partition will appear in this result if an election was attempted
     * even if the election was not successful.</p>
     *
     * <p>This method is provided to discover the partitions when
     * {@link AdminClient#electPreferredLeaders(Collection)} is called 
     * with a null {@code partitions} argument.</p>
     */
    public KafkaFuture<Set<TopicPartition>> partitions();

    /**
     * Return a future which succeeds if all the topic elections succeed.
     */
    public KafkaFuture<Void> all() { ... }
 }
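
As a client-side illustration, the following sketch shows how the proposed methods might be used. It assumes an AdminClient created from ordinary client properties; the broker address and topic name are placeholders, and only the method names proposed above are relied upon.

Code Block
import java.util.Collections;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.TopicPartition;

public class ElectPreferredLeadersExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder

            // Elect the preferred replica for a single partition and wait for the outcome.
            admin.electPreferredLeaders(Collections.singleton(tp))
                 .partitionResult(tp)
                 .get();

            // Passing null asks for an election across all partitions; partitions()
            // then reports which partitions an election was attempted for.
            Set<TopicPartition> attempted =
                admin.electPreferredLeaders(null).partitions().get();
            System.out.println("Elections attempted for: " + attempted);
        }
    }
}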

A call to electPreferredLeaders() will send an ElectPreferredLeadersRequest to the controller broker.

Network Protocol: ElectPreferredLeadersRequest and ElectPreferredLeadersResponse

No Format
ElectPreferredLeadersRequest => [TopicPartitions] TimeoutMs
  TopicPartitions => Topic PartitionId
    Topic => string
    PartitionId => [int32]
  TimeoutMs => int32 

Where

Field        Description
partitionId  The partitions of this topic whose preferred leader should be elected
timeoutMs    The time in ms to wait for the election to complete.
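
As a purely illustrative instance of this schema, a request asking for the preferred leader to be elected for partitions 0 and 1 of a hypothetical topic "my-topic", with a one-minute timeout, would carry the following values:

No Format
ElectPreferredLeadersRequest
  TopicPartitions:
    Topic       = "my-topic"
    PartitionId = [0, 1]
  TimeoutMs     = 60000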

The request will require the Alter operation on the Cluster resource, since it is a change that affects the whole cluster.
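
For example, a cluster operator could grant that permission with the existing kafka-acls.sh tool along the following lines; the principal and the authorizer properties are placeholders that depend on the deployment:

No Format
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
    --add --allow-principal User:admin --operation Alter --cluster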

Note: It is not an error if there is a duplicate (topic, partition)-pair in the request.

Note that an ElectPreferredLeadersRequest must be sent to the controller of the cluster.

No Format
ElectPreferredLeadersResponse => ThrottleTimeMs [ReplicaElectionResult]
  ThrottleTimeMs => int32
  ReplicaElectionResult => Topic [PartitionResult]
    Topic => string
    PartitionResult => PartitionId ErrorCode ErrorMessage
      PartitionId => int32
      ErrorCode => int16
      ErrorMessage => string

Where

Field           Description
ThrottleTimeMs  The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota
Topic           The topic name
PartitionId     The partition id
ErrorCode       The result error, or zero if there was no error.
ErrorMessage    The result message, or null if there was no error.

Anticipated errors:

  • UNKNOWN_TOPIC_OR_PARTITION (3) If the topic or partition doesn't exist on any broker in the cluster. Note that the use of this code is not precisely the same as its usual meaning of "This server does not host this topic-partition".

  • NOT_CONTROLLER (41) If the request is sent to a broker that is not the controller for the cluster.

  • CLUSTER_AUTHORIZATION_FAILED (31) If the user didn't have Alter access to the cluster.

  • PREFERRED_LEADER_NOT_AVAILABLE (80) If the preferred leader could not be elected (for example because it is not currently in the ISR)

  • NONE (0) The elections were successful.
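
To show how these per-partition errors surface through the proposed AdminClient API, the following sketch reports each partition's outcome individually. The concrete exception classes are not prescribed by this KIP, so the failure cause is inspected generically, and the package of the new result class is assumed to match the existing admin client classes.

Code Block
import java.util.Collection;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.ElectPreferredLeadersResult;
import org.apache.kafka.common.TopicPartition;

public class ElectionResultReporter {
    // Report the outcome of each requested election individually.
    static void report(ElectPreferredLeadersResult result,
                       Collection<TopicPartition> partitions) throws InterruptedException {
        for (TopicPartition tp : partitions) {
            try {
                result.partitionResult(tp).get();
                System.out.println("Preferred leader elected for " + tp);
            } catch (ExecutionException e) {
                // The cause corresponds to the per-partition error code in the response,
                // e.g. PREFERRED_LEADER_NOT_AVAILABLE or UNKNOWN_TOPIC_OR_PARTITION.
                System.err.println("Election failed for " + tp + ": " + e.getCause());
            }
        }
    }
}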

Broker-side election algorithm

The broker-side handling of ElectPreferredLeadersRequest will be somewhat different from the current behavior:

  1. On receipt of ElectPreferredLeadersRequest the controller enqueues a PreferredReplicaLeaderElection with the ControllerManager
  2. After the batch of elections has been started, a callback will either return the responses to the client (if they're available immediately, for example because all the leaders were already the preferred ones), or use a purgatory to await the completion of all of the elections (see the sketch after this list).

  3. Each UpdateMetadataRequest will try to complete the election purgatory.
  4. Successful or timed-out completion of the PreferredReplicaLeaderElection will result in an ElectPreferredLeadersResponse being returned to the client
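
The following self-contained sketch is meant only to illustrate the deferred-response idea behind steps 2-4. The names (PendingElections, startElection, onUpdateMetadata, onTimeout) are hypothetical and do not correspond to actual broker classes or methods.

Code Block
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeoutException;

// Hypothetical illustration of the election "purgatory": responses are only
// completed once metadata shows the preferred replica has become leader,
// or once the request's TimeoutMs elapses.
class PendingElections {
    private final Map<String, CompletableFuture<Void>> pending = new ConcurrentHashMap<>();

    // Step 2: start an election. Partitions whose leader is already the
    // preferred replica complete immediately; the rest wait in purgatory.
    CompletableFuture<Void> startElection(String partition, boolean alreadyPreferred) {
        if (alreadyPreferred) {
            return CompletableFuture.completedFuture(null);
        }
        return pending.computeIfAbsent(partition, p -> new CompletableFuture<>());
    }

    // Step 3: each metadata update tries to complete the purgatory.
    void onUpdateMetadata(Set<String> partitionsNowLedByPreferredReplica) {
        for (String p : partitionsNowLedByPreferredReplica) {
            CompletableFuture<Void> f = pending.remove(p);
            if (f != null) {
                f.complete(null);
            }
        }
    }

    // Step 4: a timed-out election still produces a (failed) result, so the
    // ElectPreferredLeadersResponse can carry an error for that partition.
    void onTimeout(String partition) {
        CompletableFuture<Void> f = pending.remove(partition);
        if (f != null) {
            f.completeExceptionally(
                new TimeoutException("Election for " + partition + " did not complete in time"));
        }
    }
}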

This change means that the ElectPreferredLeadersResponse is sent when the election is actually complete, rather than when the /admin/preferred_replica_election znode has merely been updated. Thus if the election fails, the ElectPreferredLeadersResponse's error_code will provide a reason.

When support for the --zookeeper option is eventually removed, the need for the /admin/preferred_replica_election znode will disappear and consequently the code managing it will be removed.

Compatibility, Deprecation, and Migration Plan

Existing users of the kafka-preferred-replica-election.sh tool will receive a deprecation warning when they use the --zookeeper option. The option will be removed in a future version of Kafka. If this KIP is introduced in version 1.0.0, the removal could happen in 2.0.0.

Rejected Alternatives

One alternative is to do nothing: Let the tool continue to communicate with ZooKeeper directly.

Another alternative is to do exactly this KIP, but without the deprecation of --zookeeper. That would have a higher long-term maintenance burden, and would prevent any future plans to, for example, support cluster technologies other than ZooKeeper.