

Status

Current state: Draft

Discussion thread: here

JIRA: here

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

When creating topics or partitions, the Kafka controller has to pick brokers to host the new partitions. The current assignment logic is based on a round-robin algorithm and supports rack awareness. While this works relatively well in many scenarios, it is not aware of the state of the cluster, so in some cases the assignments it generates are not optimal. Many cluster administrators then rely on tools like Cruise Control to move partitions to better brokers, an expensive process since data often has to be copied between brokers.

Allowing custom assignor logic that understands the state of the cluster would minimize the number of partition reassignments needed afterwards. It would enable administrators to build assignment goals (similar to Cruise Control goals) for their clusters.

Some scenarios that could benefit greatly from this feature:

  • When adding brokers to a cluster, Kafka currently does not necessarily place new partitions on new brokers
  • When removing brokers from a cluster, Kafka currently keeps placing new partitions on all existing brokers
  • When some brokers are near their storage/throughput limit, the assignor could avoid putting new partitions on them
  • When we want to place partitions differently for different users

Public Interfaces

1) New public interface:

ReplicaAssignor
package org.apache.kafka.server;

import java.util.List;
import java.util.Map;

import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.errors.ReplicaAssignorException;
import org.apache.kafka.common.security.auth.KafkaPrincipal;

public interface ReplicaAssignor {

    /**
     * Computes replica assignments for the specified partitions.
     * 
     * If an assignment can't be computed, for example if the state of the cluster does not satisfy a requirement,
     * implementations can throw ReplicaAssignorException to prevent the topic/partition creation.
     * @param partitions The partitions being created
     * @param cluster The cluster metadata
     * @param principal The principal of the user initiating the request
     * @return The computed replica assignments
     * @throws ReplicaAssignorException If no assignment can be computed
     */
    public ReplicaAssignment computeAssignment(
            NewPartitions partitions,
            Cluster cluster,
            KafkaPrincipal principal) throws ReplicaAssignorException;

    /**
     * Computed replica assignments for the specified partitions
     */
    public class ReplicaAssignment {

        private final Map<Integer, List<Integer>> assignment;

        public ReplicaAssignment(Map<Integer, List<Integer>> assignment) {
            this.assignment = assignment;
        }

        /**
         * @return a Map with the list of replicas for each partition
         */
        public Map<Integer, List<Integer>> assignment() {
            return assignment;
        }
    }

    /**
     * Partitions which require an assignment to be computed
     */
    public interface NewPartitions {

        /**
         * The name of the topic for these partitions
         */
        String topicName();

        /**
         * The list of partition ids
         */
        List<Integer> partitionIds();

        /**
         * The replication factor of the topic
         */
        short replicationFactor();

        /**
         * The configuration of the topic
         */
        Map<String, String> configs();
    }

}
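
For illustration, here is a minimal sketch of a custom assignor written against the proposed interface. It simply spreads replicas round-robin over the brokers reported in the Cluster metadata; the package and class names are hypothetical and the logic only demonstrates the contract, not a realistic assignment goal.

package com.example.assignor; // hypothetical package for this sketch

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.errors.ReplicaAssignorException;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.server.ReplicaAssignor;

public class RoundRobinReplicaAssignor implements ReplicaAssignor {

    @Override
    public ReplicaAssignment computeAssignment(
            NewPartitions partitions,
            Cluster cluster,
            KafkaPrincipal principal) throws ReplicaAssignorException {
        List<Node> brokers = new ArrayList<>(cluster.nodes());
        if (brokers.size() < partitions.replicationFactor()) {
            // Reject the creation if the cluster cannot satisfy the replication factor
            throw new ReplicaAssignorException("Not enough brokers available: " + brokers.size());
        }
        Map<Integer, List<Integer>> assignment = new HashMap<>();
        int offset = 0;
        for (int partitionId : partitions.partitionIds()) {
            // Pick replicationFactor consecutive brokers, rotating the starting broker per partition
            List<Integer> replicas = new ArrayList<>();
            for (int i = 0; i < partitions.replicationFactor(); i++) {
                replicas.add(brokers.get((offset + i) % brokers.size()).id());
            }
            assignment.put(partitionId, replicas);
            offset++;
        }
        return new ReplicaAssignment(assignment);
    }
}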


2) New broker configuration:

Name: replica.assignor.class.name

Type: class

Doc: The fully qualified name of a class that implements the ReplicaAssignor interface. The broker uses it to determine replica assignments when topics or partitions are created. This defaults to DefaultReplicaAssignor.
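
For example, a broker could be configured to use the sketch assignor above with an entry like this in server.properties (the class name is hypothetical):

replica.assignor.class.name=com.example.assignor.RoundRobinReplicaAssignor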


3) New exception and error code

A new exception, org.apache.kafka.common.errors.ReplicaAssignorException, will be defined. It will be non-retriable.

When ReplicaAssignor implementations throw this exception, it will be mapped to a new error code:

REPLICA_ASSIGNOR_FAILED: The replica assignor could not compute an assignment for the topic or partition.
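
As an illustration, a client creating a topic could observe this failure roughly as follows, assuming the new error code is mapped back to ReplicaAssignorException by the AdminClient like other topic-creation errors (the broker address and topic name are placeholders):

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.ReplicaAssignorException;

public class CreateTopicExample {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("my-topic", 3, (short) 2);
            try {
                admin.createTopics(Collections.singletonList(topic)).all().get();
            } catch (ExecutionException e) {
                // The broker returns REPLICA_ASSIGNOR_FAILED when the assignor throws
                if (e.getCause() instanceof ReplicaAssignorException) {
                    System.err.println("Assignment failed: " + e.getCause().getMessage());
                }
            }
        }
    }
}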

Proposed Changes

The existing assignment logic will be extracted into a class, DefaultReplicaAssignor, that implements the ReplicaAssignor interface. It will remain the default implementation and stay a private class. Instead of throwing AdminOperationException, it will be updated to throw ReplicaAssignorException so users get a better error.

AdminManager will create an instance of the specified ReplicaAssignor implementation, or of DefaultReplicaAssignor if none is set. When creating topics or partitions, it will call computeAssignment() for each topic. If multiple topics are present in the request, AdminManager will update the Cluster object so the ReplicaAssignor implementation has access to up-to-date cluster metadata.

Compatibility, Deprecation, and Migration Plan

The current behaviour stays the same. This is an additional feature administrators can opt in to.

Rejected Alternatives

  • Computing assignments for the whole batch: Instead of computing assignments for each topic in the CreateTopics/CreatePartitions request one at a time, we looked at computing assignments for all of them in a single call. We rejected this approach for the following reasons:
    • All logic (validation, policies, creation in ZK) in AdminManager works on a single topic at a time; grouping the replica assignment computation created very complicated logic
    • It's not clear that having all topics at once would significantly improve the computed assignments. This is especially true for the four scenarios listed in the Motivation section

