
The goal behind the command line shell is fundamentally to provide centralized management for Kafka operations.

Status

Current state: Under Discussion

Discussion thread: here

JIRA: here

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Folks have created dozens of different systems to work with Kafka. If we can provide a wire protocol that allows the brokers to execute administrative code, then clients can execute commands much more simply. We currently have so many ties to ZooKeeper that it is nearly impossible for such code to do anything better than wrap shell scripts like kafka-topics.sh. With the wire protocol we should be able to have clients in any language administer Kafka. If someone wants a REST interface (for example), they can write that in whatever language they like. We should have a client from the project that is not only an example but a fully functional replacement for the kafka-topics, reassign-partitions, and consumer offset tools.

Public Interfaces

Briefly list any new interfaces that will be introduced as part of this proposal or any existing interfaces that will be removed or changed. The purpose of this section is to concisely call out the public contract that will come along with this feature.

A public interface is any change to the following:

  • Command line tools and arguments

Proposed Changes

Proposed changes include 4 parts:

  1. Wire Protocol extensions - to add new Admin messages
  2. Server-side Admin commands handlers (TopicCommand-like)
  3. Admin Client - an out-of-box client for performing administrative commands
  4. Interactive Shell / CLI tool supporting administrative commands

 

Some open questions and items under discussion are marked with [x]. Please see the Open Questions section for more details.

 

1. Wire Protocol Extensions

Overview

For each type of Admin Request a separate type of Wire protocol message is created.

Currently there are 3 types of requests:

  • Topic commands, which include CreateTopic(Request | Response), AlterTopic, DeleteTopic, DescribeTopic, ListTopics.
  • Replication tools - ReassignPartitions, VerifyReassignPartitions; PreferredReplicaLeaderElection, VerifyPreferredReplicaLeaderElection
  • A special type of request to support Admin commands - ClusterMetadata

Please find the details under the specific request/response schema proposals below.

Schema

The same notation as in A Guide To The Kafka Protocol is used here. The only difference is a new Kafka Protocol metatype, MaybeOf ("?" in the notation), which means the value is optional in the message. To signal whether the value is present, a special control byte is prepended before it (0 - the field is absent, otherwise - read the value normally) [1].
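
As an illustration only (the exact serialization helpers are not part of this proposal), an optional int32 field encoded with the MaybeOf control byte could be written and read roughly like this:

import java.nio.ByteBuffer;

public class MaybeOfExample {

    // Write an optional int32: a single control byte of 0 means "field is absent",
    // any non-zero value means the field follows and is read normally.
    static void writeMaybeInt32(ByteBuffer buffer, Integer value) {
        if (value == null) {
            buffer.put((byte) 0);
        } else {
            buffer.put((byte) 1);
            buffer.putInt(value);
        }
    }

    // Read an optional int32 written by writeMaybeInt32; returns null when absent.
    static Integer readMaybeInt32(ByteBuffer buffer) {
        return buffer.get() == 0 ? null : buffer.getInt();
    }
}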

 

All admin messages listed below must be sent only to the Controller broker; only the controller will process such messages. If an Admin message is sent to an ordinary broker, a special error code is returned (code 22). In case of any other failure while processing the message, AdminRequestFailed is returned [2].

Error                          Code    Description
AdminRequestFailed             21      Unexpected error occurred while processing the Admin request.
NotControllerForAdminRequest   22      Target broker (id=<this_broker_id>) is not serving a controller's role.

ClusterMetadata Schema

Cluster Metadata Request

 

ClusterMetadataRequest =>
Cluster Metadata Response

 

ClusterMetadataResponse => ErrorCode [Broker] ?(Controller)
ErrorCode => int16
Broker => NodeId Host Port
NodeId => int32
Host => string
Port => int32
 Controller => Broker

ClusterMetadataRequest is a request with no arguments.

ClusterMetadataResponse holds the error code (0 in case of a successful result), the list of brokers in the cluster and, optionally, the broker serving the Controller's role (an empty Controller most likely means either an error during request processing or the cluster being in some intermediate state).

ClusterMetadataRequest is required for admin clients to discover the Kafka brokers, and specifically the controller's location, since only the controller may execute admin commands [2].
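
A minimal sketch of how a client could use this (the ClusterMetadataRequest/ClusterMetadataResponse wrappers and the AdminConnection type are assumptions, not part of this proposal): discover the controller once, cache it, and invalidate the cache whenever error code 22 (NotControllerForAdminRequest) or a connection failure is observed.

public class ControllerCache {

    private Broker controller;   // Broker is the (NodeId, Host, Port) triple from the schema above

    // Returns the cached controller, refreshing it via ClusterMetadataRequest when needed.
    public Broker controller(AdminConnection connection) {       // AdminConnection is hypothetical
        if (controller == null) {
            ClusterMetadataResponse metadata = connection.send(new ClusterMetadataRequest());
            if (metadata.errorCode() != 0 || metadata.controller() == null)
                throw new IllegalStateException("Controller is unknown; the cluster may be in an intermediate state");
            controller = metadata.controller();
        }
        return controller;
    }

    // Call this after receiving error code 22 or losing the connection, then retry.
    public void invalidate() {
        controller = null;
    }
}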

Topic Admin Schema

Create Topic Request

 

CreateTopicRequest => TopicName ?(Partitions) ?(Replicas) ?(ReplicaAssignment) [ConfigEntry]
TopicName => string
Partitions => int32
Replicas => int32
ReplicaAssignment => string
ConfigEntry => ConfigKey ConfigValue
ConfigKey => string
ConfigValue => string

 

Create Topic Response

 

CreateTopicResponse => ErrorCode ?(ErrorDescription)
ErrorCode => int16
ErrorDescription => string

CreateTopicRequest requires the topic name and either (partitions + replicas) or a replica assignment to create the topic (validation is done on the server side). You can also specify topic-level configs to create the topic with (to use the defaults, set an empty array).

CreateTopicResponse is fairly simple - you receive an error code (0, as always, identifies NO_ERROR) and optionally an error description. Usually it will hold the higher-level exception that happened during command execution.
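
For illustration, with the AdminClient proposed in section 3 below, the two ways of creating a topic would look roughly like this (the "bootstrap.servers" property name and the replica assignment string format, which follows kafka-topics.sh, are assumptions):

import java.util.Collections;
import java.util.Properties;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");      // at least one broker to connect to

        AdminClient admin = new AdminClient(props);

        // Let the cluster compute the replica assignment: 8 partitions, replication factor 3,
        // an empty config list means "use the topic-level defaults".
        admin.createTopic("page-views", 8, 3, Collections.<ConfigEntry>emptyList());

        // Or spell the assignment out; partitions and replication factor are derived from it.
        // Here partition 0 lives on brokers 0,1,2 and partition 1 on brokers 1,2,3.
        admin.createTopic("page-views-manual", "0:1:2,1:2:3", Collections.<ConfigEntry>emptyList());
    }
}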

Alter Topic Request

 

AlterTopicRequest => TopicName ?(Partitions) ?(ReplicaAssignment) [AddedConfigEntry] [DeletedConfig]
TopicName => string
Partitions => int32
ReplicaAssignment => string
AddedConfigEntry => ConfigKey ConfigValue
 ConfigKey => string
 ConfigValue => string
 DeletedConfig => string

 

Alter Topic Response

 

AlterTopicResponse => ErrorCode ?(ErrorDescription)
ErrorCode => int16
ErrorDescription => string

AlterTopicRequest is similar to the previous request; to specify topic-level settings that should be removed, use the DeletedConfig array (just the setting keys). The user can provide a new partitions value, a replica assignment, or both.

AlterTopicResponse is similar to CreateTopicResponse.
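
A possible call against the AdminClient API from section 3, continuing the example above (the ConfigEntry constructor is an assumption):

// Grow the topic to 16 partitions, keep the replica assignment computed by the cluster (null),
// override retention.ms, and drop a previously set cleanup.policy override.
admin.alterTopic("page-views",
        16,                                                    // new number of partitions
        null,                                                  // no manual replica assignment
        Arrays.asList(new ConfigEntry("retention.ms", "86400000")),
        Arrays.asList("cleanup.policy"));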

Delete Topic Request

 

DeleteTopicRequest => TopicName
TopicName => string

 

Delete Topic Response

 

DeleteTopicResponse => ErrorCode ?(ErrorDescription)
ErrorCode => int16
ErrorDescription => string

DeleteTopicRequest requires only the name of the topic which should be deleted.

DeleteTopicResponse is similar to CreateTopicResponse.
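
For completeness, the corresponding AdminClient call (section 3) is a one-liner:

// Only the topic name is needed; deletion itself is performed (or scheduled) broker-side.
admin.deleteTopic("page-views-manual");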

Describe Topic Request

 

DescribeTopicRequest => TopicName
TopicName => string

 

Describe Topic Response

 

DescribeTopicResponse => ErrorCode ?(ErrorDescription) ?(TopicDescription)
ErrorCode => int16
ErrorDescription => string
TopicDescription => TopicName TopicConfigDetails [TopicPartitionDetails]
TopicName => string
TopicConfigDetails => Partitions ReplicationFactor [ConfigEntry]
Partitions => int32
ReplicationFactor => int32
ConfigEntry => string string
TopicPartitionDetails => PartitionId ?(Leader) [Replica] [ISR]
PartitionId => int32
Leader => int32
Replica => int32
ISR => int32

DescribeTopicRequest requires only the topic name.

DescribeTopicResponse, besides the errorCode and optional errorDescription which are used in the same way as in previous messages, holds an optional (non-empty if execution was successful) TopicDescription structure. See the table below for details:

Field                   Description
TopicName               The name of the topic for which the description is provided.
TopicConfigDetails      A structure that holds basic replication details.
Partitions              Number of partitions in the given topic.
Config                  Topic-level setting and value which was overridden.
TopicPartitionDetails   List describing replication details for each partition.
PartitionId             Id of the partition.
Leader                  Optional broker-leader id for the described partition.
Replicas                List of broker ids serving a replica's role for the partition.
ISR                     Same as Replicas but includes only brokers that are known to be "in-sync".
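
A sketch of reading the description through the AdminClient from section 3; the accessor names on DescribeTopicOutput are assumptions, since only the wire-level fields are defined above:

DescribeTopicOutput description = admin.describeTopic("page-views");   // accessors below are hypothetical
for (TopicPartitionDetails partition : description.partitions()) {
    System.out.println("partition " + partition.partitionId()
            + " leader="   + partition.leader()      // may be absent when no leader is elected
            + " replicas=" + partition.replicas()
            + " isr="      + partition.isr());
}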

 

List Topics Request

 

ListTopicsRequest =>

 

List Topics Response

 

ListTopicsResponse => ErrorCode ?(ErrorDescription) ?(TopicsList)
ErrorCode => int16
ErrorDescription => string
TopicsList => [TopicName]
TopicName => string

ListTopicsRequest is a request with no arguments.

ListTopicsResponse, besides the errorCode and optional errorDescription which are used in the same way as in previous messages, optionally holds (non-empty if execution was successful) two lists of topic names - one for deleted topics (marked for deletion) and the second one for ordinary, alive topics.

Replication Commands Schema

Reassign Partitions
Reassign Partitions Request

 

ReassignPartitionRequest => ManualAssignment
ManualAssignment => string

 

Reassign Partitions Response

 

ReassignPartitionResponse => ErrorCode ?(ErrorDescription)
ErrorCode => int16
ErrorDescription => string

ReassignPartitionsRequest requires a manual partition assignment string. Parsing / validation is done on the server. This request will only initiate the partition reassignment and return immediately. It is the client's responsibility to block the user while continually sending VerifyReassignPartitionsRequest to check the reassignment status. The format is the following:

{
  "partitions": [
    {"topic": "foo",
     "partition": 1,
     "replicas": [1,2,3] }
  ],
  "version": 1
}

ReassignPartitionResponse is similar to CreateTopicResponse.
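
Through the AdminClient (section 3) this is a single non-blocking call; the string is exactly the JSON document shown above:

String reassignment =
    "{\"partitions\": [{\"topic\": \"foo\", \"partition\": 1, \"replicas\": [1,2,3]}], \"version\": 1}";

// Returns immediately; completion is tracked with VerifyReassignPartitions (next section).
Future<ReassignPartitionsResponse> result = admin.reassignPartitions(reassignment);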


Verify Reassign Partitions Request

 

VerifyReassignPartitionRequest => ManualAssignment
ManualAssignment => string

 

Verify Reassign Partitions Response

 

VerifyReassignPartitionResponse => [ReassignmentResult] ErrorCode ?(ErrorDescription)
ReassignmentResult => TopicAndPartition ResultCode
TopicAndPartition => string int32
ResultCode => int16
ErrorCode => int16
ErrorDescription => string

VerifyReassignPartitionsRequest requires the same manual partition assignment string as ReassignPartitionsRequest, whose status is verified by this request.

VerifyReassignPartitionResponse, as with other Admin requests, may return an error code and an optional error description in case of failure. Otherwise a reassignment result map is returned. It holds the reassignment status per partition (-1 - reassignment failed, 0 - in progress, 1 - completed successfully).
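
A client-side polling loop built on the result codes above might look like this, continuing the reassignment example (the one-second interval is an arbitrary choice):

// Poll until no partition reports 0 (in progress); -1 means failed, 1 means completed.
while (true) {
    Map<TopicPartition, Short> status = admin.verifyReassignPartitions(reassignment);
    boolean inProgress = false;
    for (Map.Entry<TopicPartition, Short> entry : status.entrySet()) {
        if (entry.getValue() == -1)
            throw new IllegalStateException("Reassignment failed for " + entry.getKey());
        if (entry.getValue() == 0)
            inProgress = true;
    }
    if (!inProgress)
        break;                 // every partition reported 1 - reassignment completed
    Thread.sleep(1000);
}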


Preferred Replica Leader Election
Preferred Replica Leader Election Request

 

PreferredReplicaLeaderElectionRequest => PartitionsSerialized
PartitionsSerialized => string

 

Preferred Replica Leader Election Response

 

PreferredReplicaLeaderElectionResponse => ErrorCode ?(ErrorDescription)
ErrorCode => int16
ErrorDescription => string

PreferredReplicaLeaderElectionRequest initiates the preferred replica leader election procedure; similar to ReassignPartitionsRequest, this request is intended to be non-blocking. The schema consists of one optional field - the partitions, in serialized form (json), for which the procedure should be started. The format is the following:

{"partitions":[

{"topic": "foo", "partition": 1},
{"topic": "foobar", "partition": 2}

]
}

PreferredReplicaLeaderElectionResponse is similar to CreateTopicResponse.

The status of the procedure may be checked with DescribeTopicRequest - the head of the replicas list and the leader broker should be the same.
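
With the AdminClient from section 3 the call mirrors reassignPartitions; the partitions string is the JSON document shown above:

String partitions =
    "{\"partitions\": [{\"topic\": \"foo\", \"partition\": 1}, {\"topic\": \"foobar\", \"partition\": 2}]}";

// Non-blocking: the future completes once the election has finished (or an error occurred).
Future<PreferredReplicaLeaderElectionResponse> election =
        admin.preferredReplicaLeaderElection(partitions);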

2. Server-side Admin Request handlers

All incoming requests will be handled by specific helper classes called from KafkaApis - TopicCommandHelper for topic admin commands, ReassignPartitionsCommandHelper and PreferredReplicaLeaderElectionCommandHelper for the replication tools.

All these commands are already implemented as standalone CLI tools, so there is no need to re-implement them. Unfortunately most of the command classes are strongly coupled with CLI logic and can hardly be refactored, so for now (before we remove the standalone CLI commands) the logic from those classes will most likely be extracted and copied to separate classes (as proposed - TopicCommandHelper etc.).

3. Admin Client

This component is intended to be a Kafka out-of-box client implementation for Admin commands.

The Admin client will use the Kafka NetworkClient facility from /clients for cluster communication. Besides Admin commands, the client will maintain a cluster metadata cache and will provide the user with a convenient way of handling long-running commands (e.g. reassign partitions).

Proposed API:

 

AdminClient
 public class AdminClient {
    
	/**
     * An AdminClient is instantiated by providing a set of key-value pairs as configuration. Most
	 * of the settings will be related to NetworkClient
     *
     * @param properties settings related to Network client and at least one broker from KafkaCluster to connect to
     */
    public AdminClient(Properties properties) 
    
	/**
     * Create topic with given number of partitions and replication factor, replica assignment will be handled by Kafka cluster
     *
     * @throws ApiException
     */
    public void createTopic(String topicName, int partitions, int replicationFactor, List<ConfigEntry> configs) throws ApiException;
    
	/**
     * Create topic with specified replica assignment (number of partitions and replication factor will be taken
     * from replica assignment string)
     *
     * @throws ApiException
     */
    public void createTopic(String topicName, String replicaAssignment, List<ConfigEntry> configs) throws ApiException;

    /**
     * Alter existing topic partitions and/or replica assignment among Kafka brokers
     *
     * @throws ApiException
     */
    public void alterTopic(String topicName, Integer partitions, String replicaAssignment,
                                    List<ConfigEntry> addedConfigs, List<String> deletedConfigs) throws ApiException;
    /**
     * Delete Kafka topic by name
     *
     * @throws ApiException
     */
    public void deleteTopic(String topicName) throws ApiException;
    
	/**
     * List all existing topics in Kafka cluster
     *
     * @throws ApiException
     */
    public List<String> listTopics() throws ApiException;
    
	/**
     * Request replication information about Kafka topic
     *
     * @throws ApiException
     */
    public DescribeTopicOutput describeTopic(String topicName) throws ApiException;
    
	/**
     * Initiate long-running reassign partitions procedure
     *
     * @param partitionsReassignment manual partitions assignment string (according to ReassignPartitionsCommand)
     * @return future of the reassignment result which is completed once server-side partitions reassignment has succeeded or
     * an error occurred so that partitions reassignment cannot be started
     * @throws ApiException
     */
    public Future<ReassignPartitionsResponse> reassignPartitions(String partitionsReassignment) throws ApiException;

    /**
     * Check the interim status of the partitions reassignment
     *
     * @param partitionsReassignment manual partitions assignment string (according to ReassignPartitionsCommand)
     * @return partition to reassignment result code (completed, in-progress, failed)
     * @throws ApiException
     */
    public Map<TopicPartition, Short> verifyReassignPartitions(String partitionsReassignment) throws ApiException;
    
	/**
     * Initiate long-running preferred replica leader election procedure
     *
     * @param partitions serialized partitions for which preferred replica leader election will be started
     *                   (according to PreferredReplicaLeaderElectionCommand)
     * @return future of the election result which is completed once server-side preferred replica is elected for provided partitions or
     * an error has occurred
     * @throws ApiException
     */
    public Future<PreferredReplicaLeaderElectionResponse> preferredReplicaLeaderElection(String partitions) throws ApiException;

    /**
     * Check the interim status of the preferred replica leader election
     *
     * @param partitions for which preferred replica leader election was started (according to PreferredReplicaLeaderElectionCommand)
     * @return partition to election result code (completed, in-progress, failed)
     * @throws ApiException
     */
    public VerifyPreferredReplicaLeaderElectionResponse verifyPreferredReplicaLeaderElection(String partitions)
            throws ApiException;
	/**
     * A generic facility to send Admin request and return response counterpart
     *
     * @param adminRequest AdminRequest message
     * @param <T>          concrete AdminRequest type
     * @return response counterpart
     * @throws ApiException
     */
    private <T extends AbstractAdminResponse> T sendAdminRequest(AbstractAdminRequest<T> adminRequest) throws ApiException;

 
	/**
     * Refreshes cluster metadata cache - list of brokers and controller
     * 
     * @throws ApiException
     */
    private void updateClusterMetadata() throws Exception;

}
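
The Future-returning methods give the caller a choice between blocking until the server-side operation finishes and polling with the corresponding verify call. A brief sketch (the timeout value is arbitrary, and reassignmentJson is a placeholder for the assignment JSON described in the Wire Protocol section):

String reassignmentJson = "...";    // manual assignment JSON, format as in the Wire Protocol section

Future<ReassignPartitionsResponse> pending = admin.reassignPartitions(reassignmentJson);

// Option 1: block until the server-side reassignment completes (or fails to start).
ReassignPartitionsResponse result = pending.get(30, TimeUnit.MINUTES);

// Option 2: return control to the user and poll the interim status instead.
Map<TopicPartition, Short> status = admin.verifyReassignPartitions(reassignmentJson);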

4. Interactive Shell / CLI tool

This component will wrap AdminClient and provide an interactive shell-like environment for executing administrative commands. The goal of these changes is to let people use the existing standalone tools but from a single script, optionally running commands in a shell, so command arguments/names are not changed compared to the existing tools, where possible.

The tool will be run in two modes:

  • Command line interface

  • Shell-like mode

Installation

This is an instruction on how to build and start the Kafka Command Line Tool (hereinafter - the Shell). The implementation is in progress under KAFKA-1694.

To start the Shell you need to have a running Kafka cluster built from the given patch (attached under KAFKA-1694) and to build the Shell itself.

  1. Get the code.
    Get the KAFKA-1772_1802_1775_1774_v2.patch attached to KAFKA-1694.
    The patch was built against trunk, on top of revision 7e9368b. So reset to this commit and then run the following to apply the patch:

    git am KAFKA-1772_1802_1775_1774_v2.patch


  2. Build the code. Run:

    ./gradlew releaseTarGz_2_10_4


  3. Start a Kafka cluster somewhere from the archive under ./core/build/distributions/kafka_2.10-0.8.3-SNAPSHOT.tgz

  4. Unpack the build archive:
    #cd <kafka_home>/core/build/distributions/ && rm -rf kafka_2.10-0.8.3-SNAPSHOT && tar -xf kafka_2.10-0.8.3-SNAPSHOT.tgz

  5. Start the Shell:
    sudo <kafka_home>/core/build/distributions/kafka_2.10-0.8.3-SNAPSHOT/bin/kafka.sh --shell --broker <host : port>
    Where <host : port> is the location of one of the running brokers from the cluster.

  6. To get Shell help run:
    sudo <kafka_home>/core/build/distributions/kafka_2.10-0.8.3-SNAPSHOT/bin/kafka.sh --help

Sample usage

You can use the Kafka Command Line Tool in two ways: 1) as an interactive shell, 2) as a simple CLI.

E.g. to get list of topics you can:

1) Start Shell and run:

sudo bin/kafka.sh --shell --broker <host : port>

kafka> list-topics

Or

2) Run right from kafka.sh:

sudo bin/kafka.sh --list-topics --broker <host : port>


Open questions:

 

Compatibility, Deprecation, and Migration Plan

  • When will we remove the existing behavior?

I don't know if that has to be decided now. Folks have already built wrapper tools, they can still keep using them if they want. We should code freeze them though.

Rejected Alternatives

If there are alternative ways of accomplishing the same thing, what were they? The purpose of this section is to motivate why the design is the way it is and not some other way.
