...
The decommission command removes the registration of a specific broker ID. It does this by sending an UnregisterBrokerRequest.
kafka-shell.sh
The Kafka Metadata shell is a new command which allows users to interactively examine the metadata stored in a KIP-500 cluster.
It can read the metadata from the controllers directly, by connecting to them, or from a metadata snapshot on disk. In the former case, the quorum voters must be specified by passing the --controllers flag; in the latter case, the snapshot file should be specified via --snapshot.
```
$ ./bin/kafka-shell.sh -h
usage: metadata-tool [-h] [--controllers CONTROLLERS] [--config CONFIG] [--snapshot SNAPSHOT] [command [command ...]]

The Apache Kafka metadata tool

positional arguments:
  command                The command to run.

optional arguments:
  -h, --help             show this help message and exit
  --controllers CONTROLLERS, -C CONTROLLERS
                         The quorum voter connection string to use.
  --config CONFIG, -c CONFIG
                         The configuration file to use.
  --snapshot SNAPSHOT, -s SNAPSHOT
                         The snapshot file to read.
```
The metadata tool works by replaying the log and storing the state into in-memory nodes. These nodes are presented in a fashion similar to filesystem directories. For browsing the nodes, several commands are supported:
```
>> help
Welcome to the Apache Kafka metadata shell.

usage: {cat,cd,exit,find,help,history,ls,man,pwd} ...

positional arguments:
  {cat,cd,exit,find,help,history,ls,man,pwd}
    cat                  Show the contents of metadata nodes.
    cd                   Set the current working directory.
    exit                 Exit the metadata shell.
    find                 Search for nodes in the directory hierarchy.
    help                 Display this help message.
    history              Print command history.
    ls                   List metadata nodes.
    man                  Show the help text for a specific command.
    pwd                  Print the current working directory.
```
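As a toy illustration of the idea behind the shell, the sketch below "replays" metadata records into nested in-memory nodes and browses them with ls/cat-style helpers. This is a conceptual model only; the paths, record shapes, and helper names are invented for illustration and are not Kafka's actual implementation.

```python
# Toy model of the metadata shell's node tree: (path, value) records are
# replayed into nested dicts, then browsed like filesystem directories.
# Illustrative only; not Kafka's real data model.

def replay(records):
    """Apply (path, value) records to build an in-memory node tree."""
    root = {}
    for path, value in records:
        parts = path.strip("/").split("/")
        node = root
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return root

def ls(root, path="/"):
    """List the children of a directory-like node."""
    node = root
    for part in [p for p in path.strip("/").split("/") if p]:
        node = node[part]
    return sorted(node)

def cat(root, path):
    """Show the contents of a leaf node."""
    node = root
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

tree = replay([
    ("/brokers/0/listeners", "PLAINTEXT://localhost:9092"),
    ("/brokers/1/listeners", "PLAINTEXT://localhost:9093"),
    ("/topics/foo/partitions", "3"),
])
print(ls(tree))                           # ['brokers', 'topics']
print(ls(tree, "/brokers"))               # ['0', '1']
print(cat(tree, "/topics/foo/partitions"))  # 3
```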
The interface of the metadata tool is currently considered unstable and may change when KIP-500 becomes production-ready.
Configurations
Configuration Name | Possible Values | Notes |
---|---|---|
process.roles | null; broker; controller; broker,controller | If this is null (absent), then we are in legacy mode. Otherwise, we are in KIP-500 mode and this configuration determines which roles this process plays: broker, controller, or both. |
controller.listener.names | If non-null, this must be a comma-separated list of listener names. When communicating with the controller quorum, the broker will always use the first listener in this list. | A comma-separated list of the names of the listeners used by the KIP-500 controller. This configuration is required if this process is a KIP-500 controller. The legacy controller will not use this configuration. Despite the similar name, note that this is different from the "control plane listener" introduced by KIP-291. |
listeners | A comma-separated list of the configured listeners. For example, INTERNAL://192.1.1.8:9092, EXTERNAL://10.1.1.5:9093, CONTROLLER://192.1.1.8:9094 | This configuration is now required. |
sasl.mechanism.controller.protocol | SASL mechanism used for communication with controllers. Default is GSSAPI. | This is analogous to sasl.mechanism.inter.broker.protocol, but for communication with the controllers. |
controller.quorum.voters | If non-null, this must be a comma-separated list of all the controller voters, in the format: {controller-id}@{controller-host}:{controller-port} | When in KIP-500 mode, each node must have this configuration in order to find out how to communicate with the controller quorum. Note that this replaces the "quorum.voters" config described in KIP-595. This configuration is required for both brokers and controllers. |
node.id | a 32-bit ID | This configuration replaces `broker.id` for zk-based Kafka processes in order to reflect its more general usage. It serves as the ID associated with each role that the process is acting as. For example, a configuration with `node.id=0` and `process.roles=broker,controller` defines two nodes: `broker-0` and `controller-0`. |
initial.broker.registration.timeout.ms | 60000 | When initially registering with the controller quorum, the number of milliseconds to wait before declaring failure and exiting the broker process. |
broker.heartbeat.interval.ms | 3000 | The length of time between broker heartbeats. |
broker.session.timeout.ms | 18000 | The length of time that a broker lease lasts if no heartbeats are made. |
metadata.log.dir | If set, this must be a path to a log directory. | This configuration determines where we put the metadata log. If it is not set, the metadata log is placed in the first log directory from log.dirs. |
controller.quorum.fetch.timeout.ms | Maximum time without a successful fetch from the current leader before a new election is started. | New name for quorum.fetch.timeout.ms |
controller.quorum.election.timeout.ms | Maximum time without collecting a majority of votes during the candidate state before a new election is retried. | New name for quorum.election.timeout.ms |
controller.quorum.election.backoff.max.ms | Maximum exponential backoff time (based on the number of retries) after an election timeout, before a new election is triggered. | New name for quorum.election.backoff.max.ms |
controller.quorum.request.timeout.ms | Maximum time before a pending request is considered failed and the connection is dropped. | New name for quorum.request.timeout.ms |
controller.quorum.retry.backoff.ms | Initial delay between request retries. This config and the one below are used for retriable request errors or lost connectivity, and are different from the election.backoff configs above. | New name for quorum.retry.backoff.ms |
controller.quorum.retry.backoff.max.ms | Max delay between requests. Backoff will increase exponentially beginning from quorum.retry.backoff.ms | New name for quorum.retry.backoff.max.ms |
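Putting several of the configurations above together, a minimal configuration fragment for a combined-mode node might look like the following sketch. The hostnames, ports, and IDs are illustrative examples, not defaults:

```properties
# Illustrative KIP-500 combined-mode configuration (names and ports are examples).
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093
listeners=PLAINTEXT://broker1.example.com:9092,CONTROLLER://controller1.example.com:9093
controller.listener.names=CONTROLLER
```

Note that the node's own ID appears both in node.id and as one of the controller.quorum.voters entries, since this process acts as a controller as well as a broker.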
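The two retry backoff settings in the last rows work together: delays start at controller.quorum.retry.backoff.ms and grow exponentially up to the cap set by controller.quorum.retry.backoff.max.ms. A minimal sketch of such a schedule (the numbers are illustrative, not Kafka's defaults or actual code):

```python
def retry_backoff_ms(attempt: int, initial_ms: int = 20, max_ms: int = 1000) -> int:
    """Illustrative capped exponential backoff: initial_ms plays the role of
    controller.quorum.retry.backoff.ms and max_ms the role of
    controller.quorum.retry.backoff.max.ms (values here are made up)."""
    return min(initial_ms * (2 ** attempt), max_ms)

# Delays double per retry, then plateau at the cap:
print([retry_backoff_ms(a) for a in range(8)])
# [20, 40, 80, 160, 320, 640, 1000, 1000]
```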
...