
Status

Current state"Under Discussion"

Discussion thread: <none>

JIRA: <none>

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Broker discovery, registration and liveness in Kafka currently rely very heavily on ZooKeeper.
The flow for registration is roughly the following: when a broker starts up, it registers itself with ZooKeeper. The active Controller picks up on this registration (by watching the /brokers/ids znode path for changes) and sends UpdateMetadata requests to all other brokers in the cluster in order to inform them of the new broker.
Liveness is maintained by leveraging ZooKeeper sessions - each broker maintains a ZooKeeper session via heartbeats. Once a broker disconnects from ZooKeeper, its session expires, which deletes the ephemeral /brokers/ids/{id} znode it owned. The Kafka Controller notices this znode deletion and sends UpdateMetadata requests to inform the brokers of the new live broker set.

While this mechanism has worked well, it is not without its problems. One notable gap is that a broker can be partitioned away from the controller (unable to receive metadata updates) yet still remain part of the cluster, because its ZooKeeper session is kept intact.

Given that Kafka is moving away from ZooKeeper with KIP-500 (Replace ZooKeeper with a Self-Managed Metadata Quorum), it seems worthwhile to introduce a new broker registration mechanism that is less reliant on ZooKeeper.


TODO: We need the metadata update RPC/mechanism to rely on it as a heartbeat.

Abstract

Introduce an inter-broker registration mechanism where brokers actively register themselves with the controller and maintain heartbeat sessions.

Public Interfaces

Register Broker Request

RegisterBrokerRequest
{
  "apiKey": 100,
  "type": "request",
  "name": "RegisterBrokerRequest",
  "validVersions": "0",
  "flexibleVersions": "0+",
  "fields": [
    { "name": "BrokerId", "type": "int32", "versions": "0+", "entityType": "brokerId",
      "about": "The ID of the broker that is being registered" },
    { "name": "RackId", "type": "string", "versions": "0+", "ignorable": true, "default": "null",
      "about": "The rack of the broker, or null if it has not been assigned to a rack." },
    { "name": "Endpoints", "type": "[]BrokerEndpoint",
      "about": "The broker endpoints.", "fields": [
        { "name": "Port", "type": "int32",
          "about": "The port of this endpoint." },
        { "name": "Host", "type": "string",
          "about": "The hostname of this endpoint." },
        { "name": "Listener", "type": "string",
          "about": "The listener name of this endpoint." },
        { "name": "SecurityProtocol", "type": "int16",
          "about": "The security protocol type of this endpoint." }
      ]
    }
  ]
}
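
For illustration, here is a minimal plain-Java sketch of populating this request. The record types below are stand-ins for the message classes Kafka would generate from the schema above, and all of the values are made up:

import java.util.List;

// Illustrative stand-ins for the generated RegisterBrokerRequest classes.
public class RegisterBrokerExample {
    record BrokerEndpoint(int port, String host, String listener, short securityProtocol) {}
    record RegisterBrokerRequest(int brokerId, String rackId, List<BrokerEndpoint> endpoints) {}

    public static void main(String[] args) {
        RegisterBrokerRequest request = new RegisterBrokerRequest(
                0,              // BrokerId
                "us-west-2a",   // RackId, or null if no rack is assigned
                List.of(new BrokerEndpoint(9092, "kafka-0.svc.cluster.local",
                        "EXTERNAL", (short) 3 /* SecurityProtocol id for SASL_SSL */)));
        System.out.println(request);
    }
}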

Set State?


Proposed Changes

ZNode Changes

We will add a new persistent znode for each broker. The Controller will be tasked with maintaining these znodes.

/cluster/brokers/{id} znode
{
  "settings": {
    "listener_security_protocol_map": {
      "INTERNAL": "PLAINTEXT",
      "REPLICATION": "PLAINTEXT",
      "EXTERNAL": "SASL_SSL"
    },
    "endpoints": [
      "INTERNAL://kafka-0.svc.cluster.local:9033",
      "REPLICATION://kafka-0.svc.cluster.local:9011",
      "EXTERNAL://b0.us-west-2.svc.cluster.local:9092"
    ],
    "rack": "0",
    "jmx_port": 7203
  },
  "epoch": 3,
  "state": "OFFLINE",
  "version": 1
}

Compared to the old /brokers/ids/{id} znode, we've removed the version 1 top-level fields "host", "port" and "timestamp": "host" and "port" were deprecated in favor of "endpoints", and "timestamp" was not used at all.

A "state" field is added denoting the broker's global state (clarified below) and an "epoch".

Broker States

To maintain backwards compatibility, we will differentiate between two kinds of state - internal and global.
The internal state includes more sub-states, like RecoveringFromUncleanShutdown; this is the BrokerStates mechanism that already exists.

We are introducing a new, global state for each broker. To start with, each broker can be in one of four such states (see the enum sketch after this list):

  • Offline - when the broker process is in the Offline state, it is either not running at all, or in the process of performing the single-node tasks needed to start up, such as initializing the JVM or performing log recovery.
  • Fenced - when the broker is in the Fenced state, it will not respond to RPCs from clients. The broker will be in the Fenced state when starting up and attempting to fetch the newest metadata. It will re-enter the Fenced state if it can't contact the active controller. Fenced brokers should be omitted from the metadata sent to clients.
  • Online - when a broker is Online, it is ready to respond to requests from clients.
  • Stopping - brokers enter the Stopping state when they receive a SIGINT. This indicates that the system administrator wants to shut down the broker. When a broker is stopping, it is still running, but we are trying to migrate the partition leaders off of it. Eventually, the active controller will ask the broker to finally go offline, by returning a special result code in the MetadataFetchResponse. Alternatively, the broker will shut down if the leaders can't be moved within a predetermined amount of time.
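
As referenced above, here is a sketch of the four global states as a Java enum, with the transitions implied by the descriptions. The enum and its transition rules are illustrative, not an existing Kafka class:

// Sketch of the proposed global broker states.
public enum GlobalBrokerState {
    OFFLINE,   // process not running, or performing single-node startup tasks
    FENCED,    // registered, but catching up on metadata; hidden from clients
    ONLINE,    // caught up; serving client requests
    STOPPING;  // received SIGINT; partition leaders being migrated away

    // Transitions implied by the proposal text.
    public boolean canTransitionTo(GlobalBrokerState next) {
        return switch (this) {
            case OFFLINE  -> next == FENCED;
            case FENCED   -> next == ONLINE || next == OFFLINE;
            case ONLINE   -> next == FENCED || next == STOPPING || next == OFFLINE;
            case STOPPING -> next == OFFLINE;
        };
    }
}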

Register Flow

Once a broker starts up, it goes through its single-node tasks - initializing certain classes and performing log recovery.
After that is done, it sends a Register request to the controller. The controller creates a znode for the broker (or changes the znode's state if one already exists), setting it to the FENCED state, and responds to the broker. Upon receiving the response, the broker checkpoints its current metadata and enters its fenced state, where metadata catch-up begins.

At this point, the normal metadata fetches act both as heartbeats and a way to propagate information.

Once the Controller determines that the broker has caught up to the latest metadata, it persists the state change in ZooKeeper and also writes it to the metadata log. By way of metadata fetches, the new broker learns that it has caught up once the metadata it's reading indicates so.

If metadata heartbeats are missed, the broker is transitioned back into the Fenced state.
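
A rough sketch of the controller-side bookkeeping this implies, assuming a simple timeout-based session. The class name, method names and timeout value are all illustrative:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: each metadata fetch doubles as a heartbeat; a broker whose last
// fetch is older than the session timeout is transitioned back to FENCED.
public final class BrokerLivenessTracker {
    private static final long SESSION_TIMEOUT_MS = 18_000; // illustrative value
    private final Map<Integer, Long> lastFetchTimeMs = new ConcurrentHashMap<>();

    // Called whenever the controller serves a metadata fetch for a broker.
    public void onMetadataFetch(int brokerId) {
        lastFetchTimeMs.put(brokerId, System.currentTimeMillis());
    }

    // Called periodically; returns the brokers that missed their heartbeats
    // and should be fenced.
    public List<Integer> expiredBrokers(long nowMs) {
        return lastFetchTimeMs.entrySet().stream()
                .filter(e -> nowMs - e.getValue() > SESSION_TIMEOUT_MS)
                .map(Map.Entry::getKey)
                .toList();
    }
}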

Controller Action

From the controller's perspective, a broker comes online only once it has caught up to the latest cluster metadata. Once a broker comes online, the controller goes through its normal `onBrokerStartup` flow - propagating the new broker metadata across the cluster, triggering online partition state changes, resuming reassignments, etc.

Once a broker fails its heartbeats, the Controller goes through its `onBrokerFailure` flow - propagating metadata across the cluster, sending LeaderAndIsr requests, etc.
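
A sketch of how these transitions might wire into the controller's existing callbacks. Only `onBrokerStartup` and `onBrokerFailure` are named in this KIP; everything else below is illustrative:

// Sketch: routing global state transitions to the existing controller flows.
final class ControllerBrokerEvents {
    // Broker transitioned Fenced -> Online after catching up on metadata.
    void handleBrokerCaughtUp(int brokerId) {
        onBrokerStartup(brokerId);
    }

    // Broker transitioned Online -> Fenced after missing metadata heartbeats.
    void handleMissedHeartbeats(int brokerId) {
        onBrokerFailure(brokerId);
    }

    private void onBrokerStartup(int brokerId) {
        // Propagate the new broker's metadata across the cluster, trigger
        // online partition state changes, resume reassignments, etc.
    }

    private void onBrokerFailure(int brokerId) {
        // Propagate metadata across the cluster, send LeaderAndIsr requests, etc.
    }
}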



Compatibility, Deprecation, and Migration Plan

  • Compatibility with znodes
  • Upgrade path to no ZK?

Rejected Alternatives

  • Have each broker maintain its own persistent znode
    • Trickier to manage
    • No single source of truth
    • Harder to migrate off of ZooKeeper later

