
Table of Contents

Status

Current state: Accepted

Discussion thread: https://lists.apache.org/thread/phnrz31dj0jz44kcjmvzrrmhhsmbx945

...

MBean name | Description
kafka.server:type=KafkaServer,name=MetadataType | An enumeration of: ZooKeeper (1) or KRaft (2). Each broker reports this.
kafka.controller:type=KafkaController,name=MetadataType | An enumeration of: ZooKeeper (1), KRaft (2), or Dual (3). The active controller reports this.
kafka.controller:type=KafkaController,name=Features,feature={feature},level={level} | The finalized set of features with their level as seen by the controller. Used to help operators see the cluster's current metadata.version.
kafka.controller:type=KafkaController,name=ZkMigrationState | An enumeration of the possible migration states the cluster can be in. This is only reported by the active controller.
kafka.controller:type=KafkaController,name=MigratingZkBrokerCount | A count of ZK brokers that are registered with KRaft and ready for migration. This will only be reported by the active controller.
kafka.controller:type=KafkaController,name=ZkWriteBehindLag | The amount of lag in records that ZooKeeper is behind relative to the highest committed record in the metadata log. This metric will only be reported by the active KRaft controller.
kafka.controller:type=KafkaController,name=ZkWriteSnapshotTimeMs | The number of milliseconds the KRaft controller took reconciling a snapshot into ZK.
kafka.controller:type=KafkaController,name=ZkWriteDeltaTimeMs | The number of milliseconds the KRaft controller took writing a delta into ZK.

MetadataVersion (IBP)

A new MetadataVersion in the 3.4 line will be added. This version will be used for a few things in this design.

  • Enable forwarding on all brokers (KIP-590: Redirect Zookeeper Mutation Protocols to The Controller)
  • Usage of new BrokerRegistration RPC version
  • Usage of new controller RPC versions
  • Usage of new ApiVersions RPC version (by KRaft controller only)
  • Usage of new ZkMigrationStateRecord
  • Enable the migration components on KRaft controller and special migration behavior on ZK brokers

All brokers must be running at least this MetadataVersion before the migration can begin. ZK brokers will specify their MetadataVersion using the inter.broker.protocol.version as usual. The KRaft controller will bootstrap with the same MetadataVersion (which is stored in the metadata log as a feature flag – see KIP-778: KRaft to KRaft Upgrades).
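To make the gating rule concrete, here is a small illustrative sketch (not Kafka code; the version string and function name are hypothetical) of the check that every ZK broker reports the required MetadataVersion before a migration may begin:

```python
# Hypothetical sketch: migration may only begin once every registered ZK broker
# advertises exactly the MetadataVersion required by this KIP.
REQUIRED_METADATA_VERSION = "3.4-IV0"  # placeholder for the new 3.4-line version

def migration_eligible(broker_ibps, required=REQUIRED_METADATA_VERSION):
    """broker_ibps maps broker id -> reported IBP/MetadataVersion string."""
    return bool(broker_ibps) and all(v == required for v in broker_ibps.values())

print(migration_eligible({1: "3.4-IV0", 2: "3.4-IV0"}))  # True
print(migration_eligible({1: "3.4-IV0", 2: "3.3-IV3"}))  # False
```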

...

For the three ZK controller RPCs UpdateMetadataRequest, LeaderAndIsrRequest, and StopReplicaRequest, a new IsKRaftController field will be added. This field is used to indicate that the controller sending this RPC is a KRaft controller.

Code Block
{
  "apiKey": 4,
  "type": "request",
  "listeners": ["zkBroker"],
  "name": "LeaderAndIsrRequest",
  "validVersions": "0-7",  // <-- New version 7
  "flexibleVersions": "4+",
  "fields": [
    { "name": "ControllerId", "type": "int32", "versions": "0+", "entityType": "brokerId",
      "about": "The controller id." },
    { "name": "isKRaftController", "type": "bool", "versions": "7+", "default": "false",
      "about": "If KRaft controller id is used during migration. See KIP-866" },  // <-- New field
    { "name": "ControllerEpoch", "type": "int32", "versions": "0+",
      "about": "The controller epoch." },
    ...
   ]
}

...

Code Block
{
  "apiKey": 5,
  "type": "request",
  "listeners": ["zkBroker"],
  "name": "StopReplicaRequest",
  "validVersions": "0-4",  // <-- New version 4
  "flexibleVersions": "2+",
  "fields": [
    { "name": "ControllerId", "type": "int32", "versions": "0+", "entityType": "brokerId",
      "about": "The controller id." },
    { "name": "isKRaftController", "type": "bool", "versions": "4+", "default": "false",
      "about": "If KRaft controller id is used during migration. See KIP-866" },  // <-- New field
    { "name": "ControllerEpoch", "type": "int32", "versions": "0+",
      "about": "The controller epoch." },
    ...
   ]
}

...

Code Block
{
  "apiKey": 6,
  "type": "request",
  "listeners": ["zkBroker"],
  "name": "UpdateMetadataRequest",
  "validVersions": "0-8",  // <-- New version 8
  "flexibleVersions": "6+",
  "fields": [
    { "name": "ControllerId", "type": "int32", "versions": "0+", "entityType": "brokerId",
      "about": "The controller id." },
    { "name": "isKRaftController", "type": "bool", "versions": "8+", "default": "false",
      "about": "If KRaft controller id is used during migration. See KIP-866" },  // <-- New field
    { "name": "ControllerEpoch", "type": "int32", "versions": "0+",
      "about": "The controller epoch." },
    ...
   ]
}

...


Migration Metadata Record

A new metadata record is added to indicate if a ZK migration has been started or finalized. 

Code Block
{
  "apiKey": 21,
  "type": "metadata",
  "name": "ZkMigrationStateRecord",
  "validVersions": "0",
  "flexibleVersions": "0+",
  "fields": [
    { "name": "ZkMigrationState", "type": "int8", "versions": "0+",
      "about": "One of the possible migration states." }
  ]
}

The possible values for ZkMigrationState are: None (0), Pre-Migration (1), Migration (2), and Post-Migration (3). An int8 type is used to give the possibility of additional states in the future.
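As a quick illustration, the four states map naturally onto a small enum (a sketch, not the actual controller code):

```python
from enum import Enum

class ZkMigrationState(Enum):
    # Values mirror the int8 stored in ZkMigrationStateRecord; the int8 range
    # leaves room for additional states in the future.
    NONE = 0
    PRE_MIGRATION = 1
    MIGRATION = 2
    POST_MIGRATION = 3

print(ZkMigrationState(2).name)  # MIGRATION
```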

Broker Registration RPC

A new version of the broker registration RPC will be added to support ZK brokers registering with the KRaft quorum. A new boolean field is added to indicate that the sender of the RPC is a ZK broker that is ready for migration. The usage of this RPC by a ZK broker indicates that it has "zookeeper.metadata.migration.enable" and the quorum connection configs properly set.


Code Block
{
  "apiKey": 62,
  "type": "request",
  "listeners": ["controller"],
  "name": "BrokerRegistrationRequest",
  "validVersions": "0-1",  // <-- New version 1
  "flexibleVersions": "0+",
  "fields": [
    // ...
    { "name": "IsMigratingZkBroker", "type": "bool", "versions": "1+", "default": "false",
      "about": "If the required configurations for ZK migration are present, this value is set to true" }
  ]
}

RegisterBrokerRecord

A new field is added to signify that a registered broker is a ZooKeeper broker.

Code Block
{
  "apiKey": 0,
  "type": "metadata",
  "name": "RegisterBrokerRecord",
  "validVersions": "0-2",  // <-- New version 2
  "flexibleVersions": "0+",
  "fields": [
    { "name": "BrokerId", "type": "int32", "versions": "0+", "entityType": "brokerId",
      "about": "The broker id." },
    { "name": "IsMigratingZkBroker", "type": "bool", "versions": "2+", "default": "false",
      "about": "True if the broker is a ZK broker in migration mode. Otherwise, false" },  // <-- New field
    // ...
  ]
}


Migration State ZNode

As part of the propagation of KRaft metadata back to ZooKeeper while in dual-write mode, we need to keep track of what has been synchronized. A new ZNode will be introduced to keep track of which KRaft record offset has been written back to ZK. This will be used to recover the synchronization state following a KRaft controller failover. 

Code Block
ZNode /migration

{
  "version": 0,
  "kraft_controller_id": 3000,
  "kraft_controller_epoch": 1,
  "kraft_metadata_offset": 1234,
  "kraft_metadata_epoch": 10
}

By using conditional updates on this ZNode, we can fence old KRaft controllers from synchronizing data to ZooKeeper if there has been a new election.
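The fencing works like ZooKeeper's versioned (compare-and-set) writes: an update supplies the ZNode version it last read, and the write fails if another controller has written since. A minimal in-memory sketch (the MigrationZNode class is a stand-in, not a real ZooKeeper client):

```python
class StaleVersionError(Exception):
    """Raised when a conditional update loses the compare-and-set race."""

class MigrationZNode:
    """In-memory stand-in for the /migration ZNode with a version counter."""
    def __init__(self, data):
        self.data = data
        self.version = 0

    def set_conditional(self, data, expected_version):
        # Mimics ZooKeeper setData(path, data, version) semantics.
        if expected_version != self.version:
            raise StaleVersionError("another KRaft controller has updated /migration")
        self.data = data
        self.version += 1
        return self.version

znode = MigrationZNode({"kraft_controller_id": 3000, "kraft_metadata_offset": 1234})
znode.set_conditional({"kraft_controller_id": 3000, "kraft_metadata_offset": 2000},
                      expected_version=0)

# An old controller that last read version 0 is now fenced:
try:
    znode.set_conditional({"kraft_controller_id": 2999, "kraft_metadata_offset": 1500},
                          expected_version=0)
except StaleVersionError:
    print("old controller fenced")
```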

Controller ZNodes

The two controller ZNodes "/controller" and "/controller_epoch" will be managed by the KRaft quorum during the migration. More details in "Controller Leadership" section below. 

A new version of the JSON schema for "/controller" will be added to include a "kraftControllerEpoch" field.

Code Block
{
  "version": 2, // <-- New version 2
  "brokerid": 3000,
  "timestamp": 1234567890,
  "kraftControllerEpoch": 42     // <-- New field
}

This field is intended to be informational to aid with debugging.

Operational Changes

Forwarding Enabled on Brokers

As detailed in KIP-500 and KIP-590, all brokers (ZK and KRaft) must forward administrative requests such as CreateTopics to the active KRaft controller once the migration has started. When running the new metadata.version defined in this KIP, all brokers will enable forwarding.
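As an illustration of the routing rule (a sketch with hypothetical names, not the broker's actual request handling):

```python
# Administrative (metadata-mutating) APIs are forwarded once the new
# metadata.version enables forwarding; data-plane requests stay local.
ADMIN_APIS = {"CreateTopics", "DeleteTopics", "AlterConfigs", "CreatePartitions"}

def route(api_name, forwarding_enabled):
    if forwarding_enabled and api_name in ADMIN_APIS:
        return "forward-to-active-controller"
    return "handle-locally"

print(route("CreateTopics", forwarding_enabled=True))  # forward-to-active-controller
print(route("Produce", forwarding_enabled=True))       # handle-locally
```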

...

Here is a state machine description of the migration. There will likely be more internal states that the controller uses, but the following states will be exposed as the ZkMigrationState metric.


State | Enum | Description
None | 0 | The cluster is in KRaft mode and was never migrated from ZooKeeper.
PreMigration | 1 | A KRaft controller has been provisioned and has migration enabled.
MigratingZkData | 2 | The KRaft controller has begun the data migration, brokers are being restarted, dual-writes are in progress.
DualWriteMetadata | 3 | The cluster is in KRaft mode making dual writes to ZooKeeper.
MigrationFinalized | 4 | The cluster has been migrated to KRaft mode.

The active ZooKeeper controller will not report this metric; only the active KRaft controller reports the state corresponding to the state of the migration.
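The table can be read as a simple state machine. The sketch below encodes the forward transitions implied by the narrative; the exact transition set is an assumption for illustration, not part of the KIP:

```python
# Assumed forward transitions between the exposed migration states.
VALID_TRANSITIONS = {
    "None": set(),                # a born-KRaft cluster never enters the migration
    "PreMigration": {"MigratingZkData"},
    "MigratingZkData": {"DualWriteMetadata"},
    "DualWriteMetadata": {"MigrationFinalized"},
    "MigrationFinalized": set(),  # terminal: the cluster is fully on KRaft
}

def can_transition(src, dst):
    return dst in VALID_TRANSITIONS[src]

print(can_transition("PreMigration", "MigratingZkData"))     # True
print(can_transition("MigrationFinalized", "PreMigration"))  # False
```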

...

A new set of nodes will be provisioned to host the controller quorum. These controllers will be started with zookeeper.metadata.migration.enable set to “true”. Once the quorum is established and a leader is elected, the controller will determine the set of extant ZK brokers and wait for incoming BrokerRegistration requests (see section on ZK Broker Presence). Once all known ZK brokers are registered with the KRaft controller (and they are in a valid state) the migration process will begin.

...

The metadata copied from ZK will be encapsulated in a single metadata transaction (KIP-868). A ZkMigrationStateRecord will also be included in this transaction. 

...

Once the operator has decided to commit to KRaft mode, the final step is to restart the controller quorum and take it out of migration mode by setting zookeeper.metadata.migration.enable to "false" (or unsetting it). The active controller will only finalize the migration once it detects that all members of the quorum have signaled that they are finalizing the migration. Once the controller leaves migration mode, it will write a ZkMigrationStateRecord to the log and no longer perform writes to ZK. It will also disable its special handling of ZK RPCs.

...

UpdateMetadata: for metadata changes, the KRaft controller will need to send UpdateMetadataRequests to the ZK brokers. The KRaft controller will identify itself using the new IsKRaftController field.

StopReplicas: following reassignments and topic deletions, we will need to send StopReplicas to ZK brokers for them to stop managing certain replicas. 

Each of these RPCs will include a new IsKRaftController field that indicates if the sending controller is a KRaft controller. Using this field, and the zookeeper.metadata.migration.enable config, the brokers can enable migration specific behavior. 

...

In order to prevent further writes to ZK, the first thing the new KRaft quorum must do is take over leadership of the ZK controller. This can be achieved by unconditionally overwriting two values in ZK. The "/controller" ZNode indicates the current active controller. By overwriting it, a watch will fire on all the ZK brokers to inform them of a new controller election. The active KRaft controller will write its node ID (e.g., 3000) and epoch into this ZNode to claim controller leadership. This write will be persistent rather than the usual ephemeral write used by the ZK controller election algorithm. This will ensure that no ZK broker can claim leadership during a KRaft controller failover.
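A toy model of the takeover (an in-memory stand-in, not a real ZooKeeper client) shows why a persistent "/controller" node blocks the ZK election path, which only succeeds by creating an ephemeral node that does not yet exist:

```python
class MiniZk:
    """In-memory stand-in for the few ZooKeeper semantics used here."""
    def __init__(self):
        self.nodes = {}  # path -> (data, is_ephemeral)

    def overwrite(self, path, data, ephemeral):
        # The KRaft controller writes unconditionally, claiming leadership.
        self.nodes[path] = (data, ephemeral)

    def create_if_absent(self, path, data, ephemeral):
        # The ZK controller election only wins if /controller does not exist.
        if path in self.nodes:
            return False
        self.nodes[path] = (data, ephemeral)
        return True

zk = MiniZk()
zk.overwrite("/controller", {"brokerid": 3000, "kraftControllerEpoch": 1},
             ephemeral=False)

# A ZK broker's election attempt now fails because the persistent node exists:
print(zk.create_if_absent("/controller", {"brokerid": 1}, ephemeral=True))  # False
```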

...

While running in migration mode, we must synchronize broker registration information from ZK to KRaft. 

The KRaft controller will send UpdateMetadataRequests to ZK brokers to inform them of the other brokers in the cluster. This information is used by the brokers for the replication protocols. Similarly, the KRaft controller must know about ZK and KRaft brokers when performing operations like assignments and leader election.

ZK brokers, KRaft brokers, and the KRaft controller must know about all brokers in the cluster.

In order to discover which ZK brokers exist, the KRaft controller will need to read the “/brokers” state from ZK and copy it into the metadata log.

The KRaft controller must know about KRaft brokers as well as ZK brokers. This will be accomplished by having the ZK brokers send the broker lifecycle RPCs to the KRaft controller.

A new version of the BrokerRegistration RPC will be used by the ZK brokers to register themselves with KRaft. The ZK brokers will set the new IsMigratingZkBroker field and populate the Features field with a "metadata.version" min and max supported equal to their IBP. The KRaft controller will only accept the registration if the given "metadata.version" is equal to the IBP/MetadataVersion of the quorum. 

After successfully registering, the ZK brokers will send BrokerHeartbeat RPCs to indicate liveness. The ZK brokers will learn about other brokers in the usual way through UpdateMetadataRequest.

If a ZK broker attempts to register with an invalid node ID, cluster ID, or IBP, the KRaft controller will reject the registration and the broker will terminate. If a ZK broker comes online and registers itself with the nodeId of an existing KRaft broker, we will log an error and fence the errant ZK broker by not sending it UpdateMetadataRequests.

If a KRaft broker attempts to register itself with the node ID of an existing ZK broker, the controller will reject the registration and the broker will terminate.

KRaft Controller Pre-Migration State

When the KRaft quorum is first established prior to starting a migration, it should not handle most RPCs until the initial data migration from ZooKeeper has completed. This is necessary to prevent divergence of metadata during the initial data migration. The controller will need to process RPCs related to Raft as well as BrokerRegistration and BrokerHeartbeat. Other RPCs (such as CreateTopics) will be rejected with a NOT_CONTROLLER error.

Once the metadata migration is complete, the KRaft controller will begin operating normally.

ZK Broker Presence

When the KRaft controller comes up in migration mode, it will wait for all known ZK brokers to register themselves before starting the migration. The problem with this is we cannot know precisely what ZK brokers exist. The broker registrations in ZK are ephemeral and only show the brokers that are currently alive. If an operator had the brokers offline and started a migration, this would lead the controller to think no brokers exist. To improve on this, we can add a heuristic based on the cluster metadata to better capture the full set of ZK brokers. If we look at the topic assignments and configurations, we can calculate a set of brokers which have partitions assigned to them or have a dynamic config. This approach is still imperfect since brokers could be offline and have no assignments, but it will at least prevent any partition unavailability due to a broker running old software and not being able to participate in the migration.
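The heuristic can be sketched as a simple set union over the metadata sources named above (illustrative data shapes, not the controller's actual types):

```python
def known_zk_brokers(live_registrations, assignments, dynamic_config_brokers):
    """
    live_registrations: broker ids with live ephemeral registrations in ZK
    assignments: mapping of (topic, partition) -> list of replica broker ids
    dynamic_config_brokers: broker ids that have a per-broker dynamic config
    """
    brokers = set(live_registrations)
    for replicas in assignments.values():
        brokers.update(replicas)
    brokers.update(dynamic_config_brokers)
    return brokers

print(sorted(known_zk_brokers(
    live_registrations=[1, 2],
    assignments={("topic-a", 0): [1, 2, 3]},  # broker 3 is offline but assigned
    dynamic_config_brokers=[4],               # broker 4 only has a dynamic config
)))  # [1, 2, 3, 4]
```

As the text notes, a broker with no assignments and no dynamic config would still be missed by this estimate.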

AdminClient, MetadataRequest, and Forwarding

When a client bootstraps metadata from the cluster, it must receive the same metadata regardless of the type of broker it is bootstrapping from. Normally, ZK brokers return the active ZK controller as the ControllerId and KRaft brokers return a random alive KRaft broker. In both cases, this ControllerId is internally read from the MetadataCache on the broker.

Since we require controller forwarding for this KIP, we can use the KRaft approach of returning a random broker (ZK or KRaft) as the ControllerId for clients via MetadataResponse and rely on forwarding for write operations.

For inter-broker requests such as AlterPartitions and ControlledShutdown, we do not want to add the overhead of forwarding so we'll want to include the actual controller in the UpdateMetadataRequest. However, we cannot simply include the KRaft controller as the ControllerId. The ZK brokers connect to a ZK controller by using the "inter.broker.listener.name" config and the node information from LiveBrokers in the UpdateMetadataRequest. For connecting to a KRaft controller, the ZK brokers will need to use the "controller.listener.names" and "controller.quorum.voters" configs. To allow this, we will use the new IsKRaftController field in UpdateMetadataRequest to indicate different controller types to the channel managers.

Topic Deletions

The ZK migration logic will need to deal with asynchronous topic deletions when migrating data. Normally, the ZK controller will complete these asynchronous deletions via TopicDeletionManager. If the KRaft controller takes over before a deletion has occurred, we will need to complete the deletion as part of the ZK to KRaft state migration. Once the migration is complete, we will need to finalize the deletion in ZK so that the state is consistent.

Meta.Properties

Both ZK and KRaft brokers maintain a meta.properties file in their log directories to store the ID of the node and the cluster. Each broker type uses a different version of this file.

v0 is used by ZK brokers:

Code Block
#
#Tue Nov 29 10:15:56 EST 2022
broker.id=0
version=0
cluster.id=L05pbYc6Q4qlvxLk3rTO9A

v1 is used by KRaft brokers and controllers:

Code Block
#
#Tue Nov 29 10:16:40 EST 2022
node.id=2
version=1
cluster.id=L05pbYc6Q4qlvxLk3rTO9A

Since these two versions contain the same data, but with different field names, we can simply support v0 and v1 in KRaft brokers and avoid modifying the file on disk. By leaving this file unchanged, we better facilitate a downgrade to ZK during the migration. Once the controller has completed the migration and written the final ZkMigrationStateRecord, the brokers can rewrite their meta.properties files as v1 in their log directories.
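A sketch of a reader that accepts both versions, mapping broker.id (v0) and node.id (v1) to a single node id (illustrative, not Kafka's actual parser):

```python
def parse_meta_properties(text):
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        key, _, value = line.partition("=")
        props[key] = value
    version = int(props["version"])
    id_key = "broker.id" if version == 0 else "node.id"
    return {"version": version, "node_id": int(props[id_key]),
            "cluster_id": props["cluster.id"]}

v0 = "#\nbroker.id=0\nversion=0\ncluster.id=L05pbYc6Q4qlvxLk3rTO9A"
v1 = "#\nnode.id=2\nversion=1\ncluster.id=L05pbYc6Q4qlvxLk3rTO9A"
print(parse_meta_properties(v0)["node_id"])  # 0
print(parse_meta_properties(v1)["node_id"])  # 2
```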

Rollback to ZK

As mentioned above, it should be possible for the operator to rollback to ZooKeeper at any point in the migration process prior to taking the KRaft controllers out of migration mode. The procedure for rolling back is to reverse the steps of the migration that had been completed so far. 

  • Brokers should be restarted one by one in ZK mode
  • The KRaft controller quorum should be cleanly shutdown
  • Operator can remove the persistent "/controller" and "/controller_epoch" nodes allowing for ZK controller election to take place

A clean shutdown of the KRaft quorum is important because there may be uncommitted metadata waiting to be written to ZooKeeper. A forceful shutdown could let some metadata be lost, potentially leading to data loss.

Failure Modes

There are a few failure scenarios to consider during the migration. The KRaft controller can crash while initially copying the data from ZooKeeper, the controller can crash some time after the initial migration, and the controller can fail to write new metadata back to ZK.

Initial Data Migration

For the initial migration, the controller will utilize KIP-868 Metadata Transactions to write all of the ZK metadata in a single transaction. If the controller fails before this transaction is finalized, the next active controller will abort the transaction and restart the migration process.

Controller Crashes

Once the data has been migrated and the cluster is in the MigrationActive or MigrationFinished state, the KRaft controller may fail. If this happens, the Raft layer will elect a new leader which will update the "/controller" and "/controller_epoch" ZNodes and take over the controller leadership as usual.

Unavailable ZooKeeper

While in the dual-write mode, it is possible for a write to ZK to fail. In this case, we will want to stop making updates to the metadata log to avoid unbounded lag between KRaft and ZooKeeper. Since ZK brokers will be reading data like ACLs and dynamic configs from ZooKeeper, we should limit the amount of divergence between ZK and KRaft brokers by setting a bound on the amount of lag between KRaft and ZooKeeper.
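The bound can be expressed directly in terms of the ZkWriteBehindLag metric (the threshold value here is hypothetical):

```python
def should_pause_metadata_writes(highest_committed_offset, last_zk_synced_offset,
                                 max_lag=1000):
    """Pause new metadata-log writes when ZK falls too far behind (sketch)."""
    zk_write_behind_lag = highest_committed_offset - last_zk_synced_offset
    return zk_write_behind_lag > max_lag

print(should_pause_metadata_writes(5000, 4500))  # False: lag of 500 is within bound
print(should_pause_metadata_writes(5000, 3000))  # True: lag of 2000 exceeds bound
```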

...

If a migration has been started, but a KRaft controller is elected that is misconfigured (does not have zookeeper.metadata.migration.enable or the ZK configs), this controller should resign. When replaying the metadata log during its initialization phase, this controller can see that a migration is in progress by seeing the initial ZkMigrationStateRecord. Since it does not have the required configs, it can resign leadership and throw an error.

If a migration has been finalized, but the KRaft quorum comes up with zookeeper.metadata.migration.enable, we must not re-enter the migration mode. In this case, while replaying the log, the controller can see the second ZkMigrationStateRecord and know that the migration is finalized and should not be resumed. This should result in errors being thrown, but the quorum can continue operating as normal.

Other scenarios likely exist and will be examined as the migration feature is implemented. 

Test Plan

In addition to basic "happy path" tests, we will also want to test that the migration can tolerate failures of brokers and KRaft controllers. We will also want to have tests for the correctness of the system if ZooKeeper becomes unavailable during the migration. Another class of tests for this process is metadata consistency at the broker level. Since we are supporting ZK and KRaft brokers simultaneously, we need to ensure their metadata does not stay inconsistent for very long.

Rejected Alternatives

Offline Migration

The main alternative to this design is to do an offline migration. While this would be much simpler, it would be a non-starter for many Kafka users who require minimal downtime of their cluster. By allowing for an online migration from ZK to KRaft, we can provide a path towards KRaft for all Kafka users – even ones where Kafka is critical infrastructure. 

Online Broker Migration

Once KRaft has taken over leadership of the controller and migrated the ZK data, the design calls for a restart of the ZK brokers into KRaft mode. An alternative to this is to dynamically switch the brokers from using controller RPCs (UpdateMetadata and LeaderAndISR) to the metadata log. This would alleviate the need for a rolling restart of the brokers to bring them into KRaft mode. The difficulty with this approach is that there is a vast difference in the implementations between KafkaServer (ZK) and BrokerServer (KRaft). It is possible to reconcile these differences, but the effort would be very large. This option would also increase the risk of the migration since we would be modifying the "safe" state of the broker code. By leaving the ZK implementation mostly unchanged, we give ourselves a safety net for rolling back during the migration.

No Dual Writes

Another simplifying alternative would be to only write metadata into KRaft while in the migration mode. This has a few disadvantages. Primarily, it makes rolling back to ZK much more difficult, if at all possible. Secondly, we actually have a few remaining ZK read usages on the brokers that need the data in ZK to be up-to-date (see above section on Dual Metadata Writes). 

...

Another way to start the migration would be to have an operator issue a special command or send a special RPC. Adding human-driven manual steps like this to the migration may make it more difficult to integrate with orchestration software such as Ansible, Chef, Kubernetes, etc. By sticking with a "config and reboot" approach, the migration trigger is still simple, but easier to integrate into other control systems.

Write-ahead ZooKeeper data synchronization

...


An alternative to write-behind for ZooKeeper would be to write first to ZooKeeper and then write to the metadata log. The main problem with this approach is that it will make KRaft writes much slower since ZK will always be in the write path. By doing a write-behind with offset tracking, we can amortize the ZK write latency and possibly be more efficient about making bulk writes to ZK. 

Combined Mode Migration Support

Since combined mode is primarily intended for developer environments, support for migrations under combined mode was not considered a priority for this design. By excluding it from this initial design, we can simplify the implementation and exclude an entire system configuration from the testing matrix. The migration design is already complex, so any reduction in scope is beneficial. In the future, it is possible that we could add support for combined mode migrations based on this design.