Status

Current state: Under Discussion

...

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Background

The KRaft controller was designed to be isolated from Kafka clients. This isolation helps prevent misbehaving clients from compromising the performance of the system. It also clarifies node roles: brokers are responsible for client traffic. However, there are certain edge cases where it is reasonable for clients to communicate with KRaft controllers.

Controllers as Bootstrap Servers

Sometimes, we would like to use the controller quorum in place of "bootstrap servers." While this is not recommended for most clients, there are certain Kafka clients for which it makes sense. For example, a metrics plugin running on the controller itself may use a KafkaProducer to publish its records. It would be very helpful if it could use the controller on which it is running as a bootstrap server, avoiding the need to supply broker hostnames and ports through a plugin configuration.

Controllers as Targets

Sometimes, we would like to target controllers directly. Typically this is so that we can perform an administrative operation without involving the brokers. DESCRIBE_QUORUM is a great example. This operation has nothing to do with the brokers, and may indeed be useful for debugging when other parts of the system are down. Another good example is using INCREMENTAL_ALTER_CONFIGS to make log4j level changes on a KRaft controller.

Proposed Changes

Overview

Controllers as Bootstrap Servers

New Kafka clients which support KIP-919 will be able to use KRaft controllers as bootstrap servers. This applies to all of the client types: consumers, producers, and admin clients. When using controllers as bootstrap servers, the broker endpoints that are returned will be those of the configured inter-broker listener.

It's worth noting here that we continue to recommend putting KRaft controllers on a separate network from Kafka clients. This is always a good idea in order to have the best security and isolation. Therefore, in a well-run cluster, only certain internal clients should use the KRaft controllers as bootstrap servers.

Controllers as Targets

New Kafka clients which support KIP-919 will also be able to target KRaft controllers directly. This only applies to admin clients, since it is not possible to produce or consume from a quorum controller. In this mode, the endpoints that are returned are the appropriate ones for the controller listener that was contacted.

Public Interfaces

Configuration

Controllers as Bootstrap Servers

No new client-side configuration will be required to use controllers as bootstrap servers.

Controllers as Targets

There will be a new AdminClient configuration, bootstrap.controllers. This configuration contains a comma-separated series of entries in the following form:

Code Block
[controller-id]@[hostname]:[port]

This format is the same as that of controller.quorum.voters. Indeed, the contents of that configuration can be copied to this one if desired.

It is an error to set both bootstrap.controllers and bootstrap.servers; only one can be set at a time. It is also an error to include broker endpoints in bootstrap.controllers: if a broker is contacted through this mechanism, the command will fail.

When this configuration is specified, the AdminClient will talk directly to the controller quorum and the brokers will not be involved. Just as with bootstrap.servers, the supplied server list doesn't need to be exhaustive; as long as one of the provided controllers can be reached, the RPC can proceed.

KafkaProducer and KafkaConsumer will not support bootstrap.controllers. Only AdminClient will support it.
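
For illustration, here is a minimal sketch of an AdminClient pointed directly at the controller quorum. The controller IDs, hostnames, and port are placeholders, and the describeMetadataQuorum call is the existing KIP-836 API; only the bootstrap.controllers key is new in this KIP.

Code Block
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.QuorumInfo;

public class ControllerAdminExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Point directly at the controller quorum. The IDs, hosts, and port
        // here are placeholders. bootstrap.servers must NOT also be set.
        props.put("bootstrap.controllers",
            "1000@controller0:9093,1001@controller1:9093,1002@controller2:9093");
        try (Admin admin = Admin.create(props)) {
            // DESCRIBE_QUORUM is served by the controllers; no broker is involved.
            QuorumInfo quorum = admin.describeMetadataQuorum().quorumInfo().get();
            System.out.println("Raft leader id: " + quorum.leaderId());
        }
    }
}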

Command-line Changes

New Arguments

The following command-line tools will get a new --bootstrap-controllers argument:

  • kafka-acls.sh
  • kafka-cluster.sh
  • kafka-configs.sh
  • kafka-delegation-tokens.sh
  • kafka-features.sh
  • kafka-metadata-quorum.sh
  • kafka-metadata-shell.sh
  • kafka-reassign-partitions.sh

When the --bootstrap-controllers argument is used:

  • --bootstrap-servers must not be specified.
  • The tool will only communicate with the controller quorum.

The --bootstrap-controllers flag will set the bootstrap.controllers configuration described above. It will also clear the bootstrap.servers configuration if that has been set in some other way (for example, via a configuration file provided to the command-line tool).
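
For example, a direct-to-controller invocation might look like the following (the controller IDs and hostnames are placeholders; the describe --status subcommand already exists in kafka-metadata-quorum.sh):

Code Block
kafka-metadata-quorum.sh \
  --bootstrap-controllers 1000@controller0:9093,1001@controller1:9093 \
  describe --status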

Changes to kafka-metadata-shell.sh

The metadata shell will now have these arguments:

Code Block
The Apache Kafka metadata tool

positional arguments:
  command                The command to run.

optional arguments:
  -h, --help             show this help message and exit
  --directory DIRECTORY, -d DIRECTORY
                         The __cluster_metadata-0 directory to read.
  --bootstrap-controllers CONTROLLERS, -q CONTROLLERS
                         The bootstrap.controllers, used to communicate directly with the metadata quorum.
  --config CONFIG        Path to a property file containing a Kafka configuration

Note that:

  • The --snapshot  argument has been replaced by a --directory  argument that reads the whole directory, not just a snapshot file (see the examples below).
  • There is no need for a --cluster-id  flag, since we will query the controller for its cluster ID prior to creating the Raft client.
  • There is now a --config  argument which can be used to pass a configuration file.
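
For illustration, possible invocations of the reworked tool (the log directory path and controller endpoint are placeholders):

Code Block
# Read a metadata log directory offline:
kafka-metadata-shell.sh --directory /var/lib/kafka/__cluster_metadata-0

# Or connect directly to the quorum:
kafka-metadata-shell.sh --bootstrap-controllers 1000@controller0:9093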

Since kafka-metadata-shell.sh  is at an "evolving" level of interface stability, these changes should be OK to make without a deprecation period.

New Error Codes

...


MetadataRequest Changes

There will be a new version of MetadataRequest. It will contain an additional field, DirectToKRaftControllerQuorum. This field defaults to false.

Code Block
diff --git a/clients/src/main/resources/common/message/MetadataRequest.json b/clients/src/main/resources/common/message/MetadataRequest.json
index 5da95cfed6..8e5e765d11 100644
--- a/clients/src/main/resources/common/message/MetadataRequest.json
+++ b/clients/src/main/resources/common/message/MetadataRequest.json
@@ -18,7 +18,7 @@
   "type": "request",
   "listeners": ["zkBroker", "broker"],
   "name": "MetadataRequest",
-  "validVersions": "0-12",
+  "validVersions": "0-13",
   "flexibleVersions": "9+",
   "fields": [
@@ -50,6 +50,8 @@
     { "name": "IncludeClusterAuthorizedOperations", "type": "bool", "versions": "8-10",
       "about": "Whether to include cluster authorized operations." },
     { "name": "IncludeTopicAuthorizedOperations", "type": "bool", "versions": "8+",
-      "about": "Whether to include topic authorized operations." }
+      "about": "Whether to include topic authorized operations." },
+    { "name": "DirectToKRaftControllerQuorum", "type": "bool", "versions": "13+",
+      "about": "Whether to target the KRaft controller quorum." }
   ]
 }

...

Ignore unknown record types

This KIP proposes to ignore unknown record keys, which allows the downgraded coordinator to proceed with loading the rest of the partition. As we cannot write tombstones for unknown keys, these records will be stored in the logs until the coordinators are upgraded. However, this KIP prioritizes the simplicity of ignoring them because we expect downgrades to be temporary.
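
To make the loading behavior concrete, here is an illustrative sketch; the types and names below are placeholders, not the actual coordinator internals:

Code Block
import java.util.List;

// Models the proposed behavior: skip record keys with an unknown version
// while loading a __consumer_offsets partition, instead of failing.
public class TolerantLoaderSketch {
    record LogRecord(short keyVersion, byte[] key, byte[] value) {}

    static final short HIGHEST_KNOWN_KEY_VERSION = 3; // placeholder value

    static void load(List<LogRecord> partition) {
        for (LogRecord r : partition) {
            if (r.keyVersion() > HIGHEST_KNOWN_KEY_VERSION) {
                // Unknown record type: we cannot write a tombstone for a key
                // we do not understand, so leave it in the log and continue.
                System.out.println("Ignoring record with unknown key version " + r.keyVersion());
                continue;
            }
            apply(r); // hypothetical downstream handling
        }
    }

    static void apply(LogRecord r) { /* elided */ }
}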

Bump non-flexible Value record types to flexible versions

In this KIP we propose to bump each of these records to a flexible version and backport this to earlier releases, specifically 3.0.3, 3.1.3, 3.2.4, 3.3.3, and 3.4.1 (note: we will need commitment from release managers to perform the minor releases). We will also apply this to 3.5 if the patch is merged to trunk before the code freeze date. Backporting this patch to earlier releases is acceptable because the change is small and very low risk. One limitation is that we will be unable to downgrade to a version lower than those listed. Future release notes will state, whenever a new record type or field is introduced, that downgrade is only possible to the aforementioned versions. Note that once a new tagged field is introduced in a later version, that version can never be downgraded below the listed versions.

We will only deserialize with the flexible version but serialize with the highest known non-flexible version. This is because users upgrading to one of these versions may want to downgrade, and if we serialized with the flexible version they would not be able to downgrade back to an earlier version. Deserializing a flexible version with a non-flexible schema will fail.
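
A minimal sketch of this read/write version policy (the constants are placeholders; version 4 is the flexible version of GroupMetadataValue in this KIP):

Code Block
// Illustrative only: serialize with the newest NON-flexible version so a
// pre-flexible coordinator can still read what we wrote, while reads accept
// everything up to and including the flexible version.
public class RecordVersionPolicy {
    static final short FLEXIBLE_VERSION = 4;
    static final short HIGHEST_NON_FLEXIBLE_VERSION = (short) (FLEXIBLE_VERSION - 1);

    /** Version used when writing records. */
    static short serializeVersion() {
        return HIGHEST_NON_FLEXIBLE_VERSION;
    }

    /** A read succeeds for any version up to and including the flexible one. */
    static boolean canDeserialize(short recordVersion) {
        return recordVersion >= 0 && recordVersion <= FLEXIBLE_VERSION;
    }
}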

We will rely on tagged fields (introduced in KIP-482), which allow fields to be added and removed without bumping versions. Once a version is flexible, deserializing tagged fields is straightforward, as unknown tagged fields are automatically ignored. We do not touch Key record types because keys are considered fixed, and optional fields in keys do not make much sense.

This KIP opens the door to backward compatible changes to the record schemas through the use of tagged fields. Any future tagged fields will not require a version bump and older brokers can simply ignore the tagged fields they do not understand. Note that introducing a new non-tagged field or removing an existing non-tagged field in the future will not be backward compatible.
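
For illustration, a hypothetical tagged field added to a flexible Value record, using the KIP-482 schema syntax (the field name is a placeholder):

Code Block
{ "name": "exampleNewField", "type": "int64", "versions": "4+",
  "taggedVersions": "4+", "tag": 0,
  "about": "Hypothetical tagged field; older coordinators simply skip it." }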

Compatible changes

Compatible changes are straightforward; the added tagged field is truly optional and is not required for the correctness of the coordinator. These fields will be ignored by the downgraded coordinator.

Incompatible changes

There are two cases: 1) we introduce a tagged field that is required for correctness, i.e. a new field that enforces correct access and whose absence results in incorrect coordinator behavior or data loss, or 2) a non-tagged field is added. For both kinds of incompatible change, we propose to bump the version (already required for non-tagged fields) and add a new isBackwardCompatible field to the MetadataVersion enum so that the operator, if they decide, can downgrade with the --force option knowing that the downgrade is not backward compatible.
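
A minimal sketch of the proposed flag; the enum shape and constant names are illustrative, not the actual MetadataVersion code:

Code Block
// Illustrative only: models the proposed isBackwardCompatible flag on the
// MetadataVersion enum. Constant names are placeholders.
public enum MetadataVersionSketch {
    VERSION_WITH_TAGGED_FIELD_ONLY(true),    // compatible: old coordinators ignore it
    VERSION_WITH_NEW_REQUIRED_FIELD(false);  // incompatible: downgrade needs --force

    private final boolean isBackwardCompatible;

    MetadataVersionSketch(boolean isBackwardCompatible) {
        this.isBackwardCompatible = isBackwardCompatible;
    }

    /** Downgrade tooling would consult this before honoring --force. */
    public boolean isBackwardCompatible() {
        return isBackwardCompatible;
    }
}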

Public Interfaces

__consumer_offsets

GroupMetadataValue.json

Bump to flexible version

Code Block
// KIP-915: bumping the version will no longer make this record backward compatible.
{
  "type": "data",
  "name": "GroupMetadataValue",
  "validVersions": "0-4",
  "flexibleVersions": "4+",
  "fields": [
    { "name": "protocolType", "versions": "0+", "type": "string"},
    { "name": "generation", "versions": "0+", "type": "int32" },
    { "name": "protocol", "versions": "0+", "type": "string", "nullableVersions": "0+" },
    { "name": "leader", "versions": "0+", "type": "string", "nullableVersions": "0+" },
    { "name": "currentStateTimestamp", "versions": "2+", "type": "int64", "default": -1, "ignorable": true},
    { "name": "members", "versions": "0+", "type": "[]MemberMetadata" }
  ],
  "commonStructs": [
    {
      "name": "MemberMetadata",
      "versions": "0-4",
      "fields": [
        { "name": "memberId", "versions": "0+", "type": "string" },
        { "name": "groupInstanceId", "versions": "3+", "type": "string", "default": "null", "nullableVersions": "3+", "ignorable": true},
        { "name": "clientId", "versions": "0+", "type": "string" },
        { "name": "clientHost", "versions": "0+", "type": "string" },
        { "name": "rebalanceTimeout", "versions": "1+", "type": "int32", "ignorable": true},
        { "name": "sessionTimeout", "versions": "0+", "type": "int32" },
        { "name": "subscription", "versions": "0+", "type": "bytes" },
        { "name": "assignment", "versions": "0+", "type": "bytes" }
      ]
    }
  ]
}

OffsetCommitValue.json

KIP-848 bumps this record to flexible version 4 and adds the topicId field. This KIP proposes to solely bump the record to a flexible version, with KIP-848 adding topicId as a tagged field instead.

Code Block
// KIP-915: bumping the version will no longer make this record backward compatible.
{
  "type": "data",
  "name": "OffsetCommitValue",
  "validVersions": "0-4",
  "flexibleVersions": "4+",
  "fields": [
    { "name": "offset", "type": "int64", "versions": "0+" },
    { "name": "leaderEpoch", "type": "int32", "versions": "3+", "default": -1, "ignorable": true},
    { "name": "metadata", "type": "string", "versions": "0+" },
    { "name": "commitTimestamp", "type": "int64", "versions": "0+" },
    { "name": "expireTimestamp", "type": "int64", "versions": "1", "default": -1, "ignorable": true}
  ]
}

__transaction_state

TransactionLogValue.json

Bump to flexible version

Code Block
// KIP-915: bumping the version will no longer make this record backward compatible.
{
  "type": "data",
  "name": "TransactionLogValue",
  "validVersions": "0-1",
  "flexibleVersions": "1+",
  "fields": [
    { "name": "ProducerId", "type": "int64", "versions": "0+",
      "about": "Producer id in use by the transactional id"},
    { "name": "ProducerEpoch", "type": "int16", "versions": "0+",
      "about": "Epoch associated with the producer id"},
    { "name": "TransactionTimeoutMs", "type": "int32", "versions": "0+",
      "about": "Transaction timeout in milliseconds"},
    { "name": "TransactionStatus", "type": "int8", "versions": "0+",
      "about": "TransactionState the transaction is in"},
    { "name": "TransactionPartitions", "type": "[]PartitionsSchema", "versions": "0+", "nullableVersions": "0+",
      "about": "Set of partitions involved in the transaction", "fields": [
      { "name": "Topic", "type": "string", "versions": "0+"},
      { "name": "PartitionIds", "type": "[]int32", "versions": "0+"}]},
    { "name": "TransactionLastUpdateTimestampMs", "type": "int64", "versions": "0+",
      "about": "Time the transaction was last updated"},
    { "name": "TransactionStartTimestampMs", "type": "int64", "versions": "0+",
      "about": "Time the transaction was started"}
  ]
}

Compatibility, Deprecation, and Migration Plan

The compatibility plan is explored in the proposed changes. We will backport this to all minor 3.X versions: 3.0.3, 3.1.3, 3.2.4, 3.3.3, and 3.4.1. Downgrades to lower versions will be incompatible, and this will be explicitly stated in future release notes when new fields/records are introduced.

Test Plan

Rejected Alternatives

Rejected Alternative: upgraded coordinator deletes new and downgrades existing record types

Instead of the downgraded coordinator deleting the new record types when loading the partition, we can have the new coordinator delete the new record types before shutting down. This is possible with the KIP-584 (feature flag) versioning approach: the operator downgrades the coordinator version, which triggers coordinators to perform deletions for the new record types. We can ensure that all partitions will be compacted even if a broker is down, since the partitions will have migrated to an online broker. Once coordinators append tombstones for the new record types, they can explicitly trigger compaction. This introduces additional time spent cleaning up during downgrades. More importantly, coordinators need to downgrade Value records so that the downgraded coordinator can load committed offsets. This means group coordinators need to rewrite all offset commits with the old format, including transactional offset commits.

Rewriting transactional offset commits complicates the downgrade path:

  • If a transactional offset commit is in progress, we need to abort it before reformatting, but we don't have a mechanism in place to trigger a server-side abort. Furthermore, we would need to add logic so that the coordinator is notified when a transaction is aborted in order to proceed with the rewrite.
  • Producer's perspective: we would either have to make the rewrite completely invisible to the producer or have the producer retry after aborting it from the server side. Both paths are complex and require additional investigation.
  • Definition of a rewrite: should we consider translating the transaction start time / deadline when rewriting?

We also need a separate logic to downgrade the __transaction_state Value record, TransactionLogValue, but it should be simpler. 

MetadataResponse Changes

There will be a new version of MetadataResponse. It will contain an additional tagged field, FromKRaftController, which defaults to false.

Code Block
diff --git a/clients/src/main/resources/common/message/MetadataResponse.json b/clients/src/main/resources/common/message/MetadataResponse.json
index 928d905152..085c0d919f 100644
--- a/clients/src/main/resources/common/message/MetadataResponse.json
+++ b/clients/src/main/resources/common/message/MetadataResponse.json
@@ -42,7 +42,7 @@
   // Version 11 deprecates ClusterAuthorizedOperations. This is now exposed
   // by the DescribeCluster API (KIP-700).
   // Version 12 supports topicId.
-  "validVersions": "0-12",
+  "validVersions": "0-13",
   "flexibleVersions": "9+",
   "fields": [
     { "name": "ThrottleTimeMs", "type": "int32", "versions": "3+", "ignorable": true,
@@ -94,6 +94,8 @@
         "about": "32-bit bitfield to represent authorized operations for this topic." }
     ]},
     { "name": "ClusterAuthorizedOperations", "type": "int32", "versions": "8-10", "default": "-2147483648",
-      "about": "32-bit bitfield to represent authorized operations for this cluster." }
+      "about": "32-bit bitfield to represent authorized operations for this cluster." },
+    { "name": "FromKRaftController", "type": "bool", "versions": "13+", "default": "false", "taggedVersions": "13+", "tag": 0,
+      "about": "Whether the response was sent back from a KRaft controller." }
   ]
 }

DirectToKRaftControllerQuorum Handling Matrix

DirectToKRaftControllerQuorum | If Received by Broker               | If Received by KRaft Controller
false                         | traditional broker MetadataResponse | UNSUPPORTED_VERSION MetadataResponse
true                          | NOT_CONTROLLER MetadataResponse     | controller MetadataResponse described below

KRaft Controller MetadataRequest

When sending a METADATA request to the controller, the following request fields must be set to false:

  • AllowAutoTopicCreation
  • IncludeClusterAuthorizedOperations
  • IncludeTopicAuthorizedOperations

If any of them are set to true, an INVALID_REQUEST MetadataResponse will be returned.

Topics must be set to an empty array, indicating no interest in any topics.
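
For illustration, a sketch of how a client might populate such a request; the setDirectToKRaftControllerQuorum setter does not exist yet and is the field proposed by this KIP:

Code Block
import java.util.Collections;
import org.apache.kafka.common.message.MetadataRequestData;

public class ControllerMetadataRequestSketch {
    public static MetadataRequestData build() {
        // An empty topics array signals no interest in any topics; the three
        // flags below must be false or the controller returns INVALID_REQUEST.
        return new MetadataRequestData()
                .setTopics(Collections.emptyList())
                .setAllowAutoTopicCreation(false)
                .setIncludeClusterAuthorizedOperations(false)
                .setIncludeTopicAuthorizedOperations(false);
        // .setDirectToKRaftControllerQuorum(true) is the proposed new field;
        // that setter is hypothetical at the time of writing.
    }
}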

KRaft Controller MetadataResponse

The KRaft controller MetadataResponse will always set:

  • FromKRaftController to true
  • ClusterId to the Kafka cluster ID
  • ControllerId to the true Raft leader ID



Response type                                     | Topics Section                                            | "Brokers" Section                                                     | Comments
Successful response                               | empty                                                     | Controller endpoint information as given in controller.quorum.voters | The "direct to controller" case.
Error response if topics were given in request    | the given topics, with an INVALID_REQUEST error for each  | empty                                                                 | It is an error to ask about specific topics.
Error response if no topics were given in request | the __cluster_metadata topic with the expected error code | empty                                                                 | There is no top-level error code in MetadataResponse, so we use the __cluster_metadata topic to send back our error.



Compatibility, Deprecation, and Migration Plan

There should be no compatibility impact since current controllers don't handle MetadataRequest.

It's worth noting here that we continue to recommend putting KRaft controllers on a separate network from Kafka clients. This is always a good idea in order to have the best security and isolation.

Rejected Alternatives

bootstrap.controllers versus direct.to.controller configuration

Rather than having a bootstrap.controllers configuration, we could have a separate configuration like direct.to.controller and put the controller servers into bootstrap.servers. Similarly, we could reuse --bootstrap-server rather than adding --bootstrap-controllers.

We decided to go with the scheme proposed above to make it clearer when a tool was going directly to the controller. This also makes it clearer which command-line tools have this capability and which do not.

For example, kafka-console-consumer.sh  does not have the capability to go direct to the controller, since the controller does not handle produces. Therefore, it's intuitive that kafka-console-consumer.sh lacks the --bootstrap-controllers flag.

Another issue is that in the future, we may want to support using the controllers as bootstrap servers for the brokers. The scheme above leaves the door open for this, whereas a scheme that reused existing configurations would not. The benefit of this approach is that future record types are deleted. The proposed approach to ignore new records only works because the coordinator deletes new record types when a group is converted from new to old. However, we may introduce new record types that are not deleted during this conversion. Another benefit is that there are no strict requirements for Value records. We don't have to only add taggedFields (which this KIP requires) since these records will be rewritten anyways. Having the upgraded coordinator explicitly rewrite new record types and downgrade is future proof and there are no version downgrade barriers like we do for the proposed design.