
Table of Contents

Status

Current state: Accepted

Discussion thread: https://lists.apache.org/thread/8dqvfhzcyy87zyy12837pxx9lgsdhvft 

Vote thread: https://lists.apache.org/thread/4pqjp8r7n94lnymv3xc689mfw33lz3mj

JIRA:

KAFKA-14127 (ASF JIRA)

...

Command line tools

The format sub-command in the kafka-storage.sh tool will include a new property when producing meta.properties. This sub-command already supports formatting more than one log directory — by expecting a list of configured log.dirs — and "formatting" only the ones that need it. All configured log directories must be available for kafka-storage.sh format to run successfully.

...

The new property — directory.id — will identify each log directory with a UUID. The UUID is randomly generated for each log directory.

meta.properties

A new property — directory.id — will be expected in the meta.properties file in each log directory, including the metadata.log.dir. The property indicates the UUID for the log directory where the file is located. If the meta.properties file doesn't exist for the metadata.log.dir, the Kafka node will fail to start. If the meta.properties file exists but does not contain directory.id, one will be randomly generated and the file will be updated upon broker startup. The kafka-storage.sh tool will be extended to generate or update this property as described in the previous section.


Reserved UUIDs

The following UUIDs are excluded from the random pool when generating a log directory UUID:

  • UUID.UNASSIGNED_DIR – new Uuid(0L, 0L) – used to identify new or unknown assignments.
  • UUID.LOST_DIR - new Uuid(0L, 1L) – used to represent unspecified offline directories.
  • UUID.MIGRATING_DIR - new Uuid(0L, 2L) – used when transitioning from a previous state where directory assignment was not available, to designate that some directory was previously selected to host a partition, but we're not sure which one yet.

The first 100 UUIDs, minus the three listed above, are also reserved for future use.
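
For illustration only, a minimal sketch (not part of this proposal) of how a log directory UUID could be generated while respecting the reserved range. It assumes the org.apache.kafka.common.Uuid class with randomUuid() and the bit accessors; the class and method names below are hypothetical:

import org.apache.kafka.common.Uuid;

public final class DirectoryIdSketch {
    // Hypothetical helper: draw random UUIDs until one falls outside the reserved range.
    public static Uuid generateDirectoryId() {
        while (true) {
            Uuid candidate = Uuid.randomUuid();
            // The first 100 UUIDs (most significant bits zero, least significant bits below 100)
            // are reserved, including UNASSIGNED_DIR, LOST_DIR and MIGRATING_DIR.
            boolean reserved = candidate.getMostSignificantBits() == 0L
                    && candidate.getLeastSignificantBits() >= 0L
                    && candidate.getLeastSignificantBits() < 100L;
            if (!reserved) {
                return candidate;
            }
        }
    }

    public static void main(String[] args) {
        System.out.println("Generated directory.id: " + generateDirectoryId());
    }
}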

Metadata records

RegisterBrokerRecord and BrokerRegistrationChangeRecord will have a new field:

{ "name": "LogDirs", "type":  "[]uuid", "{ "name": "OnlineLogDirs", "type":  "[]uuid", "versions":  "3+", "taggedVersions": "3+", "tag": "0",
"about": "Log directories configured in this broker which are available." },
{ "name": "OfflineLogDirs", "type": "[]uuid", "versions": "3+", "taggedVersions": "3+", "tag": "10",
"about": "Log directories configured in this broker which are not available." }

PartitionRecord and PartitionChangeRecord will both have a new Directories field:

{ "name": "ReplicasDirectories", "type":  "[]int32uuid", "versions":  "01+",
"entityTypeabout": "brokerId",
"about": "The replicas of this partition, sorted by preferred order." },
(...)
The log directory hosting each replica, sorted in the same exact order as the Replicas field."}

Although not explicitly specified in the schema, the default value for each entry in Directories is Uuid.UNASSIGNED_DIR (Uuid.ZERO), as that's the default default value for UUID types.

Footnote

Yes, double default, not a typo. The default setting, for the default value of the field.

A directory assignment of Uuid.UNASSIGNED_DIR conveys that the log directory is not yet known; the hosting broker will eventually determine the hosting log directory and use AssignReplicasToDirs to update the assignment.

RPC requests

BrokerRegistrationRequest will include the following new field:

{ "name": "LogDirs", "type": "[]uuid", "versions": "2+",
  "about": "Log directories configured in this broker which are available." }

BrokerHeartbeatRequest will include the following new field:

{ "name": "OfflineLogDirs", "type": "[]uuid", "versions": "1+", "taggedVersions": "1+", "tag": "0",
  "about": "Log directories that failed and went offline." }

A new RPC named AssignReplicasToDirs will be introduced with the following request and response:

{
"apiKey": <TBD>,
"type": "request",
"listeners": ["controller"],
"name": "AssignReplicasToDirsRequest",
"validVersions": "0",
"flexibleVersions": "0+",
"fields": [
  { "name": "BrokerId", "type": "int32", "versions": "0+", "entityType": "brokerId",
    "about": "The ID of the requesting broker" },
  { "name": "BrokerEpoch", "type": "int64", "versions": "0+", "default": "-1",
    "about": "The epoch of the requesting broker" },
  { "name": "Directories", "type": "[]DirectoryData", "versions": "0+", "fields": [
    { "name": "Id", "type": "uuid", "versions": "0+", "about": "The ID of the directory" },
    { "name": "Topics", "type": "[]TopicData", "versions": "0+", "fields": [
      { "name": "TopicId", "type": "uuid", "versions": "0+",
        "about": "The ID of the assigned topic" },
      { "name": "Partitions", "type": "[]PartitionData", "versions": "0+", "fields": [
        { "name": "PartitionIndex", "type": "int32", "versions": "0+",
          "about": "The partition index" }
      ]}
    ]}
  ]}
]
}

{
"apiKey": <TBD>,
"type": "response",
"name": "AssignReplicasToDirsResponse",
"validVersions": "0",
"flexibleVersions": "0+",
"fields": [
  { "name": "ThrottleTimeMs", "type": "int32", "versions": "0+",
    "about": "The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota." },
  { "name": "ErrorCode", "type": "int16", "versions": "0+",
    "about": "The top level response error code" },
  { "name": "Directories", "type": "[]DirectoryData", "versions": "0+", "fields": [
    { "name": "Id", "type": "uuid", "versions": "0+", "about": "The ID of the directory" },
    { "name": "Topics", "type": "[]TopicData", "versions": "0+", "fields": [
      { "name": "TopicId", "type": "uuid", "versions": "0+",
        "about": "The ID of the assigned topic" },
      { "name": "Partitions", "type": "[]PartitionData", "versions": "0+", "fields": [
        { "name": "PartitionIndex", "type": "int32", "versions": "0+",
          "about": "The partition index" },
        { "name": "ErrorCode", "type": "int16", "versions": "0+",
          "about": "The partition level error code" }
      ]}
    ]}
  ]}
]
}


An AssignReplicasToDirs request including an assignment to Uuid.LOST_DIR conveys that the broker wants to correct a replica assignment into an offline log directory which cannot be identified.

This request is authorized with CLUSTER_ACTION on CLUSTER.

Proposed changes

Metrics

MBean name: kafka.server:type=KafkaServer,name=QueuedReplicaToDirAssignments

Description: The number of replicas hosted by the broker that are either missing a log directory assignment in the cluster metadata or are currently found in a different log directory and are queued to be sent to the controller in an AssignReplicasToDirs request.
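
As a rough illustration only (not the actual broker implementation), the value backing this gauge could simply be the size of the broker's queue of pending assignments; all names below are hypothetical:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class AssignmentsManagerSketch {
    // Pending partition -> target directory assignments, not yet acknowledged by the controller.
    private final Map<String, String> pendingAssignments = new ConcurrentHashMap<>();

    // Value reported by the kafka.server:type=KafkaServer,name=QueuedReplicaToDirAssignments gauge.
    public int queuedReplicaToDirAssignments() {
        return pendingAssignments.size();
    }

    public void enqueue(String topicPartition, String directoryId) {
        pendingAssignments.put(topicPartition, directoryId);
    }

    public void acknowledged(String topicPartition) {
        pendingAssignments.remove(topicPartition);
    }
}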

Configuration

The following configuration option is introduced

Name: log.dir.failure.timeout.ms

Description: If the broker is unable to successfully communicate to the controller that some log directory has failed for longer than this time, and there's at least one partition with leadership on that directory, the broker will fail and shut down.

Default: 30000 (30 seconds)

Valid Values: [1, …]

Priority: low


Storage format command

The format subcommand will be updated to ensure each log directory has an assigned UUID and it will persist a new property directory.id in the meta.properties  file. The value is base64 encoded, like the cluster UUID.

The meta.properties  version field will stay set to 1, to allow for a downgrade after an upgrade on a non JBOD KRaft cluster.

Footnote

If an existing, non-JBOD KRaft cluster is upgraded to the first version that includes the changes described in this KIP, which writes this new field, and is later downgraded, the meta.properties file needs to still be readable. There's currently a hard check on the version number which would fail for a new version number.

The UUID for each log directory is automatically generated by the tool if there isn't one already assigned in an existing meta.properties file.

Having a persisted UUID at the root of each log directory allows the broker to identify the log directory regardless of the mount path.

Example

Given the following server.properties:

(... other non interesting properties omitted ...)
process.roles=broker
node.id=8
metadata.log.dir=/var/lib/kafka/metadata
log.dirs=/mnt/d1,/mnt/d2

The command ./bin/kafka-storage.sh format -c /tmp/server.properties --cluster-id 41QSStLtR3qOekbX4ZlbHA  would generate three meta.properties  files that could look like the following:

/var/lib/kafka/metadata/meta.properties :

#
#Thu Aug 18 15:23:07 BST 2022
node.id=8
version=1
cluster.id=41QSStLtR3qOekbX4ZlbHA
directory.id=e6umYSUsQyq7jUUzL9iXMQ
/mnt/d1/meta.properties :
#
#Thu Aug 18 15:23:07 BST 2022
node.id=8
version=1
cluster.id=41QSStLtR3qOekbX4ZlbHA
directory.id=b4d9ExdORgaQq38CyHwWTA
/mnt/d2/meta.properties :
#
#Thu Aug 18 15:23:07 BST 2022
node.id=8
version=1
cluster.id=41QSStLtR3qOekbX4ZlbHA
directory.id=P2aL9r4sSqqyt7bC0uierg

Each directory, including the directory that holds the cluster metadata topic — metadata.log.dir — has its own, distinct value as the directory ID.

In the example above, we can identify the following directory mapping:

  • /var/lib/kafka/metadata  has log directory UUID e6umYSUsQyq7jUUzL9iXMQ 
  • /mnt/d1  has log directory UUID b4d9ExdORgaQq38CyHwWTA 
  • /mnt/d2 has log directory UUID P2aL9r4sSqqyt7bC0uierg 

Brokers

Broker lifecycle management

When the broker starts up and initializes LogManager, it will load the UUID for each log directory (directory.id ) by reading the meta.properties file at the root of each of them.

  • If there are any two log directories with the same UUID, the Broker will fail at startup
  • If there are any meta.properties files missing directory.id, a new UUID is generated, and assigned to that directory by updating the meta.properties file.

The set of all loaded log directory UUIDs is sent along in the broker registration request to the controller as the LogDirs field. 
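
A minimal sketch of this startup sequence, using java.util.Properties and hypothetical class and method names (the real broker uses its own metadata file handling and a base64-encoded Kafka Uuid):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;

public final class LogDirIdLoaderSketch {
    // Loads (or creates) the directory.id for each configured log directory and
    // fails if two directories share the same UUID.
    public static Map<String, Path> loadDirectoryIds(List<Path> logDirs) throws IOException {
        Map<String, Path> idToDir = new HashMap<>();
        for (Path dir : logDirs) {
            Path metaFile = dir.resolve("meta.properties");
            Properties props = new Properties();
            if (Files.exists(metaFile)) {
                try (InputStream in = Files.newInputStream(metaFile)) {
                    props.load(in);
                }
            }
            String id = props.getProperty("directory.id");
            if (id == null) {
                // Missing directory.id: generate one and update meta.properties.
                id = UUID.randomUUID().toString();
                props.setProperty("directory.id", id);
                try (OutputStream out = Files.newOutputStream(metaFile)) {
                    props.store(out, null);
                }
            }
            Path previous = idToDir.putIfAbsent(id, dir);
            if (previous != null) {
                throw new IllegalStateException("Duplicate directory.id " + id
                        + " found in " + previous + " and " + dir);
            }
        }
        // The key set of this map is what would be sent as LogDirs in the registration request.
        return idToDir;
    }
}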

Metadata caching

Replicas are currently considered offline if the hosting broker is offline. Additionally, replicas will also be considered offline if the replica references a log directory UUID (in the new field partitionRecord.Directories) that is not present in the hosting broker's latest registration under LogDirs and either:

  • the log directory UUID is UUID.LOST_DIR
  • the hosting broker's registration indicates multiple online log directories. i.e. brokerRegistration.LogDirs.length > 1

If neither of the above conditions is true, we assume that the broker is configured with a single log directory, that all replicas live in that directory, and that neither log directory assignments nor log directory failures are communicated to the controller.
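
A small sketch of this rule as a predicate, with hypothetical types standing in for the broker registration and partition record (illustrative only):

import java.util.Set;

public final class OfflineReplicaRuleSketch {
    public static final String LOST_DIR = "LOST_DIR"; // placeholder for Uuid.LOST_DIR

    // brokerOnline: whether the hosting broker is registered and online.
    // registeredLogDirs: the LogDirs field from the hosting broker's latest registration.
    // assignedDir: the entry in partitionRecord.Directories for this replica.
    public static boolean isReplicaOffline(boolean brokerOnline,
                                           Set<String> registeredLogDirs,
                                           String assignedDir) {
        if (!brokerOnline) {
            return true;
        }
        if (registeredLogDirs.contains(assignedDir)) {
            return false;
        }
        // Directory not registered: only treat the replica as offline when the
        // assignment is LOST_DIR or the broker registered more than one directory.
        return LOST_DIR.equals(assignedDir) || registeredLogDirs.size() > 1;
    }
}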

Handling log directory failures

When multiple log directories are configured, and some (but not all) of them become offline, the broker will communicate this change using the new field OfflineLogDirs in the BrokerHeartbeat request — indicating the UUIDs of the newly offline log directories. The UUIDs for the newly failed log directories are included in every BrokerHeartbeat request until the broker sees them correctly identified as offline in a BrokerRegistrationChangeRecord. If the broker is configured with a single log directory, this field isn't used, as the current behavior of the broker is to shut down when no log directories are online.


Because the broker is proactive in communicating any log directory assignment changes to the controller, the metadata should be up to date and correct when the controller is notified of a failed log directory. However, the consequences of some partition assignment being incorrect — due to some error or race condition — can be quite damaging, as the controller might not know to update leadership for that partition, leaving it unavailable for an indefinite amount of time. So, as a fallback mechanism, when handling a runtime directory failure, the broker must verify the assignments for the newly failed partitions against the latest metadata, and for any incorrect assignments, the broker will use AssignReplicasToDirs to rectify them so that the controller can update leadership and ISR.

Log directory failure notifications are queued and batched together in all future broker heartbeat requests.

If the broker repeatedly fails to communicate a log directory failure, or a replica assignment into a failed directory, for longer than a configurable amount of time — log.dir.failure.timeout.ms — and it is the leader for any replicas in the failed log directory, the broker will shut down, as that is the only other way to guarantee that the controller will elect a new leader for those partitions.
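
A rough sketch of this timeout check, assuming hypothetical bookkeeping of the oldest unacknowledged failure notification (names are illustrative only):

public final class DirFailureTimeoutSketch {
    private final long failureTimeoutMs;          // log.dir.failure.timeout.ms, default 30000
    private long oldestPendingFailureMs = -1L;    // time the oldest unacknowledged dir failure was detected

    public DirFailureTimeoutSketch(long failureTimeoutMs) {
        this.failureTimeoutMs = failureTimeoutMs;
    }

    public void failureDetected(long nowMs) {
        if (oldestPendingFailureMs < 0) {
            oldestPendingFailureMs = nowMs;
        }
    }

    public void failureAcknowledgedByController() {
        oldestPendingFailureMs = -1L;
    }

    // Called periodically: shut down if the controller still hasn't acknowledged the
    // failure and this broker leads at least one replica in the failed directory.
    public boolean shouldShutdown(long nowMs, boolean leadsReplicaInFailedDir) {
        return oldestPendingFailureMs >= 0
                && leadsReplicaInFailedDir
                && nowMs - oldestPendingFailureMs > failureTimeoutMs;
    }
}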

Replica management

As a broker configured with multiple log.dirs catches up with metadata, and sees the partitions which it should be hosting, it will check the associated log directory UUID for each partition (partitionRecord.Directories).

  • If the partition is not assigned to a log directory (refers to Uuid.UNASSIGNED_DIR)
    • If the partition already exists, the broker uses the new RPC — AssignReplicasToDirs — to notify the controller to change the metadata assignment to the actual log directory.
    • If the partition does not exist, the broker selects a log directory and uses the new RPC — AssignReplicasToDirs — to notify the controller to create the metadata assignment to the actual log directory.
  • If the partition is assigned to an online log directory
    • If the partition does not exist it is created in the indicated log directory.
    • If the partition already exists in the indicated log directory and no future replica exists, then no action is taken.
    • If the partition already exists in the indicated log directory, and there is a future replica in another log directory, then the broker starts the process to replicate the current replica to the future replica.
    • If the partition already exists in another online log directory and is a future replica in the log directory indicated by the metadata, the broker will replace the current replica with the future replica after making sure that the future replica is fully caught up with the current replica.
    • If the partition already exists in another online log directory, the broker uses the new RPC — AssignReplicasToDirs — to notify the controller to change the metadata assignment to the actual log directory. The partition might have been moved to a different log directory whilst the broker was offline.
  • If the partition is assigned to an unknown log directory or refers to Uuid.LOST_DIR
    • If there are offline log directories, no action is taken — the assignment refers to a log directory which may be offline, and we don't want to fill the remaining online log directories with replicas that existed in the offline ones.
    • If there are no offline log directories, the broker selects a log directory and uses the new RPC — AssignReplicasToDirs — to notify the controller to create the metadata assignment to the actual log directory.

If instead, a single entry is configured under log.dirs or log.dir, then the AssignReplicasToDirs RPC is only sent to correct assignments to UUID.LOST_DIR, as described above.
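
A condensed sketch of the decision process above for a broker with multiple log directories; the helper names are hypothetical and the future-replica promotion cases are omitted:

import java.util.Set;

public final class ReplicaPlacementCheckSketch {
    enum Action { NONE, CREATE_LOCALLY, SEND_ASSIGN_REPLICAS_TO_DIRS }

    // assignedDir: the partitionRecord.Directories entry for this replica.
    // localDir: directory where the replica currently exists locally, or null if it does not exist.
    // onlineDirs / anyOfflineDirs: state of the configured log directories.
    static Action decide(String assignedDir, String localDir,
                         Set<String> onlineDirs, boolean anyOfflineDirs) {
        if ("UNASSIGNED_DIR".equals(assignedDir)) {
            // Not yet assigned: report the actual (or newly selected) directory to the controller.
            return Action.SEND_ASSIGN_REPLICAS_TO_DIRS;
        }
        if (onlineDirs.contains(assignedDir)) {
            if (localDir == null) {
                return Action.CREATE_LOCALLY;                // create it in the indicated directory
            }
            if (!localDir.equals(assignedDir)) {
                // Replica was moved while the broker was offline (future replica handling omitted):
                // report the actual directory to the controller.
                return Action.SEND_ASSIGN_REPLICAS_TO_DIRS;
            }
            return Action.NONE;
        }
        // Unknown directory or LOST_DIR.
        if (anyOfflineDirs) {
            return Action.NONE;                              // it may live in an offline directory
        }
        return Action.SEND_ASSIGN_REPLICAS_TO_DIRS;          // pick a directory and tell the controller
    }
}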

If the broker is configured with multiple log directories it remains FENCED until it can verify that all partitions are assigned to the correct log directories in the cluster metadata. This excludes the log directory that hosts the cluster metadata topic, if it is configured separately to a different path — using metadata.log.dir.

Assignments to be sent via AssignReplicasToDirs are queued and batched together, handled by a log directory event manager that also handles log directory failure notifications.

Intra-broker replica movement

...

The existing AlterReplicaLogDirs RPC is sent directly to the broker in question, which starts moving the replicas using a ReplicaAlterLogDirsThread – this remains unchanged. But when the future replica first catches up with the main replica, instead of immediately promoting the future replica, the broker will:

  1. Asynchronously communicate the log directory change to the controller using the new RPC – AssignReplicasToDirs.
  2. Keep the ReplicaAlterLogDirsThread going. The future replica is still the future replica, and it continues to copy from the main replica – which is still in the original log directory – as new records are appended.

Once the broker receives confirmation of the metadata change – indicated by a successful response to AssignReplicasToDirs – then it will:

  1. Block appends to the main (old) replica and wait for the future replica to fully catch up once again.
  2. Make the switch, promoting the future replica to main replica and cleaning up the old one.
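
Sketching the two phases above with hypothetical method names standing in for the broker's actual machinery (illustrative only):

public final class FutureReplicaPromotionSketch {
    // Called when the future replica first catches up with the current replica.
    void onFutureReplicaCaughtUp(String topicPartition, String targetDirId) {
        sendAssignReplicasToDirs(topicPartition, targetDirId);  // 1. asynchronously notify the controller
        // 2. the ReplicaAlterLogDirsThread keeps copying newly appended records
    }

    // Called once the controller has confirmed the new directory assignment.
    void onAssignmentConfirmed(String topicPartition) {
        blockAppends(topicPartition);                   // 1. stop appends to the old replica...
        waitForFutureReplicaToCatchUp(topicPartition);  //    ...and let the future replica catch up fully
        promoteFutureReplica(topicPartition);           // 2. swap replicas and clean up the old one
    }

    // Stubs standing in for the broker's actual machinery.
    private void sendAssignReplicasToDirs(String tp, String dirId) { }
    private void blockAppends(String tp) { }
    private void waitForFutureReplicaToCatchUp(String tp) { }
    private void promoteFutureReplica(String tp) { }
}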

...

The diagram below illustrates the sequence of steps involved in moving a replica between log directories.

[Diagram: AlterReplicaLogDirs replica movement sequence]

In the diagram above, notice that if dir1 fails after the AssignReplicasToDirs RPC is sent, but before the future replica is promoted, then the controller will not know to update leadership and ISR for the partition. If the destination directory has failed, it won't be possible to promote the future replica, and the broker needs to revert the assignment (cancelled locally if still queued). If the source directory has failed, then the future replica might not catch up, and the controller might not update leadership and ISR for the partition. In this exceptional case, the broker issues an AssignReplicasToDirs RPC to the controller to assign the replica to UUID.LOST_DIR — this lets the controller know that it needs to update leadership and ISR for this partition too.

Controller

Replica placement

For any new partitions, the active controller will use Uuid.UNASSIGNED_DIR as the initial value for the log directory UUID of each replica – this is the default (empty) value for the tagged field. Each broker with multiple log.dirs hosting replicas then assigns a log directory UUID and communicates it back to the active controller using the new RPC AssignReplicasToDirs, so that cluster metadata can be updated with the log directory assignment. Brokers that are configured with a single log directory do not send this RPC.

Handling log directory failures

When a controller receives a BrokerHeartbeat request from a broker that indicates any UUIDs under the new OfflineLogDirs field, it will:

  • Persist a BrokerRegistrationChange record, with the new list of online and offline log directories.
  • Update the Leader and ISR for all the replicas assigned to the failed log directories, persisting PartitionChangeRecords, in a similar way to how leadership and ISR is updated when a broker becomes fenced, unregistered or shuts down.
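
A compact sketch of the two steps above on the controller side; the record-building helpers are hypothetical and the records are rendered as strings purely for illustration:

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public final class DirFailureHandlerSketch {
    // Returns the metadata records the controller would generate for a heartbeat
    // reporting newly offline directories (names and record rendering are illustrative only).
    static List<String> handleOfflineLogDirs(int brokerId, Set<String> offlineDirIds,
                                             List<String> replicasInThoseDirs) {
        List<String> records = new ArrayList<>();
        // 1. Persist a registration change that marks the failed directories as offline.
        records.add("BrokerRegistrationChangeRecord(broker=" + brokerId
                + ", offlineLogDirs=" + offlineDirIds + ")");
        // 2. Update leader and ISR for every replica hosted in the failed directories.
        for (String partition : replicasInThoseDirs) {
            records.add("PartitionChangeRecord(" + partition + ", newLeaderExcluding=" + brokerId + ")");
        }
        return records;
    }
}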

Handling replica assignments

The controller accepts the AssignReplicasToDirs RPC and persists the assignment into metadata records. If any of the listed log directory UUIDs is not a registered log directory, then the call fails with error 57 — LOG_DIR_NOT_FOUND.


If the indicated log directory UUID is not one of the broker's online log directories, then the replica is considered offline and the leader and ISR is updated accordingly, same as when the BrokerHeartbeat indicates a new offline log directory.
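
A sketch of the validation the controller could apply per assignment, using hypothetical names; here the set of online directories stands in for the broker's registered LogDirs:

import java.util.Set;

public final class AssignReplicasToDirsValidationSketch {
    enum Result { OK, OK_REPLICA_OFFLINE, LOG_DIR_NOT_FOUND }

    // requestedDirId: the directory UUID in the request; onlineDirs: LogDirs from the broker's registration.
    static Result validate(String requestedDirId, Set<String> onlineDirs) {
        if ("LOST_DIR".equals(requestedDirId)) {
            // The broker reports a replica in an offline directory it cannot identify:
            // accept, and update leader and ISR as for a failed directory.
            return Result.OK_REPLICA_OFFLINE;
        }
        if (!onlineDirs.contains(requestedDirId)) {
            // Not a directory registered by this broker.
            return Result.LOG_DIR_NOT_FOUND;
        }
        return Result.OK;
    }
}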

...

Upon a broker registration request the controller will persist the broker registration as cluster metadata, including the list of online log directories for that broker. The controller may receive a new list of log directories — different from what was previously persisted in the cluster metadata for the requesting broker.

  • If there are no indicated online log directory UUIDs the request is invalid and the controller replies with an error 42 — INVALID_REQUEST.
  • If there are any missing log directories this means those have been removed from the broker’s configuration, so the controller will reassign all replicas currently assigned to the missing log directories to Uuid.UNASSIGNED_DIR, delegating the choice of log directory to the broker, which will then report the choice via the AssignReplicasToDirs RPC.
  • If multiple log directories are registered the broker will remain fenced until the controller learns of all the partition to log directory placements in that broker — i.e. no remaining replicas assigned to Uuid.UNASSIGNED_DIR. The broker will indicate these using the AssignReplicasToDirs RPC.

    • The broker remains fenced by not wanting to unfence itself in heartbeat requests until the number of mismatching replica to log directory assignments is zero. This number is represented by the new metric QueuedReplicaToDirAssignments.
  • If multiple log directories are registered and some of them are new (not present in previous registration) then these log directories are assumed to be empty. If they are not, the broker will use the AssignReplicasToDirs  RPC to correct assignment and choose not to become UNFENCED before the metadata is correct.
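
A rough sketch of these registration checks, again with hypothetical names standing in for the controller's actual state:

import java.util.Set;

public final class RegistrationChecksSketch {
    static final short INVALID_REQUEST = 42;

    // newLogDirs: LogDirs from the incoming registration.
    static short validate(Set<String> newLogDirs) {
        return newLogDirs.isEmpty() ? INVALID_REQUEST : (short) 0;
    }

    // Directories missing from the new registration were removed from the broker's configuration:
    // their replicas are moved back to UNASSIGNED_DIR so the broker can re-report their location.
    static boolean replicaNeedsReassignment(String assignedDir,
                                            Set<String> newLogDirs, Set<String> previousLogDirs) {
        return previousLogDirs.contains(assignedDir) && !newLogDirs.contains(assignedDir);
    }

    // The broker keeps itself fenced (does not ask to be unfenced in heartbeats) while any of its
    // replicas is still assigned to UNASSIGNED_DIR; that count backs QueuedReplicaToDirAssignments.
    static boolean mayUnfence(long replicasStillUnassigned) {
        return replicasStillUnassigned == 0;
    }
}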

...

The cluster needs to be upgraded before configuring multiple entries in log.dirs. As the upgraded brokers come up, the existing meta.properties files in each broker are updated with a generated directory.id. After the upgrade, the metadata.version feature flag needs to be upgraded using kafka-features.sh. Then the brokers can be reconfigured with multiple entries in log.dirs.

Upon being reconfigured with multiple log directories, brokers will update and generate directory.id in meta.properties as necessary to reflect the new log directories. Brokers will then register the log directories with the controller via BrokerRegistration and use AssignReplicasToDirs to create the partition to log directory assignments in the cluster metadata before becoming UNFENCED.

...

  • As per KIP-866, a separate Controller quorum is setup first, and only then the existing brokers are reconfigured and upgraded.
  • When configured for the migration and while still in ZK mode, brokers will:
    • update meta.properties to generate and include directory.id;
    • send BrokerRegistrationRequest including the log directory UUIDs;
    • shutdown if any directory fails;
    • notify the controller of log directory failures via BrokerHeartbeatRequest.
  • During the migration, the controller:
    • persists log directories indicated in broker registration requests in the cluster metadata;
    • relies on heartbeat requests to detect log directory failure instead of monitoring the ZK znode for notifications;
    • still uses full LeaderAndIsr requests to process log directory failures for any brokers still running in ZK mode;
    • persists directory assignments received via the AssignReplicasToDirs RPC.
  • The brokers restarting into KRaft mode will want to stay fenced until their log directory assignments for all hosted partitions are persisted in the cluster metadata.
  • The active controller will also ensure that any given broker stays fenced until it learns of all partition to log directory assignments in that specific broker via the new AssignReplicasToDirs RPC.
  • During the migration, existing replicas are assigned to log directory Uuid.MIGRATING_DIR until the actual log directory is learnt by the active controller from a broker running in KRaft mode.

...

  • Assumed to live in a broker that isn’t yet configured with multiple log directories, and so live in a single log directory, even if the UUID for that directory isn’t yet known by the controller. It is not possible to trigger a log directory failure from a broker that has a single log directory, as the broker would simply shut down if there are no remaining online log directories. Or
  • Assigned to a log directory as of yet unknown, in a broker that remains FENCED. As the broker remains FENCED it cannot assume leadership for any partition, and so a log directory failure would be handled by the current partition leader.

...

  • Partition reassignment across directories and across brokers involves different API calls — AlterPartitionReassignments and AlterReplicaLogDirs. Whilst reassigning partitions across brokers into a specific log directory is already possible, it involves an intricate sequence of prior calls to AlterReplicaLogDirs and expecting errors as a successful result. Once this work is done we can consolidate these two API calls by extending AlterPartitionReassignments to allow target log directories to be specified and deprecate AlterReplicaLogDirs. This can be done as part of a future KIP.
  • The only way to know which log directory UUID corresponds to which log directory path is by reading the meta.properties  files in each broker. A future KIP should expand the DescribeLogDirs RPC response to include log directory UUIDs along with the system path for each log directory.
  • Partition initialization can be optimized, by having the controller preselect a log directory for new partitions. This would avoid having to wait for the broker to send an AssignReplicasToDirs request to indicate the chosen log directory before it is safe for the broker to assume leadership of the partition. Maybe the controller could also take available storage in each log directory into account if the broker indicates the available storage space for each log directory as part of broker registration. This may be proposed in a future KIP, but we'd need to figure out a way to distinguish between a controller-initiated move and a manual user move of a partition between log directories while the broker is offline.

    Footnote

    Despite not being an advertised feature, currently replicas can be moved between log directories while the broker is offline. Once the broker comes back up it accepts the new location of the replica. To continue supporting this feature, the broker will need to compare the information in the cluster metadata with the actual replica location during startup and take action on any mismatch



Rejected alternatives

  • Keeping the scope of the log directory to the broker — while this would mean a much simpler change, as was proposed in KIP-589, if only the broker itself knows which partitions were assigned to a log directory, when a log directory fails the broker will need to send a potentially very large request enumerating all the partitions in the failed disk, so that the controller can update leadership and ISRs accordingly. 
  • Having the controller determine the log directory for new replicas — this would avoid a further RPC from the broker upon selecting a new log directory for new replicas, and reduce the time until it is safe for the broker to take leadership of the replica. However the broker is in a better position to make a choice of log directory than the controller, as it has easier access to e.g. disk usage in each log directory. The controller could also have this information if the broker were to include it in the broker registration. But to keep this change manageable and timely, this optimization is best left for future work. It would also be trickier to manage the ZK→KRaft migration if we had some brokers forwarding the RPC to the controllers and others handling it directly.
  • Changing how log directory failure is handled in ZooKeeper mode — ZooKeeper mode is going away, KIP-833 proposed its deprecation in a near future release.
  • Using the system path to identify each log directory, or storing the identifier somewhere else — When running Kafka with multiple log directories, each log directory is typically assigned to a different system disk or volume. The same storage device can be made accessible under a different mount, and Kafka should be able to identify the contents as the same disk. Because the log directory configuration can change on the broker, the only reliable way to identify the log directory in each broker is to add metadata to the file system under the log directory itself.
  • Fail on broker startup if the cluster metadata indicates replicas should be in a different log directory than the directory where the broker actually finds them — Despite not being an advertised feature, currently replicas can be moved between log directories while the broker is offline. Once the broker comes back up it accepts the new location of the replica. To continue supporting this feature, the broker will need to compare the information in the cluster metadata with the actual replica location during startup and take action on any mismatch.
  • Identifying offline log directories — Because we cannot identify them by mount paths, we cannot distinguish whether an inaccessible log directory is simply unavailable or whether it has been removed from the configuration – think of a scenario where one log dir is offline and one was removed from log.dirs. In ZK mode we don't care to do this, and we shouldn't do it in KRaft either. What we need to know is whether there are any offline log directories, to prevent re-streaming the offline replicas into the remaining online log dirs. In ZK mode, the 'isNew' flag is used to prevent the broker from creating partitions when any log dir is offline unless they're new. A simple boolean flag to indicate some log dir is offline is enough to maintain the functionality.



...