Status

Current state: Under Discussion

Discussion thread: here

JIRA: KAFKA-7362

...

  1. The broker opens a Log instance for all partitions on its local disk, including stray partitions.
  2. Retention never deletes segments for a stray partition. A segment becomes eligible for deletion only once the high watermark has advanced past the segment's last offset, but a stray partition is no longer a valid replica and therefore has no defined high watermark (see the sketch after this list). As a result, the disk space used by stray partitions can never be reclaimed.
  3. If a broker hosts a stray partition and the topic is recreated, there is no way to distinguish the stray partition from the new partition. In the worst case, data from the previous generation of the partition ends up in the current generation.
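
To make (2) concrete, here is a minimal sketch of the retention predicate described above (hypothetical names; not the actual log-retention code). Because a stray partition has no defined high watermark, the predicate never holds and its segments are never deleted:

import java.util.OptionalLong;

public class RetentionCheck {

    // Retention may delete a segment only once the high watermark has
    // advanced past the segment's last offset. A stray partition is no
    // longer a valid replica and has no high watermark, so for it this
    // check never passes and its disk space is never reclaimed.
    static boolean segmentEligibleForDeletion(long segmentLastOffset,
                                              OptionalLong highWatermark) {
        return highWatermark.isPresent()
                && highWatermark.getAsLong() > segmentLastOffset;
    }

    public static void main(String[] args) {
        System.out.println(segmentEligibleForDeletion(100L, OptionalLong.of(250L)));  // true
        System.out.println(segmentEligibleForDeletion(100L, OptionalLong.empty()));   // false: stray partition
    }
}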

This KIP proposes a mechanism to clean up stray partitions, solving problems (1) and (2) above. Problem (3) is mitigated to an extent, but addressing it fully requires the improvements proposed in KIP-516: Topic Identifiers.

Public Interfaces

We propose adding a containsAllReplicas field to the LeaderAndIsrRequest, denoting whether the request contains the full list of replicas hosted by the target broker:

diff --git a/clients/src/main/resources/common/message/LeaderAndIsrRequest.json b/clients/src/main/resources/common/message/LeaderAndIsrRequest.json
index 852968801..7ddca80b9 100644
--- a/clients/src/main/resources/common/message/LeaderAndIsrRequest.json
+++ b/clients/src/main/resources/common/message/LeaderAndIsrRequest.json
@@ -22,7 +22,9 @@
   // Version 2 adds broker epoch and reorganizes the partitions by topic.
   //
   // Version 3 adds AddingReplicas and RemovingReplicas
-  "validVersions": "0-4",
+  // Version 4 adds flexible versions
+  // Version 5 adds ContainsAllReplicas
+  "validVersions": "0-5",
   "flexibleVersions": "4+",
   "fields": [
     { "name": "ControllerId", "type": "int32", "versions": "0+", "entityType": "brokerId",
@@ -51,7 +53,9 @@
         "about": "The leader's hostname." },
       { "name": "Port", "type": "int32", "versions": "0+",
         "about": "The leader's port." }
-    ]}
+    ]},
+    { "name": "ContainsAllReplicas", "type": "bool", "versions": "5+",
+      "about": "Whether the request contains all replicas hosted by the target broker." }
   ],
   "commonStructs": [
     { "name": "LeaderAndIsrPartitionState", "versions": "0+", "fields": [

Proposed Changes

Today, when a new broker starts up, the controller sends it the full list of replicas it hosts in the LeaderAndIsrRequest. We will formalize this contract by adding the containsAllReplicas field to the request. On broker startup or on controller failover, the controller will send a LeaderAndIsrRequest containing the full set of replicas and will set containsAllReplicas to true. When a broker receives a LeaderAndIsrRequest with containsAllReplicas set to true, it can safely use the replica list in the request as the source of truth for the partitions it must host. Any partition the broker hosts that is not present in the request is then scheduled for deletion, as it constitutes a stray partition.
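
To illustrate the broker-side behavior, here is a minimal sketch of how the stray set could be derived from a full LeaderAndIsrRequest (hypothetical names such as findStrayPartitions; the real broker would operate on topic partitions and its log manager, not plain strings):

import java.util.HashSet;
import java.util.Set;

public class StrayPartitionDetector {

    // Given the partitions present on local disk and the partitions named in
    // a LeaderAndIsrRequest, returns the stray partitions to schedule for
    // deletion. Only a request with containsAllReplicas=true may be treated
    // as the source of truth; an incremental request says nothing about the
    // partitions it omits.
    static Set<String> findStrayPartitions(Set<String> localPartitions,
                                           Set<String> requestPartitions,
                                           boolean containsAllReplicas) {
        if (!containsAllReplicas) {
            return Set.of();
        }
        Set<String> stray = new HashSet<>(localPartitions);
        stray.removeAll(requestPartitions);  // anything not in the full list is stray
        return stray;
    }

    public static void main(String[] args) {
        // The broker hosts foo-0, foo-1 and a leftover bar-0; the controller's
        // full request lists only foo-0 and foo-1, so bar-0 is stray.
        System.out.println(findStrayPartitions(
                Set.of("foo-0", "foo-1", "bar-0"),
                Set.of("foo-0", "foo-1"),
                true));  // prints [bar-0]
    }
}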

...