Status
Current state: Under Discussion
Discussion thread: https://lists.apache.org/thread.html/7efa8cd169cadc7dc9cf86a7c0dbbab1836ddb5024d310fcebacf80c@%3Cdev.kafka.apache.org%3E
JIRA:
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
Today, Kafka uniquely identifies a topic by its name. This is generally sufficient, but there are flaws in this scheme when a topic is deleted and recreated with the same name. Kafka currently attempts to prevent issues resulting from stale topics by ensuring a topic is fully deleted from all replicas before completing a deletion. This solution is imperfect: partitions can be reassigned away from brokers while those brokers are down, and there is no guarantee that this state will ever be cleaned up.
When a topic deletion is performed, the controller must wait for all brokers to delete their local replicas. This blocks creation of a topic with the same name as a deleted topic until all replicas have successfully deleted the old topic's data. As a result, downtime for a single broker can effectively cause a complete outage for everyone producing or consuming to that topic name, since the topic cannot be recreated without manual intervention.
Topic IDs aim to address this issue by associating a truly unique ID with each topic, ensuring a newly created topic with a previously used name cannot be confused with a previous topic with that name.
Topic IDs solve several additional problems:
- Renaming topics becomes feasible (although there may still be some complexity with the need to support the old name for a while as part of migration, etc.) Renaming topics may seem minor, but it will be difficult to have hierarchical topics without having some kind of renaming support.
- We can eventually get rid of the "deleting" state for topics. If a broker is down but there is some topic data there that is no longer relevant, it won't cause problems later on. It can be deleted when the broker rejoins the cluster and realizes that the relevant topic ID is not present any more. We gain some additional safety where stale/deleted replicas may currently interact with live ones.
- Sending 16-byte UUIDs instead of strings over Kafka RPCs can be smaller. A string is serialized as 2 bytes plus the data, whereas a UUID is a fixed 16 bytes, so for any topic name longer than 14 single-byte characters (which serializes to more than 16 bytes), the UUID is smaller. UUIDs will also be faster to compare and more friendly to the garbage collector.
- They will provide a true measure of topic uniqueness across clusters. This may be important in multi-cluster Kafka deployments where additional safety and debuggability is desired.
Overall, topic IDs provide a safer way for brokers to replicate topics without any chance of incorrectly interacting with stale topics with the same name. By preventing such scenarios, we can simplify a number of other interactions such as topic deletes which are currently more complicated and problematic than necessary.
Public Interfaces
Minor changes to the TopicDescription interface will be made to allow clients to access the topic ID of topics found in metadata responses.
/**
* Create an instance with the specified parameters.
*
* @param name The topic name
* @param internal Whether the topic is internal to Kafka
* @param partitions A list of partitions where the index represents the partition id and the element contains
* leadership and replica information for that partition.
* @param authorizedOperations authorized operations for this topic, or null if this is not known.
* @param topicId Unique value that identifies the topic
*
*/
public TopicDescription(String name, boolean internal, List<TopicPartitionInfo> partitions,
Set<AclOperation> authorizedOperations, UUID topicId)
/**
* A unique identifier for the topic.
*/
public UUID topicId()
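For illustration, a client could read the topic ID from a describe call roughly as follows. This is a sketch, not final client code: Admin.describeTopics() is the existing Admin API, topicId() is the accessor proposed above, and the topic name and bootstrap address are placeholders.

import java.util.Collections;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class TopicIdExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Describe the topic and read the newly exposed topic ID
            TopicDescription description = admin.describeTopics(Collections.singleton("my-topic"))
                .all().get().get("my-topic");
            UUID topicId = description.topicId();  // accessor added by this KIP
            System.out.println("Topic ID: " + topicId);
        }
    }
}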
Additionally, it may be dangerous to use older versions of Kafka tools with new broker versions when using their --zookeeper flags. Use of older tools in this way is not supported today.
Proposed Changes
Topic IDs will be represented by 128-bit v4 UUIDs. A UUID with all bits set to 0 will be reserved as a null UUID, as the Kafka RPC protocol does not allow for nullable fields. When printed or stored as a string, topic IDs will be converted to a base64 string representation.
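For example, the base64 form could be derived from the UUID's 16 bytes roughly as follows. This is a sketch under the assumption of URL-safe base64 without padding (which yields a 22-character string); the exact base64 variant is an implementation detail not fixed here.

import java.nio.ByteBuffer;
import java.util.Base64;
import java.util.UUID;

public class TopicIdStrings {
    // Convert a topic ID to its string form (assumed: URL-safe base64, no padding)
    public static String toBase64(UUID topicId) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putLong(topicId.getMostSignificantBits());
        buf.putLong(topicId.getLeastSignificantBits());
        return Base64.getUrlEncoder().withoutPadding().encodeToString(buf.array());
    }
}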
On handling a CreateTopicsRequest, brokers will create the topic znode under /brokers/topics/[topic], as usual.
The znode value will now contain an additional topic ID field, represented as a base64 string in the "id" field, and the schema version will be bumped to version 3.
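An illustrative znode value under this schema (the partition assignment and the base64-encoded ID are hypothetical; only the version bump and the "id" field are specified by this KIP):

{
  "version": 3,
  "id": "b8tRS7h4TJ2Vt43Dp85v2A",
  "partitions": { "0": [1, 2, 3], "1": [2, 3, 1] }
}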
The controller will maintain local in-memory state containing a mapping from topic name to topic ID. On controller startup, topic IDs will automatically be loaded into memory along with the topics and partitions. A random UUID will be generated on topic creation, or on migration of an existing topic that does not yet have a topic ID.
The controller will supply topic IDs for all topic partitions to brokers by sending LeaderAndIsrRequest(s) that contain the topic IDs for all partitions contained in the request.
Requests to describe topics will return a result containing TopicDescriptions with topic IDs for each topic.
Protocol Changes
Removal of Topic Names from Requests and Responses
It is unnecessary to include the name of the topic in the following Request/Response calls:
StopReplica
Fetch
List Offsets
OffsetForLeader
Vote
BeginQuorumEpoch
EndQuorumEpoch
Including the topic name in the request may make it easier to debug when issues arise, as it provides more information than the topic ID alone. However, it also bloats the protocol (especially relevant for FetchRequest), and if topic names are incorrectly used they may prevent topic renames from being easily implemented in the future. For these reasons, the topic name field has been removed from these requests.
LeaderAndIsr
LeaderAndIsrRequest v5
LeaderAndIsrRequest v5 adds the topic ID to the topic_states field and an enum denoting the type of the request. Currently, the first LeaderAndIsrRequest sent to a broker by a controller contains all TopicPartitions that the broker is a replica for; the type enum formalizes this behavior. The inter-broker protocol version (IBP) will determine whether this new form of the request is used. For older versions, this field will be ignored and the previous behavior retained.
value | enum | description |
---|---|---|
1 | INCREMENTAL | A LeaderAndIsrRequest that is not guaranteed to contain all topic partitions assigned to a broker. |
2 | FULL | A full LeaderAndIsrRequest containing all partitions the broker is a replica for. |
When type = FULL, the broker is able to reconcile its local state on disk with the request. A local partition can be identified as stale in two ways; in both cases the broker's local partition will be staged for deletion.
1. The TopicPartition is not present in the LeaderAndIsrRequest.
2. The TopicPartition is contained in the request, but its topic ID does not match the topic ID stored locally on the broker.
Reconciliation may also be necessary if type = INCREMENTAL and the topic ID set on a local partition does not match the topic ID contained in the request. A TopicPartition with the same name but a different topic ID implies that the local partition is stale, as the topic must have been deleted and recreated to obtain a new topic ID. This is similar to case 2 above, and the local partition will be staged for deletion.
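A minimal sketch of this reconciliation, assuming hypothetical maps from partitions to the topic IDs recorded locally and carried in the request (not the actual broker code):

import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import org.apache.kafka.common.TopicPartition;

public class Reconciliation {
    public static Set<TopicPartition> partitionsToStage(Map<TopicPartition, UUID> local,
                                                        Map<TopicPartition, UUID> request,
                                                        boolean fullRequest) {
        Set<TopicPartition> stale = new HashSet<>();
        for (Map.Entry<TopicPartition, UUID> entry : local.entrySet()) {
            UUID requestId = request.get(entry.getKey());
            if (requestId == null) {
                // Case 1: absent from the request; only a FULL request implies deletion
                if (fullRequest)
                    stale.add(entry.getKey());
            } else if (!requestId.equals(entry.getValue())) {
                // Case 2 (and the INCREMENTAL case): a topic ID mismatch means the
                // local partition belongs to a deleted topic with the same name
                stale.add(entry.getKey());
            }
        }
        return stale;
    }
}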
Deletion
Deletion of stale partitions triggered by LeaderAndIsrRequest(s) will take place as follows (see the sketch after this list):
- Log at WARN level all partitions that will be deleted and the time at which they will be deleted.
- Move the partition's directory to log.dir/deleting/{topic_id}_{partition}.
- Schedule deletion from disk after a delay of delete.stale.topic.delay.ms milliseconds. This will clear the deleting directory of the partition's contents.
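A sketch of these steps under the assumptions above (the deleteDirectory helper and the scheduler wiring are hypothetical, and a real broker would use its logging framework):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.UUID;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class StaleDeletion {
    public static void stage(Path logDir, Path partitionDir, UUID topicId, int partition,
                             long deleteStaleTopicDelayMs,
                             ScheduledExecutorService scheduler) throws IOException {
        // Step 1: log the pending deletion and when it will happen
        System.out.printf("WARN staging %s for deletion at %s%n", partitionDir,
            Instant.now().plusMillis(deleteStaleTopicDelayMs));
        // Step 2: move the partition directory under log.dir/deleting/{topic_id}_{partition}
        Path target = logDir.resolve("deleting").resolve(topicId + "_" + partition);
        Files.createDirectories(target.getParent());
        Files.move(partitionDir, target);
        // Step 3: schedule the actual removal after delete.stale.topic.delay.ms
        scheduler.schedule(() -> deleteDirectory(target),
            deleteStaleTopicDelayMs, TimeUnit.MILLISECONDS);
    }

    // Hypothetical helper: recursively remove the staged directory
    private static void deleteDirectory(Path dir) { /* ... */ }
}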
LeaderAndIsrResponse v5
The topic name field has been removed.
StopReplica
StopReplicaRequest v4
StopReplicaResponse v4
Fetch
To avoid issues where requests are made to stale partitions, a topic_id field will be added to fence reads from deleted topics. Note that the leader epoch is not sufficient for preventing these issues, as the partition leader epoch is reset when a topic is deleted and recreated. To reduce the size of the request and response, the topic name field has been removed.
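As a sketch, the broker-side fencing could look like the following. The null-ID check and the decision to reject stale reads follow the description above; the method name is illustrative, and the actual error returned to the client is left unspecified here.

import java.util.UUID;

public class FetchFencing {
    private static final UUID NULL_ID = new UUID(0L, 0L);  // reserved null topic ID

    // Returns false when the fetch targets a stale (deleted and recreated) partition;
    // the broker would then answer with an error instead of serving stale data.
    public static boolean fetchIsValid(UUID requestTopicId, UUID logTopicId) {
        if (NULL_ID.equals(requestTopicId))
            return true;  // older client that did not supply a topic ID
        return requestTopicId.equals(logTopicId);
    }
}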
FetchRequest v13
FetchResponse v13
ListOffsets
To avoid issues where requests are made to stale partitions, a topic_id field will be added to fence reads from deleted topics.
ListOffsetRequest v6
ListOffsetResponse v6
OffsetForLeader
To avoid issues where requests are made to stale partitions, a topic_id field will be added to fence reads from deleted topics.
OffsetForLeaderRequest v4
OffsetForLeaderResponse v4
Metadata
MetadataResponse must be modified so that describeTopics includes the topic ID for each topic.
MetadataResponse v10
UpdateMetadata
UpdateMetadata should also include the topic ID.
UpdateMetadataRequest v7
Produce
Swapping the topic name for the topic ID will cut down on the size of the request.
ProduceRequest v9
ProduceResponse v9
DeleteTopics
With the addition of topic IDs and the changes to LeaderAndIsrRequest described above, we can now make changes to topic deletion logic that will allow topics to be immediately considered deleted, regardless of whether all replicas have responded to a DeleteTopicsRequest.
When the controller receives a DeleteTopicsRequest, if the IBP is >= MIN_TOPIC_ID_VERSION it will delete the /brokers/topics/[topic] znode payload and immediately reply to the DeleteTopicsRequest with a successful response. At this point, the topic is considered deleted, and a topic with the same name can be created.
Although the topic is safely deleted at this point, it must still be garbage collected. To garbage collect, the controller will then send StopReplicaRequest(s) to all brokers assigned as replicas for the deleted topic. For the most part, deletion logic can be maintained between IBP versions, with some differences in responses and cleanup in ZooKeeper. Both formats must still be supported, as the IBP may not be bumped right away and deletes may have already been staged before the IBP bump occurs.
The updated controller's delete logic will:
1. Collect deleted topics:
- Old format: topics under /admin/delete_topics, pulling the topic state from /brokers/topics/[topic].
- New format: in-memory topic deletion states from received DeleteTopicsRequest(s).
2. Remove deleted topics from replicas by sending StopReplicaRequest V3 (using the old logic) before the IBP bump, and V4 with topic IDs (using the new logic) after the IBP bump.
3. Finalize successful deletes:
- For /admin/delete_topics deletes, we may need to respond to the DeleteTopicsRequest. We can also delete the znodes at /admin/delete_topics/[topic] and /brokers/topics/[topic].
- For deletes of topics with topic IDs, remove the topic from the in-memory topic deletion state on the controller.
4. Any unsuccessful StopReplicaRequest(s) will be maintained in memory and retried after retryMs, restarting from step 1.
This leads to the question of what should be done if the controller never receives a successful response from a replica for a StopReplicaRequest. In such a scenario it is still safe to stop retrying after a reasonable number of retries and amount of time. Because LeaderAndIsrRequest v5 includes a type flag allowing FULL requests to be identified, any stale partitions will be reconciled and deleted by a broker on startup upon receiving the initial LeaderAndIsrRequest from the controller. This is also safe if the controller changes before the StopReplicaRequest(s) succeed, as the new controller will send a FULL LeaderAndIsrRequest on becoming leader, ensuring that any stale partitions are cleaned up.
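A sketch of the bounded retry loop, assuming a hypothetical sendStopReplica helper that returns the brokers that acknowledged the request:

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class TopicGarbageCollection {
    public static void garbageCollect(UUID topicId, Set<Integer> replicas,
                                      int maxRetries, long retryMs) throws InterruptedException {
        Set<Integer> remaining = new HashSet<>(replicas);
        for (int attempt = 0; attempt < maxRetries && !remaining.isEmpty(); attempt++) {
            remaining.removeAll(sendStopReplica(topicId, remaining));
            if (!remaining.isEmpty())
                Thread.sleep(retryMs);
        }
        // Giving up is safe: any remaining broker reconciles its stale partitions when it
        // next receives a FULL LeaderAndIsrRequest (on startup or controller failover).
    }

    // Hypothetical helper: send StopReplicaRequests and return acknowledging broker IDs
    private static Set<Integer> sendStopReplica(UUID topicId, Set<Integer> brokers) {
        return new HashSet<>(); /* ... */
    }
}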
Immediate delete scenarios
Stale reads
- Broker B1 is a leader for topic partition A_p0_id0
- Topic A id0 is deleted.
- Topic A id1 is created.
- Broker B1 has not yet received a new LeaderAndIsrRequest, nor a StopReplicaRequest for topic partition A_p0_id0
- Broker B2 has received a LeaderAndIsrRequest for topic partition A_p0_id0, and starts fetching from B1.
Inclusion of topic IDs in FetchRequest/ListOffsetRequest/OffsetsForLeaderEpochRequest(s) ensures that this scenario is safe. By adding the topic ID to these request types, any request to a stale partition will not be successful.
Stale state
- Broker B1 is a replica for A_p0_id0.
- Topic A id0 is deleted.
- B1 does not receive a StopReplicaRequest for A_p0_id0.
- Topic A id1 is created.
- Broker B1 receives a LeaderAndIsrRequest containing partition A_p0_id1.
When this occurs, we will close the Log for A_p0_id0, and move A_p0_id0 to the deleting directory as described in the LeaderAndIsrRequest description above.
Storage
Partition Metadata file
To allow brokers to resolve a partition's topic ID under this directory structure, a metadata file will be created at logdir/partitiondir/partition.metadata.
This metadata file will be human readable, and will include:
- Metadata schema version (version: int32)
- Topic ID (topic_id: UUID)
This file will be plain text (key/value pairs).
version: 0
topic_id: 46bdb63f-9e8d-4a38-bf7b-ee4eb2a794e4
This file is important because the directory structure alone does not allow the broker to reload its view of topic IDs on startup (for example, after a failure). Persisting the topic ID to disk ensures this information can be reloaded.
During LeaderAndIsrRequests, this file may be used to disambiguate topics safely and delete topics if necessary. More details on this process are explained in the LeaderAndIsrRequest v5 section.
It will be easy to update the file to include more fields in the future.
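A sketch of reading the file back, assuming the key/value layout shown above (error handling omitted; names are illustrative):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class PartitionMetadataFile {
    public static UUID readTopicId(Path partitionDir) throws IOException {
        Map<String, String> kv = new HashMap<>();
        for (String line : Files.readAllLines(partitionDir.resolve("partition.metadata"))) {
            String[] parts = line.split(":", 2);
            if (parts.length == 2)
                kv.put(parts[0].trim(), parts[1].trim());
        }
        // kv.get("version") remains available for future schema evolution
        return UUID.fromString(kv.get("topic_id"));
    }
}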
In JBOD mode, a partition's data can be moved from one disk to another. The partition metadata file will be copied during this process.
Tooling
kafka-topics.sh --describe will be updated to include the topic ID in the output. A user can specify a topic name to describe with the --topic parameter, or alternatively supply a topic ID with the --topic_id parameter.
Migration
Upon a controller becoming active, the list of current topics is loaded from /brokers/topics/[topic]. When a topic without a topic ID is found, a UUID will be randomly generated and assigned to it, and the topic information at /brokers/topics/[topic] will be updated with the id field filled in and the schema version bumped to version 3.
LeaderAndIsrRequest(s) will only be sent by the controller once a topic ID has been successfully assigned to the topic. Since the LeaderAndIsrRequest version was bumped, the IBP must also be bumped for migration.
When a replica receives a LeaderAndIsrRequest containing a topic ID for an existing partition that does not yet have an associated topic ID, it will create a partition metadata file for that topic partition locally. At this point the local partition has been migrated to support topic IDs.
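A sketch of this migration write path, under the same file-layout assumptions as above (a real implementation would write atomically, e.g. via a temp file plus rename, and flush to disk):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.UUID;

public class PartitionMetadataMigration {
    public static void writeIfAbsent(Path partitionDir, UUID topicId) throws IOException {
        Path file = partitionDir.resolve("partition.metadata");
        if (Files.exists(file))
            return;  // partition already migrated
        // Persist the topic ID learned from the LeaderAndIsrRequest
        Files.write(file, Arrays.asList("version: 0", "topic_id: " + topicId));
    }
}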
Configuration
The following configuration options will be added:
Option | Unit | Default | Description |
---|---|---|---|
delete.stale.topic.delay.ms | ms | 14400000 (4 hours) | When a FULL or INCREMENTAL LeaderAndIsrRequest is received and the request does not contain a partition that exists on a broker, or a broker's local topic ID does not match the ID in the request, a deletion event will be staged for that partition which will complete after delete.stale.topic.delay.ms milliseconds. |
Compatibility with KIP-500
KIP-500 and KIP-595 utilize a special metadata topic to store information that ZooKeeper has stored in the past. This topic must exist before the controller election, but in KIP-516, topic IDs are assigned by the controller. Here is an outline of how we can handle this.
Problem: KIP-595 describes a Vote Request which is used to elect the controller. Currently KIP-595 contains the topic name as part of the protocol.
Solution: Change Vote to use a topic ID field. Use a sentinel ID reserved only for this topic before its ID is known.
Switching over to topic IDs in this KIP will result in fewer changes later on.
Problem: Post-ZooKeeper, a Fetch request for the metadata topic will be used to obtain information that was once stored in ZooKeeper. KIP-516 stores topic IDs in ZooKeeper, and the controller pushes them to brokers using LeaderAndIsrRequests; this will change to brokers pulling topic IDs via a fetch of the metadata topic. Since KIP-516 replaces the topic name field with a topic ID field, how will the first Fetch request know the correct topic ID for the metadata topic?
Solution: Use the same sentinel ID reserved for the metadata topic before its ID is known. After controller election, upon receiving the result, assign the metadata topic its unique topic ID.
Using a topic ID will result in a slightly smaller fetch request and likely prevent further changes. Assigning a unique ID for the metadata topic leaves the possibility for the topic to be placed in tiered storage, or used in other scenarios where topics from multiple clusters may be in one place without appending the cluster ID.
Sentinel ID
The idea is that this will be a hard-coded UUID that no other topic can be assigned. Initially the all-zero UUID was considered, but it was ultimately rejected since it is used as a null ID in some places, and it is better to keep these usages separate. An example of a hard-coded UUID is 00000000-0000-0000-0000-000000000001.
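For example, the sentinel could be defined as a constant along these lines (the name is illustrative):

import java.util.UUID;

public class SentinelId {
    // Hard-coded sentinel 00000000-0000-0000-0000-000000000001, distinct from
    // the all-zero null UUID used elsewhere in the protocol
    public static final UUID METADATA_TOPIC_SENTINEL = new UUID(0L, 1L);
}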
Vote
Vote will be changed to replace topic name with topic ID, and will use a sentinel topic ID if no topic ID has been assigned already. See above for more information on sentinel topic IDs.
VoteRequest v0
VoteResponse v0
BeginQuorumEpoch
BeginQuorumEpoch will replace the topic name field with the topic ID field.
BeginQuorumEpochRequest v0
BeginQuorumEpochResponse v0
EndQuorumEpoch
EndQuorumEpoch will replace the topic name field with the topic ID field.
EndQuorumEpochRequest v0
EndQuorumEpochResponse v0
Compatibility, Deprecation, and Migration Plan
We will need to support all API calls which refer to a partition by either (topicId, partition) or (topicName, partition) until clients are updated to interact with topics by ID. No deprecations are currently planned.
Rejected Alternatives
Sequence ID
As an alternative to a topic UUID, a sequence number (long) could be maintained that is global for the given cluster.
This sequence number could be stored at /topicid/seqid.
Upon topic creation, this sequence number will be incremented and assigned to the created topic. Sequential topic ID generation could use the same approach as broker ID generation.
If global uniqueness across clusters is required for topic IDs the first N bits of the ID could consist of a cluster ID prefix, followed by the sequence number. However, to achieve global uniqueness, this would require a large number of bits for the cluster ID prefix.
Use of a UUID has the benefit of being globally unique across clusters without partitioning the ID space by clusterID, and is conceptually simpler.
Topic Deletion
We considered and rejected two other strategies for performing topic deletes.
Best Effort Strategy
Under this strategy, the controller attempts to send a StopReplicaRequest to all replicas, gives up after a certain number of retries, and completes the delete. Although this does not simplify the topic deletion code, it prevents delete topic requests from being blocked when one of the replicas is down. This would now be relatively safe, as stale topics are deleted when a broker receives an initial LeaderAndIsrRequest. However, it could prevent space from being reclaimed from a broker that does not respond to StopReplicaRequest(s) before the timeout but is otherwise alive.
Send StopReplicaRequest(s) to online brokers only
In this approach, the controller sends StopReplicaRequests only to the brokers that are online, and waits for a response from these brokers before marking the delete as successful. This allows a topic delete to take place while some replicas are offline. If any replicas come back online, they will receive an initial LeaderAndIsrRequest that allows them to clear up any stale state. This is similar to the "best effort" strategy above.
org.apache.kafka.common.TopicPartition
Eventually the TopicPartition class should include the topic ID. This may be difficult to enact until all APIs support topic IDs, and could come with a performance impact if implemented prior to this, as TopicPartitions are used for hashmap lookups throughout the broker.
Persisting Topic IDs
A few other alternatives to the partition metadata file were considered. One topic of discussion was whether such a file was necessary at all. Given the decision to keep the topic name in the directory name, the only way to persist the topic ID to disk is through a file. The decision against changing the directory structure is discussed below.
Another alternative is a single file mapping all topic names to IDs. Although this could be useful for tooling, it would be harder to maintain such a file and to update it each time a topic is added.
Future Work
Requests
The following requests could be improved by presence of topic IDs, but are out of scope for this KIP.
- CreatePartitionsRequest
- ElectPreferredLeadersRequest
- AlterReplicaLogDirsRequest
- AlterConfigsRequest
- DeleteTopicsRequest
- DescribeConfigsRequest
- DescribeLogDirsRequest
- DeleteRecordsRequest
- AddPartitionsToTxnRequest
- TxnOffsetCommitRequest
- WriteTxnMarkerRequest
Clients
Some of the implemented request types are also relevant to clients. Adding support for topic IDs in the clients would add an additional measure of safety when producing and consuming data.
__consumer_offsets topic
Ideally, consumer offsets stored in the __consumer_offsets topic would be associated with the topic ID for which they were read. However, given the way the __consumer_offsets topic is compacted, this may be difficult to achieve in a forwards-compatible way. This change will be left until topic IDs are implemented in the clients. Another future improvement is to use the topic ID in GroupMetadataManager.offsetCommitKey in the offset commit topic, which may save some space.
log.dir layout
It would be ideal if the log.dir layout could be restructured from the {topic}_{partition} format to {topicIdPrefix}/{topicId}_{partition}, e.g. "mytopic_1" → "24/24cc4332-f7de-45a3-b24e-33d61aa0d16c_1". Note the hierarchical directory structure, which uses the first two characters of the topic ID to avoid having too many directories at the top level of the logdir. This change is not required for the topic deletion improvements above and will be left for a future KIP where it may be required, e.g. topic renames.
Changing the directory structure in this way would also require more changes to tooling. Finding the correct log directory for a given topic will require more work for the user with the current changes in the KIP. There are other considerations when it comes to changing the directory structure, so it is probably best to spend more time before we commit to a decision.
Security/Authorization
One idea was to support authorizing a principal for a topic ID rather than a topic name. For now, this would be a breaking change, and it would be hard to support prefixed ACLs with topic IDs.