...
A share-partition is a topic-partition with a subscription in a share group. For a topic-partition subscribed in more than one share group, each share group has its own share-partition.
...
It fetches the records from the local replica using the replica manager
It manages and persists the states of the in-flight records
This means that the fetch-from-follower optimization is not supported by share groups. The KIP does, however, include rack information so that consumers could preferentially fetch from share-partitions whose leader is in the same rack.
Relationship with consumer groups
...
Share group membership is controlled by the group coordinator. Consumers in a share group use the heartbeat mechanism to join, leave and confirm continued membership of the share group, using the new ShareGroupHeartbeat
RPC. Share-partition assignment is also piggybacked on the heartbeat mechanism. Share groups only support server-side assignors, which implement the new internal org.apache.kafka.coordinator.group.assignor.SharePartitionAssignor
interface.
This KIP introduces just one assignor, org.apache.kafka.coordinator.group.assignor.SimpleShareAssignor
, which assigns all partitions of all subscribed topics to all members. In the future, a more sophisticated share group assignor could balance the number of consumers assigned to the partitions, and it may well revoke partitions from existing members in order to improve the balance. The simple assignor isn’t that smart.
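As an illustration only (this is not the actual SharePartitionAssignor implementation, and all names here are hypothetical), the simple assignor's behaviour can be sketched as:

```java
import java.util.*;

// Illustrative sketch: every member receives every partition of every
// subscribed topic, with no attempt at balancing.
class SimpleAssignmentSketch {
    // topicPartitionCounts maps topic name -> number of partitions
    static Map<String, List<String>> assign(Set<String> memberIds,
                                            Map<String, Integer> topicPartitionCounts) {
        List<String> allPartitions = new ArrayList<>();
        for (Map.Entry<String, Integer> e : topicPartitionCounts.entrySet()) {
            for (int p = 0; p < e.getValue(); p++) {
                allPartitions.add(e.getKey() + "-" + p);
            }
        }
        Map<String, List<String>> assignment = new HashMap<>();
        for (String member : memberIds) {
            assignment.put(member, allPartitions); // the same full list for every member
        }
        return assignment;
    }
}
```

Because every member gets the full set of share-partitions, adding or removing a member never requires revoking partitions from anyone else.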
...
Share Group
Name | Type | Description |
---|---|---|
Group ID | string | The group ID as configured by the consumer. The ID uniquely identifies the group. |
Group Epoch | int32 | The current epoch of the group. The epoch is incremented by the group coordinator when a new assignment is required for the group. |
Server Assignor | string | The server-side assignor used by the group. |
Members | []Member | The set of members in the group. |
Partitions Metadata | []PartitionMetadata | The metadata of the partitions that the group is subscribed to. This is used to detect partition metadata changes. |

Member
Name | Type | Description |
---|---|---|
Member ID | string | The unique identifier of the member. It is generated by the coordinator upon the first heartbeat request and must be used throughout the lifetime of the member. |
Rack ID | string | The rack ID configured by the consumer. |
Client ID | string | The client ID configured by the consumer. |
Client Host | string | The client host of the consumer. |
Subscribed Topic Names | []string | The current set of subscribed topic names configured by the consumer. |
Target Assignment
The target assignment of the group. This represents the assignment that all the members of the group will eventually converge to. It is a declarative assignment which is generated by the assignor based on the group state.
...
Note that because the share groups are all consuming from the same log, the retention behavior for a topic applies to all of the share groups consuming from that topic.
Log compaction
When share groups are consuming from compacted topics, there is the possibility that in-flight records are cleaned while being consumed. In this case, the delivery flow for these records continues as normal because the disappearance of the cleaned records will only be discovered when they are next fetched from the log. This is analogous to a consumer group reading from a compacted topic - records which have been fetched by the consumer can continue to be processed, but if the consumer tried to fetch them again, it would discover they were no longer there.
When fetching records from a compacted topic, it is possible that record batches fetched have offset gaps which correspond to records the log cleaner removed. This simply results in gaps in the range of offsets of the in-flight records.
Reading transactional records
Each consumer in a consumer group has its own isolation level which controls how it handles records which were produced in transactions. For a share group, the concept of isolation level applies to the entire group, not each consumer.
The isolation level of a share group is controlled by the group configuration group.share.isolation.level
.
...
Operation | State changes | Cumulative state |
---|---|---|
Starting state of topic-partition with latest offset 100 | SPSO=100, SPEO=100 | SPSO=100, SPEO=100 |
In the batched case with successful processing, there’s a state change per batch to move the SPSO forwards | ||
Fetch records 100-109 | SPEO=110, records 100-109 (acquired, delivery count 1) | SPSO=100, SPEO=110, records 100-109 (acquired, delivery count 1) |
Acknowledge 100-109 | SPSO=110 | SPSO=110, SPEO=110 |
With a messier sequence of release and acknowledge, there’s a state change for each operation which can act on multiple records | ||
Fetch records 110-119 Consumer 1 gets 110-112, consumer 2 gets 113-118, consumer 3 gets 119 | SPEO=120, records 110-119 (acquired, delivery count 1) | SPSO=110, SPEO=120, records 110-119 (acquired, delivery count 1) |
Release 110 (consumer 1) | record 110 (available, delivery count 1) | SPSO=110, SPEO=120, record 110 (available, delivery count 1), records 111-119 (acquired, delivery count 1) |
Acknowledge 119 (consumer 3) | record 110 (available, delivery count 1), records 111-118 acquired, record 119 acknowledged | SPSO=110, SPEO=120, record 110 (available, delivery count 1), records 111-118 (acquired, delivery count 1), record 119 acknowledged |
Fetch records 110, 120 (consumer 1) | SPEO=121, record 110 (acquired, delivery count 2), record 120 (acquired, delivery count 1) | SPSO=110, SPEO=121, record 110 (acquired, delivery count 2), records 111-118 (acquired, delivery count 1), record 119 acknowledged, record 120 (acquired, delivery count 1) |
Lock timeout elapsed 111, 112 (consumer 1's records) | records 111-112 (available, delivery count 1) | SPSO=110, SPEO=121, record 110 (acquired, delivery count 2), records 111-112 (available, delivery count 1), records 113-118 (acquired, delivery count 1), record 119 acknowledged, record 120 (acquired, delivery count 1) |
Acknowledge 113-118 (consumer 2) | records 113-118 acknowledged | SPSO=110, SPEO=121, record 110 (acquired, delivery count 2), records 111-112 (available, delivery count 1), records 113-119 acknowledged, record 120 (acquired, delivery count 1) |
Fetch records 111,112 (consumer 3) | records 111-112 (acquired, delivery count 2) | SPSO=110, SPEO=121, records 110-112 (acquired, delivery count 2), records 113-119 acknowledged, record 120 (acquired, delivery count 1) |
Acknowledge 110 (consumer 1) | SPSO=111 | SPSO=111, SPEO=121, records 111-112 (acquired, delivery count 2), records 113-119 acknowledged, record 120 (acquired, delivery count 1) |
Acknowledge 111,112 (consumer 3) | SPSO=120 | SPSO=120, SPEO=121, record 120 (acquired, delivery count 1) |
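The sequence above can be replayed against a toy state machine. This is an illustrative model only, not the broker's implementation, and all names are hypothetical:

```java
import java.util.*;

// Minimal model of the state a share-partition leader tracks: the SPSO, the
// SPEO, and the state plus delivery count of each in-flight record.
class SharePartitionModel {
    enum State { AVAILABLE, ACQUIRED, ACKNOWLEDGED }

    long spso = 100, speo = 100;
    final Map<Long, State> inFlight = new TreeMap<>();
    final Map<Long, Integer> deliveryCount = new HashMap<>();

    // Fetching acquires the records and increments their delivery counts;
    // the SPEO moves past any newly fetched offsets.
    void fetch(long from, long to) {
        for (long o = from; o <= to; o++) {
            inFlight.put(o, State.ACQUIRED);
            deliveryCount.merge(o, 1, Integer::sum);
        }
        speo = Math.max(speo, to + 1);
    }

    // Release (explicitly, or by lock timeout) makes a record available again.
    void release(long offset) { inFlight.put(offset, State.AVAILABLE); }

    // Acknowledging records lets the SPSO advance over any leading run of
    // acknowledged records, which then leave the in-flight set.
    void acknowledge(long from, long to) {
        for (long o = from; o <= to; o++) inFlight.put(o, State.ACKNOWLEDGED);
        while (inFlight.get(spso) == State.ACKNOWLEDGED) {
            inFlight.remove(spso);
            deliveryCount.remove(spso);
            spso++;
        }
    }
}
```

Replaying the table's operations against this model ends with SPSO=120 and SPEO=121, with record 120 acquired on its first delivery attempt, matching the final row.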
...
Two new control record types are introduced: SHARE_CHECKPOINT (5) and SHARE_DELTA (6). They are written into separate message sets with the Control flag set. This flag indicates that the records are not intended for application consumption. Indeed, these message sets are not returned to any consumers at all since they are just intended for the share-partition leader.
In order to recover the share-partition state, the share-partition leader has to read a SHARE_CHECKPOINT and zero or more SHARE_DELTA records which chain backwards to the SHARE_CHECKPOINT with the same checkpoint epoch. By applying the records in order, from earliest to latest, the state can be rebuilt.
To avoid having to scan the topic in order to find these records, the share-partition leader keeps a share snapshot file which lets it locate the control records more efficiently. When a control record is written as a result of an operation such as a ShareAcknowledge
RPC, the control record must be written and fully replicated before the RPC response is sent.
SHARE_CHECKPOINT
A SHARE_CHECKPOINT record contains a complete checkpoint of the share-partition state. It contains:
...
Note that the Acquired state is not recorded because it’s transient. As a result, an Acquired record with a delivery count of 1 is recorded as Available with a delivery count of 0. In the unlikely event of a share-partition leader crash, memory of the in-flight delivery will be lost.
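A tiny sketch (hypothetical names, not broker code) of this rule: an acquired record on delivery attempt n is persisted as available with a delivery count of n-1, as if the current attempt had never happened.

```java
// Illustrative mapping from in-memory state to the state written into a
// SHARE_CHECKPOINT. The transient Acquired state is never persisted.
class CheckpointStateMapper {
    record Persisted(String state, int deliveryCount) {}

    static Persisted toPersisted(String inMemoryState, int deliveryCount) {
        if (inMemoryState.equals("Acquired")) {
            // The in-flight delivery may be lost if the leader crashes, so it
            // is recorded as if the attempt had not been made.
            return new Persisted("Available", deliveryCount - 1);
        }
        return new Persisted(inMemoryState, deliveryCount);
    }
}
```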
...
- The group ID
- The checkpoint epoch of the SHARE_CHECKPOINT it applies to
- The offset of the preceding control record with the same checkpoint epoch
- An array of [BaseOffset, LastOffset, State, DeliveryCount] tuples
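Recovery can be sketched as applying the checkpoint's tuples and then each delta's tuples in order, earliest to latest. The tuple layout follows the bullets above; all other types and names here are illustrative assumptions:

```java
import java.util.*;

// Sketch of rebuilding share-partition state when a broker becomes leader:
// start from the SHARE_CHECKPOINT, then apply each SHARE_DELTA with the
// same checkpoint epoch, in order from earliest to latest.
class RecoverySketch {
    // One [BaseOffset, LastOffset, State, DeliveryCount] tuple from a control record.
    record StateTuple(long baseOffset, long lastOffset, String state, int deliveryCount) {}

    static Map<Long, StateTuple> recover(List<StateTuple> checkpoint,
                                         List<List<StateTuple>> deltasInOrder) {
        Map<Long, StateTuple> state = new TreeMap<>();
        apply(state, checkpoint);
        for (List<StateTuple> delta : deltasInOrder) apply(state, delta);
        return state;
    }

    private static void apply(Map<Long, StateTuple> state, List<StateTuple> tuples) {
        for (StateTuple t : tuples) {
            for (long o = t.baseOffset(); o <= t.lastOffset(); o++) {
                state.put(o, t); // later control records overwrite earlier state
            }
        }
    }
}
```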
...
Here are the previous examples, showing the control records which record the cumulative state durably. Note that any SHARE_DELTA could be replaced with a SHARE_CHECKPOINT. This example omits the details about consumer instances.
Operation | State changes | Cumulative state | Control records |
---|---|---|---|
Starting state of topic-partition with latest offset 100 | SPSO=100, SPEO=100 | SPSO=100, SPEO=100 | |
In the batched case with successful processing, there’s a state change per batch to move the SPSO forwards | | | |
Fetch records 100-109 | SPEO=110, records 100-109 (acquired, delivery count 1) | SPSO=100, SPEO=110, records 100-109 (acquired, delivery count 1) | |
Acknowledge 100-109 | SPSO=110 | SPSO=110, SPEO=110 | |
With a messier sequence of release and acknowledge, there’s a state change for each operation which can act on multiple records | | | |
Fetch records 110-119 | SPEO=120, records 110-119 (acquired, delivery count 1) | SPSO=110, SPEO=120, records 110-119 (acquired, delivery count 1) | |
Release 110 | record 110 (available, delivery count 1) | SPSO=110, SPEO=120, record 110 (available, delivery count 1), records 111-119 (acquired, delivery count 1) | Note that the SPEO in the control records is 111 at this point. All records after this are in their first delivery attempt so this is an acceptable situation. |
Acknowledge 119 | record 110 (available, delivery count 1), records 111-118 acquired, record 119 acknowledged | SPSO=110, SPEO=120, record 110 (available, delivery count 1), records 111-118 (acquired, delivery count 1), record 119 acknowledged | |
Fetch records 110, 120 | SPEO=121, record 110 (acquired, delivery count 2), record 120 (acquired, delivery count 1) | SPSO=110, SPEO=121, record 110 (acquired, delivery count 2), records 111-118 (acquired, delivery count 1), record 119 acknowledged, record 120 (acquired, delivery count 1) | |
Lock timeout elapsed 111, 112 | records 111-112 (available, delivery count 1) | SPSO=110, SPEO=121, record 110 (acquired, delivery count 2), records 111-112 (available, delivery count 1), records 113-118 (acquired, delivery count 1), record 119 acknowledged, record 120 (acquired, delivery count 1) | |
Acknowledge 113-118 | records 113-118 acknowledged | SPSO=110, SPEO=121, record 110 (acquired, delivery count 2), records 111-112 (available, delivery count 1), records 113-119 acknowledged, record 120 (acquired, delivery count 1) | |
Fetch records 111, 112 | records 111-112 (acquired, delivery count 2) | SPSO=110, SPEO=121, records 110-112 (acquired, delivery count 2), records 113-119 acknowledged, record 120 (acquired, delivery count 1) | |
Acknowledge 110 | SPSO=111 | SPSO=111, SPEO=121, records 111-112 (acquired, delivery count 2), records 113-119 acknowledged, record 120 (acquired, delivery count 1) | |
Acknowledge 111, 112 | SPSO=120 | SPSO=120, SPEO=121, record 120 (acquired, delivery count 1) | Alternatively, a new checkpoint could be taken at this point. Note that the delivery of 120 has not been recorded yet because it is the first delivery attempt and it is safe to recover the SPEO back to offset 120 and repeat the attempt. |
Recovering share-partition state and interactions with log cleaning
A share-partition is a topic-partition with a subscription in a share group. The share-partition is essentially a view of the topic-partition, managed by the share-partition leader, with durable state stored on the topic-partition in SHARE_CHECKPOINT and SHARE_DELTA control records.
In order to recreate the share-partition state when a broker becomes the leader of a share-partition, it must read the most recent SHARE_CHECKPOINT and any subsequent SHARE_DELTA control records, which will all have the same checkpoint epoch. In order to minimise the amount of log scanning required, it’s important to write SHARE_CHECKPOINT records frequently, and also to have an efficient way of finding the most recent SHARE_CHECKPOINT record.
For each share-partition, the offset of the most recent SHARE_CHECKPOINT record is called the Share Checkpoint Offset (SCO). The Earliest Share Offset (ESO) is the earliest of the share checkpoint offsets across all of the share groups subscribed to the topic.
- The log cleaner can clean all SHARE_CHECKPOINT and SHARE_DELTA records before the SCO.
- The log cleaner must not clean SHARE_CHECKPOINT and SHARE_DELTA records after the SCO.
In practice, the ESO is used as the cut-off point for cleaning of these control records.
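A minimal sketch (hypothetical names) of how the ESO for a topic-partition could be derived from the per-group share checkpoint offsets:

```java
import java.util.*;

// Illustrative: the Earliest Share Offset is the minimum SCO across all
// share groups subscribed to the topic, and acts as the cut-off below
// which SHARE_CHECKPOINT and SHARE_DELTA control records may be cleaned.
class EarliestShareOffset {
    static OptionalLong eso(Map<String, Long> shareCheckpointOffsetsByGroup) {
        return shareCheckpointOffsetsByGroup.values().stream()
                .mapToLong(Long::longValue)
                .min();
    }
}
```

For example, if group g1 has checkpointed at offset 500 and group g2 at offset 350, the control records below 350 are eligible for cleaning.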
Administration
Several components work together to create share groups. The group coordinator is responsible for assignment, membership and the state of the group. The share-partition leaders are responsible for delivery and acknowledgement. The following table summarises the administration operations and how they work.
Operation | Supported by | Notes |
---|---|---|
Create share group | Group coordinator | This occurs as a side-effect of a ShareGroupHeartbeat. The group coordinator writes a record to the consumer offsets topic to persist the group's existence. |
List share groups | Group coordinator | |
List share group offsets | Group coordinator and share-partition leaders | |
Describe share group | Group coordinator | |
Alter share group offsets | Share-partition leaders | The share-partition leader makes a durable share-partition state update for each share-partition affected. |
Delete share group offsets | Share-partition leaders | The share-partition leader makes a durable share-partition state update for each share-partition affected. |
Delete share group | Group coordinator working with share-partition leaders | Only empty share groups can be deleted. However, the share-partition leaders need to delete share group offsets, and then delete the share group. It is not an atomic operation. The share-partition leader makes a durable share-partition state update for each share-partition affected. The group coordinator writes a tombstone record to the consumer offsets topic to persist the group deletion. |
Public Interfaces
This KIP introduces extensive additions to the public interfaces.
Client API changes
KafkaShareConsumer
This KIP introduces a new interface for consuming records from a share group called org.apache.kafka.clients.consumer.ShareConsumer
with an implementation called org.apache.kafka.clients.consumer.KafkaShareConsumer
. The interface stability is Evolving
.
Code Block:
@InterfaceStability.Evolving
public interface ShareConsumer<K, V> {
/**
* Get the current subscription. Will return the same topics used in the most recent call to
* {@link #subscribe(Collection)}, or an empty set if no such call has been made.
*
* @return The set of topics currently subscribed to
*/
Set<String> subscription();
/**
* Subscribe to the given list of topics to get dynamically assigned partitions.
* <b>Topic subscriptions are not incremental. This list will replace the current
* assignment, if there is one.</b> If the given list of topics is empty, it is treated the same as {@link #unsubscribe()}.
*
* <p>
* As part of group management, the coordinator will keep track of the list of consumers that belong to a particular
* group and will trigger a rebalance operation if any one of the following events are triggered:
* <ul>
* <li>A member joins or leaves the share group
* <li>An existing member of the share group is shut down or fails
* <li>The number of partitions changes for any of the subscribed topics
* <li>A subscribed topic is created or deleted
* </ul>
*
* @param topics The list of topics to subscribe to
*
* @throws IllegalArgumentException If topics is null or contains null or empty elements
* @throws KafkaException for any other unrecoverable errors
*/
void subscribe(Collection<String> topics);
/**
* Unsubscribe from topics currently subscribed with {@link #subscribe(Collection)}.
*
* @throws KafkaException for any other unrecoverable errors
*/
void unsubscribe();
/**
* Fetch data for the topics specified using {@link #subscribe(Collection)}. It is an error to not have
* subscribed to any topics before polling for data.
*
* <p>
* This method returns immediately if there are records available. Otherwise, it will await the passed timeout.
* If the timeout expires, an empty record set will be returned.
*
* @param timeout The maximum time to block (must not be greater than {@link Long#MAX_VALUE} milliseconds)
*
* @return map of topic to records since the last fetch for the subscribed list of topics
*
* @throws AuthenticationException if authentication fails. See the exception for more details
* @throws AuthorizationException if caller lacks Read access to any of the subscribed
* topics or to the configured groupId. See the exception for more details
* @throws InterruptException if the calling thread is interrupted before or while this method is called
* @throws InvalidTopicException if the current subscription contains any invalid
* topic (per {@link org.apache.kafka.common.internals.Topic#validate(String)})
* @throws WakeupException if {@link #wakeup()} is called before or while this method is called
* @throws KafkaException for any other unrecoverable errors (e.g. invalid groupId or
* session timeout, errors deserializing key/value pairs,
* or any new error cases in future versions)
* @throws IllegalArgumentException if the timeout value is negative
* @throws IllegalStateException if the consumer is not subscribed to any topics
* @throws ArithmeticException if the timeout is greater than {@link Long#MAX_VALUE} milliseconds.
*/
ConsumerRecords<K, V> poll(Duration timeout);
/**
* Acknowledge successful delivery of a record returned on the last {@link #poll(Duration)} call.
* The acknowledgement is committed on the next {@link #commitSync()}, {@link #commitAsync()} or
* {@link #poll(Duration)} call.
*
* <p>
* Records for each topic-partition must be acknowledged in the order they were returned from
* {@link #poll(Duration)}. By using this method, the consumer is using
* <b>explicit acknowledgement</b>.
*
* @param record The record to acknowledge
*
* @throws IllegalArgumentException if the record being acknowledged doesn't meet the ordering requirement
* @throws IllegalStateException if the record is not waiting to be acknowledged, or the consumer has already
* used implicit acknowledgement
*/
void acknowledge(ConsumerRecord<K, V> record);
/**
* Acknowledge delivery of a record returned on the last {@link #poll(Duration)} call indicating whether
* it was processed successfully. The acknowledgement is committed on the next {@link #commitSync()},
* {@link #commitAsync()} or {@link #poll(Duration)} call. By using this method, the consumer is using
* <b>explicit acknowledgement</b>.
*
* <p>
* Records for each topic-partition must be acknowledged in the order they were returned from
* {@link #poll(Duration)}.
*
* @param record The record to acknowledge
* @param type The acknowledge type which indicates whether it was processed successfully
*
* @throws IllegalArgumentException if the record being acknowledged doesn't meet the ordering requirement
* @throws IllegalStateException if the record is not waiting to be acknowledged, or the consumer has already
* used implicit acknowledgement
*/
void acknowledge(ConsumerRecord<K, V> record, AcknowledgeType type);
/**
* Commit the acknowledgements for the records returned. If the consumer is using explicit acknowledgement,
* the acknowledgements to commit have been indicated using {@link #acknowledge(ConsumerRecord)} or
* {@link #acknowledge(ConsumerRecord, AcknowledgeType)}. If the consumer is using implicit acknowledgement,
* all the records returned by the latest call to {@link #poll(Duration)} are acknowledged.
* <p>
* This is a synchronous commit and will block until either the commit succeeds, an unrecoverable error is
* encountered (in which case it is thrown to the caller), or the timeout expires.
*
* @return A map of the results for each topic-partition for which delivery was acknowledged.
* If the acknowledgement failed for a topic-partition, an exception is present.
*
* @throws InterruptException If the thread is interrupted while blocked.
* @throws KafkaException for any other unrecoverable errors
*/
Map<TopicIdPartition, Optional<KafkaException>> commitSync();
/**
* Commit the acknowledgements for the records returned. If the consumer is using explicit acknowledgement,
* the acknowledgements to commit have been indicated using {@link #acknowledge(ConsumerRecord)} or
* {@link #acknowledge(ConsumerRecord, AcknowledgeType)}. If the consumer is using implicit acknowledgement,
* all the records returned by the latest call to {@link #poll(Duration)} are acknowledged.
* <p>
* This is a synchronous commit and will block until either the commit succeeds, an unrecoverable error is
* encountered (in which case it is thrown to the caller), or the timeout expires.
*
* @param timeout The maximum amount of time to await completion of the acknowledgement
*
* @return A map of the results for each topic-partition for which delivery was acknowledged.
* If the acknowledgement failed for a topic-partition, an exception is present.
*
* @throws IllegalArgumentException If the {@code timeout} is negative.
* @throws InterruptException If the thread is interrupted while blocked.
* @throws KafkaException for any other unrecoverable errors
*/
Map<TopicIdPartition, Optional<KafkaException>> commitSync(Duration timeout);
/**
* Commit the acknowledgements for the records returned. If the consumer is using explicit acknowledgement,
* the acknowledgements to commit have been indicated using {@link #acknowledge(ConsumerRecord)} or
* {@link #acknowledge(ConsumerRecord, AcknowledgeType)}. If the consumer is using implicit acknowledgement,
* all the records returned by the latest call to {@link #poll(Duration)} are acknowledged.
*
* @throws KafkaException for any other unrecoverable errors
*/
void commitAsync();
/**
* Sets the acknowledge commit callback which can be used to handle acknowledgement completion.
*
* @param callback The acknowledge commit callback
*/
void setAcknowledgeCommitCallback(AcknowledgeCommitCallback callback);
/**
* Determines the client's unique client instance ID used for telemetry. This ID is unique to
* this specific client instance and will not change after it is initially generated.
* The ID is useful for correlating client operations with telemetry sent to the broker and
* to its eventual monitoring destinations.
* <p>
* If telemetry is enabled, this will first require a connection to the cluster to generate
* the unique client instance ID. This method waits up to {@code timeout} for the consumer
* client to complete the request.
* <p>
* Client telemetry is controlled by the {@link ConsumerConfig#ENABLE_METRICS_PUSH_CONFIG}
* configuration option.
*
* @param timeout The maximum time to wait for consumer client to determine its client instance ID.
* The value must be non-negative. Specifying a timeout of zero means do not
* wait for the initial request to complete if it hasn't already.
*
* @return The client's assigned instance id used for metrics collection.
*
* @throws IllegalArgumentException If the {@code timeout} is negative.
* @throws IllegalStateException If telemetry is not enabled because config `{@code enable.metrics.push}`
* is set to `{@code false}`.
* @throws InterruptException If the thread is interrupted while blocked.
* @throws KafkaException If an unexpected error occurs while trying to determine the client
* instance ID, though this error does not necessarily imply the
* consumer client is otherwise unusable.
*/
Uuid clientInstanceId(Duration timeout);
/**
* Get the metrics kept by the consumer
*/
Map<MetricName, ? extends Metric> metrics();
/**
* Close the consumer, waiting for up to the default timeout of 30 seconds for any needed cleanup.
* This will commit acknowledgements if possible within the default timeout.
* See {@link #close(Duration)} for details. Note that {@link #wakeup()} cannot be used to interrupt close.
*
* @throws InterruptException If the thread is interrupted before or while this method is called
* @throws KafkaException for any other error during close
*/
void close();
/**
* Tries to close the consumer cleanly within the specified timeout. This method waits up to
* {@code timeout} for the consumer to complete acknowledgements and leave the group.
* If the consumer is unable to complete acknowledgements and gracefully leave the group
* before the timeout expires, the consumer is force closed. Note that {@link #wakeup()} cannot be
* used to interrupt close.
*
* @param timeout The maximum time to wait for consumer to close gracefully. The value must be
* non-negative. Specifying a timeout of zero means do not wait for pending requests to complete.
*
* @throws IllegalArgumentException If the {@code timeout} is negative.
* @throws InterruptException If the thread is interrupted before or while this method is called
* @throws KafkaException for any other error during close
*/
void close(Duration timeout);
/**
* Wake up the consumer. This method is thread-safe and is useful in particular to abort a long poll.
* The thread which is blocking in an operation will throw {@link WakeupException}.
* If no thread is blocking in a method which can throw {@link WakeupException},
* the next call to such a method will raise it instead.
*/
void wakeup();
}
The following constructors are provided for KafkaShareConsumer
.
Method signature | Description |
---|---|
KafkaShareConsumer(Map<String, Object> configs) | Constructor |
KafkaShareConsumer(Properties properties) | Constructor |
KafkaShareConsumer(Map<String, Object> configs, | Constructor |
KafkaShareConsumer(Properties properties, | Constructor |
AcknowledgeCommitCallback
The new org.apache.kafka.clients.consumer.AcknowledgeCommitCallback
can be implemented by the user to execute when acknowledgement completes. It is called on the application thread and is not permitted to call the methods of KafkaShareConsumer
with the exception of KafkaShareConsumer.wakeup()
.
Method signature | Description |
---|---|
void onComplete(Map<TopicIdPartition, Set<OffsetAndMetadata>> offsets, Exception exception) | A callback method the user can implement to provide asynchronous handling of request completion. Parameters: offsets - A map of the offsets that this callback applies to. exception - The exception thrown during processing of the request, or null if the acknowledgement completed successfully. Exceptions: WakeupException - if KafkaShareConsumer.wakeup() is called. InterruptException - if the calling thread is interrupted. AuthorizationException - if not authorized to the topic or group. KafkaException - for any other unrecoverable errors. |
ConsumerRecord
Add the following method on the org.apache.kafka.clients.consumer.ConsumerRecord
class.
Method signature | Description |
---|---|
Optional<Short> deliveryCount() | Get the delivery count for the record if available. |
The delivery count is available for records delivered using a share group and Optional.empty()
otherwise.
A new constructor is also added:
Code Block:
/**
* Creates a record to be received from a specified topic and partition
*
* @param topic The topic this record is received from
* @param partition The partition of the topic this record is received from
* @param offset The offset of this record in the corresponding Kafka partition
* @param timestamp The timestamp of the record.
* @param timestampType The timestamp type
* @param serializedKeySize The length of the serialized key
* @param serializedValueSize The length of the serialized value
* @param key The key of the record, if one exists (null is allowed)
* @param value The record contents
* @param headers The headers of the record
* @param leaderEpoch Optional leader epoch of the record (may be empty for legacy record formats)
* @param deliveryCount Optional delivery count of the record (may be empty when deliveries not counted)
*/
public ConsumerRecord(String topic,
int partition,
long offset,
long timestamp,
TimestampType timestampType,
int serializedKeySize,
int serializedValueSize,
K key,
V value,
Headers headers,
Optional<Integer> leaderEpoch,
Optional<Short> deliveryCount)
...
AcknowledgeType
The new org.apache.kafka.clients.consumer.AcknowledgeType
enum distinguishes between the types of acknowledgement for a record consumed using a share group.
...
Code Block:

package org.apache.kafka.clients.admin;

import org.apache.kafka.common.GroupType;

/**
 * Options for {@link Admin#listGroups(ListGroupsOptions)}.
 *
 * The API of this class is evolving, see {@link Admin} for details.
 */
@InterfaceStability.Evolving
public class ListGroupsOptions extends AbstractOptions<ListGroupsOptions> {
    /**
     * If types is set, only groups of these types will be returned. Otherwise, all groups are returned.
     */
    public ListGroupsOptions types(Set<GroupType> types);

    /**
     * Return the list of types that are requested or empty if no types have been specified.
     */
    public Set<GroupType> types();
}
...
Option | Description |
---|---|
--all-topics | Consider all topics assigned to a group in the `reset-offsets` process. |
--bootstrap-server <String: server to connect to> | REQUIRED: The server(s) to connect to. |
--command-config <String: command config property file> | Property file containing configs to be passed to Admin Client. |
--delete | Pass in groups to delete topic partition offsets over the entire share group. For instance --group g1 --group g2 |
--delete-offsets | Delete offsets of share group. Supports one share group at a time, and multiple topics. |
--describe | Describe share group and list offset lag (number of records not yet processed) related to given group. |
--dry-run | Only show results without executing changes on share groups. Supported operations: reset-offsets. |
--execute | Execute operation. Supported operations: reset-offsets. |
--group <String: share group> | The share group we wish to act on. |
--help | Print usage information. |
--list | List all share groups. |
--members | Describe members of the group. This option may be used with the '--describe' option only. |
--offsets | Describe the group and list all topic partitions in the group along with their offset lag. This is the default sub-action and may be used with the '--describe' option only. |
--reset-offsets | Reset offsets of share group. Supports one share group at a time, and instances must be inactive. |
--state [String] | When specified with '--describe', includes the state of the group. When specified with '--list', it displays the state of all groups. It can also be used to list groups with specific states. |
--timeout <Long: timeout (ms)> | The timeout that can be set for some use cases. For example, it can be used when describing the group to specify the maximum amount of time in milliseconds to wait before the group stabilizes (when the group is just created, or is going through some changes). (default: 5000) |
--to-datetime <String: datetime> | Reset offsets to offset from datetime. Format: 'YYYY-MM-DDTHH:mm:SS.sss'. |
--to-earliest | Reset offsets to earliest offset. |
--to-latest | Reset offsets to latest offset. |
--topic <String: topic> | The topic whose share group information should be deleted or topic which should be included in the reset offset process. |
--version | Display Kafka version. |
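For illustration, assuming the tool is exposed as bin/kafka-share-groups.sh (the option set above), typical invocations might look like this:

```shell
# List all share groups
bin/kafka-share-groups.sh --bootstrap-server localhost:9092 --list

# Describe a share group, including per-partition offset lag
bin/kafka-share-groups.sh --bootstrap-server localhost:9092 --describe --group mygroup --offsets

# Dry-run a reset of the share-partition start offset to the latest offset
bin/kafka-share-groups.sh --bootstrap-server localhost:9092 --reset-offsets \
  --group mygroup --topic mytopic --to-latest --dry-run
```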
...
Configuration | Description | Values |
---|---|---|
group.share.enable | Whether to enable share groups on the broker. | Default false while the feature is being developed. Will become true in a future release. |
group.share.delivery.count.limit | The maximum number of delivery attempts for a record delivered to a share group. | Default 5, minimum 2, maximum 10 |
group.share.record.lock.duration.ms | Share-group record acquisition lock duration in milliseconds. | Default 30000 (30 seconds), minimum 1000 (1 second), maximum 60000 (60 seconds) |
group.share.record.lock.duration.max.ms | Share-group record acquisition lock maximum duration in milliseconds. | Default 60000 (60 seconds), minimum 1000 (1 second), maximum 3600000 (1 hour) |
group.share.record.lock.partition.limit | Share-group record lock limit per share-partition. | Default 200, minimum 100, maximum 10000 |
group.share.session.timeout.ms | The timeout to detect client failures when using the group protocol. | Default 45000 (45 seconds) |
group.share.min.session.timeout.ms | The minimum session timeout. | Default 45000 (45 seconds) |
group.share.max.session.timeout.ms | The maximum session timeout. | Default 60000 (60 seconds) |
group.share.heartbeat.interval.ms | The heartbeat interval given to the members. | Default 5000 (5 seconds) |
group.share.min.heartbeat.interval.ms | The minimum heartbeat interval. | Default 5000 (5 seconds) |
group.share.max.heartbeat.interval.ms | The maximum heartbeat interval. | Default 15000 (15 seconds) |
group.share.max.groups | The maximum number of share groups. | Default 10, minimum 1, maximum 100 |
group.share.max.size | The maximum number of consumers that a single share group can accommodate. | Default 200, minimum 10, maximum 1000 |
group.share.assignors | The server-side assignors as a list of full class names. In the initial delivery, only the first one in the list is used. | A list of class names. Default "org.apache.kafka.server.group.share.SimpleAssignor" |
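For example, enabling the feature and tightening the delivery behavior on a broker might look like the following fragment of server.properties; the specific values here are illustrative choices within the ranges given above.

```properties
# Enable share groups on this broker (default false while the feature is in development)
group.share.enable=true

# Allow up to 3 delivery attempts per record (default 5)
group.share.delivery.count.limit=3

# Release acquired records back for redelivery after 15 seconds if unacknowledged
group.share.record.lock.duration.ms=15000
```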
...
ShareGroupHeartbeat - for consumers to form and maintain share groups
ShareGroupDescribe - for describing share groups
ShareFetch - for fetching records from share-partition leaders
ShareAcknowledge - for acknowledging delivery of records with share-partition leaders
AlterShareGroupOffsets - for altering the share-partition start offsets for the share-partitions in a share group
DeleteShareGroupOffsets - for deleting the offsets for the share-partitions in a share group
DescribeShareGroupOffsets - for describing the offsets for the share-partitions in a share group
Error codes
This KIP adds the following error codes to the Kafka protocol.
INVALID_RECORD_STATE
- The record state is invalid. The acknowledgement of delivery could not be completed.
ShareGroupHeartbeat API
The ShareGroupHeartbeat API is used by share group consumers to form a group. The API allows members to advertise their subscriptions and their state. The group coordinator uses it to assign partitions to and revoke partitions from members. This API is also used as a liveness check.
...
Code Block |
---|
{ "apiKey": TBD, "type": "response", "name": "ShareGroupHeartbeatResponse", "validVersions": "0", "flexibleVersions": "0+", // Supported errors: // - GROUP_AUTHORIZATION_FAILED (version 0+) // - NOT_COORDINATOR (version 0+) // - COORDINATOR_NOT_AVAILABLE (version 0+) // - COORDINATOR_LOAD_IN_PROGRESS (version 0+) // - INVALID_REQUEST (version 0+) // - UNKNOWN_MEMBER_ID (version 0+) // - GROUP_MAX_SIZE_REACHED (version 0+) "fields": [ { "name": "ThrottleTimeMs", "type": "int32", "versions": "0+", "about": "The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota." }, { "name": "ErrorCode", "type": "int16", "versions": "0+", "about": "The top-level error code, or 0 if there was no error" }, { "name": "ErrorMessage", "type": "string", "versions": "0+", "nullableVersions": "0+", "default": "null", "about": "The top-level error message, or null if there was no error." }, { "name": "MemberId", "type": "string", "versions": "0+", "nullableVersions": "0+", "default": "null", "about": "The member ID generated by the coordinator. Only provided when the member joins with MemberEpoch == 0." }, { "name": "MemberEpoch", "type": "int32", "versions": "0+", "about": "The member epoch." }, { "name": "HeartbeatIntervalMs", "type": "int32", "versions": "0+", "about": "The heartbeat interval in milliseconds." }, { "name": "Assignment", "type": "Assignment", "versions": "0+", "nullableVersions": "0+", "default": "null", "about": "null if not provided; the assignment otherwise.", "fields": [ { "name": "Error", "type": "int8", "versions": "0+", "about": "The assigned error." }, { "name": "AssignedTopicPartitions", "type": "[]TopicPartitions", "versions": "0+", "about": "The partitions assigned to the member." } ]} ], "commonStructs": [ { "name": "TopicPartitions", "versions": "0+", "fields": [ { "name": "TopicId", "type": "uuid", "versions": "0+", "about": "The topic ID." 
}, { "name": "Partitions", "type": "[]int32", "versions": "0+", "about": "The partitions." } ]} ] } |
...
Code Block |
---|
{ "apiKey": NN, "type": "request", "listeners": ["broker"], "name": "ShareFetchRequest", "validVersions": "0", "flexibleVersions": "0+", "fields": [ { "name": "GroupId", "type": "string", "versions": "0+", "nullableVersions": "0+", "default": "null", "entityType": "groupId", "about": "null if not provided or if it didn't change since the last fetch; the group identifier otherwise." }, { "name": "MemberId", "type": "string", "versions": "0+", "nullableVersions": "0+", "about": "The member ID." }, { "name": "ShareSessionEpoch", "type": "int32", "versions": "0+", "about": "The current share session epoch: 0 to open a share session; -1 to close it; otherwise increments for consecutive requests." }, { "name": "MaxWaitMs", "type": "int32", "versions": "0+", "about": "The maximum time in milliseconds to wait for the response." }, { "name": "MinBytes", "type": "int32", "versions": "0+", "about": "The minimum bytes to accumulate in the response." }, { "name": "MaxBytes", "type": "int32", "versions": "0+", "default": "0x7fffffff", "ignorable": true, "about": "The maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored." }, { "name": "Topics", "type": "[]FetchTopic", "versions": "0+", "about": "The topics to fetch.", "fields": [ { "name": "TopicId", "type": "uuid", "versions": "0+", "ignorable": true, "about": "The unique topic ID."}, { "name": "Partitions", "type": "[]FetchPartition", "versions": "0+", "about": "The partitions to fetch.", "fields": [ { "name": "PartitionIndex", "type": "int32", "versions": "0+", "about": "The partition index." }, { "name": "CurrentLeaderEpoch", "type": "int32", "versions": "0+", "default": "-1", "ignorable": true, "about": "The current leader epoch of the partition." }, { "name": "PartitionMaxBytes", "type": "int32", "versions": "0+", "about": "The maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored."
}, { "name": "AcknowledgementBatches", "type": "[]AcknowledgementBatch", "versions": "0+", "about": "Record batches to acknowledge.", "fields": [ { "name": "StartOffset", "type": "int64", "versions": "0+", "about": "Start offset of batch of records to acknowledge."}, { "name": "LastOffset", "type": "int64", "versions": "0+", "about": "Last offset (inclusive) of batch of records to acknowledge."}, { "name": "GapOffsets", "type": "[]int64", "versions": "0+", "about": "Array of offsets in this range which do not correspond to records."}, { "name": "AcknowledgeType", "type": "int8", "versions": "0+", "default": "0", "about": "The type of acknowledgement - 0:Accept,1:Release,2:Reject."} ]} ]} ]}, { "name": "ForgottenTopicsData", "type": "[]ForgottenTopic", "versions": "0+", "ignorable": false, "about": "The partitions to remove from this share session.", "fields": [ { "name": "TopicId", "type": "uuid", "versions": "0+", "ignorable": true, "about": "The unique topic ID."}, { "name": "Partitions", "type": "[]int32", "versions": "0+", "about": "The partition indexes to forget." } ]} ] } |
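The ShareSessionEpoch field follows the convention described in its schema: 0 opens a share session, -1 closes it, and otherwise the epoch increments by one for each consecutive request. A minimal sketch of that client-side bookkeeping (the overflow handling is an assumption, modelled on how KIP-227 fetch sessions avoid reusing the reserved values):

```java
public class ShareSessionEpochExample {
    static final int INITIAL_EPOCH = 0;  // sent on the request that opens a new share session
    static final int FINAL_EPOCH = -1;   // sent on the request that closes the share session

    // Epoch to send on the request following one sent with the given epoch.
    static int nextEpoch(int prevEpoch) {
        if (prevEpoch == FINAL_EPOCH) {
            return INITIAL_EPOCH; // after closing, the next request opens a new session
        }
        // Increment, wrapping back to 1 rather than reusing the reserved values 0 and -1.
        return prevEpoch < Integer.MAX_VALUE ? prevEpoch + 1 : 1;
    }

    public static void main(String[] args) {
        System.out.println("after open (0): " + nextEpoch(INITIAL_EPOCH));
        System.out.println("after close (-1): " + nextEpoch(FINAL_EPOCH));
    }
}
```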
Response schema
Code Block |
---|
{ "apiKey": NN, "type": "response", "name": "ShareFetchResponse", "validVersions": "0", "flexibleVersions": "0+", // Supported errors: // - GROUP_AUTHORIZATION_FAILED (version 0+) // - TOPIC_AUTHORIZATION_FAILED (version 0+) // - UNKNOWN_TOPIC_OR_PARTITION (version 0+) // - NOT_LEADER_OR_FOLLOWER (version 0+) // - UNKNOWN_TOPIC_ID (version 0+) // - INVALID_RECORD_STATE (version 0+) // - KAFKA_STORAGE_ERROR (version 0+) // - CORRUPT_MESSAGE (version 0+) // - INVALID_REQUEST (version 0+) // - UNKNOWN_SERVER_ERROR (version 0+) "fields": [ { "name": "ThrottleTimeMs", "type": "int32", "versions": "0+", "ignorable": true, "about": "The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota." }, { "name": "ErrorCode", "type": "int16", "versions": "0+", "ignorable": true, "about": "The top level response error code." }, { "name": "Responses", "type": "[]ShareFetchableTopicResponse", "versions": "0+", "about": "The response topics.", "fields": [ { "name": "TopicId", "type": "uuid", "versions": "0+", "ignorable": true, "about": "The unique topic ID."}, { "name": "Partitions", "type": "[]PartitionData", "versions": "0+", "about": "The topic partitions.", "fields": [ { "name": "PartitionIndex", "type": "int32", "versions": "0+", "about": "The partition index." }, { "name": "ErrorCode", "type": "int16", "versions": "0+", "about": "The error code, or 0 if there was no fetch error." }, { "name": "AcknowledgeErrorCode", "type": "int16", "versions": "0+", "about": "The acknowledge error code, or 0 if there was no acknowledge error." }, { "name": "CurrentLeader", "type": "LeaderIdAndEpoch", "versions": "0+", "taggedVersions": "0+", "tag": 0, "fields": [ { "name": "LeaderId", "type": "int32", "versions": "0+", "about": "The ID of the current leader or -1 if the leader is unknown." }, { "name": "LeaderEpoch", "type": "int32", "versions": "0+", "about": "The latest known leader epoch." 
} ]}, { "name": "Records", "type": "records", "versions": "0+", "nullableVersions": "0+", "about": "The record data."}, { "name": "AcquiredRecords", "type": "[]AcquiredRecords", "versions": "0+", "about": "The acquired records.", "fields": [ {"name": "BaseOffset", "type": "int64", "versions": "0+", "about": "The earliest offset in this batch of acquired records."}, {"name": "LastOffset", "type": "int64", "versions": "0+", "about": "The last offset of this batch of acquired records."}, {"name": "DeliveryCount", "type": "int16", "versions": "0+", "about": "The delivery count of this batch of acquired records."} ]} ]} ]}, { "name": "NodeEndpoints", "type": "[]NodeEndpoint", "versions": "0+", "taggedVersions": "0+", "tag": 0, "about": "Endpoints for all current leaders enumerated in PartitionData with error NOT_LEADER_OR_FOLLOWER.", "fields": [ { "name": "NodeId", "type": "int32", "versions": "0+", "mapKey": true, "entityType": "brokerId", "about": "The ID of the associated node." }, { "name": "Host", "type": "string", "versions": "0+", "about": "The node's hostname." }, { "name": "Port", "type": "int32", "versions": "0+", "about": "The node's port." }, { "name": "Rack", "type": "string", "versions": "0+", "nullableVersions": "0+", "default": "null", "about": "The rack of the node, or null if it has not been assigned to a rack." } ]} ] } |
...
Code Block |
---|
{ "apiKey": NN, "type": "request", "listeners": ["broker"], "name": "ShareAcknowledgeRequest", "validVersions": "0", "flexibleVersions": "0+", "fields": [ { "name": "MemberId", "type": "string", "versions": "0+", "nullableVersions": "0+", "about": "The member ID." }, { "name": "ShareSessionEpoch", "type": "int32", "versions": "0+", "about": "The current share session epoch: 0 to open a share session; -1 to close it; otherwise increments for consecutive requests." }, { "name": "Topics", "type": "[]AcknowledgeTopic", "versions": "0+", "about": "The topics containing records to acknowledge.", "fields": [ { "name": "TopicId", "type": "uuid", "versions": "0+", "about": "The unique topic ID."}, { "name": "Partitions", "type": "[]AcknowledgePartition", "versions": "0+", "about": "The partitions containing records to acknowledge.", "fields": [ { "name": "PartitionIndex", "type": "int32", "versions": "0+", "about": "The partition index." }, { "name": "AcknowledgementBatches", "type": "[]AcknowledgementBatch", "versions": "0+", "about": "Record batches to acknowledge.", "fields": [ { "name": "StartOffset", "type": "int64", "versions": "0+", "about": "Start offset of batch of records to acknowledge."}, { "name": "LastOffset", "type": "int64", "versions": "0+", "about": "Last offset (inclusive) of batch of records to acknowledge."}, { "name": "GapOffsets", "type": "[]int64", "versions": "0+", "about": "Array of offsets in this range which do not correspond to records."}, { "name": "AcknowledgeType", "type": "int8", "versions": "0+", "default": "0", "about": "The type of acknowledgement - 0:Accept,1:Release,2:Reject."} ]} ]} ]} ] } |
...
Code Block |
---|
{ "apiKey": NN, "type": "response", "name": "ShareAcknowledgeResponse", "validVersions": "0", "flexibleVersions": "0+", // Supported errors: // - GROUP_AUTHORIZATION_FAILED (version 0+) // - TOPIC_AUTHORIZATION_FAILED (version 0+) // - UNKNOWN_TOPIC_OR_PARTITION (version 0+) // - NOT_LEADER_OR_FOLLOWER (version 0+) // - UNKNOWN_TOPIC_ID (version 0+) // - INVALID_RECORD_STATE (version 0+) // - KAFKA_STORAGE_ERROR (version 0+) // - INVALID_REQUEST (version 0+) // - UNKNOWN_SERVER_ERROR (version 0+) "fields": [ { "name": "ThrottleTimeMs", "type": "int32", "versions": "0+", "ignorable": true, "about": "The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota." }, { "name": "ErrorCode", "type": "int16", "versions": "0+", "ignorable": true, "about": "The top level response error code." }, { "name": "Responses", "type": "[]ShareAcknowledgeTopicResponse", "versions": "0+", "about": "The response topics.", "fields": [ { "name": "TopicId", "type": "uuid", "versions": "0+", "ignorable": true, "about": "The unique topic ID."}, { "name": "Partitions", "type": "[]PartitionData", "versions": "0+", "about": "The topic partitions.", "fields": [ { "name": "PartitionIndex", "type": "int32", "versions": "0+", "about": "The partition index." }, { "name": "ErrorCode", "type": "int16", "versions": "0+", "about": "The error code, or 0 if there was no error." }, { "name": "CurrentLeader", "type": "LeaderIdAndEpoch", "versions": "0+", "taggedVersions": "0+", "tag": 0, "fields": [ { "name": "LeaderId", "type": "int32", "versions": "0+", "about": "The ID of the current leader or -1 if the leader is unknown." }, { "name": "LeaderEpoch", "type": "int32", "versions": "0+", "about": "The latest known leader epoch." 
} ]} ]} ]}, { "name": "NodeEndpoints", "type": "[]NodeEndpoint", "versions": "0+", "taggedVersions": "0+", "tag": 0, "about": "Endpoints for all current leaders enumerated in PartitionData with error NOT_LEADER_OR_FOLLOWER.", "fields": [ { "name": "NodeId", "type": "int32", "versions": "0+", "mapKey": true, "entityType": "brokerId", "about": "The ID of the associated node." }, { "name": "Host", "type": "string", "versions": "0+", "about": "The node's hostname." }, { "name": "Port", "type": "int32", "versions": "0+", "about": "The node's port." }, { "name": "Rack", "type": "string", "versions": "0+", "nullableVersions": "0+", "default": "null", "about": "The rack of the node, or null if it has not been assigned to a rack." } ]} ] } |
...
Code Block |
---|
{ "apiKey": NN, "type": "request", "listeners": ["broker"], "name": "AlterShareGroupOffsetsRequest", "validVersions": "0", "flexibleVersions": "0+", "fields": [ { "name": "GroupId", "type": "string", "versions": "0+", "entityType": "groupId", "about": "The group identifier." }, { "name": "Topics", "type": "[]AlterShareGroupOffsetsRequestTopic", "versions": "0+", "about": "The topics to alter offsets for.", "fields": [ { "name": "TopicName", "type": "string", "versions": "0+", "entityType": "topicName", "mapKey": true, "about": "The topic name." }, { "name": "Partitions", "type": "[]AlterShareGroupOffsetsRequestPartition", "versions": "0+", "about": "Each partition to alter offsets for.", "fields": [ { "name": "PartitionIndex", "type": "int32", "versions": "0+", "about": "The partition index." }, { "name": "StartOffset", "type": "int64", "versions": "0+", "about": "The share-partition start offset." } ]} ]} ] } |
...
Code Block |
---|
{ "apiKey": NN, "type": "response", "name": "AlterShareGroupOffsetsResponse", "validVersions": "0", "flexibleVersions": "0+", // Supported errors: // - GROUP_AUTHORIZATION_FAILED (version 0+) // - NOT_COORDINATOR (version 0+) // - COORDINATOR_NOT_AVAILABLE (version 0+) // - COORDINATOR_LOAD_IN_PROGRESS (version 0+) // - GROUP_ID_NOT_FOUND (version 0+) // - GROUP_NOT_EMPTY (version 0+) // - KAFKA_STORAGE_ERROR (version 0+) // - INVALID_REQUEST (version 0+) // - UNKNOWN_SERVER_ERROR (version 0+) "fields": [ { "name": "ThrottleTimeMs", "type": "int32", "versions": "0+", "ignorable": true, "about": "The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota." }, { "name": "Responses", "type": "[]AlterShareGroupOffsetsResponseTopic", "versions": "0+", "about": "The results for each topic.", "fields": [ { "name": "TopicName", "type": "string", "versions": "0+", "entityType": "topicName", "about": "The topic name." }, { "name": "TopicId", "type": "uuid", "versions": "0+", "ignorable": true, "about": "The unique topic ID." }, { "name": "Partitions", "type": "[]AlterShareGroupOffsetsResponsePartition", "versions": "0+", "fields": [ { "name": "PartitionIndex", "type": "int32", "versions": "0+", "about": "The partition index." }, { "name": "ErrorCode", "type": "int16", "versions": "0+", "about": "The error code, or 0 if there was no error." }, { "name": "ErrorMessage", "type": "string", "versions": "0+", "nullableVersions": "0+", "ignorable": true, "default": "null", "about": "The error message, or null if there was no error." } ]} ]} ] } |
...
Code Block |
---|
{ "apiKey": NN, "type": "response", "name": "DeleteShareGroupOffsetsResponse", "validVersions": "0", "flexibleVersions": "0+", // Supported errors: // - GROUP_AUTHORIZATION_FAILED (version 0+) // - NOT_COORDINATOR (version 0+) // - COORDINATOR_NOT_AVAILABLE (version 0+) // - COORDINATOR_LOAD_IN_PROGRESS (version 0+) // - GROUP_ID_NOT_FOUND (version 0+) // - GROUP_NOT_EMPTY (version 0+) // - KAFKA_STORAGE_ERROR (version 0+) // - INVALID_REQUEST (version 0+) // - UNKNOWN_SERVER_ERROR (version 0+) "fields": [ { "name": "ThrottleTimeMs", "type": "int32", "versions": "0+", "ignorable": true, "about": "The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota." }, { "name": "Responses", "type": "[]DeleteShareGroupOffsetsResponseTopic", "versions": "0+", "about": "The results for each topic.", "fields": [ { "name": "TopicName", "type": "string", "versions": "0+", "entityType": "topicName", "about": "The topic name." }, { "name": "TopicId", "type": "uuid", "versions": "0+", "ignorable": true, "about": "The unique topic ID." }, { "name": "Partitions", "type": "[]DeleteShareGroupOffsetsResponsePartition", "versions": "0+", "fields": [ { "name": "PartitionIndex", "type": "int32", "versions": "0+", "about": "The partition index." }, { "name": "ErrorCode", "type": "int16", "versions": "0+", "about": "The error code, or 0 if there was no error." }, { "name": "ErrorMessage", "type": "string", "versions": "0+", "nullableVersions": "0+", "ignorable": true, "default": "null", "about": "The error message, or null if there was no error." } ]} ]} ] } |
...
Code Block |
---|
{ "apiKey": NN, "type": "request", "listeners": ["broker"], "name": "DescribeShareGroupOffsetsRequest", "validVersions": "0", "flexibleVersions": "0+", "fields": [ { "name": "GroupId", "type": "string", "versions": "0+", "entityType": "groupId", "about": "The group identifier." }, { "name": "Topics", "type": "[]DescribeShareGroupOffsetsRequestTopic", "versions": "0+", "about": "The topics to describe offsets for.", "fields": [ { "name": "TopicName", "type": "string", "versions": "0+", "entityType": "topicName", "about": "The topic name." }, { "name": "Partitions", "type": "[]int32", "versions": "0+", "about": "The partitions." } ]} ]} ] } |
Response schema
Code Block |
---|
{ "apiKey": NN, "type": "response", "name": "DescribeShareGroupOffsetsResponse", "validVersions": "0", "flexibleVersions": "0+", // Supported errors: // - GROUP_AUTHORIZATION_FAILED (version 0+) // - NOT_COORDINATOR (version 0+) // - COORDINATOR_NOT_AVAILABLE (version 0+) // - COORDINATOR_LOAD_IN_PROGRESS (version 0+) // - GROUP_ID_NOT_FOUND (version 0+) // - INVALID_REQUEST (version 0+) // - UNKNOWN_SERVER_ERROR (version 0+) "fields": [ { "name": "ThrottleTimeMs", "type": "int32", "versions": "0+", "ignorable": true, "about": "The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota." }, { "name": "Responses", "type": "[]DescribeShareGroupOffsetsResponseTopic", "versions": "0+", "about": "The results for each topic.", "fields": [ { "name": "TopicName", "type": "string", "versions": "0+", "entityType": "topicName", "about": "The topic name." }, { "name": "TopicId", "type": "uuid", "versions": "0+", "ignorable": true, "about": "The unique topic ID." }, { "name": "Partitions", "type": "[]DescribeShareGroupOffsetsResponsePartition", "versions": "0+", "fields": [ { "name": "PartitionIndex", "type": "int32", "versions": "0+", "about": "The partition index." }, { "name": "StartOffset", "type": "int64", "versions": "0+", "about": "The share-partition start offset."}, { "name": "ErrorCode", "type": "int16", "versions": "0+", "about": "The error code, or 0 if there was no error." }, { "name": "ErrorMessage", "type": "string", "versions": "0+", "nullableVersions": "0+", "ignorable": true, "default": "null", "about": "The error message, or null if there was no error." } ]} ]} ] } |
...
Code Block |
---|
{ "type": "data", "name": "ShareDeltaValue", "validVersions": "0", "flexibleVersions": "none", "fields": [ { "name": "GroupId", "type": "string", "versions": "0", "about": "The group identifier." }, { "name": "CheckpointEpoch", "type": "uint16", "versions": "0", "about": "The checkpoint epoch, increments with each checkpoint." }, { "name": "BackOffset", "type": "int64", "versions": "0", "about": "The offset of the previous ShareCheckpoint or ShareDelta." }, { "name": "States", "type": "[]State", "versions": "0", "fields": [ { "name": "BaseOffset", "type": "int64", "versions": "0", "about": "The base offset of this state batch." }, { "name": "LastOffset", "type": "int64", "versions": "0", "about": "The last offset of this state batch." }, { "name": "State", "type": "int8", "versions": "0", "about": "The state - 0:Available,2:Acked,4:Archived." }, { "name": "DeliveryCount", "type": "int16", "versions": "0", "about": "The delivery count." } ]} ] } |
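Each entry of States describes a contiguous batch of records that share the same state and delivery count. A minimal sketch of how such a batch might be modelled in code (the class and method names are illustrative; the state codes come from the schema above):

```java
import java.util.List;

public class StateBatchExample {
    // State codes from the ShareDeltaValue schema: 0:Available, 2:Acked, 4:Archived.
    record StateBatch(long baseOffset, long lastOffset, byte state, short deliveryCount) {
        // Number of offsets covered by this batch (last offset is inclusive).
        long recordCount() { return lastOffset - baseOffset + 1; }
    }

    public static void main(String[] args) {
        List<StateBatch> states = List.of(
            new StateBatch(100, 109, (byte) 2, (short) 1),  // acked on the first delivery
            new StateBatch(110, 110, (byte) 0, (short) 2)   // released once, available again
        );
        long acked = states.stream()
            .filter(b -> b.state() == 2)
            .mapToLong(StateBatch::recordCount)
            .sum();
        System.out.println("Acked records: " + acked);
    }
}
```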
Index structure for locating share-partition state
More information needs to be added to describe how the index for locating the share-partition state is arranged.
Metrics
Broker Metrics
The following new broker metrics should be added:
Metric Name | Type | Group | Tags | Description | JMX Bean |
---|---|---|---|---|---|
group-count | Gauge | group-coordinator-metrics | | The total number of share groups managed by the group coordinator. | |
rebalance (rebalance-rate and rebalance-count) | Meter | group-coordinator-metrics | | The total number and rate of share group rebalances. | |
num-partitions | Gauge | group-coordinator-metrics | | The number of share partitions managed by the group coordinator. | |
group-count | Gauge | group-coordinator-metrics | | The number of share groups in each state. | kafka.server:type=group-coordinator-metrics,name=group-count,protocol=share,state={empty|stable|dead} |
share-acknowledgement (share-acknowledgement-rate and share-acknowledgement-count) | Meter | group-coordinator-metrics | | The total number of offsets acknowledged for share groups. | |
record-acknowledgement (record-acknowledgement-rate and record-acknowledgement-count) | Meter | group-coordinator-metrics | | The number of records acknowledged per acknowledgement type. | |
partition-load-time (partition-load-time-avg and partition-load-time-max) | Meter | group-coordinator-metrics | | The time taken to load the share partitions. | |
Future Work
There are some obvious extensions to this idea which are not included in this KIP in order to keep the scope manageable.
...
Compatibility, Deprecation, and Migration Plan
Kafka Broker Migration
This KIP builds upon KIP-848 which introduced the new group coordinator and the new records for the __consumer_offsets
topic. The pre-KIP-848 group coordinator will not recognize the new records, so downgrading to a software version before KIP-848 is not supported.
Downgrading to a software version that supports the new group coordinator but does not support share groups is supported. This KIP adds a new version for the ConsumerGroupMetadataValue
record to include the group type. If the software version does not understand the v1 record type, it will assume the records apply to a consumer group of the same name. We should make sure this is a harmless situation.
More information needs to be added here based on the share-partition persistence mechanism. Details are still under consideration.

The changes in this KIP add to the capabilities of Kafka rather than changing existing behavior.
Test Plan
The feature will be thoroughly tested with unit, integration and system tests. We will also carry out performance testing, both to understand the performance of share groups and to understand the impact of this new feature on brokers.
Rejected Alternatives
Share group consumers use KafkaConsumer
...