...

Of course, there could be a theoretical case where both `member.id` and `group.instance.id` are set to an empty string. We shall explicitly handle this case as an error and expose it in the server log, returning an UNKNOWN_MEMBER_ID error if both lists are configured with empty strings.
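
For illustration, here is a minimal sketch of that check. The class and method names are hypothetical; only the UNKNOWN_MEMBER_ID error code comes from this KIP.

```java
import org.apache.kafka.common.protocol.Errors;

// Hypothetical sketch of the coordinator-side check described above,
// not the actual broker implementation.
public final class StaticIdValidation {

    static Errors validateIds(String memberId, String groupInstanceId) {
        if (memberId.isEmpty() && groupInstanceId.isEmpty()) {
            // Expose the misconfiguration in the server log, then reject.
            System.err.println("Both member.id and group.instance.id are empty strings");
            return Errors.UNKNOWN_MEMBER_ID;
        }
        return Errors.NONE;
    }
}
```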

Scale Up

We do not plan to solve the scale-up issue holistically within this KIP, since there is a parallel discussion about Incremental Cooperative Rebalancing, in which we will encode the "when to rebalance" logic at the application level instead of at the protocol level.

For the initial scale-up, there is a plan to deprecate group.initial.rebalance.delay.ms, since we will no longer need it once static membership is delivered and the incremental rebalancing work is done.

Rolling Bounce

Currently the broker accepts a config value called rebalance timeout, which is provided by the consumer's max.poll.interval.ms. The reason we set it to the poll interval is that the consumer can only send requests within a call to poll(), and we want to wait sufficiently long for the JoinGroupRequest. When the rebalance timeout is reached, the group moves to the COMPLETING_REBALANCE stage and removes unjoined members. This conflicts with the design of static membership, because those temporarily unavailable members will potentially reattempt the join group and trigger extra rebalances. Internally, we would optimize this logic by having the rebalance timeout only be in charge of ending the PREPARE_REBALANCE stage, without removing non-responsive members immediately. There would be no full rebalance if the lagging consumer sends a JoinGroupRequest within the session timeout.

In summary, a member will only be removed due to session timeout. We shall remove it from both the in-memory static `group.instance.id` map and the member list.
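
A minimal sketch of these timeout semantics, using hypothetical stand-in types rather than the broker's actual GroupCoordinator code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the revised timeout handling described above.
final class StaticGroupSketch {
    enum State { PREPARE_REBALANCE, COMPLETING_REBALANCE, STABLE }

    State state = State.PREPARE_REBALANCE;
    // group.instance.id -> member.id
    final Map<String, String> staticMembers = new HashMap<>();
    // member.id -> member metadata
    final Map<String, Object> members = new HashMap<>();

    // The rebalance timeout only ends the PREPARE_REBALANCE stage; members
    // that have not rejoined are kept, so a lagging consumer that rejoins
    // within the session timeout does not cause a full rebalance.
    void onRebalanceTimeout() {
        state = State.COMPLETING_REBALANCE;
    }

    // Only the session timeout evicts a member, removing it from both the
    // static group.instance.id map and the member list.
    void onSessionTimeout(String groupInstanceId) {
        String memberId = staticMembers.remove(groupInstanceId);
        if (memberId != null) {
            members.remove(memberId);
        }
        state = State.PREPARE_REBALANCE;  // eviction triggers a rebalance
    }
}
```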

Scale Down

Currently scale-down is controlled by the session timeout, which means that if a user removes over-provisioned consumer members, the group waits until the session timeout expires to trigger a rebalance. This is not ideal, and it motivates us to change LeaveGroupRequest to include a list of `group.instance.id` and a list of `member.id`, so that we can batch-remove offline members and trigger a rebalance immediately without them.
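
For illustration, here is a batch removal of two static members using the Admin API that eventually shipped with this KIP in Apache Kafka 2.4; the group and instance names are examples.

```java
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.MemberToRemove;
import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;

// Batch removal of two static members; the coordinator evicts both and
// triggers a single rebalance instead of waiting for session timeouts.
public final class ScaleDownExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            RemoveMembersFromConsumerGroupOptions options =
                new RemoveMembersFromConsumerGroupOptions(Arrays.asList(
                    new MemberToRemove("instance-3"),   // group.instance.id values
                    new MemberToRemove("instance-4")));
            admin.removeMembersFromConsumerGroup("my-group", options).all().get();
        }
    }
}
```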

Fault-tolerance of Static Membership 

To make sure we can recover from broker failure/coordinator transition, an in-memory `group.instance.id` map is not enough. We would reuse the __consumer_offsets topic to store the static member map information. When another broker takes over the leadership, it will load the static mapping information along with the rest of the group metadata.
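
A sketch of how the map could be rebuilt during metadata load, again with hypothetical types:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of rebuilding the static member map when a new
// coordinator replays group metadata from __consumer_offsets.
final class GroupMetadataLoadSketch {
    // group.instance.id -> member.id, reconstructed on coordinator failover
    final Map<String, String> staticMembers = new HashMap<>();

    // Called for each member record replayed from the metadata log.
    void onLoadMember(String memberId, String groupInstanceId) {
        if (groupInstanceId != null && !groupInstanceId.isEmpty()) {
            staticMembers.put(groupInstanceId, memberId);
        }
    }
}
```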

Command Line API for Membership Management

RemoveMemberFromGroup (introduced above) will remove the given instances and trigger a rebalance immediately. It is mainly used for fast scale-down/host replacement cases, where we detect consumer failure faster than the session timeout. This API will first send a FindCoordinatorRequest to locate the correct broker, then initiate a LeaveGroupRequest to the broker hosting that coordinator.

The coordinator will decide whether to accept this metadata change request based on its runtime status. An error will be returned if (see the sketch after this list):

  1. The broker is on an old version (UNSUPPORTED_VERSION)
  2. The consumer group does not exist (INVALID_GROUP_ID)
  3. The operator is not authorized (GROUP_AUTHORIZATION_FAILED)
  4. Some non-empty instance ids are not found, which means the request is not valid (GROUP_INSTANCE_ID_INVALID, defined in the public changes section)
  5. The length of MemberIdList does not match the length of GroupInstanceIdList (MEMBER_ID_MISMATCH, defined in the public changes section)
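
The sketch below walks through cases 2–5 in order of evaluation; case 1 (UNSUPPORTED_VERSION) would be handled during request parsing before these checks run. Method and parameter names are hypothetical, and the error names are returned as strings because GROUP_INSTANCE_ID_INVALID and MEMBER_ID_MISMATCH are new codes defined in the public changes section of this KIP.

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the coordinator-side checks for the batch
// removal request, following the error cases listed above.
public final class RemoveMemberValidation {

    static String validate(List<String> memberIds,
                           List<String> instanceIds,
                           Set<String> knownInstanceIds,
                           boolean groupExists,
                           boolean authorized) {
        if (!authorized) return "GROUP_AUTHORIZATION_FAILED";      // case 3
        if (!groupExists) return "INVALID_GROUP_ID";               // case 2
        if (memberIds.size() != instanceIds.size())
            return "MEMBER_ID_MISMATCH";                           // case 5
        for (String id : instanceIds) {
            if (!id.isEmpty() && !knownInstanceIds.contains(id))
                return "GROUP_INSTANCE_ID_INVALID";                // case 4
        }
        return "NONE";                                             // accepted
    }
}
```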

We need to enforce special access to these APIs for end users who may not be in an administrative role for the Kafka cluster. The solution is to allow the same access level as the join group request, so the consumer service owner can easily use this API.
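
For example, granting a service owner the same Read access on the group resource that JoinGroup already requires could look like the following; the principal and group names are illustrative.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

// Grant Read on the group resource (the same ACL JoinGroup requires) so
// the service owner can call the removal API without cluster-admin rights.
public final class GrantGroupReadAccess {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            AclBinding binding = new AclBinding(
                new ResourcePattern(ResourceType.GROUP, "my-group", PatternType.LITERAL),
                new AccessControlEntry("User:consumer-service", "*",
                    AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singletonList(binding)).all().get();
        }
    }
}
```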

Compatibility, Deprecation, and Migration Plan

Upgrade Process

The recommended upgrade process is as follows:

...