

Status

Current state: Draft

Discussion thread: TBD

JIRA: here

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

For stateful applications, one of the biggest performance bottlenecks is state shuffling. The Kafka consumer has a concept called "rebalance": for a given M partitions and N consumers in one consumer group, Kafka tries to balance the load between consumers, ideally having each consumer handle M/N partitions. The broker also adjusts the workload dynamically by monitoring consumer health and handling new consumer join requests. The intuition is to avoid processing hot spots and to maintain fairness and liveness for the whole application. However, when the service state is heavy, rebalancing one partition from instance A to B means a huge amount of data transfer. If multiple rebalances are triggered, the whole service could take a very long time to recover.

The idea of this KIP is to reduce the number of rebalances by letting users specify the consumer member id. The core argument: if the broker recognizes a consumer as an existing member, it shouldn't trigger a rebalance. The only exception is when the leader rejoins the group, because there might be an assignment protocol change that needs to be enforced.

Public Interfaces

Right now the broker handles consumer group membership in a two-phase protocol. To be concise, we only discuss the three states involved: RUNNING, PREPARE_REBALANCE and SYNC.

  • When a member joins the consumer group, if it is a new member or the group leader, the broker will move the group state from RUNNING to PREPARE_REBALANCE. The reason for triggering a rebalance when the leader joins is that there might be an assignment protocol change.
  • Once in the PREPARE_REBALANCE state, the broker will mark the first joined consumer as the leader and wait for all the members to rejoin the group. Once we have collected enough consumers or reached the rebalance timeout, we reply to the leader with the current member information and move the state to SYNC. All current members are told to send a SyncGroupRequest to get the final assignment.
  • The leader consumer decides the assignment and sends it back to the broker. As the last step, the broker announces the new assignment by sending a SyncGroupResponse to all the followers. At this point one rebalance is finished and the group generation is incremented by 1.
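The three states and the transitions above can be sketched as follows. This is an illustrative model only: the state and method names follow this page's terminology, not the actual broker source.

```java
public class GroupStateSketch {
    enum GroupState { RUNNING, PREPARE_REBALANCE, SYNC }

    GroupState state = GroupState.RUNNING;

    // A new member (or the group leader rejoining) moves the group from
    // RUNNING to PREPARE_REBALANCE.
    void onJoin(boolean isKnownMember, boolean isLeader) {
        if (!isKnownMember || isLeader) {
            state = GroupState.PREPARE_REBALANCE;
        }
    }

    // Once all members have rejoined, or the rebalance timeout fired, the
    // broker replies to the leader with member info and moves to SYNC.
    void onAllMembersJoinedOrTimeout() {
        if (state == GroupState.PREPARE_REBALANCE) {
            state = GroupState.SYNC;
        }
    }

    // The leader's assignment is fanned out via SyncGroupResponse; the
    // rebalance is finished and the generation is incremented by 1.
    int onLeaderAssignment(int generation) {
        state = GroupState.RUNNING;
        return generation + 1;
    }
}
```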

In the current architecture, during each rebalance the consumer group coordinator on the broker side assigns each member a new member id containing a randomly generated UUID. So across restarts the consumer uses different member ids, which means the rejoin of the same consumer will always be treated as a "new member". On the client side, the consumer sends a JoinGroupRequest with the special UNKNOWN_MEMBER_ID, which also carries no intention of being treated as an existing member. To make the KIP work, we need to change both the client side and the server side.

Proposed Changes

On the client side, we add a new config called MEMBER_ID in ConsumerConfig. On consumer service init, if the MEMBER_ID config is set, we will include it in the initial join group request; otherwise, we will still send the unknown member id. To distinguish from the previous version of the protocol, we will also bump the join group request version to v3.
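The client-side decision could look like the sketch below. The config key name "member.id" is a placeholder (the final constant would live in ConsumerConfig); the empty string is the existing UNKNOWN_MEMBER_ID sentinel in the Kafka protocol.

```java
import java.util.Properties;

public class MemberIdConfigSketch {
    // Hypothetical config key; the real name would be defined in ConsumerConfig.
    static final String MEMBER_ID_CONFIG = "member.id";

    // The UNKNOWN_MEMBER_ID sentinel is the empty string.
    static final String UNKNOWN_MEMBER_ID = "";

    // Member id to put in the initial JoinGroupRequest: the configured
    // value if present, otherwise the unknown member id as before.
    static String joinGroupMemberId(Properties props) {
        return props.getProperty(MEMBER_ID_CONFIG, UNKNOWN_MEMBER_ID);
    }
}
```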

On the server side, the broker will keep handling existing join group requests as before. If the protocol version is upgraded to v3, the broker will no longer use the client id plus a randomly generated suffix as the member id. Instead, the server will use the member id specified in the join group request v3. The change will be applied in addMemberAndRebalance.
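A minimal sketch of that server-side choice, assuming the v3 request carries the user-specified member id (the method name and signature here are illustrative, not the actual addMemberAndRebalance code):

```java
import java.util.UUID;

public class MemberIdSelectionSketch {
    // For v3+ requests that carry a user-specified member id, use it
    // verbatim; otherwise fall back to the existing scheme of client id
    // plus a randomly generated suffix.
    static String chooseMemberId(short requestVersion, String requestedMemberId, String clientId) {
        if (requestVersion >= 3 && requestedMemberId != null && !requestedMemberId.isEmpty()) {
            return requestedMemberId;
        }
        return clientId + "-" + UUID.randomUUID();
    }
}
```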

In join request v3, it is the user's responsibility to assign a unique member id to each consumer. This could come from a service discovery hostname, a unique IP address, etc. The downside is that if the user configures member ids wrongly, there could be multiple consumers with the same member id, which invalidates the consumption balance and triggers unpredictable buggy behavior. We could think of a way to detect this.
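One possible detection, sketched below, is for the coordinator to reject a join whose user-specified member id is already registered to a live member of the group. This is a hypothetical mechanism; the KIP only notes that detection is worth exploring.

```java
import java.util.HashSet;
import java.util.Set;

public class DuplicateMemberIdCheckSketch {
    private final Set<String> activeMemberIds = new HashSet<>();

    // Returns false if the member id is already taken by a live member,
    // signalling that the join should be rejected as a misconfiguration.
    boolean tryRegister(String memberId) {
        return activeMemberIds.add(memberId);
    }

    // Called when a member leaves or its session times out, freeing the id.
    void unregister(String memberId) {
        activeMemberIds.remove(memberId);
    }
}
```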

If the MEMBER_ID config is not set, the consumer client will use the client id plus a randomly generated number to derive a new unique member id, the same way the broker would.
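That client-side fallback could be as simple as the sketch below (the method name is illustrative; only the "client id + random suffix" shape comes from the text above):

```java
import java.util.concurrent.ThreadLocalRandom;

public class ClientFallbackIdSketch {
    // When MEMBER_ID is unset, derive a member id the same way the broker
    // does today: client id plus a random suffix.
    static String fallbackMemberId(String clientId) {
        return clientId + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
    }
}
```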

Compatibility, Deprecation, and Migration Plan

  • This change is an effort to reduce rebalances during consumer rolling restarts. The existing UNKNOWN_MEMBER_ID handling is unchanged: consumers that do not set MEMBER_ID keep sending the unknown member id and are treated as new members, so older clients are unaffected.

Rejected Alternatives

In this pull request, we tried an experimental approach that materializes the member id on the instance's local disk. This approach could reduce rebalances as expected, but KIP-345 has a few advantages over it:

  1. It gives users more control over their member id string; this helps for debugging purposes.
  2. It is more cloud/k8s friendly: when we move an instance from one container to another, we can simply copy the member id into the config files.
  3. It does not require the brokers to be able to access another dir on the local disks (think of brokers deployed on AWS with remote disks mounted).
  4. By allowing consumers to optionally specify a member id, this rebalance benefit can easily be extended to Connect and Streams as well, which rely on consumers, even in a cloud environment.


