
The new consumer currently relies on a server-side coordinator to negotiate the set of consumer processes that form the group and to assign the partitions to each member of the consumer group per some assignment strategy which group members must agree on. This provides assurance that the group will always have a consistent assignment and it enables the coordinator to validate that offsets are only committed from consumers that own the respective partition. However, it relies on the server having access to the code implementing the assignment strategy, which is problematic for two reasons:

  1. First is just a matter of convenience. New assignment strategies cannot be deployed to the server without updating configuration and restarting the cluster. It can be a significant operational undertaking just to provide the capability to do this. 
  2. Different assignment strategies have different validation requirements. For example, with a redundant partitioning scheme, a single partition can be assigned to multiple consumers. This limits the ability of the coordinator to validate assignments, which is one of the main reasons for having the coordinator do the assignment in the first place.

If new assignment use cases were rare, this might be a viable solution, but we already have a number of cases where more control over assignment is needed. For example:

  • Co-partitioning: When joining two topics (in the context of KIP-28), it is necessary to assign the same partitions from more than one topic to the same consumer.
  • Sticky partitioning: For stateful consumers, it is often best to minimize the number of partitions that have to be moved during a rebalance.
  • Redundant partitioning: For some use cases, it is useful to assign each partition to multiple consumers. For example, search indexers consuming from a Kafka topic need multiple replicas of the same partition, which means the same Kafka partition should be assigned to n consumer processes in such an application.
  • Metadata-based assignment: In some cases, it is convenient to leverage consumer-local metadata to make assignment decisions. For example, if you can derive the rack from the FQDN of the Kafka brokers (which is common), then it would be possible to have rack-aware consumer groups if there was a way to communicate each consumer's rack to the partition assignment.

To address the problems pointed out above and support custom assignment strategies easily, we propose to move the assignment to the client. Specifically, we propose to separate the group management capability provided by the coordinator from partition assignment.  We leave the coordinator to handle the former, while the latter is pushed into the consumer. This promotes separation of concerns and loose coupling.

More concretely, instead of the JoinGroup protocol returning each consumer's assignment directly, we modify the protocol to return the list of members in the group and have each consumer decide its assignment independently. This solves the deployment problem since it is typically an order of magnitude easier to update clients than servers. It also decouples the server from the needs of the assignment strategy, which allows us to support the above use cases without any server changes and provides some "future-proofing" for new use cases. For consumers, the join group protocol becomes more of an abstract group membership capability which, in addition to enabling assignment, can be used as a primitive to build other group management functions (such as leadership).

There are some disadvantages though. First, since the coordinator does not know the owners of a partition, it can no longer verify that offset commits come from the "right" consumer, which potentially opens the door to inconsistent processing. However, as mentioned above, the ability of the server to validate assignments (and therefore commits) would have to be handicapped anyway to support redundant partitioning. Also, with client-side assignment, debugging assignment bugs requires a little more work. Finding assignment errors may involve aggregating logs from each consumer in the group. In practice, the partitioning strategies used by most users will be simple and tested enough that such errors should be unlikely, but it is still a potential concern.

So far, we have argued for separating group management from resource assignment. A significant benefit of this proposal is that it enables the group membership protocol to be used for other purposes. Below we outline the use cases that become possible once group management is a generic facility in the Kafka protocol.

  1. The processor client (KIP-): Depending on the nature of your processing, your processor client might require a different partitioning strategy. For example, if your processing requires joins, it needs the co-partitioning assignment strategy for those topics and possibly a simple round robin for other topics. 
  2. Copycat: Here, you have a pool of worker processes in a copycat cluster that act as one large group. If one worker fails, the connector partitions that lived in that process need to be redistributed over the rest of the worker processes. Again, some connectors require a certain assignment strategy while a simple round robin works for others. The problem is the same: group management for a set of processes and assignment of resources among them, as dictated by the application (copycat).
  3. Single-writer producer: This use case may be a little out there since the transactional producer work hasn't quite shaped up. But the general idea is that you have multiple producers acting as a group, where only one producer is active and writing at any given point of time. If that producer fails, some other producer in the group becomes the single writer.
  4. Consumer: A set of consumer processes need to be part of a group, and partitions for the subscribed topics need to be assigned to each consumer process, as dictated by the consumer application.

Given that there are several non-consumer use cases for a general group management protocol, we propose changing JoinGroupRequest and JoinGroupResponse such that it is not tied to consumer specific concepts.

Below we outline the changes needed to the protocol to make it more general and also the changes to the consumer API to support this.

Protocol

This proposal does not change the basic mechanics of the join group protocol. All members of the group send JoinGroup requests to the coordinator, which waits for all expected members before responding. However, instead of the coordinator returning each consumer's individual assignment, it returns to each member the full list of group members along with their associated metadata. In the case of the new consumer, each member would then compute its assignment independently based on the returned group metadata.

The proposed format of the new JoinGroup messages is given below.

JoinGroup Request

The new join group message is similar to the previous one, but we have dropped the fields specific to consumer partition assignment (e.g. assignment strategy). Instead, all of this information is treated as protocol-specific metadata, which is opaque to the broker. The join group request includes a list of the protocols which the group member supports (sorted by preference). A protocol is used to communicate membership semantics to the members of the group. In the case of the new consumer, it corresponds exactly to the assignment strategy. The coordinator inspects the supported protocols of each member and chooses one that all members support. If no common protocol can be found among the members, then construction of the group fails. This provides a facility for upgrading to a new version of the protocol in a rolling upgrade.

Note that each protocol has a field for its own metadata. In the case of the consumer, this allows each assignment strategy to define its own metadata format. In the case of the normal round-robin strategy, the metadata would just contain the list of subscribed topics, but other strategies may contain other information (such as the number of cpus on the host).

The GroupProtocolType field provides a scope for the protocol. For the consumer, the protocol type would be "consumer" and the protocols would be "round-robin," "range," etc. Copycat would use "copycat" as the group type and provide its own convention for protocol naming. If group members do not all have the same protocol type, the coordinator will not allow the group to be created (i.e. it will send an error in the join group response). It's an open question whether this is really necessary since the protocol name could embed this information as well.

 

JoinGroupRequest => GroupId SessionTimeout MemberId GroupProtocolType GroupProtocols
  GroupId                 => String
  SessionTimeout          => int32
  MemberId                => String
  GroupProtocolType       => String
  GroupProtocols          => [Protocol ProtocolVersion ProtocolMetadata]
    Protocol              => String
    ProtocolVersion       => String
    ProtocolMetadata      => bytes

 

JoinGroup Response

The response is similarly modified to remove the fields specific to consumer group management. The coordinator is responsible for analyzing the supported protocols from each group member and choosing one which all members support, which is then transmitted to group members in the join group response. Note that the metadata from each group member for the chosen protocol is returned in the response to all members. This is to allow each member to propagate some local information (such as topic subscriptions) to the entire group. The generation id, as before, is incremented on every successful iteration of the join group protocol.

The basic idea behind the coordinator's protocol selection algorithm is to consider the protocols supported by all members in terms of the preference (as indicated by the position in the list). This means that if all members list protocol "a" before protocol "b," then the coordinator will choose "a." If there is no agreement in terms of preference among the protocols which all members support, then one is chosen randomly.
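
For illustration, the following Java sketch shows one way the coordinator's selection could be implemented. The class and method names (ProtocolSelector, selectProtocol) and the rank-sum scoring are our own simplifications for this sketch, not part of the proposal; ties between equally preferred protocols would be broken arbitrarily (e.g. randomly).

import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch only: choose the group protocol by treating each member's
// ordered protocol list as a preference ranking.
class ProtocolSelector {

    // Each inner list contains one member's supported protocols, sorted by preference.
    static String selectProtocol(List<List<String>> memberPreferences) {
        if (memberPreferences.isEmpty())
            return null;

        // Start with the protocols supported by every member.
        Set<String> common = new HashSet<>(memberPreferences.get(0));
        for (List<String> prefs : memberPreferences)
            common.retainAll(prefs);
        if (common.isEmpty())
            return null; // no common protocol: the group cannot be formed

        // Score each common protocol by summing its position in every member's list;
        // a lower total rank means the protocol is preferred by more members.
        String best = null;
        int bestScore = Integer.MAX_VALUE;
        for (String protocol : common) {
            int score = 0;
            for (List<String> prefs : memberPreferences)
                score += prefs.indexOf(protocol);
            if (score < bestScore) {
                bestScore = score;
                best = protocol;
            }
        }
        return best; // tie-breaking (e.g. random choice) omitted for brevity
    }
}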

 

JoinGroupResponse => ErrorCode GroupGenerationId MemberId GroupProtocol GroupProtocolVersion GroupMembers
  ErrorCode              => int16
  GroupGenerationId      => int32
  MemberId               => String
  GroupProtocol          => String
  GroupProtocolVersion   => String
  GroupMembers           => [MemberId ProtocolMetadata]
    MemberId             => String
    ProtocolMetadata     => bytes

One of the major concerns in this protocol is the size of the join group response. Since each member's metadata is included in the responses for all members, the total amount of data which the coordinator must forward in the join group responses increases quadratically with the size of the group. For example, with a per-member metadata size of 100KB, in a group of 100 members, each join group response would contain 10MB of data, which means that the coordinator would have to transfer 1GB total on every rebalance. It is therefore important to keep the size of the metadata fairly small. Even with smaller metadata sizes, the group can only grow so large before this becomes a concern again. However, we argue that the protocol is already unsuited to such large groups since it does not have any mechanism to cope with churn. Every time there is a membership change in the group, all members must synchronize to form the next generation. If this happens often enough, as is possible with larger groups, then progress is severely restricted.

Consumer Embedded Protocol

Above we outlined the generalized JoinGroup protocol that the consumer would leverage. Next we show how we intend to implement consumer semantics on top of this protocol. Other use cases for the join group protocol would be implemented similarly. The two items that must be defined to use the join group protocol are the format of the protocol versions and the format of the protocol metadata.

ProtocolType => "consumer"
 
Protocol => AssignmentStrategy
  AssignmentStrategy => String
 
ProtocolMetadata => Subscription
  Subscription                 => Topics TopicPattern MetadataHash
    Topics                     => [String]
    TopicPattern               => String
    MetadataHash               => bytes

Subscriptions: To support differing subscriptions within the group, each member must include its own subscription in the protocol metadata. These subscriptions are forwarded to all members of the group who can then independently compute their assignment. Subscriptions can be specified either as a list of topics or as a regular expression. The latter can provide a more compact representation when subscribing to a large number of topics (e.g. if using mirror maker to replicate all the topics in a cluster).

Metadata: The metadata hash is included to ensure that each consumer has the same view of the topic metadata. A disagreement could cause an inconsistent assignment, so upon joining the group, each member checks the metadata hash of all other members to make sure they are consistent. It covers the full list of topics in the subscription set and their respective partition counts. If a regex subscription is used, then the hash covers all the topics in the cluster. If there is any disagreement on the number of partitions (e.g. due to stale metadata), then the hashes will compute differently and the consumers will refetch metadata and rejoin the group.
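
For illustration, here is a rough Java sketch of how a consumer might compute such a hash over its subscribed topics and their partition counts. The digest algorithm (SHA-1), the byte layout, and the helper names are assumptions for this sketch; the proposal does not specify the hashing scheme.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch: two consumers with identical topic metadata produce the
// same hash, so any disagreement on partition counts surfaces as a hash mismatch.
class MetadataHash {

    static byte[] compute(Map<String, Integer> partitionsPerTopic) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-1");
            // Sort topics so the hash does not depend on map iteration order.
            for (Map.Entry<String, Integer> entry : new TreeMap<>(partitionsPerTopic).entrySet()) {
                digest.update(entry.getKey().getBytes(StandardCharsets.UTF_8));
                digest.update(ByteBuffer.allocate(4).putInt(entry.getValue()).array());
            }
            return digest.digest();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }
}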

One potential concern in this protocol is whether a sustained disagreement might lead to continual rebalancing. This could be possible if two brokers disagree on the topic metadata for an extended period of time. While metadata should eventually converge, this issue can be resolved by having the consumers fetch their metadata from the coordinator, ensuring that they each see the same view. However, it would still be possible to have metadata disagreement if the metadata itself is changing at a very high rate.

It is worth mentioning that there is a class of assignment strategies which do not depend on consistent metadata among the consumers. For example, in a consistent hashing approach, each partition would be deterministically mapped to one of the group members. Even if two members see a different partition count for a topic, there would be no disagreement over which consumer owns each partition. The tradeoff is generally sub-optimal load balancing of partitions across consumers.
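
As a rough sketch of such a strategy, the Java snippet below maps each partition onto a ring of members, so ownership depends only on the partition identity and the (agreed-upon) member list, not on the partition counts each consumer has observed. The ring layout and hash function are assumptions for illustration; in practice each member would typically be placed at several points on the ring (virtual nodes) to smooth out the load.

import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Illustrative sketch of consistent-hashing assignment: a partition is owned by
// the first member clockwise from the partition's position on the hash ring.
class HashRingAssignor {

    private final SortedMap<Integer, String> ring = new TreeMap<>();

    HashRingAssignor(List<String> memberIds) {
        for (String memberId : memberIds)
            ring.put(positiveHash(memberId), memberId);
    }

    // Returns the member that owns the given topic partition.
    String owner(String topic, int partition) {
        int h = positiveHash(topic + "-" + partition);
        SortedMap<Integer, String> tail = ring.tailMap(h);
        Integer key = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(key);
    }

    private static int positiveHash(String s) {
        return s.hashCode() & 0x7fffffff;
    }
}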

Note that the format of the metadata is an attribute of the assignment strategy. This makes it possible for different strategies to support different metadata formats. For rack-aware assignment, the metadata would also include the rack of each consumer, and the metadata hash would have to cover the leader of each partition, since the leader determines where fetches are sent and the whole point of rack-aware assignment is to fetch from brokers in the same rack as the consumer. In general, any information that is used in decision making must somehow be included in the metadata.

Rolling Upgrades (Consumer)

Support for rolling upgrades is enabled through the protocol list in the join group request. For the consumer, this makes it possible to upgrade or change the assignment strategy used by the group without downtime. Note that the protocol does not distinguish between upgrades to the assignment strategy metadata and upgrades to the assignment strategy algorithm. The coordinator just looks for a protocol and version which are supported by all members of the group. It is up to the assignment strategy implementations to implement their own versioning. The following example shows how an upgrade would work in practice.

Phase 1: The group consists of consumers A, B, and C each supporting version 0 of the round-robin assignment strategy.
A: (round-robin, 0)
B: (round-robin, 0)
C: (round-robin, 0)
GroupProtocol: (round-robin, 0)
 
Phase 2: A is upgraded. It now supports version 1 of the round-robin strategy in addition to version 0.
A: (round-robin, 1), (round-robin, 0)
B: (round-robin, 0)
C: (round-robin, 0)
GroupProtocol: (round-robin, 0)
 
Phase 3: B is similarly upgraded. The coordinator still chooses (round-robin, 0) since it must choose a version supported by all members.
A: (round-robin, 1), (round-robin, 0)
B: (round-robin, 1), (round-robin, 0)
C: (round-robin, 0)
GroupProtocol: (round-robin, 0)

Phase 4: C is similarly upgraded, which allows the coordinator to update the group's protocol.
A: (round-robin, 1), (round-robin, 0)
B: (round-robin, 1), (round-robin, 0)
C: (round-robin, 1), (round-robin, 0)
GroupProtocol: (round-robin, 1)

This example highlights a few interesting points. First, since the consumers have no awareness of the protocols supported by the other group members, they don't know when the upgrade has completed and continue sending the metadata for both versions. This is not a major problem since the coordinator only forwards the metadata associated with the strategy it has chosen. Nevertheless, another round of updates would be needed to remove support for the old assignment strategy in the group.

As noted previously, the order of the assignment strategies is significant. The coordinator uses it when selecting between two or more protocols which are supported by all members. For the rolling upgrade case, this just means that the upgraded protocol should be listed first, which guarantees that the coordinator will select it after all members have been updated. 

Note that the same upgrade mechanism can also be used to change to a different assignment strategy.

Scaling Groups

As mentioned previously, the need to propagate the metadata of each member to all other members puts a significant restriction on the amount of metadata that can be used in large groups. For small and medium-sized groups, this might not be a major concern, but assignment strategies must be mindful of the metadata size and set clear scaling expectations. We have considered several approaches to deal with this problem.

Reducing the Join Payload

Dropping Topic Lists: The main contribution to the size of the join group response for the consumer case is the subscription set. One option is to remove the topic list and topic pattern and instead include only a hash of that data. This would make it impossible to simultaneously support differing subscription sets within the group, but it would still be possible to support a changing subscription set in the rolling upgrade scenario, albeit in a weaker form. More concretely, the embedded protocol would look like this: 

ProtocolType => "consumer"
 
Protocol => AssignmentStrategy
  AssignmentStrategy => String
 
ProtocolMetadata => SubscriptionHash MetadataHash
  SubscriptionHash => bytes
  MetadataHash     => bytes

Just as before, the consumers would join the group with their respective subscriptions and supported assignment strategies. The coordinator would select an assignment strategy supported by all members and forward the full member metadata to all members of the group. When the members of the group received the join group responses, they would compare the subscription hashes of all the members to find the largest matching sub-group. The members who belonged to this sub-group would assign partitions based on their own subscription sets (which match those of the rest of that sub-group), while non-members would just assign themselves an empty set of partitions and go dormant until the next rebalance. 

For example, suppose that a consumer group consists of members A, B, and C. If A and B are subscribed to (foo), and C is subscribed to (foo, bar), then A and B would have matching subscription hashes and would form the largest matching sub-group. Therefore A and B would assign themselves the partitions from (foo), and C would be inactive. If consumer B was then upgraded to the subscription set (foo, bar), then B and C would form the largest sub-group and A would go inactive (obviously there would need to be a tie-breaker in case there is no largest sub-group). The major limitation should also be clear in this example: during a rolling upgrade, the capacity of the cluster will be temporarily halved, which may cause lag in the group. This would have to be considered in capacity planning.
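
For illustration, here is a small Java sketch of the sub-group selection described above. It groups members by their subscription hash (represented here as a string for simplicity) and returns the largest group; the lexicographic tie-breaker is an assumption of this sketch, since the proposal only notes that some tie-breaker would be needed.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: members outside the returned sub-group would assign
// themselves no partitions and remain dormant until the next rebalance.
class SubGroupSelector {

    static List<String> largestSubGroup(Map<String, String> subscriptionHashByMember) {
        // Bucket members by subscription hash.
        Map<String, List<String>> membersByHash = new HashMap<>();
        for (Map.Entry<String, String> entry : subscriptionHashByMember.entrySet())
            membersByHash.computeIfAbsent(entry.getValue(), h -> new ArrayList<>())
                         .add(entry.getKey());

        // Pick the largest bucket, breaking ties by smallest hash.
        List<String> best = new ArrayList<>();
        String bestHash = null;
        for (Map.Entry<String, List<String>> entry : membersByHash.entrySet()) {
            List<String> members = entry.getValue();
            boolean better = members.size() > best.size()
                    || (members.size() == best.size()
                        && (bestHash == null || entry.getKey().compareTo(bestHash) < 0));
            if (better) {
                best = members;
                bestHash = entry.getKey();
            }
        }
        return best;
    }
}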

Compression: An even simpler option is to use compression to reduce the size of the topic subscriptions. Without changing the protocol, members could compress their own metadata, which is opaque to the broker anyway. For maximum compression, the protocol could be modified to support compression of the entire group metadata. Since most subscription sets in a group will probably have considerable overlap (even matching exactly in most cases), this should be very effective, though it does come at the cost of some additional complexity for the protocol.
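
A minimal Java sketch of the client-side variant, assuming the member simply GZIPs its serialized subscription bytes before placing them in the opaque ProtocolMetadata field (and decompresses the metadata of other members when reading the response):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Illustrative sketch: compress the serialized subscription before including it
// in the JoinGroup request; the broker never needs to interpret these bytes.
class MetadataCompression {

    static byte[] compress(byte[] serializedSubscription) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
            gzip.write(serializedSubscription);
        }
        return out.toByteArray();
    }
}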

Single Client Assignor

A more dramatic alternative is to change the protocol to allow one member to provide the assignment for the entire group. One option that we have considered to enable this is to allow assignment to piggyback on the join group requests. Here is a basic outline of how this protocol would work: 

  1. As before all members send their protocol metadata to the coordinator in a JoinGroupRequest.
  2. The coordinator chooses one member of the group and returns to it a JoinGroupResponse containing all members and their respective metadata.
  3. The chosen member performs the assignment for the entire group and sends it to the coordinator in a new JoinGroupRequest.
  4. The coordinator distributes to each member its respective assignment in a JoinGroupResponse.

Advantages: Since only one member of the group receives the full group's metadata, the quadratic increase in overall message load becomes linear as the number of group members increases. The assignment that is returned to each member can be much smaller since it only contains the information that that consumer needs (in the case of the consumer, this is just the assigned partitions, similar to the current server-side assignment protocol). It also solves the problem of metadata consistency since only one member of the group does the assignment. Finally, it allows for easier assignment implementations in general since they no longer have to be deterministic.

Disadvantages: Obviously the tradeoff is complexity. In particular, the coordinator must implement a new state in which it is awaiting assignment from the selected assignor. It has to handle the case that the assignor fails, which it might do by selecting another assignor or just failing the generation and having the group members retry. It's also possible that a new group member might arrive while assignment is active, or one might fail. For the client, on the other hand, the protocol does not seem significantly more complex: it just parses the join group response and responds based on whether an assignment is needed or provided.

Here is a sketch of how the JoinGroup request and response schemas might be changed to support this protocol:

JoinGroupRequest => GroupId ProtocolType Phase
  GroupId              => String
  ProtocolType         => String
  Phase                => Join | Sync
  Join                 => SessionTimeout MemberId MemberMetadata
    SessionTimeout     => int32
    MemberId           => String
    MemberMetadata     => bytes
  Sync                 => [MemberId MemberState]
    MemberId           => String
    MemberState        => bytes
 
JoinGroupResponse => ErrorCode Phase
  ErrorCode              => int16
  Phase                  => RequestSync | Complete
  RequestSync            => [MemberId MemberMetadata]
    MemberId             => String
    MemberMetadata       => bytes
  Complete               => GroupGenerationId MemberId MemberState
    GroupGenerationId    => int32
    MemberId             => String
    MemberState          => bytes

KafkaConsumer API

Although the protocol is slightly more complex for the KafkaConsumer implementation, most of the details are hidden from users. Below we show the basic assignment interface that would be exposed in KafkaConsumer. The partition assigner is generic to allow for custom metadata. For simple versions, the generic type would probably be Void.

import java.util.List;
import java.util.Map;

import org.apache.kafka.common.Configurable;
import org.apache.kafka.common.TopicPartition;

class ConsumerMetadata<T> {
  String consumerId;
  List<String> subscribedTopics;
  T metadata;
}
 
interface PartitionAssigner<T> extends Configurable {
 
  /**
   * Derive the metadata to be used for the local member by this assigner. This could 
   * come from configuration or it could be derived dynamically (e.g. for host information
   * such as the hostname or number of cpus).
   * @return The metadata
   */
  public T metadata();

  /**
   * Assign partitions for this consumer.
   * @param consumerId The consumer id of this consumer
   * @param partitionsPerTopic The count of partitions for each subscribed topic
   * @param consumers Metadata for consumers in the current generation
   * @return The partitions assigned to this consumer
   */
  List<TopicPartition> assign(String consumerId,
                              Map<String, Integer> partitionsPerTopic, 
                              List<ConsumerMetadata<T>> consumers);
}
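
As a rough sketch of how this interface might be used, the following round-robin assigner carries no custom metadata (Void) and relies on sorting consumers and partitions so that every member computes the same assignment independently. It assumes all members share the same subscription; the class name and details are illustrative, not part of the proposal.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

import org.apache.kafka.common.TopicPartition;

// Illustrative sketch of a simple round-robin assigner built on the interface above.
class RoundRobinAssigner implements PartitionAssigner<Void> {

    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public Void metadata() {
        return null; // no custom metadata needed for round-robin
    }

    @Override
    public List<TopicPartition> assign(String consumerId,
                                       Map<String, Integer> partitionsPerTopic,
                                       List<ConsumerMetadata<Void>> consumers) {
        // Sort consumer ids so that all members agree on the ordering.
        List<String> consumerIds = new ArrayList<>();
        for (ConsumerMetadata<Void> consumer : consumers)
            consumerIds.add(consumer.consumerId);
        Collections.sort(consumerIds);

        // Lay out partitions in a deterministic order and deal them out round-robin,
        // keeping only those that land on this consumer.
        List<TopicPartition> assigned = new ArrayList<>();
        int index = 0;
        for (Map.Entry<String, Integer> entry : new TreeMap<>(partitionsPerTopic).entrySet()) {
            for (int partition = 0; partition < entry.getValue(); partition++) {
                if (consumerIds.get(index % consumerIds.size()).equals(consumerId))
                    assigned.add(new TopicPartition(entry.getKey(), partition));
                index++;
            }
        }
        return assigned;
    }
}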

TODO:

To support client-side assignment, we'd have to make the following changes:

  1. Migrate existing assignment strategies from the broker to the client. Since the assignment interface is nearly the same, this should be straightforward.
  2. Modify client/server for the new join group protocol. Since we're not really changing the protocol (just the information that is passed through it), this should also be straightforward.
  3. Remove offset validation from the consumer coordinator. Just a couple lines to remove for this.
  4. Add support for assignment versioning (if we decide we need it). Depending on what we do, may or may not be trivial.
