Status

Current state: Accepted (vote thread)

Discussion thread: here

JIRA: here

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

...

The sticky partitioner attempts to create this behavior in the partitioner. By “sticking” to a partition until the batch is full (or is sent when linger.ms elapses), we can create larger batches and reduce latency in the system compared to the default partitioner. Even in the case where linger.ms is 0 and we send right away, we see improved batching and a decrease in latency. After a batch is sent, the sticky partition changes. Over time, the records should be spread out evenly among all the partitions.
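To make the batching effect concrete, the following toy sketch models a producer whose accumulator is drained every few records. It is illustrative only, not Kafka code; the class name, partition count, and drain interval are made up. Round-robin assignment spreads each drain's records across every partition and produces many small batches, while sticking to one partition produces a single larger batch per drain.

Code Block
languagejava
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StickyVsRoundRobinSketch {
    private static final int PARTITIONS = 3;
    private static final int DRAIN_INTERVAL = 9; // records accumulated between drains (hypothetical)
    private static final int RECORDS = 27;

    public static void main(String[] args) {
        System.out.println("round robin batch sizes: " + simulate(false)); // nine batches of 3
        System.out.println("sticky batch sizes:      " + simulate(true));  // three batches of 9
    }

    private static List<Integer> simulate(boolean sticky) {
        Map<Integer, Integer> accumulator = new HashMap<>(); // partition -> records in the open batch
        List<Integer> sentBatchSizes = new ArrayList<>();
        int stickyPartition = 0;
        for (int record = 0; record < RECORDS; record++) {
            int partition = sticky ? stickyPartition : record % PARTITIONS;
            accumulator.merge(partition, 1, Integer::sum);
            if ((record + 1) % DRAIN_INTERVAL == 0) {
                // Drain: every open per-partition batch is sent as-is.
                accumulator.values().forEach(sentBatchSizes::add);
                accumulator.clear();
                // The sticky partition only changes once its batch has been sent.
                stickyPartition = (stickyPartition + 1) % PARTITIONS;
            }
        }
        return sentBatchSizes;
    }
}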

Netflix had a similar idea and created a sticky partitioner that selects a partition and sends all records to it for a given period of time before switching to a new partition.

...

The sticky partitioner will be part of the default partitioner, so it will not introduce a new public interface directly.

There is one new method exposed on the partitioner interface.


Code Block
languagejava
  /**
   * Executes right before a new batch will be created. For example, if a sticky partitioner is used,
   * this method can change the chosen sticky partition for the new batch.
   * @param topic The topic name
   * @param cluster The current cluster metadata
   * @param prevPartition The partition of the batch that was just completed
   */
  default public void onNewBatch(String topic, Cluster cluster, int prevPartition) {
  }


The method onNewBatch will execute code right before a new batch is created. The sticky partitioner will implement this method to update the sticky partition. This includes changing the sticky partition even when the new batch is for a keyed record. Test results show that this change will not significantly affect latency in the keyed case.

The default implementation of this method will result in no change to the current partitioning behavior for other user-defined partitioners. If a user wants to implement a sticky partitioner in their own partitioner class, this method can be overridden.
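As a rough sketch of how a user-defined partitioner might use this hook, the hypothetical class below keeps one sticky partition per topic in partition() and only switches it when onNewBatch() reports that a batch was completed. The class name and the random selection strategy are illustrative, and record keys are ignored for brevity; this is not the built-in implementation.

Code Block
languagejava
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;

public class MyStickyPartitioner implements Partitioner {
    // Current sticky partition per topic.
    private final ConcurrentHashMap<String, Integer> stickyPartitions = new ConcurrentHashMap<>();

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        // Reuse the sticky partition until onNewBatch signals that a batch was completed.
        return stickyPartitions.computeIfAbsent(topic, t -> choosePartition(t, cluster, -1));
    }

    @Override
    public void onNewBatch(String topic, Cluster cluster, int prevPartition) {
        // A batch for prevPartition was just completed; pick a different sticky partition.
        stickyPartitions.put(topic, choosePartition(topic, cluster, prevPartition));
    }

    private int choosePartition(String topic, Cluster cluster, int prevPartition) {
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        if (partitions.size() <= 1) {
            return partitions.isEmpty() ? 0 : partitions.get(0).partition();
        }
        int candidate;
        do {
            candidate = partitions.get(ThreadLocalRandom.current().nextInt(partitions.size())).partition();
        } while (candidate == prevPartition);
        return candidate;
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}

Note that this sketch applies stickiness to every record for simplicity; the default partitioner described in this KIP only does so when the record has no key and no explicit partition.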

...

Change the behavior of the default partitioner in the case where no explicit partition is set and the key is null: choose the “sticky” partition for the given topic. The “sticky” partition is changed when the record accumulator allocates a new batch for the topic on that partition.

These changes will slightly modify the code path for records that have keys as well, but the changes will not affect latency significantly.

A new partitioner called UniformStickyPartitioner will be created to allow sticky partitioning on all records, even those that have non-null keys. This will mirror how the RoundRobinPartitioner uses the round robin partitioning strategy for all records, including those with keys.
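For illustration, opting in would then be a one-line producer configuration change. This assumes UniformStickyPartitioner ships in the org.apache.kafka.clients.producer package alongside RoundRobinPartitioner; the broker address and topic below are placeholders.

Code Block
languagejava
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class UniformStickyExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Opt in to sticky partitioning for all records, including keyed ones.
        props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG,
                  "org.apache.kafka.clients.producer.UniformStickyPartitioner");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // With this partitioner, even keyed records go to the current sticky partition.
            producer.send(new ProducerRecord<>("my-topic", "some-key", "some-value"));
        }
    }
}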

Compatibility, Deprecation, and Migration Plan

...

EC2 Instance: m3.xlarge
Disk Type: SSD
Duration of Test: 12 minutes
Number of Brokers: 3
Number of Producers: 3
Replication Factor: 3
Active Topics: 4
Inactive Topics: 1
Linger.ms: 0
Acks: All
keyGenerator: {"type": "null"}
useConfiguredPartitioner: True
No Flushing on Throttle (skipFlush): True

...

Aside from latency, the sticky partitioner also sees decreased CPU utilization compared to the default code. In the cases observed, the sticky partitioner’s nodes saw up to 5-15% less CPU utilization (for example, dropping from 9-17% to 5-12.5%, or from 30-40% to 15-25%) compared to the default code’s nodes.

...