
Status

Current state: Under Discussion

JIRA: KAFKA-8904

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

When a Kafka producer begins generating records, it must first retrieve metadata for the topic it is producing to, in particular to determine the partition to which each record will be written. The client provides the producer with no prior knowledge of its working set of topics, so all topic metadata is fetched on demand as new topics are encountered.

For the majority of use cases, where a fixed and manageable number of topics is processed, fetching topic metadata is a cost incurred at startup and subsequently mitigated by maintaining a cache of the topics' metadata. However, when a large or variable number of topics is written, clients may encounter degenerate metadata-fetching behavior that can severely limit processing.

There are three primary factors that hinder client processing when working with a large number of topics:

  1. The number of metadata RPCs generated.
  2. The response size of the metadata RPCs.
  3. Throughput constriction while fetching metadata.

For (1), an RPC is generated every time an uncached topic's metadata must be fetched. During periods when a large number of uncached topics is processed (e.g. producer startup), this can result in a burst of RPCs sent to the brokers in a short period of time.

For (2), a metadata request also asks to refresh metadata for all known, cached topics. As the working set grows, this inflates both the response size and the processing required to handle it. This further exacerbates (1), in that every subsequent metadata request results in more metadata being transmitted.

For (3), fetching a topic's metadata blocks the client, which is all the more surprising because the blocking occurs inside send(), a method that is advertised as asynchronous. This means that a pipeline of records destined for various topics may stall on the fetching of a single topic's metadata.
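To make the blocking behavior concrete, the sketch below shows a producer writing to many previously unseen topics; the topic names and configuration values are illustrative only and not part of this proposal. Each new topic forces send() to wait internally for that topic's metadata, bounded by max.block.ms, despite the Future return type.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ManyTopicsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Upper bound on how long send() may block while waiting for metadata.
        props.put("max.block.ms", "10000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 1000; i++) {
                // Each previously unseen topic triggers a synchronous wait for
                // that topic's metadata inside send(), even though send()
                // returns a Future.
                String topic = "events-" + i;
                producer.send(new ProducerRecord<>(topic, "key", "value-" + i));
            }
        }
    }
}

With a large or unbounded topic space, these per-topic waits accumulate, which is the behavior described in (3).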

Public Interfaces

No public interfaces will be modified.

Proposed Changes

The first step to addressing the above issues is to make the fetching of metadata asynchronous. This fixes (3) and opens the path to resolving (1) by enabling metadata requests to be batched. The producer already batches the sending of records to partitions, so delaying metadata fetching by no more than the existing batching delay does not change the client's interaction or expectations.
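As a rough illustration of how such batching could work, the sketch below accumulates uncached topics and issues a single metadata request per batch window rather than one RPC per topic. The MetadataFetcher interface and the class and method names are hypothetical stand-ins for the producer's internal metadata machinery, not an existing Kafka API; this is a sketch of the idea under those assumptions, not the implementation.

import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch only: batches metadata requests for newly encountered topics within a
// short window instead of issuing one RPC per topic.
public class BatchedMetadataRequester {

    public interface MetadataFetcher {
        void fetch(Set<String> topics); // one MetadataRequest covering the whole batch
    }

    private final Set<String> pending = ConcurrentHashMap.newKeySet();
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    public BatchedMetadataRequester(MetadataFetcher fetcher, long batchDelayMs) {
        // Flush the accumulated topic set at most once per batch window; the
        // window would be no larger than the producer's existing batching delay.
        scheduler.scheduleAtFixedRate(() -> {
            if (!pending.isEmpty()) {
                Set<String> batch = new HashSet<>(pending);
                pending.removeAll(batch);
                fetcher.fetch(batch);
            }
        }, batchDelayMs, batchDelayMs, TimeUnit.MILLISECONDS);
    }

    // Called when a record for an uncached topic is enqueued; never blocks the caller.
    public void request(String topic) {
        pending.add(topic);
    }
}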

To address (2), the producer can maintain a staleness duration threshold for every topic and request metadata updates only for the topics whose thresholds have been exceeded. A soft threshold could also be added so that best-effort fetching is performed on a subset of the topics, staggering metadata updates over time in smaller batches.
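The following sketch illustrates one possible shape of the per-topic staleness tracking described above. The class, the method names, and the hard/soft threshold parameters are assumptions made for illustration and are not part of the proposal.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: tracks when each topic's metadata was last refreshed and picks
// the topics to include in the next metadata request. The hard threshold marks
// topics that must be refreshed; the soft threshold feeds a best-effort subset.
public class TopicStalenessTracker {

    private final Map<String, Long> lastRefreshMs = new ConcurrentHashMap<>();
    private final long hardStalenessMs;
    private final long softStalenessMs;

    public TopicStalenessTracker(long hardStalenessMs, long softStalenessMs) {
        this.hardStalenessMs = hardStalenessMs;
        this.softStalenessMs = softStalenessMs;
    }

    public void recordRefresh(String topic, long nowMs) {
        lastRefreshMs.put(topic, nowMs);
    }

    // Returns all topics past the hard threshold plus up to maxSoft topics past
    // the soft threshold, so the remaining refreshes are staggered over time.
    public List<String> topicsToRefresh(long nowMs, int maxSoft) {
        List<String> mustRefresh = new ArrayList<>();
        List<String> softCandidates = new ArrayList<>();
        for (Map.Entry<String, Long> entry : lastRefreshMs.entrySet()) {
            long age = nowMs - entry.getValue();
            if (age >= hardStalenessMs) {
                mustRefresh.add(entry.getKey());
            } else if (age >= softStalenessMs) {
                softCandidates.add(entry.getKey());
            }
        }
        int softCount = Math.min(maxSoft, softCandidates.size());
        mustRefresh.addAll(softCandidates.subList(0, softCount));
        return mustRefresh;
    }
}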

Compatibility, Deprecation, and Migration Plan

The impact on clients is limited to internal performance improvements; no public APIs, protocols, or other externally visible behavior is being changed.

Rejected Alternatives

If there are alternative ways of accomplishing the same thing, what were they? The purpose of this section is to motivate why the design is the way it is and not some other way.
