
Status

Current state: Under Discussion

Discussion thread: here

JIRA: KAFKA-8904

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

When a Kafka producer sends a record to a topic, it must have metadata about the topic in order to determine the partition to which the record will be delivered. Therefore, an unknown topic's metadata must be fetched whenever it is encountered. In the producer's current incarnation, the client provides no prior knowledge of the working set of topics, so all topic metadata is requested on demand.

For the majority of use cases, where a fixed and/or manageable number of topics is processed, fetching topic metadata is a cost incurred at startup and subsequently mitigated by maintaining a metadata cache. However, when a large or variable number of topics is written, clients may encounter degraded performance that severely limits processing or, in degenerate timeout cases, behavior that impedes progress altogether.

There are three primary factors in the producer that hinder client processing when working with a large number of topics:

  1. The number of metadata RPCs generated.
  2. The size of the metadata RPCs.
  3. Throughput constriction while fetching metadata.

For (1), an RPC is generated every time an uncached topic's metadata must be fetched. During periods when a large number of uncached topics are processed (e.g. producer startup), a large number of RPCs may be sent out to the brokers in a short period of time. In general, if there are n unknown topics, then n metadata RPCs will be sent regardless of their proximity in time.

For (2), requests for metadata also ask to refresh metadata about all known topics. As the number of topics becomes large, this inflates the response size considerably and requires non-trivial processing. This further exacerbates (1): every subsequent metadata request results in an increasing amount of data being transmitted back to the client.

For (3), fetching of a topic's metadata is a blocking operation in the producer, which is all the more surprising because it blocks in a function that's advertised as asynchronous. This means that a pipeline of records to be submitted for various uncached topics will serially block while fetching an individual topic's metadata.
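
To make the blocking behavior described in (3) concrete, the sketch below (illustrative only; the topic names and configuration values are not part of this KIP) shows how a loop over many previously unseen topics is throttled by one synchronous metadata fetch per topic inside send():

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ManyTopicsExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // send() may block for up to max.block.ms while waiting on metadata.
            props.put("max.block.ms", "60000");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                for (int i = 0; i < 10_000; i++) {
                    // Each previously unseen topic forces a synchronous metadata fetch
                    // inside the nominally asynchronous send() call.
                    producer.send(new ProducerRecord<>("events-" + i, "key", "value"));
                }
            }
        }
    }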

In concert, these three factors amplify one another's negative effects and should be resolved in order to alleviate topic scalability issues.

Public Interfaces

No public interfaces will be modified.

Proposed Changes

The first step to addressing the above issues is to make the fetching of metadata asynchronous within the producer. This directly fixes (3) and opens the path for resolving (1) by enabling metadata requests to be batched together. Since the producer's interface is asynchronous and it inherently batches the sending of records to partitions, subjecting the metadata fetching to a subset of that batching delay doesn't change the interaction with or expectations of the client. This change alone should be sufficient to bring performance back to an acceptable level, pending verification.
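
As a rough, hypothetical illustration of the batching idea (the class and method names below are not part of the proposal), topics that become unknown within the same linger window could be coalesced into a single metadata request instead of one RPC per topic:

    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical sketch, not the producer's actual internals: unknown topics
    // encountered within one linger window are drained as a single batch so that
    // the sender thread can issue one metadata request for all of them.
    class MetadataBatcher {
        private final Set<String> pendingTopics = new HashSet<>();
        private final long lingerMs;
        private long lastDrainMs;

        MetadataBatcher(long lingerMs, long nowMs) {
            this.lingerMs = lingerMs;
            this.lastDrainMs = nowMs;
        }

        synchronized void addUnknownTopic(String topic) {
            pendingTopics.add(topic);
        }

        // Returns the topics to include in the next metadata request once the
        // linger window has elapsed; otherwise returns an empty set.
        synchronized Set<String> maybeDrain(long nowMs) {
            if (pendingTopics.isEmpty() || nowMs - lastDrainMs < lingerMs) {
                return Collections.emptySet();
            }
            lastDrainMs = nowMs;
            Set<String> batch = new HashSet<>(pendingTopics);
            pendingTopics.clear();
            return batch;
        }
    }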

The specific modification would be to make KafkaProducer#waitOnMetadata asynchronous when it would otherwise block. For uncached topics, the producer will maintain a queue of its outstanding records to ensure proper ordering (in the accumulator and for callback invocations) once topic metadata is resolved, as sketched below. Proper care must be taken to maintain the linger period for fetching metadata and to honor each record's timeout while it is queued.
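
One possible shape for that per-topic queue is sketched below; the class and method names are hypothetical, and the real change would live inside the producer's internals:

    import java.util.ArrayDeque;
    import java.util.Deque;

    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.TimeoutException;

    // Hypothetical sketch: records destined for a topic whose metadata is not yet
    // cached are buffered in arrival order, preserving accumulator-append and
    // callback ordering once the metadata is resolved.
    class PendingTopicRecords {
        private static final class Pending {
            final ProducerRecord<byte[], byte[]> record;
            final Callback callback;
            final long deadlineMs;

            Pending(ProducerRecord<byte[], byte[]> record, Callback callback, long deadlineMs) {
                this.record = record;
                this.callback = callback;
                this.deadlineMs = deadlineMs;
            }
        }

        private final Deque<Pending> queue = new ArrayDeque<>();

        synchronized void enqueue(ProducerRecord<byte[], byte[]> record, Callback callback, long deadlineMs) {
            queue.addLast(new Pending(record, callback, deadlineMs));
        }

        // Invoked when the topic's metadata arrives (or periodically to expire records).
        synchronized void drain(long nowMs) {
            while (!queue.isEmpty()) {
                Pending pending = queue.pollFirst();
                if (nowMs > pending.deadlineMs) {
                    if (pending.callback != null) {
                        pending.callback.onCompletion(null,
                            new TimeoutException("Record expired while awaiting metadata"));
                    }
                } else {
                    // Append pending.record to the RecordAccumulator here (omitted in this sketch).
                }
            }
        }
    }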

To address (2), the producer currently maintains an expiry threshold for every topic, which is used to remove a topic from the working set at a future time (currently hard-coded to 5 minutes). While this does work to reduce the size of the topic working set, the producer will continue fetching metadata for these topics in every metadata request for the full expiry duration. This logic can be made more intelligent by managing the expiry from when the topic was last used, enabling the expiry duration to be reduced to improve cases where a large number of topics are touched intermittently. To control behavior, a configuration variable topic.expiry.ms should be added to the producer configuration.
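
For example, a producer configured with the proposed topic.expiry.ms might look like the following; the configuration key is the one proposed in this KIP, and the value shown is illustrative only:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;

    public class TopicExpiryConfigExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // Proposed in this KIP (name and default subject to discussion): expire a
            // topic from the producer's working set five minutes after its last use.
            props.put("topic.expiry.ms", "300000");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Produce as usual; topics untouched for topic.expiry.ms age out of the
                // metadata working set and stop being refreshed.
            }
        }
    }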

Additionally, a second configuration variable topic.refresh.ms may be introduced to control the permitted staleness of a topic's metadata. Then, when topic metadata is fetched, only topics whose metadata is older than the refresh value need to be included in the request. Background, best-effort updates should be performed for topics with stale metadata, which would work towards reducing the number of topics included in a single topic metadata request.
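
A minimal sketch of the staleness check implied by topic.refresh.ms (class and method names hypothetical): only topics whose metadata is older than the refresh interval become candidates for the next metadata request.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical sketch: track when each topic's metadata was last refreshed and
    // include only stale topics in the next metadata request.
    class TopicRefreshTracker {
        private final Map<String, Long> lastRefreshMs = new HashMap<>();
        private final long refreshMs; // proposed topic.refresh.ms

        TopicRefreshTracker(long refreshMs) {
            this.refreshMs = refreshMs;
        }

        synchronized void recordRefresh(String topic, long nowMs) {
            lastRefreshMs.put(topic, nowMs);
        }

        // Topics whose metadata is older than refreshMs are candidates for a
        // background, best-effort metadata update.
        synchronized Set<String> staleTopics(long nowMs) {
            Set<String> stale = new HashSet<>();
            for (Map.Entry<String, Long> entry : lastRefreshMs.entrySet()) {
                if (nowMs - entry.getValue() >= refreshMs) {
                    stale.add(entry.getKey());
                }
            }
            return stale;
        }
    }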

Compatibility, Deprecation, and Migration Plan

The impact on clients will be strictly internal performance improvements; no public APIs, protocols, or other external factors are being changed.

Rejected Alternatives

None.
