Status
Current state: Under Discussion
JIRA: KAFKA-8904
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
When a Kafka producer sends a record to a topic, it must have metadata for that topic in order to determine the partition to which the record will be delivered. Therefore, the metadata for an unknown topic must be fetched whenever it is first encountered. In the producer's current incarnation, the client provides no prior knowledge of the working set of topics, so all topic metadata is requested on demand.
For the majority of use cases, where a fixed and/or manageable number of topics are processed, fetching topic metadata is a cost incurred upon startup but subsequently mitigated by maintaining a metadata cache. However, when a large or variable number of topics are written, clients may encounter degraded performance that severely limits processing, or, in degenerate cases where requests time out, behavior that impedes progress altogether.
There are three primary factors in the producer that hinder client processing when working with a large number of topics:
- The number of metadata RPCs generated.
- The size of the metadata RPCs.
- Throughput constriction while fetching metadata.
For (1), an RPC is generated every time an uncached topic's metadata must be fetched. During periods when a large number of uncached topics are processed (e.g. producer startup), a large number of RPCs may be sent to the brokers in a short period of time. In general, if there are n unknown topics, then n metadata RPCs will be sent regardless of their proximity in time.
For (2), requests for metadata will also ask to refresh metadata about all known topics. As the number of topics becomes large, this inflates the response size considerably and requires non-trivial processing. It further exacerbates (1): every subsequent metadata request results in an increasing amount of data being transmitted back to the client.
For (3), fetching a topic's metadata is a blocking operation in the producer, which is all the more surprising because it blocks inside KafkaProducer#send, a method that is advertised as asynchronous. This means that a pipeline of records submitted for various uncached topics will serially block while each individual topic's metadata is fetched.
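For illustration, the following sketch shows how these factors surface to a client today: each send() to an uncached topic triggers its own metadata fetch and blocks (up to max.block.ms) before the next record can be submitted. The broker address and topic names are hypothetical.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ManyTopicsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");        // hypothetical broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("max.block.ms", "60000");                      // bounds each metadata wait

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each topic below is uncached on first use, so every send() call blocks
            // while that topic's metadata is fetched -- one RPC per topic, serially --
            // even though send() is nominally asynchronous.
            for (int i = 0; i < 10_000; i++) {
                String topic = "events-" + i;                     // hypothetical topic names
                producer.send(new ProducerRecord<>(topic, "key", "value"));
            }
        }
    }
}
```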
In concert, these three factors amplify each other's negative effects, and all should be resolved in order to alleviate these topic scalability issues.
Public Interfaces
No public interfaces will be modified.
Proposed Changes
The first step to addressing the above issues is to make the fetching of metadata asynchronous within the producer. This directly fixes (3), and opens the path for resolving (1) by enabling metadata requests to be batched together. Since the producer's interface is asynchronous and it inherently batches the sending of records to partitions, subjecting metadata fetching to a subset of the batching delay does not change the interaction or expectations of the client. This change alone should be sufficient to bring performance back to an acceptable level, pending verification.
Specific modifications would be to make KafkaProducer#waitOnMetadata asynchronous in the cases where it must currently block. A client-side queue of records for uncached topics will be maintained to ensure proper ordering of submission and callback invocation; the queued records flow back into the current execution logic once their metadata is resolved. Proper care must be taken to handle batch sizing and to honor the linger timeout.
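As a rough illustration of the intended shape of this change, the sketch below buffers sends for uncached topics and releases them when metadata arrives. All class and method names here (PendingSends, defer, onMetadataResolved) are hypothetical and do not represent the final implementation.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative sketch only: buffers sends whose topic metadata is not yet cached
// and releases them, in submission order, once an asynchronous metadata update
// resolves the topic. Names and structure are hypothetical, not the final code.
final class PendingSends {

    // One FIFO queue per uncached topic, preserving per-topic send() ordering.
    private final Map<String, Queue<Runnable>> pending = new ConcurrentHashMap<>();

    // Invoked from send() instead of blocking in waitOnMetadata(): the record's
    // remaining work (partitioning, appending to the accumulator, callback wiring)
    // is captured as a deferred action.
    void defer(String topic, Runnable resumeSend) {
        pending.computeIfAbsent(topic, t -> new ConcurrentLinkedQueue<>()).add(resumeSend);
        // At this point a metadata request for 'topic' would be registered with the
        // sender thread, allowing requests for several topics to share a single RPC.
    }

    // Invoked when a metadata response covering 'topic' arrives: queued records
    // flow back into the existing execution logic, and callbacks fire in order.
    void onMetadataResolved(String topic) {
        Queue<Runnable> queue = pending.remove(topic);
        if (queue != null) {
            queue.forEach(Runnable::run);
        }
    }
}
```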
To address (2), note that the producer already maintains a staleness duration threshold for every topic, but it does not act on this when fetching metadata and instead falls back to fetching information about all topics in the cluster. A further optimization would be to request metadata updates only for topics whose staleness thresholds have been exceeded. A soft threshold could also be added such that best-effort fetching is performed on a subset of the topics, so that metadata updates are staggered over time and performed in smaller batches.
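A minimal sketch of such per-topic staleness tracking follows; the soft threshold, per-request cap, and all names are hypothetical, with the existing metadata.max.age.ms expiry standing in as the hard threshold.

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch only: tracks when each topic's metadata was last refreshed
// and selects only the stalest topics for the next metadata request, rather than
// refreshing every known topic. Names and thresholds are hypothetical.
final class TopicStalenessTracker {

    private final long hardExpiryMs;        // e.g. metadata.max.age.ms (default 5 minutes)
    private final long softExpiryMs;        // earlier, best-effort refresh threshold
    private final int maxTopicsPerRequest;  // caps request size to stagger refreshes
    private final Map<String, Long> lastRefreshMs = new HashMap<>();

    TopicStalenessTracker(long hardExpiryMs, long softExpiryMs, int maxTopicsPerRequest) {
        this.hardExpiryMs = hardExpiryMs;
        this.softExpiryMs = softExpiryMs;
        this.maxTopicsPerRequest = maxTopicsPerRequest;
    }

    void recordRefresh(String topic, long nowMs) {
        lastRefreshMs.put(topic, nowMs);
    }

    // Topics past the hard expiry are always refreshed; topics past the soft expiry
    // are added best-effort, stalest first, up to the per-request cap, so refreshes
    // are staggered over time in smaller batches.
    Set<String> topicsToRefresh(long nowMs) {
        Set<String> selected = lastRefreshMs.entrySet().stream()
                .filter(e -> nowMs - e.getValue() >= hardExpiryMs)
                .map(Map.Entry::getKey)
                .collect(Collectors.toCollection(LinkedHashSet::new));

        lastRefreshMs.entrySet().stream()
                .filter(e -> nowMs - e.getValue() >= softExpiryMs)
                .sorted(Map.Entry.comparingByValue())                 // stalest first
                .map(Map.Entry::getKey)
                .filter(t -> !selected.contains(t))
                .limit(Math.max(0, maxTopicsPerRequest - selected.size()))
                .forEach(selected::add);

        return selected;
    }
}
```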
Compatibility, Deprecation, and Migration Plan
The impact on clients will be limited to internal performance improvements; no public APIs, protocols, or other external factors are changed.
Rejected Alternatives
None.