

Status

Current state: Under Discussion

JIRA: KAFKA-8904

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

When a Kafka producer begins generating records, it must retrieve metadata about the topic it's producing to, specifically to determine the partition to which each record will be written. No prior knowledge of the working set of topics is provided by the client to the producer, so all topic metadata is fetched on demand as it is encountered.

For the majority of use-cases, where a fixed and/or manageable number of topics are processed, fetching topic metadata is a cost that's incurred upon startup but subsequently mitigated by maintaining a cache of the topics' metadata. However, in the case where a large or variable number of topics are written, clients may encounter degenerate metadata fetching behavior that can severely limit processing.

There are three primary factors that hinder client processing when working with a large number of topics:

  1. The number of metadata RPCs generated.
  2. The response size of the metadata RPCs.
  3. Throughput constriction while fetching metadata.

For (1), an RPC is generated every time an uncached topic's metadata must be fetched. During periods when a large number of uncached topics are processed (e.g. producer startup), this can result in a large number of RPCs being sent out to the brokers in a short period of time.

For (2), requests for metadata also ask to refresh metadata about all known, cached topics. As the working set grows, this can inflate the response size and its processing cost considerably. This further exacerbates (1) in that every subsequent request results in more metadata being transmitted.

For (3), fetching a topic's metadata is a blocking operation in the client, which is all the more surprising because it blocks inside a method that's advertised as asynchronous! This means that a pipeline of records destined for various topics may stall on the fetching of a single topic's metadata.

Public Interfaces

Briefly list any new interfaces that will be introduced as part of this proposal or any existing interfaces that will be removed or changed. The purpose of this section is to concisely call out the public contract that will come along with this feature.

A public interface is any change to the following:

  • Binary log format

  • The network protocol and api behavior

  • Any class in the public packages under clients

    • Configuration, especially client configuration

    • org/apache/kafka/common/serialization

    • org/apache/kafka/common

    • org/apache/kafka/common/errors

    • org/apache/kafka/clients/producer

    • org/apache/kafka/clients/consumer (eventually, once stable)

  • Monitoring

  • Command line tools and arguments

  • Anything else that will likely break existing users in some way when they upgrade

Proposed Changes

The first step to addressing the above issues is to make the fetching of metadata asynchronous. This fixes (3), and opens the path to resolving (1) by enabling metadata requests to be batched. Since the producer already batches the sending of records to partitions, delaying metadata fetches by no more than the batching delay doesn't change the interaction or expectations of the client.
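The batching described above can be sketched as follows. This is an illustrative, self-contained sketch, not the producer's actual implementation: the class and method names (`MetadataRequestBatcher`, `request`, `shouldFlush`, `flush`) are hypothetical. The idea is simply that requests for uncached topics are accumulated and drained as one metadata request once a small delay, no longer than the batching delay, has elapsed.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch: collect pending topic-metadata requests and flush
// them together in a single RPC after a bounded delay, instead of issuing
// one RPC per newly encountered topic.
public class MetadataRequestBatcher {
    private final long maxDelayMs;                       // flush delay, <= batching delay
    private final Set<String> pending = new LinkedHashSet<>();
    private long firstRequestAtMs = -1;

    public MetadataRequestBatcher(long maxDelayMs) {
        this.maxDelayMs = maxDelayMs;
    }

    /** Record that a topic's metadata is needed; no RPC is sent yet. */
    public void request(String topic, long nowMs) {
        if (pending.isEmpty())
            firstRequestAtMs = nowMs;
        pending.add(topic);
    }

    /** True once the oldest pending request has waited maxDelayMs. */
    public boolean shouldFlush(long nowMs) {
        return !pending.isEmpty() && nowMs - firstRequestAtMs >= maxDelayMs;
    }

    /** Drain all pending topics into a single batched metadata request. */
    public Set<String> flush() {
        Set<String> batch = new LinkedHashSet<>(pending);
        pending.clear();
        firstRequestAtMs = -1;
        return batch;
    }
}
```

Under this scheme, a burst of sends to many uncached topics at producer startup produces one metadata RPC per flush interval rather than one per topic.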

To address (2), the producer can maintain a staleness duration threshold for every topic, and only request metadata updates for the topics whose thresholds have been exceeded. A soft threshold could also be added such that best-effort fetching could be performed on a subset of the topics, so that metadata updates are staggered over time in smaller batches.
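The staleness-threshold idea might look like the following sketch. Again this is an assumption-laden illustration, not Kafka's code: the names (`TopicMetadataStaleness`, `topicsToRefresh`, the best-effort budget parameter) are hypothetical. Topics past the hard threshold must be refreshed; topics past only the soft threshold are refreshed best-effort, stalest first, up to a small budget, so updates are staggered over time in smaller batches.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: track when each topic's metadata was last refreshed,
// and select only the topics whose age exceeds a hard threshold, plus a
// bounded number of "soft-stale" topics for best-effort refreshing.
public class TopicMetadataStaleness {
    private final long softThresholdMs;
    private final long hardThresholdMs;
    private final Map<String, Long> lastRefreshMs = new HashMap<>();

    public TopicMetadataStaleness(long softThresholdMs, long hardThresholdMs) {
        this.softThresholdMs = softThresholdMs;
        this.hardThresholdMs = hardThresholdMs;
    }

    public void recordRefresh(String topic, long nowMs) {
        lastRefreshMs.put(topic, nowMs);
    }

    /** Topics past the hard threshold, plus up to bestEffortBudget
     *  soft-stale topics, stalest first. */
    public List<String> topicsToRefresh(long nowMs, int bestEffortBudget) {
        List<String> mustRefresh = new ArrayList<>();
        List<String> softStale = new ArrayList<>();
        for (Map.Entry<String, Long> e : lastRefreshMs.entrySet()) {
            long age = nowMs - e.getValue();
            if (age >= hardThresholdMs)
                mustRefresh.add(e.getKey());
            else if (age >= softThresholdMs)
                softStale.add(e.getKey());
        }
        // Oldest refresh time first, i.e. stalest topics get priority.
        softStale.sort(Comparator.comparingLong(lastRefreshMs::get));
        mustRefresh.addAll(softStale.subList(0, Math.min(bestEffortBudget, softStale.size())));
        return mustRefresh;
    }
}
```

Each metadata request would then name only the topics returned here, rather than the full set of known topics, keeping response sizes proportional to actual staleness instead of the size of the working set.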

Compatibility, Deprecation, and Migration Plan

  • What impact (if any) will there be on existing users?
  • If we are changing behavior how will we phase out the older behavior?
  • If we need special migration tools, describe them here.
  • When will we remove the existing behavior?

Rejected Alternatives

If there are alternative ways of accomplishing the same thing, what were they? The purpose of this section is to motivate why the design is the way it is and not some other way.
