
Status

Current state: "Under Discussion"

Discussion thread: here

JIRA: here

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

...

This will keep increasing until it hits the retry.backoff.max.ms value.
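As a rough illustration of how the delay grows until it hits the cap, the backoff for the n-th retry could be computed as an exponentially growing value bounded by retry.backoff.max.ms. The class and method names below are hypothetical, and the jitter-free doubling formula is a simplification, not Kafka's actual implementation:

```java
// Hypothetical sketch: exponential backoff capped at retry.backoff.max.ms.
// Names and the formula are illustrative, not the real client code.
public class RetryBackoff {
    private final long retryBackoffMs;     // e.g. retry.backoff.ms = 100
    private final long retryBackoffMaxMs;  // e.g. retry.backoff.max.ms = 1000

    public RetryBackoff(long retryBackoffMs, long retryBackoffMaxMs) {
        this.retryBackoffMs = retryBackoffMs;
        this.retryBackoffMaxMs = retryBackoffMaxMs;
    }

    /** Backoff before the given retry attempt (0-based), doubling each time. */
    public long backoffMs(int attempts) {
        double backoff = retryBackoffMs * Math.pow(2, attempts);
        return (long) Math.min(backoff, retryBackoffMaxMs);
    }
}
```

With retry.backoff.ms = 100 and retry.backoff.max.ms = 1000, successive retries would wait 100, 200, 400, 800, and then 1000 ms from the fifth attempt onward.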

Proposed Changes

Several different components make use of the retry.backoff.ms configuration, so each module that relies on it will have to be changed. That said, the logic for dynamically updating the retry backoff will be similar across all components, adapted to fit the component it lives in.

Admin Client

In KafkaAdminClient, we will have to modify how the retry backoff is calculated for calls that have failed and need to be retried. Instead of the current static retry backoff, we have to introduce a mechanism so that, for every failed call, the next retry time is calculated dynamically.
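For instance, each failed call could carry its own attempt counter, with the next retry deadline derived from that counter rather than from one static backoff. This is an illustrative sketch under that assumption, not KafkaAdminClient's actual Call class:

```java
// Illustrative only: a failed call tracks its own attempt count, so the
// next retry time grows per call instead of using one static backoff.
public class Call {
    private int attempts = 0;
    private long nextRetryMs = 0;

    /** On failure, schedule the next retry with exponentially growing backoff. */
    public void fail(long nowMs, long baseBackoffMs, long maxBackoffMs) {
        long factor = 1L << Math.min(attempts, 20); // bound the shift to avoid overflow
        long backoff = Math.min(baseBackoffMs * factor, maxBackoffMs);
        attempts++;
        nextRetryMs = nowMs + backoff;
    }

    /** Whether enough time has passed for this call to be retried. */
    public boolean readyToRetry(long nowMs) {
        return nowMs >= nextRetryMs;
    }
}
```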

Consumer

In KafkaConsumer, we have to change the retry backoff logic in the Fetcher, ConsumerNetworkClient, and Metadata. Since ConsumerNetworkClient and Metadata are also used by other clients, they would have to house their own retry backoff logic. The Fetcher, however, could query a dynamically updated retryBackOffMs from KafkaConsumer.
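The "query from the client" idea could look like the sketch below: the consumer owns the dynamically updated backoff value, and the fetcher reads it each time instead of caching a static one. The class name and methods are hypothetical, not the real Kafka internals:

```java
// Illustrative sketch: the consumer owns the dynamically updated backoff;
// the fetcher queries retryBackoffMs() rather than holding a static copy.
public class ConsumerBackoffHolder {
    private volatile long retryBackoffMs;  // volatile: read from the fetch path
    private final long retryBackoffMaxMs;

    public ConsumerBackoffHolder(long initialBackoffMs, long maxBackoffMs) {
        this.retryBackoffMs = initialBackoffMs;
        this.retryBackoffMaxMs = maxBackoffMs;
    }

    /** Called after a failed attempt: double the backoff up to the cap. */
    public void onFailure() {
        retryBackoffMs = Math.min(retryBackoffMs * 2, retryBackoffMaxMs);
    }

    /** Called by the fetcher whenever it needs the current backoff. */
    public long retryBackoffMs() {
        return retryBackoffMs;
    }
}
```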

Producer

For KafkaProducer, we have to change the retry backoff logic in ConsoleProducer, RecordAccumulator, Sender, TransactionManager, and Metadata. As mentioned above, Metadata is used by other clients, so it would have its own retry backoff logic. The remaining classes, as described in the "Consumer" section above, could query a dynamically updated retryBackOffMs from KafkaProducer.

Broker API Versions Command

Changes made for AdminClient and ConsumerNetworkClient would apply here as well. The main remaining change is for BrokerApiVersionsCommand to pass the appropriate arguments to AdminClient and ConsumerNetworkClient once those changes are made.

Compatibility, Deprecation, and Migration Plan



...

For users who have not set retry.backoff.ms explicitly, the default behavior will change: the backoff will grow up to 1000 ms. For users who have set retry.backoff.ms explicitly, the behavior will remain the same, since they may have specific requirements.
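Under this plan, a user who wants to keep a constant delay could pin both configurations; setting the new retry.backoff.max.ms equal to retry.backoff.ms disables the exponential growth. A minimal sketch, assuming those two config keys (the helper class is hypothetical):

```java
import java.util.Properties;

// Hypothetical helper: build a client config where the backoff never grows,
// by making the cap (retry.backoff.max.ms) equal to the base (retry.backoff.ms).
public class BackoffConfig {
    public static Properties fixedBackoff(long backoffMs) {
        Properties props = new Properties();
        props.setProperty("retry.backoff.ms", Long.toString(backoffMs));
        // New config proposed by this KIP; equal values keep today's constant delay.
        props.setProperty("retry.backoff.max.ms", Long.toString(backoffMs));
        return props;
    }
}
```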

Rejected Alternatives

  1. Defaulting retry.backoff.max.ms to the same value as retry.backoff.ms, so that existing behavior is always maintained: rejected for the reasons explained in the compatibility section.
  2. Defaulting retry.backoff.max.ms to 1000 ms unconditionally: rejected for the reasons explained in the compatibility section.