...

Note that the scope of this KIP covers only the user-facing public APIs needed for adding this client. We are not going to replace the whole StreamsKafkaClient at once; that replacement will be done in steps alongside this KIP.

 

Public Interfaces

...

Code Block (java)

import java.util.Map;

import org.apache.kafka.clients.admin.AdminClient;

public interface KafkaClientSupplier {
    // other existing APIs

    /**
     * Create an {@link AdminClient} which is used for internal topic management.
     *
     * @param config Supplied by the {@link StreamsConfig} given to the {@link KafkaStreams} instance
     * @return an instance of {@link AdminClient}
     */
    AdminClient getAdminClient(final Map<String, Object> config);
}
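For illustration, here is a minimal sketch of how a custom supplier might implement the new method. The types below are simplified local stand-ins (not the real Kafka classes) so the sketch is self-contained; with the actual client library, KafkaClientSupplier and AdminClient come from the org.apache.kafka packages and the method body would typically be return AdminClient.create(config);.

```java
import java.util.Map;

class AdminClientSupplierSketch {
    // Simplified stand-ins for org.apache.kafka.clients.admin.AdminClient and
    // the Streams KafkaClientSupplier, declared locally so the sketch compiles alone.
    interface AdminClient {}

    static final class StubAdminClient implements AdminClient {
        final Map<String, Object> config;
        StubAdminClient(final Map<String, Object> config) { this.config = config; }
    }

    interface KafkaClientSupplier {
        AdminClient getAdminClient(Map<String, Object> config);
    }

    // A custom supplier implements the single new method; with the real client
    // library the body would typically be: return AdminClient.create(config);
    static class MyClientSupplier implements KafkaClientSupplier {
        @Override
        public AdminClient getAdminClient(final Map<String, Object> config) {
            return new StubAdminClient(config);
        }
    }
}
```

The supplied config map is the one derived from the StreamsConfig handed to KafkaStreams, so a custom supplier can also adjust or augment it before constructing the client.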

 

Proposed Changes


The newly added API allows the Streams library to create an instance of AdminClient. We are going to create one instance within each thread and pass that instance into the created InternalTopicManager for the leader of the group only. Here is the list of changes we are going to make:

  • Purge repartition data on commit: this is summarized in KAFKA-6150. The AdminClient's deleteRecords API (added in KIP-204) will be used at commit intervals.
  • Create internal topics within InternalTopicManager: we will use the create-topics API for this, and we will also remove the endless checking loop after creation within StreamsPartitionAssignor; instead, after the rebalance we will let restoration retry within the main loop if the metadata is not yet known.
  • Compatibility check: we will use a network client for this purpose, as it is a one-time operation.
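The retry behavior described above can be sketched as a bounded loop. This is a hypothetical illustration (the names are not from the Kafka codebase): rather than spinning inside the assignor until the newly created topics' metadata appears, each main-loop iteration simply re-attempts restoration until the metadata is known.

```java
import java.util.Optional;
import java.util.function.Supplier;

class MetadataRetry {
    // Sketch of "retry within the main loop": each attempt asks for the topic
    // metadata; if it is not yet known, we fall through and the next loop
    // iteration tries again, instead of blocking in an endless check after creation.
    static <T> Optional<T> retryUntilKnown(final Supplier<Optional<T>> fetchMetadata,
                                           final int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            final Optional<T> metadata = fetchMetadata.get();
            if (metadata.isPresent()) {
                return metadata;  // metadata became available; restoration can proceed
            }
            // not known yet: let the next main-loop iteration retry
        }
        return Optional.empty();  // still unknown after maxAttempts
    }
}
```

The point of the change is that waiting is interleaved with the rest of the processing loop rather than holding the assignor hostage until topic creation is visible.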

 

Compatibility, Deprecation, and Migration Plan

  • Since we are only adding a new function to the public API, the change is binary compatible though not source compatible; users who customize the KafkaClientSupplier only need to make a one-line change and recompile.

Rejected Alternatives

None.