Status
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
Users in large infrastructure setups often need to process and/or join data in Flink that lives in different Kafka clusters. In addition, multiple Kafka clusters may be relevant when Kafka consumers need to migrate between Kafka clusters.
Some of the challenging use cases that this feature solves are:
- Transparent Kafka cluster addition/removal without Flink job restart.
- Transparent Kafka topic addition/removal without Flink job restart.
- Direct integration with Hybrid Source.
This source will extend the KafkaSource to be able to read from multiple Kafka clusters within a single source. In addition, the source can dynamically adjust the clusters and topics it consumes from, without requiring a Flink job restart.
Public Interfaces
The source will be built on the FLIP-27: Refactor Source Interface APIs to integrate with Flink and will support both bounded and unbounded jobs.
This proposal does not include any changes to the existing public interfaces of the KafkaSource. A new MultiClusterKafkaSource builder will serve as the public API, and all other APIs in this proposal will be marked as Internal.
The new source will go into the Kafka connector module and will follow any connector repository changes that apply to the KafkaSource.
An example of building the new source in unbounded mode:
MultiClusterKafkaSource.<String>builder()
    // some default implementations will be provided (file based, statically defined streams)
    .setKafkaMetadataService(new KafkaMetadataServiceImpl())
    .setStreamIds(List.of("my-stream-1", "my-stream-2"))
    .setGroupId("myConsumerGroup")
    .setDeserializer(KafkaRecordDeserializationSchema.valueOnly(StringDeserializer.class))
    .setStartingOffsets(OffsetsInitializer.earliest())
    .setProperties(properties)
    .build();
Basic Idea
The components of the KafkaSource solve the requirements for a single Kafka cluster, and these low level components are composed in order to provide the functionality to read from multiple Kafka clusters. For example, an underlying KafkaSourceEnumerator will be used to discover splits, checkpoint assigned splits, and do periodic partition discovery. Likewise, an underlying KafkaSourceReader will be used to poll and deserialize records, checkpoint split state, and commit offsets back to Kafka.
With the ability to dynamically change the underlying source components without job restart, there needs to exist a coordination mechanism to manage how the underlying KafkaSourceEnumerators and KafkaSources interact with multiple clusters and multiple topics. To discover the required clusters and topics, a Kafka Metadata Service provides the metadata and allows the source components to reconcile metadata changes; only the MultiClusterKafkaSourceEnumerator interacts with the Kafka Metadata Service. Periodic metadata discovery will be supported via source configuration, just like topic partition discovery is supported via an interval in the KafkaSource.
A default implementation will be provided so that a native Kubernetes ConfigMap (yaml/json file) can easily control the metadata. This implementation targets the basic use cases, where external monitoring informs how users change the metadata.
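For illustration, the file-based metadata for this default implementation might look like the sketch below. This is purely hypothetical; the exact file schema (keys, nesting) is an implementation detail and is not defined by this proposal.

streams:
  - streamId: my-stream-1
    clusters:
      - name: cluster-primary
        bootstrapServers: broker-1:9092,broker-2:9092
        topics:
          - topic-a
  - streamId: my-stream-2
    clusters:
      - name: cluster-secondary
        bootstrapServers: broker-3:9092
        topics:
          - topic-b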
In addition, KafkaStream is introduced as an abstraction that contains a logical mapping to physical Kafka clusters and Kafka topics, which can be used to derive metadata for source state. Changes in metadata are detected by the source enumerator and propagated to source readers via source events to reconcile the changes.
Reconciliation is designed as restarts of the underlying KafkaSourceEnumerator and KafkaSourceReader; this enables us to "remove" splits. There is careful consideration for resource cleanup and error handling, for example thread pools, metrics, and KafkaConsumer errors.
Other required functionality leverages and composes the existing KafkaSource implementation for discovering Kafka topic partition offsets, committing offsets, doing the actual Kafka Consumer polling, snapshotting state, and split coordination per Kafka cluster. We will reuse the code of the KafkaSource in order to achieve this.
To make the source more user friendly, a MultiClusterKafkaSourceBuilder will be provided (e.g. in batch mode, continuous KafkaMetadataService discovery should not be turned on; metadata should only be fetched at startup).
Proposed Changes
KafkaClusterIdentifier
This logical abstraction is introduced because bootstrap servers may change even though the "cluster" is still the same. Thus, the name is used as the unique identifier, which also has the added benefit of allowing a short name for connector related metrics. The bootstrap servers can be used as the name in simple use cases.
@PublicEvolving
public class KafkaClusterIdentifier implements Comparable<KafkaClusterIdentifier>, Serializable {
  private final String name;
  private final String bootstrapServers;
  ...
}
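For illustration, a cluster identifier might be constructed as below. This is a minimal sketch; the constructor is assumed from the fields above.

// Hypothetical usage: a short, stable name plus the current bootstrap servers.
KafkaClusterIdentifier cluster =
    new KafkaClusterIdentifier("cluster-primary", "broker-1:9092,broker-2:9092");

// In simple use cases the bootstrap servers can double as the name.
KafkaClusterIdentifier simple =
    new KafkaClusterIdentifier("broker-1:9092", "broker-1:9092");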
KafkaStream
It is possible that a Kafka stream is composed of multiple topics on multiple Kafka clusters. This flexible and general abstraction does not require any conventions on topic naming, but implementations can make such assumptions if desired. In the simplest case, a Kafka stream is a single topic on a single Kafka cluster.
@PublicEvolving
public class KafkaStream implements Serializable {
  private final String streamId;
  private final Map<KafkaClusterIdentifier, Set<String>> kafkaClusterTopicMap;

  public KafkaStream(
      String streamId, Map<KafkaClusterIdentifier, Set<String>> kafkaClusterTopicMap) {
    this.streamId = streamId;
    this.kafkaClusterTopicMap = kafkaClusterTopicMap;
  }
  ...
}
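For example, a logical stream backed by a topic on each of two clusters could be described as follows (a sketch; the KafkaClusterIdentifier constructor is assumed as in the previous section):

// Hypothetical: one logical stream that maps to two physical clusters.
KafkaStream stream =
    new KafkaStream(
        "my-stream-1",
        Map.of(
            new KafkaClusterIdentifier("cluster-primary", "broker-1:9092"),
                Set.of("topic-a"),
            new KafkaClusterIdentifier("cluster-secondary", "broker-3:9092"),
                Set.of("topic-a")));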
KafkaMetadataService
This is responsible for resolving Kafka metadata from streams. It may be backed by an external service or simply by something logical that is contained in memory. A config map file based implementation will be provided as well for convenience. Similarly to the KafkaSource subscriber integration, the #getAllStreams() API is supported here so that streams can be filtered, for example by a regex.
This interface represents the source of truth for the current metadata; metadata that is removed is considered non-active (e.g. removing a cluster from the return value means that the cluster is non-active and should not be read from).
@PublicEvolving
public interface KafkaMetadataService extends AutoCloseable, Serializable {
  /**
   * Get current metadata for all streams.
   *
   * @return set of all streams
   */
  Set<KafkaStream> getAllStreams();

  /**
   * Get current metadata for queried streams.
   *
   * @param streamIds stream full names
   * @return map of stream name to metadata
   */
  Map<String, KafkaStream> describeStreams(Collection<String> streamIds);

  /**
   * Check if the cluster is active.
   *
   * @param kafkaClusterIdentifier Kafka cluster identifier
   * @return boolean whether the cluster is active
   */
  boolean isClusterActive(KafkaClusterIdentifier kafkaClusterIdentifier);
}
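A minimal in-memory implementation could look like the sketch below. This is illustrative only; the getStreamId() and getKafkaClusterTopicMap() accessors on KafkaStream are assumptions, and a production implementation would add validation and error handling.

// Minimal sketch of an in-memory KafkaMetadataService backed by a static map.
public class InMemoryKafkaMetadataService implements KafkaMetadataService {
  private final Map<String, KafkaStream> streams = new HashMap<>();

  public InMemoryKafkaMetadataService(Set<KafkaStream> allStreams) {
    // getStreamId() is an assumed accessor on KafkaStream.
    allStreams.forEach(stream -> streams.put(stream.getStreamId(), stream));
  }

  @Override
  public Set<KafkaStream> getAllStreams() {
    return new HashSet<>(streams.values());
  }

  @Override
  public Map<String, KafkaStream> describeStreams(Collection<String> streamIds) {
    Map<String, KafkaStream> result = new HashMap<>();
    for (String streamId : streamIds) {
      KafkaStream stream = streams.get(streamId);
      if (stream != null) {
        result.put(streamId, stream);
      }
    }
    return result;
  }

  @Override
  public boolean isClusterActive(KafkaClusterIdentifier kafkaClusterIdentifier) {
    // A cluster is active as long as some stream still maps to it.
    return streams.values().stream()
        .anyMatch(s -> s.getKafkaClusterTopicMap().containsKey(kafkaClusterIdentifier));
  }

  @Override
  public void close() {}
}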
KafkaStreamSubscriber
This is similar to KafkaSource's KafkaSubscriber. A regex subscriber will be provided to match streams by a regex pattern.
@PublicEvolving
public interface KafkaStreamSubscriber extends Serializable {

  /** Get the set of subscribed streams. */
  Set<KafkaStream> getSubscribedStreams(KafkaMetadataService kafkaMetadataService);
}
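The regex subscriber could be sketched as follows (illustrative; it assumes a getStreamId() accessor on KafkaStream):

// Hypothetical sketch of a subscriber that matches stream ids against a regex.
public class StreamIdRegexSubscriber implements KafkaStreamSubscriber {
  private final Pattern pattern;

  public StreamIdRegexSubscriber(String regex) {
    this.pattern = Pattern.compile(regex);
  }

  @Override
  public Set<KafkaStream> getSubscribedStreams(KafkaMetadataService kafkaMetadataService) {
    return kafkaMetadataService.getAllStreams().stream()
        .filter(stream -> pattern.matcher(stream.getStreamId()).matches())
        .collect(Collectors.toSet());
  }
}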
MetadataUpdateEvent
This is a metadata update event containing the current metadata, sent from enumerator to reader. The metadata does not include stream information since it is not required by the reader, which does not directly interact with streams or the KafkaMetadataService.
@Internal
public class MetadataUpdateEvent implements SourceEvent {
  private final Map<KafkaClusterIdentifier, Set<String>> currentClusterTopics;
  ...
}
MultiClusterKafkaSourceEnumerator
This enumerator is responsible for discovering and assigning splits from 1+ clusters. At startup, the enumerator will invoke the KafkaStreamSubscriber and reconcile changes from state. Source events will be sent to the source readers to reconcile the metadata. This enumerator has the ability to poll the KafkaMetadataService periodically for stream discovery. In addition, restarting enumerators involves clearing outdated metrics, since clusters may be removed and their metrics should be removed with them.
@PublicEvolving
public class MultiClusterKafkaSourceEnumerator
    implements SplitEnumerator<MultiClusterKafkaSourceSplit, MultiClusterKafkaSourceEnumState> {

  private final Map<KafkaClusterIdentifier, SplitEnumerator<KafkaPartitionSplit, KafkaSourceEnumState>>
      clusterEnumeratorMap;
  private final Map<KafkaClusterIdentifier, StoppableKafkaEnumContextProxy> clusterEnumContextMap;
  private final KafkaStreamSubscriber kafkaStreamSubscriber;
  private final KafkaMetadataService kafkaMetadataService;
  private Map<KafkaClusterIdentifier, Set<String>> activeClusterTopicsMap;

  private void restartEnumerators(
      KafkaClusterIdentifier kafkaClusterId, Set<TopicPartition> enumeratorState) {}
  ...
}
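To make the reconciliation step concrete, the periodic metadata check might conceptually proceed as in the sketch below. The helper names (toClusterTopicsMap, checkpointedAssignmentsFor) are hypothetical; only restartEnumerators appears in this proposal.

// Conceptual sketch of periodic metadata reconciliation in the enumerator.
private void onMetadataCheck(Set<KafkaStream> latestStreams) {
  Map<KafkaClusterIdentifier, Set<String>> latestClusterTopicsMap =
      toClusterTopicsMap(latestStreams); // hypothetical helper
  for (Map.Entry<KafkaClusterIdentifier, Set<String>> entry :
      latestClusterTopicsMap.entrySet()) {
    // Restart the sub-enumerator for any cluster whose topic set changed.
    if (!entry.getValue().equals(activeClusterTopicsMap.get(entry.getKey()))) {
      restartEnumerators(entry.getKey(), checkpointedAssignmentsFor(entry.getKey()));
    }
  }
  // Clusters absent from the latest metadata are closed and their metrics cleared.
  activeClusterTopicsMap = latestClusterTopicsMap;
}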
StoppableKafkaEnumContextProxy
This enumerator context proxy facilitates closing the executors used by the scheduled callables in the underlying KafkaSourceEnumerators, and it wraps the KafkaPartitionSplits with cluster information.
KafkaSourceEnumerators need to properly clean up the topic partition discovery scheduled callable on restart. This proxy can also safely handle errors from the scheduled callables when the metadata is not in sync with the source state.
@Internal
public class StoppableKafkaEnumContextProxy
    implements SplitEnumeratorContext<KafkaPartitionSplit>, AutoCloseable {

  private final KafkaClusterIdentifier kafkaClusterIdentifier;
  private final KafkaMetadataService kafkaMetadataService;
  private final SplitEnumeratorContext<MultiClusterKafkaSourceSplit> enumContext;
  private final ScheduledExecutorService subEnumeratorWorker;

  /** Wrap splits with cluster metadata. */
  public void assignSplits(SplitsAssignment<KafkaPartitionSplit> newSplitAssignments) {}
  ...
}
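For example, the split wrapping in assignSplits could conceptually work as follows. Only the method signature appears in this proposal; the body below is a hypothetical sketch, and the MultiClusterKafkaSourceSplit constructor is assumed from its fields.

/** Wrap splits with cluster metadata before delegating to the real enumerator context. */
public void assignSplits(SplitsAssignment<KafkaPartitionSplit> newSplitAssignments) {
  Map<Integer, List<MultiClusterKafkaSourceSplit>> wrapped = new HashMap<>();
  newSplitAssignments
      .assignment()
      .forEach(
          (subtask, splits) ->
              wrapped.put(
                  subtask,
                  splits.stream()
                      .map(split -> new MultiClusterKafkaSourceSplit(kafkaClusterIdentifier, split))
                      .collect(Collectors.toList())));
  enumContext.assignSplits(new SplitsAssignment<>(wrapped));
}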
GetMetadataUpdateEvent
This is a metadata update event requesting the current metadata, sent from reader to enumerator.
At startup, the reader will first send a source event to obtain the latest metadata from the enumerator before working on the splits (from state, if they exist). This is done because it is hard to reason about reader failures during split assignment; the most reliable protocol is for the readers to request metadata at startup.
This enables us to filter splits and "remove" invalid splits (e.g. remove a topic partition from consumption). For example, at startup, checkpointed splits will be stored but not assigned in an internal data structure, and only splits that are valid according to the metadata will be assigned.
@Internal
public class GetMetadataUpdateEvent implements SourceEvent {}
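The startup protocol and split filtering described above might conceptually look like the sketch below. The getter names on MultiClusterKafkaSourceSplit and the pendingSplits structure are assumptions for illustration.

// Conceptual sketch: the reader requests metadata before activating checkpointed splits.
public void start() {
  context.sendSourceEventToCoordinator(new GetMetadataUpdateEvent());
}

// On MetadataUpdateEvent, only splits valid per the current metadata are assigned.
private void onMetadataUpdate(Map<KafkaClusterIdentifier, Set<String>> currentClusterTopics) {
  for (MultiClusterKafkaSourceSplit split : pendingSplits) {
    Set<String> topics = currentClusterTopics.get(split.getKafkaClusterId());
    if (topics != null && topics.contains(split.getKafkaPartitionSplit().getTopic())) {
      // Valid split: route it to the underlying KafkaSourceReader for that cluster.
      ...
    }
    // Invalid splits are filtered out, effectively "removing" them from consumption.
  }
}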
MultiClusterKafkaSourceReader
This reader is responsible for reading from 1+ clusters.
There will be error handling related to reconciliation exceptions (e.g. the KafkaConsumer WakeupException if a KafkaSourceReader restarts in the middle of a poll). In addition, restarting readers involves releasing resources from the underlying thread pools. Furthermore, this enables us to remove topics from KafkaSourceReader processing, since the metadata reconciliation induces a KafkaSourceReader restart, in which splits can be filtered according to the current metadata.
@PublicEvolving
public class MultiClusterKafkaSourceReader<T>
    implements SourceReader<T, MultiClusterKafkaSourceSplit> {

  @VisibleForTesting
  final NavigableMap<KafkaClusterIdentifier, KafkaSourceReader<T>> clusterReaderMap;

  private void restartReader(
      KafkaClusterIdentifier kafkaClusterId, List<KafkaPartitionSplit> readerState) {}
  ...
}
MultiClusterKafkaSourceSplit
This wraps KafkaSource's KafkaPartitionSplit to include cluster information.
@PublicEvolving
public class MultiClusterKafkaSourceSplit implements SourceSplit {
  private final KafkaClusterIdentifier kafkaClusterId;
  private final KafkaPartitionSplit kafkaPartitionSplit;
  ...
}
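As an illustration, the split id could be derived by combining the cluster name with the wrapped split's id. The exact encoding is an implementation detail; the getName() accessor is assumed.

@Override
public String splitId() {
  // Hypothetical encoding: prefix the Kafka split id with the cluster name.
  return kafkaClusterId.getName() + "|" + kafkaPartitionSplit.splitId();
}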
MultiClusterKafkaSource
Connecting it all together...
@PublicEvolving
public class MultiClusterKafkaSource<T>
    implements Source<T, MultiClusterKafkaSourceSplit, MultiClusterKafkaSourceEnumState>,
        ResultTypeQueryable<T> {

  private final KafkaStreamSubscriber kafkaStreamSubscriber;
  private final KafkaMetadataService kafkaMetadataService;
  private final KafkaRecordDeserializationSchema<T> deserializationSchema;
  private final OffsetsInitializer startingOffsetsInitializer;
  private final OffsetsInitializer stoppingOffsetsInitializer;
  private final Properties properties;
  private final Boundedness boundedness;
  ...
}
Compatibility, Deprecation, and Migration Plan
The source is opt-in and requires users to make code changes.
In the same vein as the migration from FlinkKafkaConsumer to KafkaSource, the source state is incompatible between KafkaSource and MultiClusterKafkaSource, so it is recommended to reset all state, or to reset partial state by setting a different uid and starting the application from non-restored state.
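For example, state for just this operator can be reset by assigning a new uid and restoring the savepoint with allowNonRestoredState (standard Flink APIs; the source is assumed to be built as in the earlier builder example):

// Assigning a fresh uid ensures the old KafkaSource state is not restored into this operator;
// restore the job with --allowNonRestoredState to drop the orphaned state.
DataStream<String> stream =
    env.fromSource(
            multiClusterKafkaSource,
            WatermarkStrategy.noWatermarks(),
            "multi-cluster-kafka-source")
        .uid("multi-cluster-kafka-source-v2");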
Test Plan
This will be tested by unit and integration tests. The work will extend existing KafkaSource test utilities in Flink to exercise multiple clusters.
Rejected Alternatives
None