Status
Current state: "Under Discussion"
Discussion thread: here
JIRA:
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
The Kafka Connect framework allows sink connector tasks to do their own offset tracking in case they want to do asynchronous processing (for instance, buffering records sent by the framework to be flushed to the sink system at some later time). The SinkTask::preCommit method allows sink task implementations to provide the framework with the consumer offsets that are safe to commit for each topic partition. There is currently an incompatibility between sink connectors that override the SinkTask::preCommit method and SMTs that mutate the topic field.
The problem has been present since the SinkTask::preCommit method's inception and is rooted in a mismatch between the Kafka topic partitions and offsets passed to SinkTask::open / SinkTask::preCommit (the original topic partitions and offsets, before any transformations are applied) and the topic/partition/offset present in the SinkRecord instances that the SinkTask::put method receives (after transformations are applied). Since those records are all the information the connector has to implement any kind of internal offset tracking, the topic/partition/offset it can return in preCommit will correspond to the transformed topic/partition/offset, when the framework actually expects it to be the original topic/partition/offset.
Previous fixes addressed this only for sink connectors that do not override SinkTask::preCommit. For the others, it was acknowledged that "broader API changes are required".
Public Interfaces
org.apache.kafka.connect.sink.SinkRecord
Add new fields along with their corresponding getters and constructor:
```java
private final String originalTopic;
private final Integer originalKafkaPartition;
private final long originalKafkaOffset;

...

public SinkRecord(String topic, int partition, Schema keySchema, Object key, Schema valueSchema, Object value,
                  long kafkaOffset, Long timestamp, TimestampType timestampType, Iterable<Header> headers,
                  String originalTopic, Integer originalKafkaPartition, long originalKafkaOffset) {
    ...
}

/**
 * @return the topic corresponding to the Kafka record before any transformations were applied. This should be
 * used for any internal offset tracking purposes rather than {@link #topic()}, in order to be compatible
 * with SMTs that mutate the topic name.
 */
public String originalTopic() {
    return originalTopic;
}

/**
 * @return the topic partition corresponding to the Kafka record before any transformations were applied. This
 * should be used for any internal offset tracking purposes rather than {@link #kafkaPartition()}, in order to be
 * compatible with SMTs that mutate the topic partition.
 */
public Integer originalKafkaPartition() {
    return originalKafkaPartition;
}

/**
 * @return the offset corresponding to the Kafka record before any transformations were applied. This
 * should be used for any internal offset tracking purposes rather than {@link #kafkaOffset()}, in order to be
 * compatible with SMTs that mutate the offset value.
 */
public long originalKafkaOffset() {
    return originalKafkaOffset;
}
```
Proposed Changes
Expose the original Kafka topic, topic partition, and offset via new SinkRecord public methods, and ask sink connectors to use that information for offset tracking purposes. Note that while a record's offset can't be modified via the standard SinkRecord::newRecord methods that SMTs are expected to use, SinkRecord has public constructors that would allow SMTs to return records with modified offsets. This is why the proposed changes include a new SinkRecord::originalKafkaOffset method as well.
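A hedged sketch of how a sink task could key its offset tracking by the proposed getters. The Record class below is a minimal stand-in for SinkRecord carrying only the fields relevant here, not the real Connect class, and the map key format is an illustrative choice:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class OriginalOffsetTracking {
    // Minimal stand-in for org.apache.kafka.connect.sink.SinkRecord, carrying
    // both the transformed and the proposed original coordinates.
    static class Record {
        final String topic; final int partition; final long offset;
        final String originalTopic; final int originalKafkaPartition; final long originalKafkaOffset;
        Record(String t, int p, long o, String ot, int op, long oo) {
            topic = t; partition = p; offset = o;
            originalTopic = ot; originalKafkaPartition = op; originalKafkaOffset = oo;
        }
    }

    // Key the offset map by the ORIGINAL topic/partition, as proposed, so the
    // values returned from preCommit match what the framework passed to open.
    static Map<String, Long> safeOffsets(Iterable<Record> processed) {
        Map<String, Long> offsets = new HashMap<>();
        for (Record r : processed) {
            offsets.put(r.originalTopic + "-" + r.originalKafkaPartition,
                        r.originalKafkaOffset + 1); // next offset to commit
        }
        return offsets;
    }

    public static void main(String[] args) {
        // Consumed from orders-0 at offset 42; an SMT renamed the topic.
        Record r = new Record("renamed-orders", 0, 42L, "orders", 0, 42L);
        System.out.println(safeOffsets(List.of(r))); // {orders-0=43}
    }
}
```

Because the map is keyed by the original coordinates, the framework can match the returned offsets against the topic partitions it actually consumes from.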
Compatibility, Deprecation, and Migration Plan
Backwards Compatibility
This proposal is backward compatible such that existing sink connector implementations will continue to work as before.
Forward Compatibility
To ensure that connectors using these new methods can still be deployed on older versions of Kafka Connect, developers should use a try/catch block to catch the NoSuchMethodError or NoClassDefFoundError thrown by Connect workers running an older version of AK. It should be clearly documented, however, that such connectors will not be compatible with topic/partition/offset-mutating SMTs in those environments.
Rejected Alternatives
Address the problem entirely within the framework, by mapping each transformed topic back to the original topic.
This would only work when there is no overlap between the transformed topic names, and would break for the rest of the transformations (e.g. a static transformation that sets topic = "a").
Even if support were limited to those cases, considerable bookkeeping would be required to validate that the transformation chain adheres to that expectation (and to fail fast if it doesn't).
Expose the entire original record instead of only the topic/partition/offset (e.g. originalSinkRecord).
We should not expose the original key/value, since transformations might be modifying them for security reasons.
Create a method in the SinkTaskContext to get this information, instead of updating SinkRecord (e.g. SinkTaskContext.getOriginalTopic(SinkRecord sr) / SinkTaskContext.getOriginalKafkaPartition(SinkRecord sr)).
This requires extra bookkeeping without concrete value.
Update SinkTask::put in any way to pass the new information outside SinkRecord (e.g. a Map or a derived class).
This would be a much more disruptive change without considerable pros.