...

Discussion thread: here

JIRA: here

PR: here (WIP)

Motivation

Currently there are two error handling options in Kafka Connect, “none” and “all”. Option “none” configures the connector to fail fast, and option “all” ignores broken records.

If users want to store their broken records, they have to configure a dead letter queue, which in some cases is too much work for them. The complexity comes from maintaining additional topics and connectors, which costs extra time and money. Consider a user with 3 topics consumed by S3, HDFS and JDBC sink connectors respectively: that user has to maintain 3 more connectors consuming the three DLQs in order to put broken records where they should go. This new option gives users the choice to maintain only half as many connectors while still having broken records stored in each destination system.

Some sink connectors have the ability to deal with broken records themselves. For example, a JDBC sink connector can store the broken raw bytes in a separate table, and an S3 connector can store them in a zipped file.

...

In Kafka Connect, the configuration errors.tolerance

...

will have a third option "continue" besides "none" and "all".
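To illustrate, a sink connector configuration using the proposed option might look like the following. The connector name, class, and topic are hypothetical placeholders; only the errors.tolerance line reflects the proposal.

```
# Hypothetical sink connector configuration (illustrative names)
name=example-s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=events
# Proposed: hand broken records to the task instead of failing ("none") or dropping ("all")
errors.tolerance=continue
```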

A new method for handling broken records will be added to SinkTask:

public void putBrokenRecord(Collection<SinkRecord> records) {
    // Default implementation: indicates that broken record handling is not supported by this connector.
    throw new UnsupportedOperationException();
}

Connectors can override this method to handle broken records.
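As a sketch of what such an override could look like, the class below buffers broken records in memory (a real connector, e.g. a JDBC sink, might instead write the raw values to a side table). A minimal stand-in `SinkRecord` is defined here in place of `org.apache.kafka.connect.sink.SinkRecord` so the example is self-contained; the class names are illustrative, not part of the proposal.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Minimal stand-in for org.apache.kafka.connect.sink.SinkRecord (value only).
class SinkRecord {
    private final Object value;
    SinkRecord(Object value) { this.value = value; }
    public Object value() { return value; }
}

// Illustrative sink task that overrides the proposed method instead of
// inheriting the default UnsupportedOperationException behavior.
class ExampleSinkTask {
    final List<Object> brokenValues = new ArrayList<>();

    public void putBrokenRecord(Collection<SinkRecord> records) {
        // A real connector would persist these raw values (side table, zipped
        // file, ...); here we simply buffer them to show the override point.
        for (SinkRecord r : records) {
            brokenValues.add(r.value());
        }
    }
}
```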

Proposed Changes

Add a third option to error handling, which should behave like “continue” when an error occurs in a Converter or SMT. The framework should send the raw broken message directly to the SinkTask.
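The framework-side routing could be sketched as follows. This is only an assumption about how the dispatch might work, not the actual Kafka Connect runtime code: `Converter`, `RawRecord`, and `DispatchSketch` are simplified stand-ins, and the fallback on UnsupportedOperationException reflects the default implementation proposed above.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Simplified stand-in for a record whose bytes failed conversion (illustrative name).
class RawRecord {
    final byte[] raw;
    RawRecord(byte[] raw) { this.raw = raw; }
}

// Hypothetical sketch of framework dispatch under errors.tolerance=continue.
class DispatchSketch {
    interface Converter { Object convert(byte[] raw); }

    // Collects what the task would receive, for demonstration only.
    static final List<RawRecord> brokenHandled = new ArrayList<>();

    static void putBrokenRecord(Collection<RawRecord> records) {
        brokenHandled.addAll(records);
    }

    static void dispatch(byte[] raw, Converter converter) {
        try {
            Object value = converter.convert(raw); // conversion/SMTs may throw on broken input
            // Normal path: deliver the converted record via SinkTask.put (omitted).
        } catch (Exception e) {
            try {
                // Proposed "continue" path: hand the raw broken record to the task.
                putBrokenRecord(List.of(new RawRecord(raw)));
            } catch (UnsupportedOperationException unsupported) {
                // Task does not override putBrokenRecord; fall back to the
                // existing "none"/"all" tolerance behavior (omitted).
            }
        }
    }
}
```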

...