...
For sink connectors, the original record that caused the failure (as consumed from the Kafka topic) will be written to a configurable dead letter queue topic.
| Config Option | Description | Default Value | Domain |
|---|---|---|---|
| errors.deadletterqueue.topic.name | The name of the dead letter queue topic. If not set, this feature will be disabled. | "" | A valid Kafka topic name |
| errors.deadletterqueue.context.headers.enable | If true, multiple headers will be added to annotate the record with the error context. | false | Boolean |
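For example, a sink connector configuration might enable the dead letter queue with both options (the topic name below is illustrative, not prescribed by this proposal):

```properties
# Route failed records to this topic; leaving it unset disables the feature.
errors.deadletterqueue.topic.name=my-connector-errors
# Annotate each routed record with error-context headers.
errors.deadletterqueue.context.headers.enable=true
```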
If the property errors.deadletterqueue.context.headers.enable is set to true, the following headers will be added to the produced raw message (only if they do not already exist in the message). All values will be Strings.
| Header Name | Description |
|---|---|
| __connect.errors.topic | The name of the topic that contained the message. |
| __connect.errors.partition | The original partition of the message (encoded as a String). |
| __connect.errors.offset | The original offset of the message (encoded as a String). |
| __connect.errors.connector.name | The name of the connector that encountered the error. |
| __connect.errors.task.id | The id of the task that encountered the error (encoded as a String). |
| __connect.errors.stage | The name of the stage where the error occurred. |
| __connect.errors.class.name | The fully qualified name of the class that caused the error. |
| __connect.errors.exception.class.name | The fully qualified class name of the exception that was thrown during execution. |
| __connect.errors.exception.message | The message in the exception. |
| __connect.errors.exception.stacktrace | The stack trace of the exception. |
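A consumer of the dead letter queue can use these headers to reconstruct where a failure occurred. The sketch below models the headers as a plain `Map<String, String>` in place of the `Headers` object a real Kafka consumer would provide; the class and method names are illustrative, not part of this proposal:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: summarizing the error context attached to a dead letter queue
// record, using the header names defined in the table above.
public class DlqHeaderDecoder {

    static String describeFailure(Map<String, String> headers) {
        // Fall back to "?" for any header that was not set.
        String topic = headers.getOrDefault("__connect.errors.topic", "?");
        String partition = headers.getOrDefault("__connect.errors.partition", "?");
        String offset = headers.getOrDefault("__connect.errors.offset", "?");
        String stage = headers.getOrDefault("__connect.errors.stage", "?");
        String exception = headers.getOrDefault("__connect.errors.exception.class.name", "?");
        return String.format("record %s-%s@%s failed in stage %s: %s",
                topic, partition, offset, stage, exception);
    }

    public static void main(String[] args) {
        // Hypothetical header values, as a DLQ producer would have set them.
        Map<String, String> headers = new HashMap<>();
        headers.put("__connect.errors.topic", "orders");
        headers.put("__connect.errors.partition", "0");
        headers.put("__connect.errors.offset", "42");
        headers.put("__connect.errors.stage", "VALUE_CONVERTER");
        headers.put("__connect.errors.exception.class.name",
                "org.apache.kafka.connect.errors.DataException");
        // → record orders-0@42 failed in stage VALUE_CONVERTER:
        //   org.apache.kafka.connect.errors.DataException
        System.out.println(describeFailure(headers));
    }
}
```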
Metrics
The following new metrics will monitor the number of failures and the behavior of the response handler, specifically through a set of counters:
...