...
When does a record not contain a valid timestamp?
- If you are using the default FailOnInvalidTimestamp extractor (or ConsumerRecordTimestampExtractor, which was the default in v0.10.0 and v0.10.1), it is most likely that your records do not carry an embedded timestamp (embedded record timestamps were introduced into Kafka's message format in Kafka 0.10). This might happen if, for example, you consume a topic that is written by old Kafka producer clients (i.e., version 0.9 or earlier) or by third-party producer clients. Another situation where this may happen is after upgrading your Kafka cluster from 0.9 to 0.10, where all the data that was generated with 0.9 does not include the 0.10 message timestamps.
- If you are using a custom timestamp extractor, make sure that your extractor properly handles invalid (negative) timestamps, where "properly" depends on the semantics of your application. For example, you can return a default or an estimated timestamp if you cannot extract a valid one (maybe the timestamp field in your data is just missing).
- As of Kafka 0.10.2, there are two alternative extractors, LogAndSkipOnInvalidTimestamp and UsePreviousTimeOnInvalidTimestamp, that handle invalid record timestamps more gracefully (but with potential data loss or semantic impact).
- You can also switch to processing-time semantics via WallclockTimestampExtractor; whether such a fallback is an appropriate response to this situation depends on your use case.
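Whichever extractor you pick is set through the Streams configuration. A minimal sketch, using the config key as a plain string so the example needs no Kafka dependency: the key is "default.timestamp.extractor" in Kafka 1.0+ (it was "timestamp.extractor" in the 0.10.x releases), and the value is the extractor's fully qualified class name.

```java
import java.util.Properties;

// Minimal sketch: selecting a timestamp extractor in the Streams config.
// In real code you would typically use StreamsConfig's constant for the
// key and Class.getName() for the value instead of raw strings.
public class TimestampExtractorConfig {

    public static Properties streamsProps() {
        Properties props = new Properties();
        // "default.timestamp.extractor" since Kafka 1.0
        // ("timestamp.extractor" in the 0.10.x releases).
        props.put("default.timestamp.extractor",
                  "org.apache.kafka.streams.processor.WallclockTimestampExtractor");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(
            streamsProps().getProperty("default.timestamp.extractor"));
    }
}
```

The same key accepts any of the extractors mentioned above (e.g. LogAndSkipOnInvalidTimestamp) or your own TimestampExtractor implementation.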
...