Status
Current state: "Under Discussion"
Discussion thread: here
JIRA: KAFKA-13511
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
TimestampConverter should have a config to define which precision to use when converting from and to a Long Unix epoch timestamp.
Currently, the Kafka Connect SMT TimestampConverter can convert timestamps from multiple source types (String, Long, or Date) into different target types (String, Long, or Date).
The problem is that a Long source or target is required to be an epoch in milliseconds.
In many cases, epochs are represented with different precisions in external systems: seconds, microseconds, nanoseconds.
When such cases arise, Kafka Connect can't do anything except pass the Long along and leave the conversion to another layer.
This issue has been raised several times:
Public Interfaces
epoch.precision, which defaults to millis
Proposed Changes
New config property for TimestampConverter
Implementation details to be discussed:
- TimeUnit.MILLISECONDS.toMicros(epochMillis), and so on for the other conversions, seems the easiest way.
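The TimeUnit-based approach could be sketched as follows (a minimal illustration, not the actual patch; the class and method names here are hypothetical):

```java
import java.util.concurrent.TimeUnit;

public class EpochPrecisionSketch {

    // Normalize an epoch value from the configured epoch.precision
    // down to milliseconds, which java.util.Date can represent.
    static long toMillis(long epochValue, String precision) {
        switch (precision) {
            case "seconds": return TimeUnit.SECONDS.toMillis(epochValue);
            case "millis":  return epochValue;
            case "micros":  return TimeUnit.MICROSECONDS.toMillis(epochValue);
            case "nanos":   return TimeUnit.NANOSECONDS.toMillis(epochValue);
            default:
                throw new IllegalArgumentException("Unknown precision: " + precision);
        }
    }

    public static void main(String[] args) {
        // 2021-12-10T12:00:00Z expressed in microseconds
        System.out.println(toMillis(1_639_137_600_000_000L, "micros"));
    }
}
```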
"transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.TimestampConverter.field": "event_date_long",
"transforms.TimestampConverter.epoch.precision": "micros",
"transforms.TimestampConverter.target.type": "Timestamp"
java.util.Date and SimpleDateFormat limitations
Since these classes can only handle precision down to the millisecond, it should be noted that:
- converting a source Long in microseconds or nanoseconds into any target type leads to a precision loss (truncation after milliseconds)
- when converting any source type into a target Long in microseconds or nanoseconds, the sub-millisecond part will always be 0
- A KIP that addresses Date vs Instant may be more appropriate, but it impacts so much of the code that I believe this is a good first step.
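The two precision-loss cases above can be demonstrated with a round trip through milliseconds (a small sketch; the class and method names are ours, not from the proposal):

```java
import java.util.concurrent.TimeUnit;

public class PrecisionLossDemo {

    // Truncating to what java.util.Date can hold: the sub-millisecond
    // digits of a microsecond epoch are dropped.
    static long microsToMillis(long micros) {
        return TimeUnit.MICROSECONDS.toMillis(micros);
    }

    // Expanding back to micros pads the lost digits with zeros.
    static long millisToMicros(long millis) {
        return TimeUnit.MILLISECONDS.toMicros(millis);
    }

    public static void main(String[] args) {
        long micros = 1_639_137_600_123_456L;
        long millis = microsToMillis(micros);    // ...123 ("456" truncated)
        long roundTrip = millisToMicros(millis); // ...123_000 (sub-milli part is 0)
        System.out.println(millis + " " + roundTrip);
    }
}
```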
int32 and seconds
Systems that produce int32 into Kafka should explicitly chain the Cast SMT and then the TimestampConverter SMT if they want to use this feature.
"transforms": "Cast,TimestampConverter",
"transforms.Cast.type": "org.apache.kafka.connect.transforms.Cast$Value",
"transforms.Cast.spec": "event_date_int:int64",
"transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.TimestampConverter.field": "event_date_int",
"transforms.TimestampConverter.epoch.precision": "seconds",
"transforms.TimestampConverter.target.type": "Timestamp"
Compatibility, Deprecation, and Migration Plan
This change does not break compatibility: epoch.precision defaults to millis, which preserves the current behavior.
Rejected Alternatives
If there are alternative ways of accomplishing the same thing, what were they? The purpose of this section is to motivate why the design is the way it is and not some other way.