
Status

Current state: "Under Discussion"

Discussion thread: here 

Vote thread: here

JIRA: KAFKA-13511 

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

TimestampConverter should have a config to define which precision to use when converting from and to a Long Unix epoch timestamp.

Currently, the Kafka Connect SMT TimestampConverter can convert a timestamp from multiple source types (String, Unix Long, or Date) into different target types (String, Unix Long, or Date).

The problem is that a Unix Long, whether as a source or a target type, must have milliseconds precision.

In many cases, Unix epoch time is represented with different precisions in external systems: seconds, microseconds, nanoseconds.

...

This issue was raised several times:


Public Interfaces

name: unix.precision
description: The desired Unix precision for the timestamp. Used to generate the output when target.type=unix, or to parse the input if the input is a Long.
type: String
default: millis
valid values: seconds, millis, micros, nanos
importance: low

Proposed Changes

New config property for TimestampConverter

Implementation details to be discussed:

  • TimeUnit.MILLISECONDS.toMicros(unixMillis), and so on for the other conversions, seems the easiest way. 
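The TimeUnit approach above can be sketched as follows. This is only an illustration of the proposed conversions, not the actual TimestampConverter code; the class name `UnixPrecisionDemo` and the helper `toMillis` are hypothetical.

```java
import java.util.concurrent.TimeUnit;

public class UnixPrecisionDemo {

    // Hypothetical helper: normalize a Unix timestamp expressed in the given
    // unix.precision value ("seconds", "millis", "micros", "nanos") to epoch millis.
    static long toMillis(long value, String precision) {
        switch (precision) {
            case "seconds": return TimeUnit.SECONDS.toMillis(value);
            case "micros":  return TimeUnit.MICROSECONDS.toMillis(value);
            case "nanos":   return TimeUnit.NANOSECONDS.toMillis(value);
            default:        return value; // "millis": already the internal representation
        }
    }

    public static void main(String[] args) {
        // The same instant expressed in three precisions all normalize to the same millis value.
        System.out.println(toMillis(1638316800L, "seconds"));
        System.out.println(toMillis(1638316800000000L, "micros"));
        System.out.println(toMillis(1638316800000000000L, "nanos"));
    }
}
```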

...

```json
"transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.TimestampConverter.field": "event_date_long",
"transforms.TimestampConverter.unix.precision": "micros",
"transforms.TimestampConverter.target.type": "Timestamp"
```

...

```json
"transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.TimestampConverter.field": "event_date_str",
"transforms.TimestampConverter.format": "yyyy-MM-dd'T'HH:mm:ss.SSS",
"transforms.TimestampConverter.target.type": "unix",
"transforms.TimestampConverter.unix.precision": "nanos"
```

java.util.Date and SimpleDateFormat limitations

Since these classes can only handle precisions down to the millisecond, it should be noted that:

  • converting a source Unix Long in microseconds or nanoseconds into any target type leads to a precision loss (truncation after milliseconds)
  • when converting any source type into a target Unix Long in microseconds or nanoseconds, the part after milliseconds will always be 0
  • A KIP that addresses Date vs Instant may be more appropriate, but it impacts so much of the code that I believe this is a good first step.
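The truncation described above can be demonstrated with a round trip through milliseconds, the finest unit java.util.Date can hold. The class name `PrecisionLossDemo` and the literal timestamp are illustrative only.

```java
import java.util.concurrent.TimeUnit;

public class PrecisionLossDemo {
    public static void main(String[] args) {
        // A hypothetical source value with nanosecond precision.
        long nanos = 1_638_316_800_123_456_789L;

        // Going through millis truncates everything below the millisecond...
        long millis = TimeUnit.NANOSECONDS.toMillis(nanos);
        System.out.println(millis); // sub-millisecond digits are gone

        // ...so converting back to nanos yields trailing zeros, never the original value.
        long back = TimeUnit.MILLISECONDS.toNanos(millis);
        System.out.println(back);
    }
}
```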

int32 and seconds

Systems that produce int32 timestamps into Kafka should explicitly chain the Cast SMT and then the TimestampConverter SMT if they want to use this feature.

```json
"transforms": "Cast,TimestampConverter",
"transforms.Cast.type": "org.apache.kafka.connect.transforms.Cast$Value",
"transforms.Cast.spec": "event_date_int:int64",
"transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.TimestampConverter.field": "event_date_int",
"transforms.TimestampConverter.unix.precision": "seconds",
"transforms.TimestampConverter.target.type": "Timestamp"
```

Compatibility, Deprecation, and Migration Plan

This change does not break compatibility: the new unix.precision property defaults to millis, which preserves the current behavior.

Rejected Alternatives
