
Status

Current state: [One of "Under Discussion", "Accepted", "Rejected"]

Discussion thread: here

Vote thread: here

JIRA: KAFKA-13511

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation



Currently, the Kafka Connect SMT TimestampConverter can convert a timestamp from multiple source types (String, Unix Long, or Date) into different target types (String, Unix Long, or Date).

The problem is that a Unix Long, whether as a source or a target type, must have millisecond precision.

In many cases, external systems represent Unix time with other precisions: seconds, microseconds, or nanoseconds.

When such cases arise, Kafka Connect can't do anything except pass the Unix Long along and leave the conversion to another layer.

This issue has been raised several times.

TimestampConverter should have a config to define which precision to use when converting from and to a Long Unix timestamp.
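To illustrate the ambiguity, the same instant has very different Long representations depending on the precision, and a bare Long carries no hint of which one was used. A minimal sketch (the class name is illustrative, not part of any proposed API):

```java
import java.time.Instant;
import java.util.concurrent.TimeUnit;

public class SameInstantDifferentPrecisions {
    public static void main(String[] args) {
        Instant t = Instant.parse("2021-10-18T14:38:10Z");

        long seconds = t.getEpochSecond();                     // 1634567890
        long millis  = t.toEpochMilli();                       // 1634567890000
        long micros  = TimeUnit.MILLISECONDS.toMicros(millis); // 1634567890000000

        // Without a precision hint, a converter cannot tell whether
        // 1634567890000000 is this instant in microseconds or a date
        // tens of thousands of years in the future in milliseconds.
        System.out.println(seconds + " " + millis + " " + micros);
    }
}
```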

Public Interfaces

| name | description | type | default | valid values | importance |
| --- | --- | --- | --- | --- | --- |
| unix.precision | The desired Unix precision for the timestamp. Used to generate the output when type=unix, or to parse the input if the input is a Long. Note: this SMT causes precision loss during conversions from and to values with sub-millisecond components. | String | milliseconds | seconds, milliseconds, microseconds, nanoseconds | low |

Proposed Changes

New config property for TimestampConverter

Implementation details to be discussed:

  • TimeUnit.MILLISECONDS.toMicros(unixMillis), and so on for the other conversions, seems the easiest way.
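The TimeUnit approach above could look roughly like the following sketch. The helper names (toEpochMillis, fromEpochMillis) are hypothetical and not the actual implementation; only the TimeUnit calls are standard JDK API:

```java
import java.util.concurrent.TimeUnit;

public class UnixPrecisionSketch {
    // Hypothetical helper: normalize an incoming Unix Long to epoch
    // milliseconds according to the configured unix.precision value.
    static long toEpochMillis(long value, String unixPrecision) {
        switch (unixPrecision) {
            case "seconds":      return TimeUnit.SECONDS.toMillis(value);
            case "milliseconds": return value;
            case "microseconds": return TimeUnit.MICROSECONDS.toMillis(value);
            case "nanoseconds":  return TimeUnit.NANOSECONDS.toMillis(value);
            default: throw new IllegalArgumentException("Unknown precision: " + unixPrecision);
        }
    }

    // Hypothetical helper: convert epoch milliseconds to the configured
    // output precision when target.type=unix.
    static long fromEpochMillis(long millis, String unixPrecision) {
        switch (unixPrecision) {
            case "seconds":      return TimeUnit.MILLISECONDS.toSeconds(millis);
            case "milliseconds": return millis;
            case "microseconds": return TimeUnit.MILLISECONDS.toMicros(millis);
            case "nanoseconds":  return TimeUnit.MILLISECONDS.toNanos(millis);
            default: throw new IllegalArgumentException("Unknown precision: " + unixPrecision);
        }
    }

    public static void main(String[] args) {
        System.out.println(toEpochMillis(1634567890L, "seconds"));            // 1634567890000
        System.out.println(toEpochMillis(1634567890123456L, "microseconds")); // 1634567890123 (sub-millis truncated)
        System.out.println(fromEpochMillis(1634567890123L, "nanoseconds"));   // 1634567890123000000
    }
}
```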

Unix Long to Timestamp example:

"transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.TimestampConverter.field": "event_date_long",
"transforms.TimestampConverter.unix.precision": "microseconds",
"transforms.TimestampConverter.target.type": "Timestamp"

String to Unix Long nanoseconds example:

"transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.TimestampConverter.field": "event_date_str",
"transforms.TimestampConverter.format": "yyyy-MM-dd'T'HH:mm:ss.SSS",
"transforms.TimestampConverter.target.type": "unix",
"transforms.TimestampConverter.unix.precision": "nanoseconds"

java.util.Date and SimpleDateFormat limitations

Since these classes can only handle precision down to the millisecond, it should be noted that:

  • converting a source Unix Long in microseconds or nanoseconds into any target type leads to precision loss (truncation after the millisecond)
  • when converting any source type into a target Unix Long in microseconds or nanoseconds, the part after the millisecond will always be 0
  • a KIP that addresses Date vs Instant may be more appropriate, but it would impact so much of the code that I believe this is a good first step.
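The truncation described above is exactly how TimeUnit behaves in the JDK, as this small round-trip demonstration shows:

```java
import java.util.concurrent.TimeUnit;

public class PrecisionLossDemo {
    public static void main(String[] args) {
        // A source value in microseconds with a non-zero sub-millisecond component.
        long micros = 1_634_567_890_123_456L;

        // Converting to milliseconds truncates everything below the millisecond.
        long millis = TimeUnit.MICROSECONDS.toMillis(micros);    // 1_634_567_890_123

        // Converting back cannot recover the lost digits: the sub-millisecond
        // part of the round-tripped value is always zero.
        long roundTrip = TimeUnit.MILLISECONDS.toMicros(millis); // 1_634_567_890_123_000

        System.out.println(micros == roundTrip); // false: the 456 microseconds were lost
    }
}
```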

int32 and seconds

Systems that produce int32 into Kafka should explicitly chain a Cast SMT and then the TimestampConverter SMT if they want to use this feature.

"transforms": "Cast,TimestampConverter",
"transforms.Cast.type": "org.apache.kafka.connect.transforms.Cast$Value",
"transforms.Cast.spec": "event_date_int:int64",
"transforms.TimestampConverter.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
"transforms.TimestampConverter.field": "event_date_int",
"transforms.TimestampConverter.unix.precision": "seconds",
"transforms.TimestampConverter.target.type": "Timestamp"

Compatibility, Deprecation, and Migration Plan

This change will not break compatibility: the new unix.precision property defaults to milliseconds, which preserves the current behavior of TimestampConverter.

Rejected Alternatives


Name the field epoch.precision

Since an epoch is not a measure but rather a point in time, it can't be associated with a precision.

For that reason, it makes more sense to name the field unix.precision.

Use seconds, millis, micros, nanos or s, ms, us, ns as values

seconds is a unit, but millis, micros, and nanos are really just prefixes; mixing them doesn't work well.

s, ms, µs, and ns are valid SI symbols, but µs (or its accepted equivalent us) can be confusing.

For clarity, it was decided to use the full plaintext names.