...

  • Interpret a compacted Kafka topic (or the KTable notion in Kafka Streams) as a changelog stream that interprets records with keys as upsert events [1-3];
  • As part of the real-time pipeline, join multiple streams for enrichment and store the result in a Kafka topic for further calculation later; however, the result may contain update events.
  • As part of the real-time pipeline, aggregate on data streams and store the result in a Kafka topic for further calculation later; however, the result may contain update events.
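The upsert interpretation of a compacted topic can be illustrated with a small language-agnostic sketch (not Flink or Kafka client code): the latest value per key wins, and a null value acts as a tombstone that deletes the key, mirroring Kafka log-compaction semantics.

```python
def materialize(records):
    """Fold a stream of (key, value) records into a table.

    A record whose value is None is treated as a tombstone (delete),
    mirroring Kafka log-compaction semantics; otherwise the record
    is an upsert and the latest value per key wins.
    """
    table = {}
    for key, value in records:
        if value is None:
            table.pop(key, None)   # tombstone: remove the key
        else:
            table[key] = value     # upsert: latest value wins
    return table

# Illustrative changelog with hypothetical keys and values:
changelog = [
    ("user1", {"city": "Berlin"}),
    ("user2", {"city": "Paris"}),
    ("user1", {"city": "Hamburg"}),  # update event for user1
    ("user2", None),                 # tombstone for user2
]
print(materialize(changelog))  # {'user1': {'city': 'Hamburg'}}
```

This is exactly the semantics a downstream consumer must apply when it reads the join or aggregation results written back to a Kafka topic.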

...

Kafka-compacted Connector Options

The options of the kafka-compacted connector are largely the same as those of the Kafka connector.

Option

Required

Default

Type

Description

connector

required

(none)

String

Specify what connector to use; here it should be 'kafka-compacted'.

properties.bootstrap.servers

required

(none)

String

Comma separated list of Kafka brokers.

key.format

required

(none)

String

The format used to deserialize and serialize the key of Kafka records. Only insert-only formats are supported.

value.format

required

(none)

String

The format used to deserialize and serialize the value of Kafka records.

topic

optional

(none)

String

Topic name(s) to read data from when the table is used as a source, or to write data to when the table is used as a sink. Note that only one of "topic-pattern" and "topic" can be specified for sources, and that a topic list is not supported for sinks.

topic-pattern

optional

(none)

String

The regular expression for a pattern of topic names to read from. All topics with names that match the specified regular expression will be subscribed by the consumer when the job starts running. Note that only one of "topic-pattern" and "topic" can be specified for sources.

scan.topic-partition-discovery.interval

optional

(none)

Duration

Interval for the consumer to periodically discover dynamically created Kafka topics and partitions.

value.fields-include

optional

'ALL'

String

Controls which fields should end up in the value as well. Possible values:

-   ALL (all fields of the schema, even if they are part of e.g. the key)

-   EXCEPT_KEY (all fields of the schema minus the fields of the key)
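Putting the options above together, a DDL might look like the following sketch. The table name, schema, topic, and formats are illustrative assumptions, not part of this proposal:

```sql
-- Hypothetical example; schema, topic, and format choices are illustrative.
CREATE TABLE users (
  user_id BIGINT,
  user_name STRING,
  region STRING,
  PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
  'connector' = 'kafka-compacted',
  'properties.bootstrap.servers' = 'localhost:9092',
  'topic' = 'users',
  'key.format' = 'raw',
  'value.format' = 'json',
  'value.fields-include' = 'EXCEPT_KEY'
);
```

With 'value.fields-include' = 'EXCEPT_KEY', only user_name and region would be serialized into the record value, while user_id is carried in the record key.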

 

...

Users may also want to use the kafka-compacted connector in batch mode, but this feature needs more discussion.

...