Status

Current state: Under Discussion

Discussion thread: Not available now

JIRA: here

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

We have an existing table() API in StreamsBuilder which materializes a Kafka topic into a local state store exposed as a KTable. This interface is very useful when we want to back up a Kafka topic into a local store. Currently there are two different types of state store: key-value based and window based. The existing interface only accepts a key-value store, which is not ideal: there are cases where we need to materialize a windowed topic (or changelog topic) created by another Streams application into a local store. In this KIP, we would like to address this problem by adding a new API called windowedTable() which supports the generation of a windowed KTable.

Here comes the tricky part: when building this API, from the source processor's point of view, the input records of a windowed topic are of the form (Windowed<K> key, V value). Note that this is different from a normal topic, because the serdes required here must be windowed serdes. Let's walk through the four different cases involved in the discussion:

  1. Non-windowed topic materialized into a key-value store. This is the most common case and is already covered by the table() API. 
  2. Non-windowed topic materialized into a window store. This is a spurious requirement, because we can easily use the aggregate() API to build a window store from a non-windowed topic (see the sketch after this list).
  3. Windowed topic (stream changelog) materialized into a key-value store. This is also a rare requirement, because the essential difference between a key-value store and a window store is that a window store enforces a retention period on the data. By materializing a windowed topic into a key-value store we lose control over the TTL, which leads to a wrong representation of the changelog data.
  4. Windowed topic (stream changelog) materialized into a window store. This is the missing requirement that needs to be addressed by our new API. Currently it is very hard to share a changelog between Streams applications, and it would be really useful to share the same state store across applications through this API.
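
For completeness, here is a minimal sketch of case 2 using the existing DSL (assuming a recent Kafka Streams API; the topic name, types and store name are made up for illustration):

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.*;
import org.apache.kafka.streams.state.WindowStore;

// Case 2: derive a window store from a non-windowed topic with the existing
// windowedBy()/count() path -- no new API is needed for this case.
final StreamsBuilder builder = new StreamsBuilder();
final KStream<String, String> stream =
    builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()));
final KTable<Windowed<String>, Long> windowedCounts = stream
    .groupByKey()
    .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
    .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("windowed-count-store"));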


Public Interfaces

The current KTable API looks like:

StreamsBuilder.java
public synchronized <K, V> KTable<K, V> table(final String topic);
public synchronized <K, V> KTable<K, V> table(final String topic, final Consumed<K, V> consumed);
public synchronized <K, V> KTable<K, V> table(final String topic, final Materialized<K, V, KeyValueStore<Bytes, byte[]>> materialized);
public synchronized <K, V> KTable<K, V> table(final String topic, final Consumed<K, V> consumed, final Materialized<K, V, KeyValueStore<Bytes, byte[]>> materialized);

Through the Materialized parameter, we can pass in a KeyValueStore<Bytes, byte[]> as the local state store. In fact, the underlying KTable implementation by default stores its data in a key-value store backed by RocksDB. We also want to support a window store, which is a very natural requirement when we are materializing a windowed topic with windowed keys.
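
As a point of reference, a typical use of the existing API looks roughly like the following (the topic name, types and store name are illustrative only):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.*;
import org.apache.kafka.streams.state.KeyValueStore;

// Existing behavior: a non-windowed topic materialized into a key-value store.
final StreamsBuilder builder = new StreamsBuilder();
final KTable<String, String> table = builder.table(
    "input-topic",
    Consumed.with(Serdes.String(), Serdes.String()),
    Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("table-store"));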

Proposed Changes

We would like to add one new API that supports a window store as the underlying storage option for a windowed topic.

StreamsBuilder.java
public synchronized <K, V> KTable<Windowed<K>, V> windowedTable(final String topic, final Consumed<Windowed<K>, V> consumed, final Materialized<K, V, WindowStore<Bytes, byte[]>> materialized);

One thing the user needs to notice is how to pass in the serdes. The type of the consumed parameter is Consumed<Windowed<K>, V>, because we need to be able to deserialize the records with a windowed key serde and a value serde. The type of the materialized parameter, however, is Materialized<K, V, WindowStore<Bytes, byte[]>>, because the window store stores the raw key instead of the windowed key. With strict type enforcement, the user is alerted at compile time if they confuse the two.
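
To make the distinction concrete, a call to the proposed API could look roughly like the sketch below. This is hypothetical usage of an API that does not exist yet; the windowed serde helper, topic and store names are assumptions for illustration:

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.*;
import org.apache.kafka.streams.state.WindowStore;

// Hypothetical usage of the proposed windowedTable(); not part of the final design.
final StreamsBuilder builder = new StreamsBuilder();
final Serde<Windowed<String>> windowedKeySerde =
    WindowedSerdes.timeWindowedSerdeFrom(String.class);
final KTable<Windowed<String>, Long> windowedTable = builder.windowedTable(
    "windowed-changelog-topic",
    Consumed.with(windowedKeySerde, Serdes.Long()),                  // windowed key serde for the source
    Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("windowed-table-store")
        .withKeySerde(Serdes.String())                               // raw key serde for the store
        .withValueSerde(Serdes.Long()));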

We do not support windowedTable() overloads that omit `materialized` or `consumed`.

The reason to keep `materialized` is that a window store requires a concrete retention time, window size and number of rolling segments to be constructed. On the application side, the Streams job cannot infer the retention or window size of the windowed topic, so this information must come from the user.
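
One possible way for the user to provide this information is through an explicit window store supplier, sketched below (assuming a recent Kafka Streams API where the segment configuration is derived internally; all values are illustrative):

import java.time.Duration;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowBytesStoreSupplier;
import org.apache.kafka.streams.state.WindowStore;

// Sketch: supplying retention and window size explicitly; both must match the
// upstream windowed aggregation that produced the topic.
final WindowBytesStoreSupplier storeSupplier = Stores.persistentWindowStore(
    "windowed-table-store",
    Duration.ofHours(24),    // retention period
    Duration.ofMinutes(5),   // window size
    false);                  // retainDuplicates
final Materialized<String, Long, WindowStore<Bytes, byte[]>> materialized =
    Materialized.as(storeSupplier);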

The reason to keep `consumed` is that in the table() API we can omit `consumed` or `materialized` because they share the `keySerde` and `valueSerde` during state store and processor node construction. In the windowed table context, however, this is not true, since we use both the windowed key serde and the raw key serde at the same time. So both parameters are required.

Compatibility, Deprecation, and Migration Plan

This KIP does not change the existing table() API, so it is backward compatible.

Rejected Alternatives

We started by changing the store type on the table() API to support a window store:

StreamsBuilder.java
public synchronized <K, V> KTable<K, V> table(final String topic, final Materialized<K, V, WindowStore<Bytes, byte[]>> materialized);

However, this straightforward solution hits two problems:

  1. The store type cannot be changed, because the new overload would collide with the existing one due to Java's "method has the same erasure" error (see the illustrative snippet at the end of this section).
  2. Even if we rename the API to windowedTable(), returning KTable<K, V> is still not ideal, because we already see KTable<Windowed<K>, V> return types in other classes, such as KGroupedStream:

KGroupedStream.java
<W extends Window> KTable<Windowed<K>, Long> count(final Windows<W> windows, final String queryableStoreName);

So we can see that if we return KTable<K, V> from the above table API backed by a window store, we would introduce an inconsistent API to the outside user. By defining the output as KTable<Windowed<K>, V>, the user can be clear that a window store is used in the underlying implementation.
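
For reference, problem 1 above stems from Java type erasure: the following pair of overloads could not coexist, since both erase to table(String, Materialized) (illustrative snippet only):

StreamsBuilder.java
// Both declarations erase to table(String, Materialized) and therefore clash at
// compile time with a "method has the same erasure" error.
public synchronized <K, V> KTable<K, V> table(final String topic, final Materialized<K, V, KeyValueStore<Bytes, byte[]>> materialized);
public synchronized <K, V> KTable<K, V> table(final String topic, final Materialized<K, V, WindowStore<Bytes, byte[]>> materialized);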
