Status
Current state: Adopted
Discussion thread: here
Vote thread: here
JIRA:
...
For each key, the iterator guarantees ordering of windows, starting from the oldest/earliest available window to the newest/latest window.
Similar guarantees are provided on other fetch and range operations. But in the case of key
ranges, there are some nuances regarding order:
The returned iterator must be safe from {@link java.util.ConcurrentModificationException}s and must not return null values. No ordering guarantees are provided.
Ordering is not guaranteed, as the backing structure is based on maps keyed by `org.apache.kafka.common.utils.Bytes`. However, `Bytes` supports lexicographic byte-array comparison, which defines the ordering in both in-memory and RocksDB stores.
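As an illustrative sketch (plain Java, not Kafka code), the unsigned lexicographic byte-array ordering that `Bytes#compareTo` provides can be reproduced with `java.util.Arrays#compareUnsigned`:

```java
import java.util.Arrays;
import java.util.TreeMap;

public class LexicographicOrder {
    public static void main(String[] args) {
        // TreeMap keyed by byte arrays, ordered by unsigned lexicographic
        // comparison, analogous to in-memory stores keyed by Bytes
        TreeMap<byte[], String> store = new TreeMap<>(Arrays::compareUnsigned);
        store.put(new byte[]{0x02}, "b");
        store.put(new byte[]{0x01}, "a");
        store.put(new byte[]{(byte) 0xFF}, "last"); // 0xFF is highest when compared unsigned
        System.out.println(store.values()); // prints [a, b, last]
    }
}
```

Note that the comparison must be unsigned: with signed byte comparison, `0xFF` would sort first instead of last.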
These API constraints limit the usage of local state stores for some use cases:
...
If a backward read direction option becomes available, then we could start from the latest record within a time range and go backwards, returning the latest N values more efficiently.
In Zipkin's Kafka-based storage, we are planning to use this feature to replace two KeyValueStores (one for traces indexed by id, and another for trace_ids indexed by timestamp) with a single WindowStore. A backward read direction will allow supporting queries like “within this time range, find the last traces that match this criteria” and returning the latest values quickly.
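As a hypothetical sketch of that query pattern (the names and map layout are illustrative, not the Zipkin implementation), descending iteration over a time-indexed map returns the latest N matches without scanning the whole range:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class LatestNSketch {
    public static void main(String[] args) {
        // hypothetical time-indexed store: timestamp -> traceId
        TreeMap<Long, String> byTimestamp = new TreeMap<>();
        byTimestamp.put(10L, "trace-a");
        byTimestamp.put(20L, "trace-b");
        byTimestamp.put(30L, "trace-c");
        byTimestamp.put(40L, "trace-d");

        // "within [10, 40], find the last 2 traces": iterate the time range
        // in descending order and stop after N hits
        int n = 2;
        List<String> latest = new ArrayList<>();
        for (String traceId : byTimestamp.subMap(10L, true, 40L, true)
                                         .descendingMap().values()) {
            latest.add(traceId);
            if (latest.size() == n) break;
        }
        System.out.println(latest); // prints [trace-d, trace-c]
    }
}
```

A forward-only iterator would have to traverse the whole range and buffer results to answer the same query.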
Internally, both implementations, persistent (RocksDB) and in-memory (TreeMap), support reverse/descending iteration:
```java
// RocksDB: forward iteration
final RocksIterator iter = db.newIterator();
iter.seekToFirst();
iter.next();

// RocksDB: reverse iteration
final RocksIterator reverse = db.newIterator();
reverse.seekToLast();
reverse.prev();

// TreeMap: ascending and descending key views
final TreeMap<String, String> map = new TreeMap<>();
final NavigableSet<String> nav = map.navigableKeySet();
final NavigableSet<String> rev = map.descendingKeySet();
```
Reference issues
Proposed Changes
Introduce a new StreamsConfig configuration to flag support for backwards iteration:
```java
public class StreamsConfig extends AbstractConfig {
    public static final String ENABLE_BACKWARD_ITERATION_CONFIG = "enable.backward.iteration";
    private static final String ENABLE_BACKWARD_ITERATION_DOC =
        "If true, any range operation will accept (from, to) arguments where from > to, returning the most recent records first";
}
```
If true, ReadOnlyKeyValueStore will accept (from, to) argument pairs where from > to, returning results in reverse order.
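A minimal sketch of the proposed semantics (a TreeMap-backed stand-in, not the actual store implementation): when the flag is disabled, from > to yields an empty iterator as today; when enabled, the bounds are flipped and the range is iterated in descending order:

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class RangeSemanticsSketch {
    static <V> Iterator<Map.Entry<String, V>> range(
            NavigableMap<String, V> store, String from, String to, boolean backwardEnabled) {
        if (from.compareTo(to) > 0) {
            if (!backwardEnabled) {
                // current behaviour: from > to returns an empty iterator
                return Collections.emptyIterator();
            }
            // flag enabled: flip the bounds and iterate in descending order
            return store.subMap(to, true, from, true).descendingMap().entrySet().iterator();
        }
        return store.subMap(from, true, to, true).entrySet().iterator();
    }

    public static void main(String[] args) {
        NavigableMap<String, Integer> store = new TreeMap<>(Map.of("a", 1, "b", 2, "c", 3));
        System.out.println(range(store, "c", "a", false).hasNext()); // prints false
        range(store, "c", "a", true)
                .forEachRemaining(e -> System.out.print(e.getKey()));
        System.out.println(); // prints cba
    }
}
```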
To complete support for backwards iteration, `all` operations will be accompanied by a `reverseAll`:
```java
public interface ReadOnlyKeyValueStore<K, V> {
    KeyValueIterator<K, V> reverseAll();
}

public interface ReadOnlyWindowStore<K, V> {
    KeyValueIterator<Windowed<K>, V> reverseAll();
}
```
The StreamsConfig flag will be passed to stores via `ProcessorContext`.
There are two important ranges in Kafka Streams stores:
- Key Range
- Time Range
Info: For SessionStore/ReadOnlySessionStore, the findSessions and fetchSession operations will be moved from SessionStore to ReadOnlySessionStore to align with how other stores are designed.
Reverse Key Ranges
Extend the existing KeyValueStore interface with reverse operations:
```java
public interface ReadOnlyKeyValueStore<K, V> {

    default KeyValueIterator<K, V> reverseRange(K from, K to) {
        throw new UnsupportedOperationException();
    }

    default KeyValueIterator<K, V> reverseAll() {
        throw new UnsupportedOperationException();
    }
}
```
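The default implementations throw so that existing store implementations compile unchanged. A sketch of that pattern under a simplified local interface (not the actual Kafka Streams class):

```java
import java.util.Iterator;
import java.util.TreeMap;

public class DefaultMethodSketch {
    // simplified stand-in for the proposed interface (not the Kafka Streams class)
    public interface SimpleStore<K, V> {
        Iterator<K> range(K from, K to);

        default Iterator<K> reverseRange(K from, K to) {
            // existing implementations compile unchanged and fail only if called
            throw new UnsupportedOperationException();
        }
    }

    public static void main(String[] args) {
        TreeMap<String, Integer> data = new TreeMap<>();
        data.put("a", 1);
        data.put("b", 2);

        // legacy implementation: only the pre-existing method is overridden
        SimpleStore<String, Integer> legacy =
                (from, to) -> data.subMap(from, true, to, true).keySet().iterator();

        System.out.println(legacy.range("a", "b").next()); // prints a
        try {
            legacy.reverseRange("b", "a");
        } catch (UnsupportedOperationException e) {
            System.out.println("reverseRange not supported"); // default throws
        }
    }
}
```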
Backward Time Ranges
Window and Session stores are based on a set of KeyValue stores (segments) organized by a time-based index. Therefore, for these stores the time range is more important than the key range when looking up values.
Existing stores will be extended with backward methods:
```java
public interface ReadOnlyWindowStore<K, V> {

    default WindowStoreIterator<V> backwardFetch(K key, Instant from, Instant to) throws IllegalArgumentException {
        throw new UnsupportedOperationException();
    }

    default KeyValueIterator<Windowed<K>, V> backwardFetch(K from, K to, Instant fromTime, Instant toTime) throws IllegalArgumentException {
        throw new UnsupportedOperationException();
    }

    default KeyValueIterator<Windowed<K>, V> backwardAll() {
        throw new UnsupportedOperationException();
    }

    default KeyValueIterator<Windowed<K>, V> backwardFetchAll(Instant from, Instant to) throws IllegalArgumentException {
        throw new UnsupportedOperationException();
    }
}

public interface ReadOnlySessionStore<K, AGG> {

    // Moved from SessionStore to ReadOnlySessionStore
    default KeyValueIterator<Windowed<K>, AGG> findSessions(final K key, final long earliestSessionEndTime, final long latestSessionStartTime) {
        throw new UnsupportedOperationException("Moved from SessionStore");
    }

    default KeyValueIterator<Windowed<K>, AGG> findSessions(final K keyFrom, final K keyTo, final long earliestSessionEndTime, final long latestSessionStartTime) {
        throw new UnsupportedOperationException("Moved from SessionStore");
    }

    default AGG fetchSession(final K key, final long startTime, final long endTime) {
        throw new UnsupportedOperationException("Moved from SessionStore");
    }

    // New backward operations
    default KeyValueIterator<Windowed<K>, AGG> backwardFindSessions(final K key, final long earliestSessionEndTime, final long latestSessionStartTime) {
        throw new UnsupportedOperationException();
    }

    default KeyValueIterator<Windowed<K>, AGG> backwardFindSessions(final K keyFrom, final K keyTo, final long earliestSessionEndTime, final long latestSessionStartTime) {
        throw new UnsupportedOperationException();
    }

    default KeyValueIterator<Windowed<K>, AGG> backwardFetch(final K key) {
        throw new UnsupportedOperationException();
    }

    default KeyValueIterator<Windowed<K>, AGG> backwardFetch(final K from, final K to) {
        throw new UnsupportedOperationException();
    }
}
```
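Since these stores segment data by time, a backward time-range fetch amounts to visiting segments from newest to oldest. A rough sketch under those assumptions (plain Java maps standing in for segments, not the actual segment classes):

```java
import java.util.Map;
import java.util.TreeMap;

public class BackwardTimeRangeSketch {
    public static void main(String[] args) {
        // stand-in for time-segmented storage: segment start time -> (key -> value)
        TreeMap<Long, Map<String, String>> segments = new TreeMap<>();
        segments.put(0L, Map.of("k", "v0"));
        segments.put(100L, Map.of("k", "v1"));
        segments.put(200L, Map.of("k", "v2"));

        // backwardFetch("k", 0, 200): visit segments in the time range newest-first
        for (Map.Entry<Long, Map<String, String>> seg :
                segments.subMap(0L, true, 200L, true).descendingMap().entrySet()) {
            String v = seg.getValue().get("k");
            if (v != null) {
                System.out.println(seg.getKey() + " -> " + v);
            }
        }
        // prints 200 -> v2, then 100 -> v1, then 0 -> v0
    }
}
```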
Compatibility, Deprecation, and Migration Plan
The StreamsConfig flag mitigates affecting users that rely on the current behaviour: if from > to, an empty iterator is returned. Only when users enable the flag will from > to return a reversed iterator.
Therefore this change is backwards compatible.
New methods will have default implementations to avoid affecting current implementations.
Rejected Alternatives
- Create a parallel hierarchy of interfaces for backward operations. Even though this option seems like the best way to extend functionality, it was proved not to work in practice in the KIP-614 discussion: interfaces get wrapped in different layers (Metered, Caching, Logging), so the whole current hierarchy used to create stores with the Kafka Streams DSL would have to be duplicated.
- Initially it was considered to add a read-direction parameter to all read-only store methods, e.g. Store#fetch(keyFrom, keyTo, timeFrom, timeTo, ReadDirection.FORWARD|BACKWARD), but this was declined as passing arguments in inverse order is more intuitive. As this could cause unexpected effects in future versions, a flag has been added to overcome this.
- Implicit ordering by flipping the from and to variables has been discouraged in favor of a more explicit approach based on new interfaces that make the availability of reverse and backward fetch operations explicit.