

Status

Current state: Under Discussion

Discussion thread: here

JIRA:

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

RocksDB offers functionality to collect statistics about its operations in order to monitor running RocksDB instances. These statistics enable users to find bottlenecks and to tune RocksDB accordingly. RocksDB's statistics can be accessed programmatically via JNI, or RocksDB can be configured to periodically dump them to disk. Although RocksDB provides this functionality, Kafka Streams does not currently expose RocksDB's statistics in its metrics. Hence, users need to implement Streams' RocksDBConfigSetter to fetch the statistics themselves. This KIP proposes to expose a subset of the most useful RocksDB statistics in the metrics of Kafka Streams.
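
For reference, a minimal sketch of how users can enable RocksDB's statistics today via a custom RocksDBConfigSetter (the class name and the way the Statistics object is handed to a monitoring system are illustrative assumptions, not part of this proposal):

    import org.apache.kafka.streams.state.RocksDBConfigSetter;
    import org.rocksdb.Options;
    import org.rocksdb.Statistics;

    import java.util.Map;

    public class StatisticsConfigSetter implements RocksDBConfigSetter {

        @Override
        public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
            // Enable RocksDB's built-in statistics collection for this store
            final Statistics statistics = new Statistics();
            options.setStatistics(statistics);
            // Exposing the Statistics object to a monitoring system is left to the
            // user, e.g. by keeping it in a static registry keyed by storeName.
        }
    }

Such a config setter is registered with Kafka Streams via the rocksdb.config.setter configuration property.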


Public Interfaces

Each exposed metric will have the following tags:

  • type = stream-state-metrics
  • thread-id = [thread ID]
  • task-id = [task ID]
  • rocksdb-state-id = [store ID]

The following metrics will be exposed in Kafka Streams' metrics:

  • write-waiting-time-(avg|total) [ms]
  • bytes-written-rate [bytes/s]
  • bytes-written-total [bytes]
  • bytes-read-rate [bytes/s]
  • bytes-read-total [bytes]
  • memtable-hit-rate
  • block-cache-bytes-read-rate [bytes/s]
  • block-cache-bytes-written-rate [bytes/s]
  • block-cache-hit-rate
  • bytes-flushed-rate [bytes/s]
  • bytes-flushed-total [bytes]
  • flush-time-(avg|min|max) [ms]
  • bytes-read-compaction-rate [bytes/s]
  • bytes-written-compaction-rate [bytes/s]
  • compaction-time-(avg|min|max) [ms]
  • num-open-files
  • num-file-errors-total
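
Once exposed, these metrics would be retrievable like any other Kafka Streams metric. The following sketch (assuming a running KafkaStreams instance named streams and that the metrics are registered under the stream-state-metrics group named in the tags above) shows how a user could read them programmatically:

    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;
    import org.apache.kafka.streams.KafkaStreams;

    import java.util.Map;

    public class RocksDBMetricsPrinter {

        public static void printRocksDBMetrics(final KafkaStreams streams) {
            // Iterate over all registered metrics and print those of the state-store group
            final Map<MetricName, ? extends Metric> metrics = streams.metrics();
            metrics.forEach((name, metric) -> {
                if ("stream-state-metrics".equals(name.group())) {
                    System.out.println(name.name() + " " + name.tags() + " = " + metric.metricValue());
                }
            });
        }
    }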

Proposed Changes

In this section, I explain the meaning of the metrics listed in the previous section and why I chose them.

write-waiting-time-(avg|total)

When data is put into RocksDB, it is first written into an in-memory tree data structure called the memtable. When the memtable is almost full, its data is flushed to disk by a background process. The data on disk needs to be reorganised from time to time; this reorganisation is called compaction and is also performed by a background process. During flush and compaction, a write to the database might need to wait until these processes finish. These metrics measure the average and total time a write has to wait for flush and compaction to finish.

If flushes and compactions happen too often, this time may increase and signal a bottleneck. Users can then take action, e.g., by increasing the size of the memtable to decrease the rate of flushes or by changing the compaction settings.
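
As an illustration of where this information comes from, the sketch below reads the corresponding histogram from a Statistics object (as obtained in the Motivation section). The assumption that these metrics are backed by RocksDB's WRITE_STALL histogram is mine, not part of this proposal:

    import org.rocksdb.HistogramData;
    import org.rocksdb.HistogramType;
    import org.rocksdb.Statistics;

    public class WriteStallReader {

        // Assumption: write-waiting-time is backed by the WRITE_STALL histogram,
        // which records how long writers had to wait for flushes and compactions.
        // RocksDB reports these values in microseconds.
        public static double averageWriteStallMs(final Statistics statistics) {
            final HistogramData writeStall = statistics.getHistogramData(HistogramType.WRITE_STALL);
            return writeStall.getAverage() / 1000.0;
        }
    }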

bytes-written-(rate|total)

These metrics measure the bytes written to a RocksDB instance and thus show the write load on that instance.

bytes-read-(rate|total)

Analogously to bytes-written-(rate|total), these metrics measure the bytes read from a RocksDB instance and thus show the read load on that instance.
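
Both measurements are available in RocksDB as simple ticker counts. As a sketch (assuming the metrics are backed by the BYTES_WRITTEN and BYTES_READ tickers), the raw totals can be read as follows; the rate metrics would then be derived from successive readings of these counters:

    import org.rocksdb.Statistics;
    import org.rocksdb.TickerType;

    public class ByteCounters {

        // Assumption: bytes-written-total and bytes-read-total correspond to the
        // BYTES_WRITTEN and BYTES_READ tickers of RocksDB's statistics.
        public static long bytesWritten(final Statistics statistics) {
            return statistics.getTickerCount(TickerType.BYTES_WRITTEN);
        }

        public static long bytesRead(final Statistics statistics) {
            return statistics.getTickerCount(TickerType.BYTES_READ);
        }
    }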

memtable-hit-rate

When data is read from RocksDB, the memtable is consulted first to find the data. This metric measures the number of memtable hits relative to the number of all lookups into the memtable. Hence, the formula for this metric is hits / (hits + misses).

If memtable-hit-rate is too low with respect to the workload, the memtable may be too small.
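
A sketch of how this ratio can be computed from RocksDB's tickers (assuming the metric is backed by the MEMTABLE_HIT and MEMTABLE_MISS tickers):

    import org.rocksdb.Statistics;
    import org.rocksdb.TickerType;

    public class MemtableHitRateReader {

        // Assumption: memtable-hit-rate = hits / (hits + misses), computed from
        // the MEMTABLE_HIT and MEMTABLE_MISS tickers.
        public static double memtableHitRate(final Statistics statistics) {
            final long hits = statistics.getTickerCount(TickerType.MEMTABLE_HIT);
            final long misses = statistics.getTickerCount(TickerType.MEMTABLE_MISS);
            final long lookups = hits + misses;
            return lookups == 0 ? 0.0 : (double) hits / lookups;
        }
    }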


Compatibility, Deprecation, and Migration Plan

  • What impact (if any) will there be on existing users?
  • If we are changing behavior how will we phase out the older behavior?
  • If we need special migration tools, describe them here.
  • When will we remove the existing behavior?

Rejected Alternatives

If there are alternative ways of accomplishing the same thing, what were they? The purpose of this section is to motivate why the design is the way it is and not some other way.
