

Status

Current state: Under Discussion

Discussion thread: here

JIRA:

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

RocksDB has functionality to collect statistics about its operations so that running RocksDB instances can be monitored. These statistics enable users to find bottlenecks and to tune RocksDB accordingly. RocksDB's statistics can be accessed programmatically via JNI, or RocksDB can be configured to periodically dump them to disk. Although RocksDB provides this functionality, Kafka Streams does not currently expose RocksDB's statistics in its metrics. Hence, users need to implement Streams' RocksDBConfigSetter to fetch the statistics. This KIP proposes to expose a subset of the most useful RocksDB statistics in the metrics of Kafka Streams.
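
To illustrate the current workaround, the following is a minimal sketch (the class name and details are illustrative, not part of this proposal) of a RocksDBConfigSetter that enables RocksDB's statistics so they can later be queried via JNI:

    import java.util.Map;

    import org.apache.kafka.streams.state.RocksDBConfigSetter;
    import org.rocksdb.Options;
    import org.rocksdb.Statistics;
    import org.rocksdb.StatsLevel;

    // Sketch of the current workaround: enable RocksDB's statistics from a custom
    // RocksDBConfigSetter so that they can later be queried via JNI.
    public class StatisticsEnablingConfigSetter implements RocksDBConfigSetter {

        @Override
        public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
            final Statistics statistics = new Statistics();
            statistics.setStatsLevel(StatsLevel.EXCEPT_DETAILED_TIMERS);
            options.setStatistics(statistics);
            // Users have to keep a reference to the Statistics object themselves
            // (e.g., in a static registry keyed by storeName) to read ticker and
            // histogram values later on.
        }
    }

Such a class would be registered via the rocksdb.config.setter configuration (StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG).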


Public Interfaces

Each exposed metric will have the following tags:

  • type = stream-state-metrics
  • thread-id = [thread ID]
  • task-id = [task ID]
  • rocksdb-state-id = [store ID]

The following metrics will be exposed in Kafka Streams' metrics (a sketch of how they could be read follows the list):

  • bytes-written-rate [bytes/s]
  • bytes-written-total [bytes]
  • bytes-read-rate [bytes/s]
  • bytes-read-total [bytes]
  • bytes-flushed-rate [bytes/s]
  • bytes-flushed-total [bytes]
  • flush-time-(avg|min|max) [ms]
  • memtable-hit-rate
  • block-cache-hit-rate
  • bytes-read-compaction-rate [bytes/s]
  • bytes-written-compaction-rate [bytes/s]
  • compaction-time-(avg|min|max) [ms]
  • write-waiting-time-(avg|total) [ms]
  • num-open-files
  • num-file-errors-total
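
To sketch how the exposed metrics could be consumed (the helper class below is purely illustrative and assumes an already running KafkaStreams instance), a user could filter the client's metrics by the stream-state-metrics group:

    import java.util.Map;

    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;
    import org.apache.kafka.streams.KafkaStreams;

    public final class RocksDBMetricsPrinter {

        // Sketch: print all metrics of the proposed stream-state-metrics group of a
        // running KafkaStreams instance, together with their tags.
        public static void printRocksDBMetrics(final KafkaStreams streams) {
            final Map<MetricName, ? extends Metric> metrics = streams.metrics();
            for (final Map.Entry<MetricName, ? extends Metric> entry : metrics.entrySet()) {
                final MetricName name = entry.getKey();
                if ("stream-state-metrics".equals(name.group())) {
                    System.out.println(name.name() + " " + name.tags() + " = " + entry.getValue().metricValue());
                }
            }
        }
    }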

Proposed Changes

In this section, I explain the meaning of the metrics listed above and why I chose them.

bytes-written-(rate|total)

These metrics measure the bytes written to a RocksDB instance. The metrics show the write load on a RocksDB instance.

bytes-read-(rate|total)

Analogously to bytes-written-(rate|total), these metrics measure the bytes read from a RocksDB instance. The metrics show the read load on a RocksDB instance.
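
For illustration only, assuming these metrics are derived from RocksDB's Statistics object, the raw counters would plausibly come from the BYTES_WRITTEN and BYTES_READ tickers (a sketch, not necessarily the actual implementation):

    import org.rocksdb.Statistics;
    import org.rocksdb.TickerType;

    // Sketch (assumption): raw RocksDB counters from which bytes-written-total and
    // bytes-read-total could be derived for a single RocksDB instance.
    final class ReadWriteBytes {
        static long bytesWrittenTotal(final Statistics statistics) {
            return statistics.getTickerCount(TickerType.BYTES_WRITTEN);
        }

        static long bytesReadTotal(final Statistics statistics) {
            return statistics.getTickerCount(TickerType.BYTES_READ);
        }
    }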

bytes-flushed-(rate|total) and flush-time-(avg|min|max)

When data is put into RocksDB, it is first written into an in-memory data structure called the memtable. When the memtable is almost full, its data is flushed to disk by a background process. The metrics bytes-flushed-(rate|total) measure the average flush throughput and the total number of bytes flushed to disk. The metrics flush-time-(avg|min|max) measure the duration of flush operations.

The metrics should help to identify flushes as bottlenecks.
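
Assuming the flush metrics are likewise backed by RocksDB's Statistics, plausible sources are the FLUSH_WRITE_BYTES ticker and the FLUSH_TIME histogram; a minimal sketch:

    import org.rocksdb.HistogramType;
    import org.rocksdb.Statistics;
    import org.rocksdb.TickerType;

    // Sketch (assumption): counter and histogram from which bytes-flushed-total and
    // flush-time-avg could be derived. RocksDB histograms typically report
    // microseconds, so a conversion to ms would be needed.
    final class FlushMetricsSource {
        static long bytesFlushedTotal(final Statistics statistics) {
            return statistics.getTickerCount(TickerType.FLUSH_WRITE_BYTES);
        }

        static double flushTimeAvg(final Statistics statistics) {
            return statistics.getHistogramData(HistogramType.FLUSH_TIME).getAverage();
        }
    }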

memtable-hit-rate

When data is read from RocksDB, the memtable is consulted first to find the data. This metric measures the number of hits with respect to the number of all lookups into the memtable. Hence, the formula for this metric is hits / (hits + misses).

If memtable-hit-rate is too low for the given workload, the memtable may be too small.
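
As an illustration of the formula, the hit ratio could be computed from RocksDB's MEMTABLE_HIT and MEMTABLE_MISS tickers (assuming those tickers back the metric); a sketch:

    import org.rocksdb.Statistics;
    import org.rocksdb.TickerType;

    // Sketch (assumption): memtable-hit-rate computed as hits / (hits + misses)
    // from RocksDB's memtable tickers.
    final class MemtableHitRate {
        static double memtableHitRate(final Statistics statistics) {
            final long hits = statistics.getTickerCount(TickerType.MEMTABLE_HIT);
            final long misses = statistics.getTickerCount(TickerType.MEMTABLE_MISS);
            final long lookups = hits + misses;
            return lookups == 0 ? 0.0 : (double) hits / lookups;
        }
    }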

block-cache-hit-rate

If data is not found in the memtable, the block cache is consulted. This metric measures the number of hits with respect to the number of all lookups into the block cache. The formula for this metric is the same as for memtable-hit-rate.

If block-cache-hit-rate is too low for the given workload, the block cache may need some tuning.

bytes-read-compaction-rate, bytes-written-compaction-rate, and compaction-time-(avg|min|max)

After data is flushed to disk, it needs to be reorganised on disk from time to time. This reorganisation is called compaction and is performed by a background process. For the reorganisation, data needs to be moved from disk to memory and back. The metrics bytes-read-compaction-rate and bytes-written-compaction-rate measure the average read and write throughput caused by compactions. The metrics compaction-time-(avg|min|max) measure the duration of compaction operations.

The metrics should help to identify compactions as bottlenecks.
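
Again assuming the metrics are backed by RocksDB's Statistics, plausible sources are the COMPACT_READ_BYTES and COMPACT_WRITE_BYTES tickers and the COMPACTION_TIME histogram; a sketch:

    import org.rocksdb.HistogramType;
    import org.rocksdb.Statistics;
    import org.rocksdb.TickerType;

    // Sketch (assumption): counters and histogram from which the compaction
    // metrics could be derived.
    final class CompactionMetricsSource {
        static long bytesReadDuringCompactionTotal(final Statistics statistics) {
            return statistics.getTickerCount(TickerType.COMPACT_READ_BYTES);
        }

        static long bytesWrittenDuringCompactionTotal(final Statistics statistics) {
            return statistics.getTickerCount(TickerType.COMPACT_WRITE_BYTES);
        }

        static double compactionTimeAvg(final Statistics statistics) {
            return statistics.getHistogramData(HistogramType.COMPACTION_TIME).getAverage();
        }
    }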

write-waiting-time-(avg|total)

As explained for bytes-flushed-(rate|total) and flush-time-(avg|min|max), when the memtable is almost full, its data is flushed to disk by a background process. During flushes and compactions, a write to the database might need to wait until these background processes finish. These metrics measure the average and total time writes spend waiting for flushes and compactions to finish.

If flushes and compactions happen too often, this time may increase and signal a bottleneck. Users can then take action, e.g., by increasing the size of the memtable to decrease the rate of flushes or by changing the compaction settings.
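
A plausible source for the total waiting time is RocksDB's STALL_MICROS ticker, which accumulates the time writes were stalled; a sketch under that assumption:

    import org.rocksdb.Statistics;
    import org.rocksdb.TickerType;

    // Sketch (assumption): total time writes were stalled, as accumulated by
    // RocksDB's STALL_MICROS ticker (in microseconds, so a conversion to ms would
    // be needed for write-waiting-time-total).
    final class WriteStallMetricsSource {
        static long writeWaitingTimeTotalMicros(final Statistics statistics) {
            return statistics.getTickerCount(TickerType.STALL_MICROS);
        }
    }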

num-open-files and num-file-errors-total

num-open-files measures the number of files currently kept open by a RocksDB instance, and num-file-errors-total measures the total number of file errors that occurred. These metrics should help to detect resource exhaustion and I/O problems on the underlying file system.

Compatibility, Deprecation, and Migration Plan

  • What impact (if any) will there be on existing users?
  • If we are changing behavior how will we phase out the older behavior?
  • If we need special migration tools, describe them here.
  • When will we remove the existing behavior?

Rejected Alternatives

If there are alternative ways of accomplishing the same thing, what were they? The purpose of this section is to motivate why the design is the way it is and not some other way.
