Status
Current state: Accepted
Discussion thread: here
JIRA: KAFKA-9983, KAFKA-10054
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
...
The following metrics would be added:
- record-e2e-latency-min [ms]
- record-e2e-latency-max [ms]
- record-e2e-latency-avg [ms]
These will be exposed on the task-level at the recording level INFO with the following tags:
- type = stream-task-metrics
- thread-id=[threadId]
- task-id=[taskId]
We will also expose these metrics on the processor-node-level with the following tags:
- type = stream-processor-node-metrics
- thread-id=[threadId]
- task-id=[taskId]
- processor-node-id=[processorNodeId]
These will be reported for source and terminal operators at the recording level INFO.
We will also expose these metrics for stateful operators at the recording level TRACE (which is also being added as part of this KIP).
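Collecting the stateful-operator metrics therefore requires raising the metrics recording level in the application configuration. A minimal sketch (the config key is the existing StreamsConfig option; TRACE is the new level introduced by this KIP):

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

final Properties props = new Properties();
// INFO is the default recording level; raising it to TRACE also enables
// the stateful-operator e2e latency metrics at the processor-node level.
props.put(StreamsConfig.METRICS_RECORDING_LEVEL_CONFIG, "TRACE");
```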
In all cases the metrics will be computed at the end of the operation, i.e. once the processing has been completed. For the task-level metrics, for example, this means the metric reflects the end-to-end latency at the time the record leaves the sink node.
Update: The min and max task-level INFO metrics have been added in 2.6, and the remaining metrics will ship in the next version.
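Once an application exposing these metrics is running, they can also be read programmatically via the public KafkaStreams#metrics() accessor. A minimal sketch (the metric and group names are the ones described above; the filtering logic itself is illustrative):

```java
import java.util.Map;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.KafkaStreams;

// Print the task-level e2e latency metrics of a running KafkaStreams app.
static void printE2eLatencies(final KafkaStreams streams) {
    for (final Map.Entry<MetricName, ? extends Metric> entry : streams.metrics().entrySet()) {
        final MetricName name = entry.getKey();
        if (name.group().equals("stream-task-metrics")
                && name.name().startsWith("record-e2e-latency")) {
            System.out.println(name.name() + " " + name.tags() + " = "
                    + entry.getValue().metricValue());
        }
    }
}
```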
Proposed Changes
Imagine a simple four-node subtopology with source node O, filter node F, aggregation A, and sink node I. For any record flowing through this subtopology with record timestamp t, let tO be the system (wallclock) time when it is sent from the source topic, tA be the time when it is finished being processed by the aggregator node, and tI be the time when it leaves the sink node for the output or repartition topic. The end-to-end latency at operator O for a given record is defined as
LO = tO - t
and likewise for the other operator-level end-to-end latencies. This represents the age of the record at the time it was processed by operator O. The task-level end-to-end (e2e) latency L will be computed based on the sink node, i.e. L = LI. The source node e2e latency for nodes reading from the user input topics therefore represents the consumption latency: the time it took for a newly-created event to be read by Streams. This can be especially interesting in cases where some records may be severely delayed, for example by an IoT device with an unstable network connection, or when a user's smartphone reconnects to the internet after a flight and pushes all the latest updates. On the other side, the sink node e2e latency (which is also the task-level e2e latency) reveals how long it takes for the record to be fully processed through that subtopology. If the task is the final one in the full topology, this is the full end-to-end latency: the time it took for a record to be fully processed through Streams.
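In code form, the definition is simply the record's age at a given point in the topology: the wallclock time at that point minus the record timestamp. A minimal sketch with illustrative names (not the Streams-internal implementation):

```java
// e2e latency of a record at some point in the topology: the wallclock
// time at that point minus the record's event timestamp t.
static long endToEndLatencyMs(final long recordTimestampMs, final long wallclockNowMs) {
    return wallclockNowMs - recordTimestampMs;
}
```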
Note that for a given record, LO <= LA <= LI. This holds true within and across subtopologies: for a single record, a downstream subtopology will always have a task-level end-to-end latency greater than or equal to that of an upstream subtopology (which in turn implies the same holds for the statistical measures exposed via the new metrics). Comparing the e2e latency across tasks (or across operators) will also be of interest, as the difference represents the processing delay: the amount of time it took for Streams to actually process the record from point A to point B within the topology. See the worked example below.
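As a worked example with invented numbers:

```java
final long t  = 0L;     // record (event) timestamp
final long tO = 250L;   // wallclock time at source node O
final long tA = 300L;   // wallclock time after aggregation A
final long tI = 400L;   // wallclock time when the record leaves sink node I

final long lO = tO - t; // 250 ms: consumption latency
final long lA = tA - t; // 300 ms
final long lI = tI - t; // 400 ms: task-level e2e latency L
// lO <= lA <= lI holds, and lI - lO = 150 ms is the processing delay
// incurred within the subtopology.
```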
...
This idea was originally discussed but ultimately put to rest, as it does not address the specific goal set out in this KIP: to report the time for an event to be reflected in the output. This alternative metric, which we call "staleness", has some use as a gauge of a record's age when it is received by an operator, which may have implications for its processing by some operators. However, this concern is orthogonal, and the alternative was thus rejected in favor of measuring at the record output.
Reporting mean or median (p50)
Rejected because:
...