...

We will also expose these metrics on the processor-node level for stateful operators, at the recording level DEBUG, with the following tags (see the registration sketch after the list):

...
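For illustration, here is a minimal sketch of how such a DEBUG-level, processor-node-level sensor could be registered with Kafka's metrics library, carrying the tags listed above. The sensor name, metric names, metric group, and tag keys are illustrative assumptions rather than the exact names this KIP proposes.

    import java.util.LinkedHashMap;
    import java.util.Map;

    import org.apache.kafka.common.MetricName;
    import org.apache.kafka.common.metrics.Metrics;
    import org.apache.kafka.common.metrics.Sensor;
    import org.apache.kafka.common.metrics.stats.Avg;
    import org.apache.kafka.common.metrics.stats.Max;

    public class StalenessSensorSketch {

        // Registers a per-processor-node staleness sensor that only records when the
        // metrics recording level is DEBUG. All names below are hypothetical.
        public static Sensor stalenessSensor(final Metrics metrics,
                                             final String threadId,
                                             final String taskId,
                                             final String processorNodeId) {
            final Sensor sensor = metrics.sensor(
                processorNodeId + "-record-staleness", Sensor.RecordingLevel.DEBUG);

            final Map<String, String> tags = new LinkedHashMap<>();
            tags.put("thread-id", threadId);
            tags.put("task-id", taskId);
            tags.put("processor-node-id", processorNodeId);

            sensor.add(new MetricName("record-staleness-avg", "stream-processor-node-metrics",
                    "The average staleness of records at this node", tags), new Avg());
            sensor.add(new MetricName("record-staleness-max", "stream-processor-node-metrics",
                    "The maximum staleness of records at this node", tags), new Max());
            return sensor;
        }
    }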

Imagine a simple 3-node subtopology with source node O, aggregation node A, and suppression node P. For any record flowing through this subtopology with record timestamp t, let tO be the system (wall-clock) time when it is read from the source topic, tA be the time when it is processed by the aggregator node, and tP be the time when it reaches the suppression node. The staleness at a given operator for this record is defined as 

...
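To make this concrete: assuming the definition works out to SX = tX - t, i.e. the gap between the wall-clock time observed at node X and the record's timestamp, a record stamped 10:00:00.000 that is read from the source topic at 10:00:00.250 has a staleness of SO = 250 ms at the source node (these numbers are purely illustrative).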

Note that for a given record, SO <= SA <= SP. This holds true within and across subtopologies: a downstream subtopology will always have a task-level staleness greater than or equal to that of an upstream subtopology for a single task, which in turn implies the same holds true for the statistical measures exposed via the new metrics. Comparing the staleness across tasks (or across operators) will also be of interest. Note that this does not represent the latency of processing, since the system time is only updated at the source node; instead it represents how long the record may have sat in a cache or suppression buffer before flowing further downstream. 
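Continuing the illustrative numbers from above: the record is handed from the source to the aggregator within the same processing loop, so SA is still roughly 250 ms (no per-operator processing latency becomes visible, because the system time is not re-read between the nodes). If the record then sits in the suppression buffer for 30 seconds before being emitted, the system time observed when it finally flows downstream has advanced accordingly, so SP is roughly 30.25 seconds. The gap between SA and SP reflects time spent buffered, not processing cost.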

Late-arriving records will be included in this metric, even if they are otherwise dropped because the grace period has passed. Although we already expose a metric for the number of dropped late records, there is no way for a user to find out how late those records were. Including them in the staleness metrics may, for one thing, help users choose a reasonable grace period if they see that a large number of records are being dropped. Another slight difference from the concept of late records is that late records are dropped based on stream time, whereas this metric is always reported with respect to the current system time. The stream time may lag the system time when traffic is low, for example, or when all records are considerably delayed, which means a user might see no dropped records even though the staleness is large. 
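As a hypothetical example of acting on this signal: if the staleness metrics show that dropped records arrive up to roughly ten minutes late, a user might widen the grace period on a windowed aggregation accordingly. The window size and grace value below are purely illustrative.

    import java.time.Duration;

    import org.apache.kafka.streams.kstream.TimeWindows;

    public class GracePeriodSketch {

        // Purely illustrative values: widen the grace period so that records observed
        // (via the staleness metrics) to arrive up to ~10 minutes late are no longer
        // dropped by the windowed aggregation.
        static TimeWindows windowsWithWiderGrace() {
            return TimeWindows
                    .of(Duration.ofMinutes(5))       // 5-minute windows (example value)
                    .grace(Duration.ofMinutes(10));  // tolerate up to 10 minutes of lateness
        }
    }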

...