...
Produce latency is an important metric for monitoring cluster performance. Total request latency is the sum of several components: RequestQueueTime, LocalTime, RemoteTime, ResponseQueueTime, ResponseSendTime, ThrottleTime, and MessageConversionsTime. RemoteTime is the time the request spends in purgatory.
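For reference, each of these components is exposed per request type as a histogram under kafka.network:type=RequestMetrics. The sketch below reads the produce-request breakdown over JMX; the broker address is a placeholder:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Minimal sketch: read the per-component produce latency histograms over JMX.
// "broker-host:9999" is a placeholder; the MBean names are the standard
// kafka.network RequestMetrics names.
public class ProduceLatencyBreakdown {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            String[] components = {
                "RequestQueueTimeMs", "LocalTimeMs", "RemoteTimeMs",
                "ResponseQueueTimeMs", "ResponseSendTimeMs",
                "ThrottleTimeMs", "MessageConversionsTimeMs", "TotalTimeMs"
            };
            for (String name : components) {
                ObjectName mbean = new ObjectName(
                    "kafka.network:type=RequestMetrics,name=" + name + ",request=Produce");
                // Each metric is a histogram; Mean and the percentile attributes
                // are exposed as standard MBean attributes.
                Object mean = mbsc.getAttribute(mbean, "Mean");
                System.out.printf("%s mean = %s ms%n", name, mean);
            }
        }
    }
}
```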
...
However, produce RemoteTime is hard to troubleshoot. It is the time the leader spends waiting for the followers to fetch the data and advance the high watermark. It can be high because: 1. the network is congested, 2. the fetch processing itself is slow, or 3. something else entirely.
Currently we don't have the right metrics to tell which part is contributing to high produce RemoteTime. The reported fetch latency does not reflect the true fetch latency, because a fetch request can sit in purgatory for up to replica.fetch.wait.max.ms whenever the accumulated data is less than fetch.min.bytes. This intentional wait greatly skews the reported latency numbers.
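To illustrate the skew, here is a toy model (not broker code) of when a fetch leaves purgatory: it completes as soon as fetch.min.bytes of data is available, or when replica.fetch.wait.max.ms expires, whichever comes first. On a quiet partition the reported latency therefore converges to the configured max wait rather than the real replication latency:

```java
// Toy model only (not broker code): shows why the currently reported fetch
// latency is skewed by the intentional purgatory wait.
public class FetchWaitModel {
    // A parked fetch completes when fetch.min.bytes is satisfied or when the
    // max wait expires, whichever comes first; the wait is counted as latency.
    static long reportedFetchLatencyMs(long msUntilMinBytesAvailable,
                                       long replicaFetchWaitMaxMs,
                                       long processingTimeMs) {
        long purgatoryWaitMs = Math.min(msUntilMinBytesAvailable, replicaFetchWaitMaxMs);
        return purgatoryWaitMs + processingTimeMs;
    }

    public static void main(String[] args) {
        // Busy partition: data arrives quickly, the reported number looks fine.
        System.out.println(reportedFetchLatencyMs(2, 500, 3));   // 5 ms
        // Quiet partition: the report is dominated by replica.fetch.wait.max.ms
        // (default 500 ms) even though nothing is actually slow.
        System.out.println(reportedFetchLatencyMs(900, 500, 3)); // 503 ms
    }
}
```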
...
Add a waitTimeMs field to FetchResponse, and bump the FetchRequest and FetchResponse protocol versions.
```
FetchResponse => ThrottleTimeMs WaitTimeMs ErrorCode SessionId [FetchableTopicResponse]
  ThrottleTimeMs => INT32
  WaitTimeMs => INT32    # Add a new field to record the request wait time in purgatory
  ErrorCode => INT16
  SessionId => INT32
  FetchableTopicResponse => [Name [FetchablePartitionResponse] PreferredReadReplica Records]
  ......
```
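With the new field, the fetcher can subtract the intentional purgatory wait from its observed round-trip time to recover the true fetch latency. A minimal sketch of that arithmetic (the request/response plumbing is omitted; waitTimeMs would come from the new FetchResponse field):

```java
// Sketch of the follower-side arithmetic; waitTimeMs is the new FetchResponse
// field, and the timestamps are taken around the fetch round trip.
public class TrueFetchLatency {
    static long trueFetchLatencyMs(long sendTimeMs, long receiveTimeMs, long waitTimeMs) {
        long observedMs = receiveTimeMs - sendTimeMs; // full round trip as seen by the fetcher
        // Removing the intentional purgatory wait leaves network transfer plus
        // the real fetch processing time on the leader.
        return Math.max(0, observedMs - waitTimeMs);
    }
}
```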
Proposed Changes
We propose to track the real end-to-end fetch latency with the following changes:
...
[Gliffy diagram: Real End-to-End Fetch Latency]
Additionally, on the leader side, we will add a new metric called TotalLocalTime, defined as TotalLocalTime = TotalTime - RemoteTime. It measures the time spent processing a fetch request on the leader, excluding the time spent in purgatory.
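A sketch of how the leader could record this metric, using the Yammer metrics library the broker already depends on; the class name and call site below are illustrative, not actual broker code:

```java
import com.yammer.metrics.Metrics;
import com.yammer.metrics.core.Histogram;

// Illustrative sketch of the proposed leader-side metric; the metric scope
// class and the recording call site are assumptions, not broker internals.
public class FetchRequestMetrics {
    private final Histogram totalLocalTimeMs =
        Metrics.newHistogram(FetchRequestMetrics.class, "TotalLocalTimeMs", true);

    // Called once per completed fetch request with the timings the request
    // object already tracks today.
    public void record(long totalTimeMs, long remoteTimeMs) {
        // TotalLocalTime = TotalTime - RemoteTime: leader-side processing only,
        // excluding the time the request was parked in purgatory.
        totalLocalTimeMs.update(totalTimeMs - remoteTimeMs);
    }
}
```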
...