Status
Current state: Accepted
Discussion thread: https://lists.apache.org/thread/yl87h1s484yc09yjo1no46hwpbv0qkwt
...
Name | Description |
---|---|
kafka.server:type=broker-metadata-metrics,name=metadata-apply-error-count | Reports the number of errors encountered by the BrokerMetadataPublisher while applying a new MetadataImage based on the latest MetadataDelta. |
kafka.server:type=broker-metadata-metrics,name=metadata-load-error-count | Reports the number of errors encountered by the BrokerMetadataListener while loading the metadata log and generating a new MetadataDelta from the log it has received thus far. |
kafka.controller:type=KafkaController,name=MetadataErrorCount | Reports the number of errors this controller node has encountered during metadata log processing. |
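As a hedged illustration of how cumulative counters like these are surfaced to operators, the sketch below registers a count under a JMX ObjectName of the same shape as the names in the table and reads it back the way a monitoring agent would. The `JmxErrorCount` class and `CountView` interface are invented for this example and are not Kafka's actual implementation; Kafka exposes these metrics through its own metrics library.

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

// Illustrative sketch only: a cumulative error counter exposed over JMX.
public class JmxErrorCount {
    // The single read-only attribute ("Count") that the MBean exposes.
    public interface CountView { long getCount(); }

    private final AtomicLong count = new AtomicLong();

    public void increment() { count.incrementAndGet(); }

    /** Registers this counter under the given ObjectName, e.g.
     *  "kafka.server:type=broker-metadata-metrics,name=metadata-apply-error-count". */
    public void register(String objectName) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            CountView view = count::get;
            server.registerMBean(new StandardMBean(view, CountView.class),
                                 new ObjectName(objectName));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /** Reads the Count attribute back, as a JMX monitoring agent would. */
    public static long read(String objectName) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            return (Long) server.getAttribute(new ObjectName(objectName), "Count");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```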
Proposed Changes
...
Brokers
The metadata-apply-error-count metric reflects the count of errors encountered while publishing a new version of the MetadataImage built from the metadata log.
Both of these metrics can be used to set up alerts so that affected nodes are visible and any needed remedial actions can be performed on them.
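Because these counts are cumulative and only reset on restart, one simple alerting rule (sketched below; this helper is illustrative and not part of the proposal) is to poll the metric periodically and fire whenever the sampled value increases. Clamping each delta at zero also tolerates the counter resetting to zero when a node restarts.

```java
import java.util.List;

// Illustrative sketch: alerting off a cumulative, restart-resettable error counter.
public class ErrorCountAlert {
    /** Fire an alert whenever the counter moved up between two polls. */
    public static boolean shouldAlert(long previousSample, long currentSample) {
        return currentSample > previousSample;
    }

    /** Total new errors across a series of polled samples.
     *  Math.max(0, delta) ignores drops caused by a counter reset on restart. */
    public static long newErrors(List<Long> samples) {
        long total = 0;
        for (int i = 1; i < samples.size(); i++) {
            total += Math.max(0, samples.get(i) - samples.get(i - 1));
        }
        return total;
    }
}
```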
Controllers
Any errors during metadata processing on the active controller cause it to renounce the quorum leadership. Such elections are different from general Raft elections triggered for other reasons, such as a roll; repeated elections caused by errors in the active controller could point to issues in the metadata log generation or handling logic, and having visibility into them would be helpful.

The MetadataErrorCount metric is updated for both active and standby controllers. For active controllers, it is incremented anytime they hit an error either while generating the metadata log or while applying it to memory; in particular, it is incremented anytime the active controller resigns as a result of handling an exception in event processing. For standby controllers, it is incremented when they hit an error while applying the metadata log to memory. This metric reflects the total count of errors that a controller has encountered in metadata log processing since its last restart.
```java
private Throwable handleEventException(String name,
                                       OptionalLong startProcessingTimeNs,
                                       Throwable exception) {
    if (!startProcessingTimeNs.isPresent()) {
        ...
    }
    ...
    renounce(); // **** Increment MetadataErrorCount
    return new UnknownServerException(exception);
}
```
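The semantics described above can be sketched as follows. The class below is illustrative only (the real counter lives in Kafka's controller code), but it captures that active controllers count errors from both record generation and log replay, standby controllers count errors from replay only, and the count accumulates until restart.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of MetadataErrorCount semantics; not Kafka's actual classes.
public class MetadataErrorCounter {
    private final AtomicLong metadataErrorCount = new AtomicLong();
    private final boolean isActive; // active vs. standby controller

    public MetadataErrorCounter(boolean isActive) { this.isActive = isActive; }

    /** Only active controllers generate metadata records; errors here are counted. */
    public void generateRecords(Runnable generation) {
        if (!isActive) throw new IllegalStateException("standby controllers do not generate records");
        try {
            generation.run();
        } catch (RuntimeException e) {
            metadataErrorCount.incrementAndGet();
            throw e;
        }
    }

    /** Both active and standby controllers replay (apply) the log; errors here are counted. */
    public void applyToMemory(Runnable replay) {
        try {
            replay.run();
        } catch (RuntimeException e) {
            metadataErrorCount.incrementAndGet();
            throw e;
        }
    }

    /** Cumulative error count since this controller last restarted. */
    public long value() { return metadataErrorCount.get(); }
}
```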
Brokers
The metadata-apply-error-count metric will be incremented by one every time there is an error in publishing a new MetadataImage. This metric will reflect the cumulative count of errors since the broker started up.
```scala
override def publish(delta: MetadataDelta, newImage: MetadataImage): Unit = {
  val highestOffsetAndEpoch = newImage.highestOffsetAndEpoch()
  try {
    trace(s"Publishing delta $delta with highest offset $highestOffsetAndEpoch")
    // Publish the new metadata image to the metadata cache.
    metadataCache.setImage(newImage)
    ...
    ...
    publishedOffsetAtomic.set(newImage.highestOffsetAndEpoch().offset)
  } catch {
    // **** Increment metadata-apply-error-count
    case t: Throwable =>
      error(s"Error publishing broker metadata at $highestOffsetAndEpoch", t)
      throw t
  } finally {
    _firstPublish = false
  }
}
```
The metadata-load-error-count metric will be incremented every time there is an error in loading batches and generating a MetadataDelta from them. This metric will reflect the cumulative count of errors since the broker started up.
```scala
class HandleCommitsEvent(reader: BatchReader[ApiMessageAndVersion])
    extends EventQueue.FailureLoggingEvent(log) {
  override def run(): Unit = {
    val results = try {
      val loadResults = loadBatches(_delta, reader, None, None, None)
      ...
      loadResults
    } catch {
      // **** Increment metadata-load-error-count
    } finally {
      reader.close()
    }
    ...
    _publisher.foreach(publish)
  }
}
```
...
```scala
class HandleSnapshotEvent(reader: SnapshotReader[ApiMessageAndVersion])
    extends EventQueue.FailureLoggingEvent(log) {
  override def run(): Unit = {
    try {
      info(s"Loading snapshot ${reader.snapshotId().offset}-${reader.snapshotId().epoch}.")
      _delta = new MetadataDelta(_image) // Discard any previous deltas.
      val loadResults = loadBatches(
        ...
    } catch {
      // **** Increment metadata-load-error-count
    } finally {
      reader.close()
    }
    _publisher.foreach(publish)
  }
}
```
...
Instead of adding these specific metrics, we could have added a single, more generic MetadataProcessingErrorCount metric, incremented whenever any of these (or similar) errors are hit on either brokers or controllers. The downside of this approach is the loss of granularity about what exactly failed on a given node. The specific metrics are more meaningful and give better control over any alerting that might be set up on them.
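To make the trade-off concrete, the hypothetical sketch below keeps one counter per failure site. The rejected generic metric would effectively be the sum of the specific counters, which can still be derived from them, while recovering per-site counts from a single generic counter is not possible.

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of the granularity trade-off; names are invented for this example.
public class GranularCounters {
    public enum ErrorKind { METADATA_APPLY, METADATA_LOAD, CONTROLLER_METADATA }

    private final Map<ErrorKind, AtomicLong> specific = new EnumMap<>(ErrorKind.class);

    public GranularCounters() {
        for (ErrorKind k : ErrorKind.values()) specific.put(k, new AtomicLong());
    }

    /** Record an error at a specific failure site. */
    public void record(ErrorKind kind) { specific.get(kind).incrementAndGet(); }

    /** Per-site count: the granularity the specific metrics preserve. */
    public long count(ErrorKind kind) { return specific.get(kind).get(); }

    /** What the rejected generic MetadataProcessingErrorCount would have reported. */
    public long genericTotal() {
        return specific.values().stream().mapToLong(AtomicLong::get).sum();
    }
}
```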