Status
Current state: Accepted
Discussion thread: https://lists.apache.org/thread/yl87h1s484yc09yjo1no46hwpbv0qkwt
...
Name | Description |
---|---|
kafka.server:type=broker-metadata-metrics,name=metadata-apply-error-count | Reports the number of errors encountered by the BrokerMetadataPublisher while applying a new MetadataImage based on the latest MetadataDelta. |
kafka.server:type=broker-metadata-metrics,name=metadata-load-error-count | Reports the number of errors encountered by the BrokerMetadataListener while loading the metadata log and generating a new MetadataDelta based on the log it has received thus far. |
kafka.controller:type=KafkaController,name=MetadataErrorCount | Reports the number of times this controller node has encountered an error during metadata log processing. |
Proposed Changes
Controllers
The MetadataErrorCount metric is updated for both active and standby controllers. For active controllers, it is incremented any time they hit an error either in generating the metadata log or in applying it to memory. For standby controllers, it is incremented when they hit an error in applying the metadata log to memory. This metric reflects the total count of errors that a controller (both leader and non-leader) has encountered in metadata log processing since the last restart.
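The counting behavior described above can be sketched as a simple monotonic counter. The class and method names below are illustrative only, not Kafka's actual implementation:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: a monotonically increasing error counter that a
// controller could expose as the MetadataErrorCount gauge. Both the active
// and standby code paths funnel their failures through recordError().
public class MetadataErrorCounter {
    private final AtomicLong metadataErrorCount = new AtomicLong(0);

    // Called when the active controller fails to generate or apply metadata
    // records, or when a standby controller fails to apply the log to memory.
    public void recordError() {
        metadataErrorCount.incrementAndGet();
    }

    // Value reported by the MetadataErrorCount metric: cumulative errors
    // since the last restart (never reset while the process is running).
    public long value() {
        return metadataErrorCount.get();
    }
}
```

Because the counter is never reset, an operator can alert on any increase rather than on an absolute threshold.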
...
```java
private Throwable handleEventException(String name,
                                       OptionalLong startProcessingTimeNs,
                                       Throwable exception) {
    if (!startProcessingTimeNs.isPresent()) {
        ...
    }
    ...
    renounce(); // **** Increment MetadataErrorCount
    return new UnknownServerException(exception);
}
```
Brokers
The metadata-apply-error-count metric will be incremented by one every time there is an error in publishing a new MetadataImage. This metric reflects the cumulative count of errors since the broker started up.
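The increment-on-failure pattern used here can be sketched as follows. The names (ImagePublisher, applyImage, applyErrorCount) are hypothetical stand-ins, not Kafka's API; the point is that the counter is bumped in the catch block and the exception is still propagated:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

// Hypothetical sketch of the broker-side pattern: increment the error
// counter when applying an image fails, then rethrow so the caller
// still observes the failure.
public class ImagePublisher {
    private final AtomicLong applyErrorCount = new AtomicLong(0);
    private final Consumer<String> applyImage; // stand-in for the real publish logic

    public ImagePublisher(Consumer<String> applyImage) {
        this.applyImage = applyImage;
    }

    public void publish(String image) {
        try {
            applyImage.accept(image);
        } catch (Throwable t) {
            // **** Increment metadata-apply-error-count, then propagate.
            applyErrorCount.incrementAndGet();
            throw t;
        }
    }

    public long applyErrorCount() {
        return applyErrorCount.get();
    }
}
```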
...
```scala
override def publish(delta: MetadataDelta, newImage: MetadataImage): Unit = {
  val highestOffsetAndEpoch = newImage.highestOffsetAndEpoch()
  try {
    trace(s"Publishing delta $delta with highest offset $highestOffsetAndEpoch")
    // Publish the new metadata image to the metadata cache.
    metadataCache.setImage(newImage)
    ...
    ...
    publishedOffsetAtomic.set(newImage.highestOffsetAndEpoch().offset)
  } catch {
    // **** Increment metadata-apply-error-count
    case t: Throwable =>
      error(s"Error publishing broker metadata at $highestOffsetAndEpoch", t)
      throw t
  } finally {
    _firstPublish = false
  }
}
```
The metadata-load-error-count metric will be incremented every time there is an error in loading batches and generating a MetadataDelta from them. This metric reflects the cumulative count of errors since the broker started up.
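Both broker metrics above are addressable as JMX MBeans. As a small sanity check, the metric names from the table decompose into standard JMX ObjectName parts (domain plus type/name key properties):

```java
import javax.management.ObjectName;

public class MetricNameCheck {
    public static void main(String[] args) throws Exception {
        // The metric name from the table, parsed as a JMX ObjectName.
        ObjectName name = new ObjectName(
            "kafka.server:type=broker-metadata-metrics,name=metadata-load-error-count");
        System.out.println(name.getDomain());            // kafka.server
        System.out.println(name.getKeyProperty("type")); // broker-metadata-metrics
        System.out.println(name.getKeyProperty("name")); // metadata-load-error-count
    }
}
```

With a running broker, the same ObjectName can be passed to an MBeanServerConnection to read the current error count.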
...
```scala
class HandleCommitsEvent(reader: BatchReader[ApiMessageAndVersion])
    extends EventQueue.FailureLoggingEvent(log) {
  override def run(): Unit = {
    val results = try {
      val loadResults = loadBatches(_delta, reader, None, None, None)
      ...
      loadResults
    } catch {
      // **** Increment metadata-load-error-count
    } finally {
      reader.close()
    }
    ...
    _publisher.foreach(publish)
  }
}
```
...
```scala
class HandleSnapshotEvent(reader: SnapshotReader[ApiMessageAndVersion])
    extends EventQueue.FailureLoggingEvent(log) {
  override def run(): Unit = {
    try {
      info(s"Loading snapshot ${reader.snapshotId().offset}-${reader.snapshotId().epoch}.")
      _delta = new MetadataDelta(_image) // Discard any previous deltas.
      val loadResults = loadBatches(
        ...
    } catch {
      // **** Increment metadata-load-error-count
    } finally {
      reader.close()
    }
    _publisher.foreach(publish)
  }
}
```
...