...
Name | Description
---|---
kafka.server:type=broker-metadata-metrics,name=metadata-apply-error-count | Reports the number of errors encountered by the BrokerMetadataPublisher while applying a new MetadataImage based on the latest MetadataDelta.
kafka.server:type=broker-metadata-metrics,name=metadata-load-error-count | Reports the number of errors encountered by the BrokerMetadataListener while loading the metadata log and generating a new MetadataDelta based on the log it has received thus far.
kafka.controller:type=KafkaController,name=MetadataErrorCount | Reports the number of times this controller node has renounced leadership of the metadata quorum owing to an error encountered during event processing.
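Like other Kafka metrics, these counters are exposed through JMX, so they can be polled by any JMX client. The following is a minimal, self-contained sketch of that polling pattern using only the JDK's `javax.management` API; the `ErrorCount` MBean and the `example:` object name are hypothetical stand-ins for illustration, not Kafka classes (the real metrics use the `kafka.server`/`kafka.controller` object names from the table above).

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MetricPoll {
    // Hypothetical MBean interface mimicking a cumulative error-count metric.
    public interface ErrorCountMBean {
        long getErrorCount();
    }

    // Standard MBean implementation: the interface name must be <ClassName>MBean.
    public static class ErrorCount implements ErrorCountMBean {
        private final AtomicLong count = new AtomicLong();
        public void increment() { count.incrementAndGet(); }
        @Override public long getErrorCount() { return count.get(); }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ErrorCount metric = new ErrorCount();
        // Illustrative object name; real Kafka metrics use names like
        // kafka.server:type=broker-metadata-metrics,name=metadata-apply-error-count
        ObjectName name = new ObjectName(
            "example:type=demo-metrics,name=metadata-apply-error-count");
        server.registerMBean(metric, name);

        metric.increment(); // simulate one error being recorded
        Long value = (Long) server.getAttribute(name, "ErrorCount");
        System.out.println("metadata-apply-error-count=" + value); // prints 1
    }
}
```

In practice an operator would read these attributes remotely (e.g. via a JMX exporter) and alert when any of the three counters becomes non-zero.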
...
```java
private Throwable handleEventException(String name,
                                       OptionalLong startProcessingTimeNs,
                                       Throwable exception) {
    if (!startProcessingTimeNs.isPresent()) {
        ...
        ...
    }
    renounce(); //**** Increment MetadataErrorCount
    return new UnknownServerException(exception);
}
```
Brokers
The `metadata-apply-error-count` metric will be incremented by one every time there is an error in publishing a new `MetadataImage`. This metric will reflect the count of cumulative errors since the broker started up.
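The increment-then-rethrow pattern this implies can be sketched as below. `PublishErrorCounter`, its `metadataApplyErrorCount` field, and the `Runnable` standing in for applying a `MetadataImage` are hypothetical illustrations of the pattern, not broker code.

```java
import java.util.concurrent.atomic.AtomicLong;

public class PublishErrorCounter {
    // Hypothetical counter standing in for the metadata-apply-error-count metric.
    public static final AtomicLong metadataApplyErrorCount = new AtomicLong();

    public static void publish(Runnable applyImage) {
        try {
            applyImage.run(); // apply the new metadata image
        } catch (Throwable t) {
            // Record the failure, then rethrow so the caller still sees it.
            // The counter is cumulative for the life of the process.
            metadataApplyErrorCount.incrementAndGet();
            throw t;
        }
    }

    public static void main(String[] args) {
        try {
            publish(() -> { throw new IllegalStateException("bad image"); });
        } catch (IllegalStateException expected) {
            // the error still propagates to the caller
        }
        publish(() -> { }); // a successful publish does not reset the count
        System.out.println(metadataApplyErrorCount.get()); // prints 1
    }
}
```

Note that the counter only ever increases; a subsequent successful publish leaves it unchanged, which is what makes a non-zero value a reliable alerting signal.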
...
```scala
override def publish(delta: MetadataDelta, newImage: MetadataImage): Unit = {
  val highestOffsetAndEpoch = newImage.highestOffsetAndEpoch()
  try {
    trace(s"Publishing delta $delta with highest offset $highestOffsetAndEpoch")

    // Publish the new metadata image to the metadata cache.
    metadataCache.setImage(newImage)
    ...
    ...
    publishedOffsetAtomic.set(newImage.highestOffsetAndEpoch().offset)
  } catch {
    //**** Increment metadata-apply-error-count
    case t: Throwable =>
      error(s"Error publishing broker metadata at $highestOffsetAndEpoch", t)
      throw t
  } finally {
    _firstPublish = false
  }
}
```
The `metadata-load-error-count` metric will be incremented every time there is an error in loading batches and generating a `MetadataDelta` from them. This metric will reflect the count of cumulative errors since the broker started up.
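On the load path the counter is incremented inside a try/catch/finally so that the reader is closed whether or not loading fails. A minimal sketch of that structure follows; `BatchLoader`, its `Reader` interface, and `metadataLoadErrorCount` are hypothetical stand-ins for the broker's `BatchReader[ApiMessageAndVersion]` and metric, used only to illustrate the control flow.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class BatchLoader {
    // Hypothetical counter standing in for the metadata-load-error-count metric.
    public static final AtomicLong metadataLoadErrorCount = new AtomicLong();

    // Hypothetical reader; the real code uses a BatchReader[ApiMessageAndVersion].
    public interface Reader extends AutoCloseable {
        List<String> next();
        @Override void close();
    }

    public static List<String> loadBatches(Reader reader) {
        try {
            return reader.next(); // build the delta from the batches
        } catch (RuntimeException e) {
            // Record the failure before propagating it, mirroring the
            // catch block in the commit/snapshot handling events.
            metadataLoadErrorCount.incrementAndGet();
            throw e;
        } finally {
            reader.close(); // the reader is closed on both paths
        }
    }
}
```

The `finally` block matters here: without it, a load error would both increment the metric and leak the reader, so the metric increment is deliberately kept in `catch` rather than `finally`.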
...
```scala
class HandleCommitsEvent(reader: BatchReader[ApiMessageAndVersion])
    extends EventQueue.FailureLoggingEvent(log) {
  override def run(): Unit = {
    val results = try {
      val loadResults = loadBatches(_delta, reader, None, None, None)
      ...
      loadResults
    } catch {
      //**** Increment metadata-load-error-count
    } finally {
      reader.close()
    }
    ...
    _publisher.foreach(publish)
  }
}
```
...
```scala
class HandleSnapshotEvent(reader: SnapshotReader[ApiMessageAndVersion])
    extends EventQueue.FailureLoggingEvent(log) {
  override def run(): Unit = {
    try {
      info(s"Loading snapshot ${reader.snapshotId().offset}-${reader.snapshotId().epoch}.")
      _delta = new MetadataDelta(_image) // Discard any previous deltas.
      val loadResults = loadBatches(
        ...
    } catch {
      //**** Increment metadata-load-error-count
    } finally {
      reader.close()
    }
    _publisher.foreach(publish)
  }
}
```
...