...

Name | Description
kafka.server:type=broker-metadata-metrics,name=metadata-load-error-count | Reports the number of errors encountered by the BrokerMetadataPublisher while publishing a new MetadataImage based on the MetadataDelta.
kafka.server:type=broker-metadata-metrics,name=metadata-apply-error-count | Reports the number of errors encountered by the BrokerMetadataListener while generating a new MetadataDelta based on the log it has received thus far.
kafka.controller:type=KafkaController,name=MetadataErrorCount | Reports the number of times this controller node has renounced leadership of the metadata quorum owing to an error encountered during event processing.
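These metrics are exposed over JMX like other Kafka metrics, so an operator can read them with standard javax.management calls. A minimal, self-contained sketch of that read path (it registers its own hypothetical stand-in counter MBean under the controller metric's ObjectName rather than talking to a live broker; the ErrorCounter class is illustrative, not Kafka code):

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MetricJmxDemo {
    // JMX standard MBeans require an interface named <Impl>MBean.
    public interface ErrorCounterMBean {
        long getValue();
    }

    // Hypothetical stand-in for the gauge Kafka would register itself.
    public static class ErrorCounter implements ErrorCounterMBean {
        private final AtomicLong count = new AtomicLong();
        public void increment() { count.incrementAndGet(); }
        @Override public long getValue() { return count.get(); }
    }

    public static long run() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name =
            new ObjectName("kafka.controller:type=KafkaController,name=MetadataErrorCount");
        ErrorCounter counter = new ErrorCounter();
        server.registerMBean(counter, name);

        counter.increment(); // simulate one metadata processing error

        // Read the attribute back the way a monitoring agent would.
        long value = (Long) server.getAttribute(name, "Value");
        server.unregisterMBean(name); // keep the demo re-runnable
        return value;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("MetadataErrorCount = " + run());
    }
}
```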

...

Today, any error during metadata processing on the active controller causes it to renounce the quorum leadership. These renounces are different from those caused by general Raft elections for other reasons, such as a roll. Repeated elections caused by errors in the active controller could point to issues in the metadata log generation/handling logic, so having visibility into them makes sense. The MetadataErrorCount metric reflects the number of times a controller node has had to renounce quorum leadership due to an error in the event processing logic.

The MetadataErrorCount metric will be incremented any time the controller is going to resign as a result of handling an exception during event processing. This metric reflects the count of forced resignations that a controller (both leader and non-leader) has undergone since the last restart.
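Shape-wise, the increment sits on the resignation path: renounce first, then bump the counter before surfacing the error. A simplified, hypothetical sketch of that flow (a plain AtomicLong stands in for the real gauge, and this handleEventException is an illustrative stand-in, not the actual QuorumController code):

```java
import java.util.concurrent.atomic.AtomicLong;

public class ControllerErrorSketch {
    // Hypothetical backing value for the MetadataErrorCount gauge.
    private final AtomicLong metadataErrorCount = new AtomicLong();
    private boolean active = true;

    // Simplified stand-in for handleEventException: renounce leadership,
    // then bump the metric so the failure is visible to monitoring.
    public RuntimeException handleEventException(String eventName, Throwable cause) {
        renounce();
        metadataErrorCount.incrementAndGet(); // **** Increment MetadataErrorCount
        return new RuntimeException("unexpected error in " + eventName, cause);
    }

    private void renounce() { active = false; }

    public long errorCount() { return metadataErrorCount.get(); }
    public boolean isActive() { return active; }
}
```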

...

Code Block
languagejava
titlehandleEventException
collapsetrue
private Throwable handleEventException(String name,
                                       OptionalLong startProcessingTimeNs,
                                       Throwable exception) {
    if (!startProcessingTimeNs.isPresent()) {
    ...
    ...
    renounce();
    // **** Increment MetadataErrorCount
    return new UnknownServerException(exception);
}

Brokers

The metadata-load-error-count metric will be incremented by one every time there is an error in publishing a new MetadataImage. This metric reflects the count of cumulative errors since the broker started up.
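The increment belongs in the catch path, so the count only grows on failed publishes while the exception still propagates to the caller. A hedged sketch of that pattern (the names here are illustrative, not the actual BrokerMetadataPublisher API):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

public class PublishErrorCounting {
    // Hypothetical backing value for metadata-load-error-count.
    private final AtomicLong loadErrorCount = new AtomicLong();

    // Run one publish step; on failure, count it and rethrow so the
    // caller's existing error handling is unchanged.
    public <T> void publishCounted(Consumer<T> publish, T newImage) {
        try {
            publish.accept(newImage);
        } catch (RuntimeException t) {
            loadErrorCount.incrementAndGet(); // **** Increment metadata-load-error-count
            throw t;
        }
    }

    public long errorCount() { return loadErrorCount.get(); }
}
```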

...

Code Block
languagescala
titlePublish
collapsetrue
override def publish(delta: MetadataDelta, newImage: MetadataImage): Unit = {
    val highestOffsetAndEpoch = newImage.highestOffsetAndEpoch()

    try {
      trace(s"Publishing delta $delta with highest offset $highestOffsetAndEpoch")

      // Publish the new metadata image to the metadata cache.
      metadataCache.setImage(newImage)
      ...
      ...
      publishedOffsetAtomic.set(newImage.highestOffsetAndEpoch().offset)
    } catch {
      // **** Increment metadata-load-error-count
      case t: Throwable =>
        error(s"Error publishing broker metadata at $highestOffsetAndEpoch", t)
        throw t
    } finally {
      _firstPublish = false
    }
  }


The metadata-apply-error-count metric will be incremented every time there is an error in loading batches and generating a MetadataDelta from them. This metric reflects the count of cumulative errors since the broker started up.
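As with the publisher metric, the increment lives in the catch block, while the reader is still closed in finally regardless of the outcome. A simplified sketch of that shape (BatchReader here is a hypothetical stand-in for the Raft reader type, not the real interface):

```java
import java.util.concurrent.atomic.AtomicLong;

public class BatchLoadErrorCounting {
    // Hypothetical backing value for metadata-apply-error-count.
    private final AtomicLong applyErrorCount = new AtomicLong();

    // Minimal stand-in for the reader handed to HandleCommitsEvent.
    public interface BatchReader extends AutoCloseable {
        int loadBatches();
        @Override void close(); // narrowed: no checked exceptions
    }

    // Count a failed load, rethrow it, and always close the reader.
    public int handleCommits(BatchReader reader) {
        try {
            return reader.loadBatches();
        } catch (RuntimeException e) {
            applyErrorCount.incrementAndGet(); // **** Increment metadata-apply-error-count
            throw e;
        } finally {
            reader.close();
        }
    }

    public long errorCount() { return applyErrorCount.get(); }
}
```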

...

Code Block
languagescala
titleHandleCommitsEvent
collapsetrue
class HandleCommitsEvent(reader: BatchReader[ApiMessageAndVersion])
      extends EventQueue.FailureLoggingEvent(log) {
    override def run(): Unit = {
      val results = try {
        val loadResults = loadBatches(_delta, reader, None, None, None)
        ...
        loadResults
      } catch {
        // **** Increment metadata-apply-error-count
      } finally {
        reader.close()
      }

      ...
      _publisher.foreach(publish)
    }
  }

...

Code Block
languagescala
titleHandleSnapshotEvent
collapsetrue
  class HandleSnapshotEvent(reader: SnapshotReader[ApiMessageAndVersion])
    extends EventQueue.FailureLoggingEvent(log) {
    override def run(): Unit = {
      try {
        info(s"Loading snapshot ${reader.snapshotId().offset}-${reader.snapshotId().epoch}.")
        _delta = new MetadataDelta(_image) // Discard any previous deltas.
        val loadResults = loadBatches(
        ...
      } catch {
        // **** Increment metadata-apply-error-count
      } finally {
        reader.close()
      }
      _publisher.foreach(publish)
    }
  }

...

Instead of adding these specific metrics, we could have added a single, more generic MetadataProcessingErrorCount metric, incremented when any of these (or any other similar) errors are hit on both brokers and controllers. The downside to this approach is the loss of granularity on what exactly failed on a given node. The specific metrics are more meaningful and give better control over any alerting that might be set up on them.