
Status

Current state: Accepted

Discussion thread: https://lists.apache.org/thread/yl87h1s484yc09yjo1no46hwpbv0qkwt

JIRA: KAFKA-14114 (https://issues.apache.org/jira/browse/KAFKA-14114)

...

Name | Description
kafka.server:type=broker-metadata-metrics,name=metadata-apply-error-count | Reports the number of errors encountered by the BrokerMetadataPublisher while applying a new MetadataImage based on the latest MetadataDelta.
kafka.server:type=broker-metadata-metrics,name=metadata-load-error-count | Reports the number of errors encountered by the BrokerMetadataListener while loading the metadata log and generating a new MetadataDelta based on the log it has received thus far.
kafka.controller:type=KafkaController,name=MetadataErrorCount | Reports the number of times this controller node has encountered an error during metadata log processing.
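
For reference, the sketch below shows one way these values could be polled over JMX, for example from a monitoring sidecar. The endpoint, the port, and the assumption that the gauge exposes its current value through a Value attribute are illustrative and are not specified by this KIP.

Code Block: ReadMetadataErrorCount (java)
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReadMetadataErrorCount {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker started with JMX enabled on port 9999.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName gauge = new ObjectName(
                    "kafka.server:type=broker-metadata-metrics,name=metadata-apply-error-count");
            // Assumes the metric is exposed as a gauge with a "Value" attribute.
            Object count = connection.getAttribute(gauge, "Value");
            System.out.println("metadata-apply-error-count = " + count);
        }
    }
}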

Proposed Changes

...


The metadata-apply-error-count metric reflects the count of errors encountered while publishing a new version of the MetadataImage built from the metadata log, and the metadata-load-error-count metric reflects the count of errors encountered while loading the metadata log and generating a new MetadataDelta from it.

Both these metrics can be used to set up alerts so that affected brokers are visible and any needed remedial actions can be performed on them.
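
As a rough illustration of such an alert (the class, polling interval, and wiring below are hypothetical, not part of this KIP), a periodic check can flag any increase in an error count; the LongSupplier would typically be backed by a JMX read such as the one sketched above.

Code Block: MetadataErrorAlerter (java)
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongSupplier;

public class MetadataErrorAlerter {
    private final AtomicLong lastSeen = new AtomicLong(0);

    // metricReader returns the current value of a metadata error count,
    // e.g. backed by the JMX read sketched earlier.
    public void start(LongSupplier metricReader) {
        // In real use the scheduler should be shut down when alerting stops.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            long current = metricReader.getAsLong();
            long previous = lastSeen.getAndSet(current);
            if (current > previous) {
                // Hook into the alerting system of choice; here we only log.
                System.err.printf("ALERT: metadata error count rose from %d to %d%n",
                        previous, current);
            }
        }, 0, 60, TimeUnit.SECONDS);
    }
}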

Controllers

Any errors during metadata processing on the Active Controller cause it to renounce the quorum leadership. These are different from general Raft elections triggered for other reasons, such as a roll. Repeated elections caused by errors in the active controller could point to issues in the metadata log handling logic, and having visibility into these would be helpful. The MetadataErrorCount metric provides that visibility.

The MetadataErrorCount metric is updated for both active and standby controllers. For active controllers it is incremented any time they hit an error either in generating the metadata log or in applying it to memory. For standby controllers, the metric is incremented when they hit an error in applying the metadata log to memory. It reflects the total count of errors that a controller has encountered in metadata log processing since the last restart.

https://github.com/apache/kafka/blob/14d2269471141067dc3c45300187f20a0a051777/metadata/src/main/java/org/apache/kafka/controller/QuorumController.java#L409 

Code Block: handleEventException (java)
 private Throwable handleEventException(String name,
                                           OptionalLong startProcessingTimeNs,
                                           Throwable exception) {
        if (!startProcessingTimeNs.isPresent()) {
        ...
        ...
        renounce();
        //**** Increment MetadataErrorCount
        return new UnknownServerException(exception);
    }
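
For illustration only, the sketch below captures the counting behaviour described above in plain Java; the class and method names are hypothetical and are not the actual QuorumController metrics API.

Code Block: MetadataErrorCounter sketch (java)
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the MetadataErrorCount semantics described above;
// this is not the actual QuorumController metrics code.
public class MetadataErrorCounter {
    private final AtomicLong metadataErrorCount = new AtomicLong(0);

    // Active controller: an error while generating metadata records
    // or while applying them to the in-memory image.
    public void onActiveControllerError(Throwable error) {
        metadataErrorCount.incrementAndGet();
    }

    // Standby controller: an error while replaying the metadata log
    // into its in-memory image.
    public void onStandbyReplayError(Throwable error) {
        metadataErrorCount.incrementAndGet();
    }

    // Exposed as the value of the MetadataErrorCount gauge.
    public long value() {
        return metadataErrorCount.get();
    }
}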

Brokers

The metadata-apply-error-count metric will be incremented by one every time there is an error in publishing a new MetadataImage. This metric will reflect the cumulative count of errors since the broker started up.

https://github.com/apache/kafka/blob/14d2269471141067dc3c45300187f20a0a051777/core/src/main/scala/kafka/server/metadata/BrokerMetadataPublisher.scala#L125

Code Block: Publish (scala)
override def publish(delta: MetadataDelta, newImage: MetadataImage): Unit = {
    val highestOffsetAndEpoch = newImage.highestOffsetAndEpoch()

    try {
      trace(s"Publishing delta $delta with highest offset $highestOffsetAndEpoch")

      // Publish the new metadata image to the metadata cache.
      metadataCache.setImage(newImage)
   	  ...
	  ...
      publishedOffsetAtomic.set(newImage.highestOffsetAndEpoch().offset)
    } catch {
      //**** Increment metadata-apply-error-count
      case t: Throwable => error(s"Error publishing broker metadata at $highestOffsetAndEpoch", t)	
        throw t
    } finally {
      _firstPublish = false
    }
  }
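
The catch block above is where the increment would go. The standalone sketch below shows the same pattern in plain Java, with hypothetical names rather than the actual BrokerMetadataPublisher code: the error is counted and then rethrown so that existing failure handling is unchanged.

Code Block: CountingPublishWrapper sketch (java)
import java.util.concurrent.atomic.AtomicLong;

public class CountingPublishWrapper {
    // Backing value for the metadata-apply-error-count gauge (illustrative).
    private final AtomicLong applyErrorCount = new AtomicLong(0);

    // publishStep stands in for the real image-publishing logic; any failure
    // bumps the counter and is then rethrown, leaving error handling intact.
    public void publish(Runnable publishStep) {
        try {
            publishStep.run();
        } catch (Throwable t) {
            applyErrorCount.incrementAndGet();  // **** Increment metadata-apply-error-count
            throw t;
        }
    }

    public long applyErrorCount() {
        return applyErrorCount.get();
    }
}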

 

The metadata-load-error-count metric will be incremented every time there is an error in loading batches and generating a MetadataDelta from them. This metric will reflect the cumulative count of errors since the broker started up.

https://github.com/apache/kafka/blob/14d2269471141067dc3c45300187f20a0a051777/core/src/main/scala/kafka/server/metadata/BrokerMetadataListener.scala#L112

Code Block: HandleCommitsEvent (scala)
class HandleCommitsEvent(reader: BatchReader[ApiMessageAndVersion])
      extends EventQueue.FailureLoggingEvent(log) {
    override def run(): Unit = {
      val results = try {
        val loadResults = loadBatches(_delta, reader, None, None, None)
        ...
        loadResults
      } catch {
        //**** Increment metadata-load-error-count
	  } finally {
        reader.close()
      }

      ...
      _publisher.foreach(publish)
    }
  }

...

Code Block: HandleSnapshotEvent (scala)
  class HandleSnapshotEvent(reader: SnapshotReader[ApiMessageAndVersion])
    extends EventQueue.FailureLoggingEvent(log) {
    override def run(): Unit = {
      try {
        info(s"Loading snapshot ${reader.snapshotId().offset}-${reader.snapshotId().epoch}.")
        _delta = new MetadataDelta(_image) // Discard any previous deltas.
        val loadResults = loadBatches(
       ...
      } catch {
        //**** Increment metadata-load-error-count
	  } finally {
        reader.close()
      }
      _publisher.foreach(publish)
    }
  }
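
Both of the catch blocks above would increment the same counter. The hypothetical sketch below (plain Java, not the actual BrokerMetadataListener code) shows the commit and snapshot paths funnelling failures into one shared metadata-load-error-count value.

Code Block: CountedBatchLoader sketch (java)
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

// Hypothetical helper showing how both HandleCommitsEvent and
// HandleSnapshotEvent could funnel failures into one counter.
public class CountedBatchLoader {
    private final AtomicLong loadErrorCount = new AtomicLong(0);

    // loadAction stands in for loadBatches(...); any failure bumps
    // metadata-load-error-count and is rethrown unchanged.
    public <T> T loadAndCount(Supplier<T> loadAction) {
        try {
            return loadAction.get();
        } catch (Throwable t) {
            loadErrorCount.incrementAndGet();  // **** Increment metadata-load-error-count
            throw t;
        }
    }

    public long loadErrorCount() {
        return loadErrorCount.get();
    }
}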

...

Instead of adding these specific metrics, we could have added a single, more generic MetadataProcessingErrorCount metric which would be incremented whenever these, or any other similar, errors are hit on both brokers and controllers. The downside to this approach would be the loss of granularity on what exactly failed on a given node. The specific metrics are more meaningful and give better control over any alerting that might be set up on them.