Status
Current state: Under Discussion
...
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
KIP-500: Replace ZooKeeper with a Self-Managed Metadata Quorum changes the way cluster metadata is stored and managed in a Kafka cluster. It introduces the concept of a replicated log that is maintained using a custom version of the Raft consensus protocol described in KIP-595: A Raft Protocol for the Metadata Quorum. The controller now uses this log to persist and broadcast all metadata-related actions in the cluster, as described in KIP-631: The Quorum-based Kafka Controller.
With these changes in place, the replicated log containing all metadata changes (henceforth called the metadata log) is the source of metadata-related information for all nodes in the cluster. Any error that occurs while processing the log could leave the node's in-memory state inconsistent, so it is important that such errors are made visible. The metrics proposed in the following section aim to make errors in this processing visible. They can be used to set up alerts so that affected nodes are identified and the needed remedial actions can be performed on them.
Public Interfaces
We propose adding the following new metrics:
Name | Description
---|---
kafka.server:type=broker-metadata-metrics,name=publisher-error-count | Reports the number of errors encountered by the BrokerMetadataPublisher while publishing a new MetadataImage based on the MetadataDelta.
kafka.server:type=broker-metadata-metrics,name=listener-batch-load-error-count | Reports the number of errors encountered by the BrokerMetadataListener while generating a new MetadataDelta based on the log it has received thus far.
kafka.controller:type=KafkaController,name=ForceRenounceCount | Reports the number of times this controller node has renounced leadership of the metadata quorum owing to an error encountered during event processing.
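Once exposed, these metrics can be polled over JMX like any other Kafka metric. The sketch below is a minimal, self-contained illustration of that pattern, not part of the proposal: it registers a stand-in counter under the proposed publisher-error-count ObjectName on the platform MBeanServer and reads its Count attribute back, the way a monitoring agent would. The PublisherErrorCountMBean interface and the Count attribute name are illustrative assumptions; the real metric would be registered through Kafka's existing (Yammer) metrics machinery.

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class MetricJmxSketch {
    // Illustrative management interface; hypothetical, not Kafka's actual API.
    public interface PublisherErrorCountMBean {
        long getCount();
    }

    public static class PublisherErrorCount implements PublisherErrorCountMBean {
        private final AtomicLong count = new AtomicLong();
        public void inc() { count.incrementAndGet(); }
        @Override public long getCount() { return count.get(); }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // The proposed metric name from the table above.
        ObjectName name = new ObjectName(
            "kafka.server:type=broker-metadata-metrics,name=publisher-error-count");

        PublisherErrorCount counter = new PublisherErrorCount();
        server.registerMBean(
            new StandardMBean(counter, PublisherErrorCountMBean.class), name);

        counter.inc(); // simulate one failed publish

        // A monitoring agent would poll the attribute like this:
        long count = (Long) server.getAttribute(name, "Count");
        System.out.println("publisher-error-count = " + count);
    }
}
```

An alerting system would poll this attribute periodically and fire when the counter increases between samples.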
Proposed Changes
ForceRenounceCount and publisher-error-count
The publisher-error-count metric reflects the count of errors encountered while publishing a new version of the MetadataImage from the metadata log. Both of these metrics can be used to set up alerts so that affected nodes are identified and the needed remedial actions can be performed on them.
Controllers
Any error during metadata processing on the active controller causes it to renounce quorum leadership. These renunciations are distinct from general Raft elections triggered for other reasons, such as a rolling restart. Repeated elections caused by errors on the active controller could point to issues in the metadata log handling logic, so having visibility into them would be helpful. The ForceRenounceCount metric reflects the number of times a controller node has had to renounce quorum leadership due to an error in its event processing logic. It will be incremented any time the controller resigns as a result of handling an exception during event processing:
private Throwable handleEventException(String name,
OptionalLong startProcessingTimeNs,
Throwable exception) {
if (!startProcessingTimeNs.isPresent()) {
...
...
renounce();
//**** Increment ForceRenounceCount
return new UnknownServerException(exception);
}
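As a sketch of where the increment would live, the hypothetical fragment below mirrors the shape of handleEventException: every exception in event processing triggers renounce(), and the counter records it alongside. The class and field names are illustrative stand-ins (a plain AtomicLong instead of the real Yammer metric, and RuntimeException instead of Kafka's UnknownServerException); this is not the actual implementation.

```java
import java.util.OptionalLong;
import java.util.concurrent.atomic.AtomicLong;

public class ForceRenounceSketch {
    // Illustrative stand-in for the proposed ForceRenounceCount metric.
    private final AtomicLong forceRenounceCount = new AtomicLong();

    private void renounce() {
        // Resign quorum leadership (details elided in this sketch).
    }

    // Mirrors the structure of handleEventException: the controller renounces
    // leadership on any event-processing error, and the metric is incremented
    // alongside that renunciation.
    Throwable handleEventException(String name,
                                   OptionalLong startProcessingTimeNs,
                                   Throwable exception) {
        renounce();
        forceRenounceCount.incrementAndGet(); // ForceRenounceCount += 1
        return new RuntimeException("unknown server error", exception);
    }

    public static void main(String[] args) {
        ForceRenounceSketch controller = new ForceRenounceSketch();
        controller.handleEventException("someEvent", OptionalLong.empty(),
            new IllegalStateException("boom"));
        controller.handleEventException("someEvent", OptionalLong.empty(),
            new IllegalStateException("boom again"));
        System.out.println("ForceRenounceCount = "
            + controller.forceRenounceCount.get());
    }
}
```

Because the increment happens in the same code path as renounce(), the counter tracks exactly the error-driven renunciations and not ordinary Raft elections.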
Brokers
The publisher-error-count metric will be incremented by one every time there is an error in publishing a new MetadataImage:
override def publish(delta: MetadataDelta, newImage: MetadataImage): Unit = {
val highestOffsetAndEpoch = newImage.highestOffsetAndEpoch()
try {
trace(s"Publishing delta $delta with highest offset $highestOffsetAndEpoch")
// Publish the new metadata image to the metadata cache.
metadataCache.setImage(newImage)
...
...
publishedOffsetAtomic.set(newImage.highestOffsetAndEpoch().offset)
} catch {
//**** Increment publisher-error-count
case t: Throwable => error(s"Error publishing broker metadata at $highestOffsetAndEpoch", t)
throw t
} finally {
_firstPublish = false
}
}
The listener-batch-load-error-count metric will be incremented every time there is an error in loading batches and generating a MetadataDelta from them:
class HandleCommitsEvent(reader: BatchReader[ApiMessageAndVersion])
extends EventQueue.FailureLoggingEvent(log) {
override def run(): Unit = {
val results = try {
val loadResults = loadBatches(_delta, reader, None, None, None)
...
loadResults
} catch {
  case t: Throwable =>
    //**** Increment listener-batch-load-error-count
    throw t
} finally {
reader.close()
}
...
_publisher.foreach(publish)
}
}
class HandleSnapshotEvent(reader: SnapshotReader[ApiMessageAndVersion])
extends EventQueue.FailureLoggingEvent(log) {
override def run(): Unit = {
try {
info(s"Loading snapshot ${reader.snapshotId().offset}-${reader.snapshotId().epoch}.")
_delta = new MetadataDelta(_image) // Discard any previous deltas.
val loadResults = loadBatches(
...
} catch {
  case t: Throwable =>
    //**** Increment listener-batch-load-error-count
    throw t
} finally {
reader.close()
}
_publisher.foreach(publish)
}
}
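Both the commit path (HandleCommitsEvent) and the snapshot path (HandleSnapshotEvent) would feed the same listener-batch-load-error-count metric. The hypothetical sketch below shows that shape in miniature: two loading paths sharing one counter, each incrementing it in its catch block while still surfacing the failure. All names here are illustrative stand-ins, not the real Kafka classes.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class ListenerErrorCountSketch {
    // One shared counter for both batch-loading paths, matching the proposal
    // that commit and snapshot handling report into the same metric.
    private final AtomicLong listenerBatchLoadErrorCount = new AtomicLong();

    // Stand-in for loadBatches(); throws to simulate a malformed record.
    private void loadBatches(List<String> records) {
        for (String record : records) {
            if (record == null) {
                throw new IllegalStateException("malformed record");
            }
        }
    }

    void handleCommits(List<String> records) {
        try {
            loadBatches(records);
        } catch (Throwable t) {
            listenerBatchLoadErrorCount.incrementAndGet(); // commit path
            throw t;
        }
    }

    void handleSnapshot(List<String> records) {
        try {
            loadBatches(records);
        } catch (Throwable t) {
            listenerBatchLoadErrorCount.incrementAndGet(); // snapshot path, same counter
            throw t;
        }
    }

    public static void main(String[] args) {
        ListenerErrorCountSketch listener = new ListenerErrorCountSketch();
        try {
            listener.handleCommits(Arrays.asList("a", null));
        } catch (RuntimeException ignored) { }
        try {
            listener.handleSnapshot(Collections.singletonList(null));
        } catch (RuntimeException ignored) { }
        System.out.println("listener-batch-load-error-count = "
            + listener.listenerBatchLoadErrorCount.get());
    }
}
```

Note that in both paths the exception is rethrown after the increment, mirroring the existing code: the metric adds visibility without swallowing the failure.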
Compatibility, Deprecation, and Migration Plan
These will be newly exposed metrics, so there is no impact on existing users or existing Kafka versions; no behavior is being phased out, and no migration tooling or deprecation is needed.
Test Plan
Describe in a few sentences how the KIP will be tested. We are mostly interested in system tests (since unit tests are specific to implementation details). How will we know that the implementation works as expected? How will we know nothing broke?
Rejected Alternatives
Instead of adding these specific metrics, we could have added a single, more generic MetadataProcessingErrorCount metric that would be incremented whenever any of these (or any other similar) errors are hit. The downside of this approach is the loss of granularity on what exactly failed on a given node. The specific metrics are more meaningful and give better control over any alerting that might be set up on them.