

Status

Current state: Draft

Discussion thread: here

JIRA: here

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation


Kafka exposes many pluggable points where users can bring their custom plugins. For complex and critical plugins, it is important to have metrics to monitor their behavior. Plugins can create their own Metrics instance using the Kafka API, but a new instance does not inherit the tags from the component that instantiated the plugin (for example, from the producer using a custom partitioner) nor its registered metrics reporters. As most plugins are configurable, a workaround is to reimplement the tag and metrics reporter logic in each plugin, but that is cumbersome.

This issue also applies to connectors and tasks in Kafka Connect. For example, MirrorMaker 2 creates its own Metrics object and manually adds the metrics reporters from its configuration.

Public Interfaces

I propose introducing a new interface Monitorable that plugins can implement.

Code Block
languagejava
titleMonitorable.java

package org.apache.kafka.common.metrics;

public interface Monitorable {

    /**
     * Provides the Metrics instance from the component that instantiated the plugin.
     * This method is always called after configure().
     */
    void withMetrics(Metrics metrics);

}
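
As an illustration, here is a minimal sketch of a custom producer partitioner using this interface. The MonitoredPartitioner class, its metric names and its trivial partitioning logic are hypothetical and only serve to show the intended usage: because the Metrics instance comes from the producer, the metrics the plugin registers automatically carry the producer's tags and metrics reporters.

Code Block
languagejava
titleMonitoredPartitioner.java

package org.apache.kafka.example;

import java.util.Map;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Monitorable;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Rate;

public class MonitoredPartitioner implements Partitioner, Monitorable {

    private Sensor partitionSensor;

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void withMetrics(Metrics metrics) {
        // Called by the producer after configure(); the Metrics instance
        // already carries the producer's tags and metrics reporters
        partitionSensor = metrics.sensor("partitioner-calls");
        partitionSensor.add(metrics.metricName("partition-rate", "plugin-metrics",
                "Rate of partition() invocations"), new Rate());
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        // Guard against older clients that never call withMetrics()
        if (partitionSensor != null) {
            partitionSensor.record();
        }
        return 0; // placeholder partitioning logic
    }

    @Override
    public void close() { }
}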


For connectors and tasks, I propose adding a metrics() method to the SinkConnectorContext, SourceConnectorContext, SinkTaskContext and SourceTaskContext interfaces. To preserve compatibility with existing runtimes, the method has a default implementation that returns null.

Code Block
languagejava
/**
 * Get the Metrics instance from the runtime, or null if the runtime does not
 * provide one.
 */
default Metrics metrics() {
    return null;
}
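
Below is a hedged sketch of how a sink task could use this method. The MonitoredSinkTask class and its metric names are hypothetical; the null check keeps the task working on Connect runtimes that predate this proposal.

Code Block
languagejava
titleMonitoredSinkTask.java

package org.apache.kafka.example;

import java.util.Collection;
import java.util.Map;

import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.CumulativeCount;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class MonitoredSinkTask extends SinkTask {

    private Sensor recordsSensor;

    @Override
    public void start(Map<String, String> props) {
        Metrics metrics = context.metrics(); // proposed method, null on older runtimes
        if (metrics != null) {
            recordsSensor = metrics.sensor("records-received");
            recordsSensor.add(metrics.metricName("records-total", "plugin-metrics",
                    "Total number of records received by this task"), new CumulativeCount());
        }
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        if (recordsSensor != null) {
            recordsSensor.record(records.size());
        }
        // actual record delivery logic goes here
    }

    @Override
    public void stop() { }

    @Override
    public String version() {
        return "1.0";
    }
}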

Proposed Changes

When instantiating a class via the Utils.newInstance() helper methods, if it implements Monitorable and a Metrics object is available, withMetrics() will be called with the current Metrics instance. It will always be called after configure().
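
To illustrate, here is a rough sketch of the intended logic, under the assumption that the helper receives the caller's Metrics instance; the getMonitorableInstance() name is hypothetical:

Code Block
languagejava

// Hypothetical helper illustrating the ordering described above
public static <T> T getMonitorableInstance(Class<T> klass, Map<String, ?> configs, Metrics metrics) {
    T plugin = Utils.newInstance(klass);
    // configure() always runs first
    if (plugin instanceof Configurable) {
        ((Configurable) plugin).configure(configs);
    }
    // then withMetrics(), only if the plugin opts in and Metrics is available
    if (metrics != null && plugin instanceof Monitorable) {
        ((Monitorable) plugin).withMetrics(metrics);
    }
    return plugin;
}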

Compatibility, Deprecation, and Migration Plan


This is a new feature, so it has no impact on deprecation and does not need a migration plan. Regarding compatibility, plugins and connectors that start using this feature will have to handle Metrics not being available, in order to support older broker/Connect versions. Regular plugins should be able to function without a call to withMetrics(), and connectors and tasks should work even if the metrics() method returns null.

Test Plan

Describe in a few sentences how the KIP will be tested. We are mostly interested in system tests (since unit tests are specific to implementation details). How will we know that the implementation works as expected? How will we know nothing broke?

Rejected Alternatives

If there are alternative ways of accomplishing the same thing, what were they? The purpose of this section is to motivate why the design is the way it is and not some other way.