...

This page is meant as a template for writing a KIP. To create a KIP, choose Tools->Copy on this page, modify it with your content, replace the heading with the next KIP number and a description of your issue, and replace anything in italics with your own description.

Status

Current state: "Under Discussion"

...

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

The current `Consumer#poll(Duration)` method is designed to block until data is available or the provided poll timeout expires. This implies that if fetch requests fail, the consumer retries them internally and eventually returns an empty set of records. Thus, from a user's point of view, an empty set of records can mean either that no data is available on the broker side or that the broker cannot be reached.

Besides, we sometimes want to "peek" at the incoming records for testing purposes, without affecting the offsets, like the "peek" method provided by many data structures. This "peek" method would not advance the position offset in the partition. That means, under `enable.auto.commit = true` (the default setting), the committed offsets are not incremented, and the next "poll" still returns the records that `peek` returned. (Of course, if the user manually commits the offsets, the offsets are incremented.)

When a consumer is created and ready to work, we poll() to try to get some data and then do some processing. But if we are trying to build a new pipeline, say, to feed the polled records into Elasticsearch or a database, we need some integration and troubleshooting/tuning to make the pipeline work well. Currently, `poll()` advances the position automatically for the next poll. However, when troubleshooting or tuning, we do not want to advance the position; we just want to "peek" at the records.
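Since `Consumer#peek()` does not exist yet, the intended peek-vs-poll contract can be illustrated with a small in-memory stand-in. Everything below (`ToyConsumer`, `append`, the single-partition model) is a hypothetical sketch with no Kafka dependency, not real Kafka API:

```java
import java.util.ArrayList;
import java.util.List;

public class PeekContractSketch {
    // Toy single-partition stand-in modeling the proposed semantics.
    static class ToyConsumer {
        private final List<String> records = new ArrayList<>();
        private int position = 0; // analogous to the partition position

        void append(String record) { records.add(record); }

        // poll(): return the available records and advance the position.
        List<String> poll() {
            List<String> out = new ArrayList<>(records.subList(position, records.size()));
            position = records.size();
            return out;
        }

        // peek(): return the same records but leave the position untouched,
        // so the next poll() (or peek()) sees them again.
        List<String> peek() {
            return new ArrayList<>(records.subList(position, records.size()));
        }

        int position() { return position; }
    }

    public static void main(String[] args) {
        ToyConsumer consumer = new ToyConsumer();
        consumer.append("a");
        consumer.append("b");
        System.out.println(consumer.peek()); // prints [a, b]; position stays 0
        System.out.println(consumer.peek()); // prints [a, b] again
        System.out.println(consumer.poll()); // prints [a, b]; position moves to 2
        System.out.println(consumer.poll()); // prints []; everything consumed
    }
}
```

With auto-commit enabled, the committed offset in the real proposal would track `position`, so repeated `peek` calls would never cause records to be committed past.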


So, we should have a `Consumer#peek()` method to allow consumers to:

  1. peek at which records exist, without increasing the position offsets

...

Public Interfaces

Briefly list any new interfaces that will be introduced as part of this proposal or any existing interfaces that will be removed or changed. The purpose of this section is to concisely call out the public contract that will come along with this feature.

A public interface is any change to the following:

...

Binary log format

...

The network protocol and api behavior

...

Any class in the public packages under clients

Configuration, especially client configuration

  • org/apache/kafka/common/serialization

  • org/apache/kafka/common

  • org/apache/kafka/common/errors

  • org/apache/kafka/clients/producer

  • org/apache/kafka/clients/consumer (eventually, once stable)

...

Monitoring

...

Command line tools and arguments

  2. test whether a connection error exists between the consumer and the broker
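One possible shape for such a method, sketched as a hypothetical interface. All names here (`PeekingConsumer`, `PeekedRecord`) are placeholders and not part of any actual Kafka API; the real proposal would presumably add a `peek` overload next to `poll` on `org.apache.kafka.clients.consumer.Consumer`:

```java
import java.time.Duration;
import java.util.List;

// Hypothetical sketch only: no `peek` method exists on the real
// org.apache.kafka.clients.consumer.Consumer interface today.
interface PeekingConsumer<K, V> {
    // Like poll(Duration): block until records are available or the timeout
    // expires. Unlike poll, it does NOT advance the fetch positions, so the
    // same records are returned by the next poll()/peek(), and auto-commit
    // never commits past them.
    List<PeekedRecord<K, V>> peek(Duration timeout);
}

// Minimal placeholder standing in for ConsumerRecord<K, V>.
final class PeekedRecord<K, V> {
    final K key;
    final V value;
    PeekedRecord(K key, V value) { this.key = key; this.value = value; }
}
```

Under this sketch, an empty result after the timeout could mean "no data yet", while a thrown exception could surface a connection error, which would address goal 2 above.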


...



Proposed Changes

Describe the new thing you want to do in appropriate detail. This may be fairly extensive and have large subsections of its own. Or it may be a few sentences. Use judgement based on the scope of the change.

Compatibility, Deprecation, and Migration Plan

  • What impact (if any) will there be on existing users?
  • If we are changing behavior how will we phase out the older behavior?
  • If we need special migration tools, describe them here.
  • When will we remove the existing behavior?

Rejected Alternatives

If there are alternative ways of accomplishing the same thing, what were they? The purpose of this section is to motivate why the design is the way it is and not some other way.