Status

Discussion thread: https://lists.apache.org/thread/z87m68ggzkx0s427tmrllswm4l1g7owc
Vote thread: https://lists.apache.org/thread/p51bjoyssm2ccx8sfyvtoll8oym15sy9
JIRA


Release

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

There are use cases where users want to stop a Flink job gracefully based on the content of de-serialized records observed in the KafkaSource. For example, users might want to use a Flink job to process stock transaction data from an unbounded Kafka topic in real time. Suppose the stock market closes at 4 pm; users would then want the Flink job to stop once it has processed all the transaction data of that day.

One intuitive way to support this use case is to allow users to specify a lambda function that evaluates whether a de-serialized record indicates EOF for a given split. FlinkKafkaConsumer supports this by applying the EOF-detection logic specified in KafkaDeserializationSchema::isEndOfStream(...). However, FlinkKafkaConsumer has been deprecated and is expected to be replaced by KafkaSource going forward, and KafkaSource currently cannot address this use case.

This FLIP aims to address this use case so that users who currently depend on KafkaDeserializationSchema::isEndOfStream() can migrate their Flink jobs from FlinkKafkaConsumer to KafkaSource.

In order to minimize the feature gap between similar sources (e.g. Kafka and Pulsar), this FLIP also proposes to update PulsarSourceBuilder to support dynamic EOF detection.

Public Interfaces

1) Adds the RecordEvaluator interface to the package org.apache.flink.connector.base.source.reader.

package org.apache.flink.connector.base.source.reader;

import org.apache.flink.annotation.PublicEvolving;

import java.io.Serializable;

/**
 * An interface that evaluates whether a de-serialized record should trigger certain control-flow
 * operations (e.g. end of stream).
 */
@PublicEvolving
@FunctionalInterface
public interface RecordEvaluator<T> extends Serializable {
    /**
     * Determines whether a record should trigger the end of stream for its split. The given record
     * wouldn't be emitted from the source if the returned result is true.
     *
     * @param record a de-serialized record from the split.
     * @return a boolean indicating whether the split has reached end of stream.
     */
    boolean isEndOfStream(T record);
}
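
For illustration, an evaluator matching the motivation example above could be as simple as the following lambda. This is a minimal sketch; the String record type and the sentinel value are hypothetical:

// A minimal sketch, assuming a hypothetical String-typed stream whose producer
// appends a "MARKET_CLOSED" sentinel record after the market closes at 4 pm.
RecordEvaluator<String> eofEvaluator = record -> "MARKET_CLOSED".equals(record);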


2) Adds new constructors for SingleThreadMultiplexSourceReaderBase and SourceReaderBase.

Each new constructor accepts an additional parameter "@Nullable RecordEvaluator<T> eofRecordEvaluator". When eofRecordEvaluator is not null, it will be used to determine EOF.


@PublicEvolving
public abstract class SingleThreadMultiplexSourceReaderBase<
                E, T, SplitT extends SourceSplit, SplitStateT>
        extends SourceReaderBase<E, T, SplitT, SplitStateT> {
    public SingleThreadMultiplexSourceReaderBase(
            FutureCompletingBlockingQueue<RecordsWithSplitIds<E>> elementsQueue,
            SingleThreadFetcherManager<E, SplitT> splitFetcherManager,
            RecordEmitter<E, T, SplitStateT> recordEmitter,
            @Nullable RecordEvaluator<T> eofRecordEvaluator,
            Configuration config,
            SourceReaderContext context) {
        super(elementsQueue, splitFetcherManager, recordEmitter, eofRecordEvaluator, config, context);
    }
}

@PublicEvolving
public abstract class SourceReaderBase<E, T, SplitT extends SourceSplit, SplitStateT>
        implements SourceReader<T, SplitT> {
    public SourceReaderBase(
            FutureCompletingBlockingQueue<RecordsWithSplitIds<E>> elementsQueue,
            SplitFetcherManager<E, SplitT> splitFetcherManager,
            RecordEmitter<E, T, SplitStateT> recordEmitter,
            @Nullable RecordEvaluator<T> eofRecordEvaluator,
            Configuration config,
            SourceReaderContext context) {
    }
}


3) Adds the following method to KafkaSourceBuilder 

public class KafkaSourceBuilder<OUT> {
    ... // Skip the existing methods

    /**
     * Sets the optional {@link RecordEvaluator eofRecordEvaluator} for KafkaSource.
     *
     * <p>When the evaluator is specified, it is invoked for each de-serialized record to determine
     * whether the corresponding split has reached end of stream. If a record is matched by the
     * evaluator, the source would not emit this record as well as the following records in the same
     * split.
     *
     * <p>Note that the evaluator works jointly with the stopping offsets specified by the {@link
     * #setBounded(OffsetsInitializer)} or the {@link #setUnbounded(OffsetsInitializer)}. The source
     * stops consuming from a split when any of these conditions is met.
     *
     * @param eofRecordEvaluator a {@link RecordEvaluator recordEvaluator}
     * @return this KafkaSourceBuilder.
     */
    public KafkaSourceBuilder<OUT> setEofRecordEvaluator(RecordEvaluator<OUT> eofRecordEvaluator) {
        this.eofRecordEvaluator = eofRecordEvaluator;
        return this;
    }
} 
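
For example, the evaluator could be wired into a KafkaSource as follows. This is a usage sketch; the topic, server address, group id and sentinel value are hypothetical:

// A usage sketch with hypothetical names. The source stops consuming a split
// once the "MARKET_CLOSED" sentinel record is observed on that split.
KafkaSource<String> source =
        KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("stock-transactions")
                .setGroupId("transaction-processor")
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .setEofRecordEvaluator(record -> "MARKET_CLOSED".equals(record))
                .build();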


4) Adds the following method to PulsarSourceBuilder 

public class PulsarSourceBuilder<OUT> {
    ... // Skip the existing methods

    /**
     * Sets the optional {@link RecordEvaluator eofRecordEvaluator} for PulsarSource.
     *
     * <p>When the evaluator is specified, it is invoked for each de-serialized record to determine
     * whether the corresponding split has reached end of stream. If a record is matched by the
     * evaluator, the source would not emit this record as well as the following records in the same
     * split.
     *
     * <p>Note that the evaluator works jointly with the stopping criteria specified by the {@link
     * #setBoundedStopCursor(StopCursor)} or the {@link #setUnboundedStopCursor(StopCursor)}.
     * The source stops consuming from a split when any of these conditions is met.
     *
     * @param eofRecordEvaluator a {@link RecordEvaluator recordEvaluator}
     * @return this PulsarSourceBuilder.
     */
    public PulsarSourceBuilder<OUT> setEofRecordEvaluator(RecordEvaluator<OUT> eofRecordEvaluator) {
        this.eofRecordEvaluator = eofRecordEvaluator;
        return this;
    } 
} 


5) For SQL users, a new connector option 'scan.record.evaluator.class' is added to provide the custom RecordEvaluator class.
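
For illustration, a table using this option might be defined as follows. This is a hypothetical sketch: the table schema, topic and evaluator class are made up, and the sketch assumes the class is a user-provided RecordEvaluator implementation instantiated via a public no-argument constructor:

// A hypothetical sketch. com.example.MarketCloseEvaluator is assumed to be a
// user-provided RecordEvaluator implementation available on the classpath.
TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
tEnv.executeSql(
        "CREATE TABLE transactions ("
                + "  symbol STRING,"
                + "  price DOUBLE"
                + ") WITH ("
                + "  'connector' = 'kafka',"
                + "  'topic' = 'stock-transactions',"
                + "  'properties.bootstrap.servers' = 'localhost:9092',"
                + "  'properties.group.id' = 'transaction-processor',"
                + "  'format' = 'json',"
                + "  'scan.record.evaluator.class' = 'com.example.MarketCloseEvaluator'"
                + ")");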

Proposed Changes

We expect users to specify the EOF-detection logic in a RecordEvaluator instance and pass this instance to KafkaSourceBuilder::setEofRecordEvaluator. KafkaSource would then enforce the EOF-detection logic in the following way:

1) The RecordEvaluator would be passed from KafkaSource to KafkaSourceReader and SourceReaderBase.

2) SourceReaderBase would create a wrapper SourceOutput instance to intercept the records emitted by RecordEmitter. RecordEvaluator::isEndOfStream(...) is invoked on every intercepted record.

3) When a record is matched by RecordEvaluator::isEndOfStream(...), SourceReaderBase stops emitting records from this split and informs SplitFetcherManager to stop reading this split. A simplified sketch of this interception is shown after this list.
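
For illustration, the interception described in steps 2) and 3) could conceptually look like the following simplified sketch. All names except SourceOutput, Watermark and RecordEvaluator are hypothetical, and the actual implementation would live inside SourceReaderBase:

import org.apache.flink.api.common.eventtime.Watermark;
import org.apache.flink.api.connector.source.SourceOutput;
import org.apache.flink.connector.base.source.reader.RecordEvaluator;

/** A conceptual sketch of a SourceOutput wrapper that suppresses records after EOF. */
class EofAwareSourceOutput<T> implements SourceOutput<T> {
    private final SourceOutput<T> delegate;
    private final RecordEvaluator<T> eofRecordEvaluator;
    private boolean eofReached;

    EofAwareSourceOutput(SourceOutput<T> delegate, RecordEvaluator<T> eofRecordEvaluator) {
        this.delegate = delegate;
        this.eofRecordEvaluator = eofRecordEvaluator;
    }

    /** Polled by the reader to decide when to stop fetching the split. */
    boolean isEofReached() {
        return eofReached;
    }

    @Override
    public void collect(T record) {
        if (shouldEmit(record)) {
            delegate.collect(record);
        }
    }

    @Override
    public void collect(T record, long timestamp) {
        if (shouldEmit(record)) {
            delegate.collect(record, timestamp);
        }
    }

    // Marks the split as finished on the first matching record; neither the
    // matching record nor any later record of this split is emitted.
    private boolean shouldEmit(T record) {
        if (!eofReached && eofRecordEvaluator.isEndOfStream(record)) {
            eofReached = true;
        }
        return !eofReached;
    }

    @Override
    public void emitWatermark(Watermark watermark) {
        delegate.emitWatermark(watermark);
    }

    @Override
    public void markIdle() {
        delegate.markIdle();
    }

    @Override
    public void markActive() {
        delegate.markActive();
    }
}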

A similar workflow can be used to support this feature for PulsarSource.

Note that the RecordEvaluator as well as the SourceReaderBase changes proposed above could also be used by other sources to detect end-of-stream based on de-serialized records.

Compatibility, Deprecation, and Migration Plan

The APIs added in this FLIP are backward compatible with the existing KafkaSource.

The KafkaSource (added by FLIP-27) is not backward compatible with FlinkKafkaConsumer. This FLIP intends to improve the migration path for users to migrate from FlinkKafkaConsumer to KafkaSource.

Users who currently use FlinkKafkaConsumer together with KafkaDeserializationSchema::isEndOfStream(...) can migrate to KafkaSource by moving the isEndOfStream(...) logic into the RecordEvaluator added in this FLIP, as sketched below.
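
For example, EOF-detection logic embedded in a deserialization schema today could move into an evaluator as follows. This is a migration sketch with a hypothetical sentinel value:

// Before: the deprecated FlinkKafkaConsumer path, with EOF logic embedded in
// the KafkaDeserializationSchema.
KafkaDeserializationSchema<String> schema =
        new KafkaDeserializationSchema<String>() {
            @Override
            public String deserialize(ConsumerRecord<byte[], byte[]> record) {
                return new String(record.value(), StandardCharsets.UTF_8);
            }

            @Override
            public boolean isEndOfStream(String nextElement) {
                return "MARKET_CLOSED".equals(nextElement);
            }

            @Override
            public TypeInformation<String> getProducedType() {
                return Types.STRING;
            }
        };

// After: the same predicate moves into a RecordEvaluator passed to
// KafkaSourceBuilder::setEofRecordEvaluator.
RecordEvaluator<String> eofEvaluator = record -> "MARKET_CLOSED".equals(record);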

Test Plan

We will provide unit tests to validate the proposed changes.

Rejected Alternatives

Merge RecordEvaluator and stoppingOffsetsInitializer (currently provided via KafkaSourceBuilder's setBounded() or setUnbounded()) into one class.

For example, we can add a KafkaStopCursor class (similar to PulsarSource's StopCursor) which contains all possible stopping criteria (e.g. based on the offset, the de-serialized message, and the ConsumerRecord).

In comparison to the proposed approach, this alternative could provide a more concise and consolidated interface for users to specify the stopping criteria (i.e. via one KafkaSourceBuilder API).

This alternative has the following disadvantages compared to the proposed approach:

1) It introduces backward incompatible changes to KafkaSource. This is because we will need to replace KafkaSourceBuilder::setBounded(...) with the new API.

2) KafkaStopCursor cannot be shared with other source types because different sources have different raw message formats. For example, KafkaSource uses offset and ConsumerRecord, whereas PulsarSource uses MessageId and Message. In comparison, the RecordEvaluator proposed in this FLIP (as well as the proposed implementation changes in SourceReaderBase) could be used by other sources (e.g. PulsarSource) to detect EOF based on de-serialized records.

3) The implementation of this alternative approach will likely be harder to maintain. Note that users might want to stop the job based on the offset, the de-serialized message, or both. The offset-based criteria should ideally be evaluated before records are de-serialized for performance reasons, whereas the criteria based on the de-serialized record can only be evaluated after the record is de-serialized. Thus these criteria ideally should be evaluated at different positions in the code path, which would be inconvenient to achieve with all of this logic in the same class.

Let users specify eofRecordEvaluator via StreamExecutionEnvironment::fromSource(...).withEofRecordEvaluator(...)

The advantage of this approach is that the feature could be used by all connectors without us having to change the implementation (e.g. KafkaSourceBuilder, PulsarSourceBuilder) of individual connectors. Thus it improves the connector developers' experience.

The disadvantage of this approach is that it requires users to pass some source configuration via StreamExecutionEnvironment::fromSource(...) and other source configuration via e.g. KafkaSourceBuilder(...). This might create a sense of inconsistency and confusion for connector users.

Given that connector users far outnumber connector developers, it is probably better to optimize for the user experience in this case.