...

Public Interfaces

1) Adds the RecordEvaluator interface to the package org.apache.flink.connector.base.source.reader.

Code Block
languagejava
package org.apache.flink.connector.base.source.reader;

/**
 * An interface that evaluates whether a de-serialized record should trigger certain control-flow
 * operations (e.g. end of stream).
 */
@PublicEvolving
@FunctionalInterface
public interface RecordEvaluator<T> extends Serializable {
    /**
     * Determines whether a record should trigger the end of stream for its split. The given record
     * wouldn't be emitted from the source if the returned result is true.
     *
     * @param record a de-serialized record from the split.
     * @return a boolean indicating whether the split has reached end of stream.
     */
    boolean isEndOfStream(T record);
}
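
For illustration, a RecordEvaluator implementation could be as simple as the following sketch, which assumes a hypothetical "END_OF_STREAM" sentinel record marking the end of a split (the sentinel value and the class name are examples, not part of this FLIP):

Code Block
languagejava
import org.apache.flink.connector.base.source.reader.RecordEvaluator;

public class EndMarkerEvaluator implements RecordEvaluator<String> {
    // Hypothetical sentinel: the split is considered finished once this exact record is seen.
    private static final String END_MARKER = "END_OF_STREAM";

    @Override
    public boolean isEndOfStream(String record) {
        return END_MARKER.equals(record);
    }
}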


2) Adds the following method to KafkaSourceBuilder.

Code Block
languagejava
public class KafkaSourceBuilder<OUT> {
    ... // Skip the existing methods

    /**
     * Sets the optional {@link RecordEvaluator eofRecordEvaluator} for KafkaSource.
     *
     * <p>When the evaluator is specified, it is invoked for each de-serialized record to determine
     * whether the corresponding split has reached end of stream. If a record is matched by the
     * evaluator, the source would not emit this record as well as the following records in the same
     * split.
     *
     * <p>Note that the evaluator works jointly with the stopping offsets specified by the {@link
     * #setBounded(OffsetsInitializer)} or the {@link #setUnbounded(OffsetsInitializer)}. The source
     * stops consuming from a split when any of these conditions is met.
     *
     * @param eofRecordEvaluator a {@link RecordEvaluator recordEvaluator}
     * @return this KafkaSourceBuilder.
     */
    public KafkaSourceBuilder<OUT> setEofRecordEvaluator(RecordEvaluator<OUT> eofRecordEvaluator) {
        this.eofRecordEvaluator = eofRecordEvaluator;
        return this;
    }
} 
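
As a usage sketch, the evaluator would be wired in through the builder like any other optional setting. The bootstrap servers, topic name, and the "END_OF_STREAM" end-of-stream condition below are placeholders for illustration:

Code Block
languagejava
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;

public class KafkaSourceWithEofExample {
    public static void main(String[] args) {
        KafkaSource<String> source =
                KafkaSource.<String>builder()
                        .setBootstrapServers("localhost:9092") // placeholder address
                        .setTopics("input-topic") // placeholder topic
                        .setValueOnlyDeserializer(new SimpleStringSchema())
                        // Proposed API from this FLIP: stop reading a split once the
                        // (hypothetical) "END_OF_STREAM" sentinel record is seen.
                        .setEofRecordEvaluator(record -> "END_OF_STREAM".equals(record))
                        .build();
    }
}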

Proposed Changes

We expect users to specify the EOF-detection logic in a RecordEvaluator instance and pass this instance to KafkaSourceBuilder::setEofRecordEvaluator. Then KafkaSource would enforce the EOF-detection logic in the following way:

1) The RecordEvaluator would be passed from KafkaSource to KafkaSourceReader and SourceReaderBase.

2) SourceReaderBase would create a wrapper SourceOutput instance to intercept the records emitted by RecordEmitter. RecordEvaluator::isEndOfStream(...) is invoked on every intercepted record (see the sketch after this list).

3) When a record is matched by RecordEvaluator::isEndOfStream(...), SourceReaderBase stops emitting records from this split and informs SplitFetcherManager to stop reading this split.
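
A simplified sketch of the interception described in step 2) is shown below. The EvaluatingSourceOutput class name and its structure are illustrative assumptions only, not the actual SourceReaderBase implementation:

Code Block
languagejava
import org.apache.flink.api.common.eventtime.TimestampAssigner;
import org.apache.flink.api.common.eventtime.Watermark;
import org.apache.flink.api.connector.source.SourceOutput;
import org.apache.flink.connector.base.source.reader.RecordEvaluator;

// Illustrative sketch: a SourceOutput wrapper that consults the RecordEvaluator before
// forwarding each record. Once a record matches, the wrapper drops it and all later records
// of the split, and SourceReaderBase can then ask SplitFetcherManager to stop reading the split.
class EvaluatingSourceOutput<T> implements SourceOutput<T> {
    private final SourceOutput<T> delegate;
    private final RecordEvaluator<T> evaluator;
    private boolean eofReached;

    EvaluatingSourceOutput(SourceOutput<T> delegate, RecordEvaluator<T> evaluator) {
        this.delegate = delegate;
        this.evaluator = evaluator;
    }

    /** True once a record has matched the evaluator; the split should then be stopped. */
    boolean isEofReached() {
        return eofReached;
    }

    @Override
    public void collect(T record) {
        collect(record, TimestampAssigner.NO_TIMESTAMP);
    }

    @Override
    public void collect(T record, long timestamp) {
        if (eofReached || evaluator.isEndOfStream(record)) {
            eofReached = true; // suppress this record and all following records of the split
            return;
        }
        delegate.collect(record, timestamp);
    }

    // Watermark-related calls are simply forwarded to the wrapped output.
    @Override
    public void emitWatermark(Watermark watermark) {
        delegate.emitWatermark(watermark);
    }

    @Override
    public void markIdle() {
        delegate.markIdle();
    }

    @Override
    public void markActive() {
        delegate.markActive();
    }
}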


Note that the RecordEvaluator as well as the SourceReaderBase changes proposed above could also be used by other sources to detect end of stream based on de-serialized records.

...

The APIs added in this FLIP are backward compatible with the existing KafkaSource.

The KafkaSource (added by FLIP-27) is not backward compatible with FlinkKafkaConsumer. This FLIP intends to improve the migration path from FlinkKafkaConsumer to KafkaSource.

Users who currently use FlinkKafkaConsumer together with KafkaDeserializationSchema::isEndOfStream(...) can migrate to KafkaSource by moving the isEndOfStream(...) logic into the RecordEvaluator added in this FLIP.
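
For example, logic that previously lived in KafkaDeserializationSchema::isEndOfStream(...) could be moved into a RecordEvaluator roughly as sketched below. The StringSchemaWithEof and EofEvaluators classes and the "END_OF_STREAM" marker are hypothetical illustrations, not part of this FLIP:

Code Block
languagejava
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.connector.base.source.reader.RecordEvaluator;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Before: end-of-stream detection inside the legacy KafkaDeserializationSchema.
public class StringSchemaWithEof implements KafkaDeserializationSchema<String> {
    @Override
    public boolean isEndOfStream(String nextElement) {
        return "END_OF_STREAM".equals(nextElement); // placeholder end marker
    }

    @Override
    public String deserialize(ConsumerRecord<byte[], byte[]> record) {
        return new String(record.value(), StandardCharsets.UTF_8);
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return Types.STRING;
    }
}

// After: the equivalent check expressed as a RecordEvaluator that can be passed to
// KafkaSourceBuilder::setEofRecordEvaluator.
class EofEvaluators {
    static RecordEvaluator<String> endOfStreamMarker() {
        return record -> "END_OF_STREAM".equals(record);
    }
}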

Test Plan

We will provide unit tests to validate the proposed changes.

Rejected Alternatives

1) Merge RecordEvaluator and stoppingOffsetsInitializer (currently provided via KafkaSourceBuilder's setBounded() or setUnbounded()) into one class.

...

In comparison to the proposed approach, this alternative could provide a more concise and consolidated interface for users to specify the stopping criteria (i.e. via one KafkaSourceBuilder API).


This alternative has the following disadvantages compared to the proposed approach:

a) It introduces backward incompatible changes to KafkaSource. This is because we will need to replace KafkaSourceBuilder::setBounded(...) with the new API.

b) KafkaStopCursor cannot be shared with other source types because different sources have different raw message formats. For example, KafkaSource uses offset and ConsumerRecord, whereas PulsarSource uses MessageId and Message. In comparison, the RecordEvaluator proposed in this FLIP (as well as the proposed implementation changes in the SourceReaderBase) could be used by other sources (e.g. PulsarSource) to detect EOF based on de-serialized records.

c) The implementation of this alternative approach will likely be harder to maintain. Note that users might want to stop the job based on the offset, the de-serialized message, or both. The offset-based criteria should ideally be evaluated before records are de-serialized for performance reasons, whereas the criteria based on the de-serialized record can only be evaluated after the record is de-serialized. Thus these criteria should ideally be evaluated at different positions in the code path, which could be inconvenient to achieve while keeping both pieces of logic in the same class.