Status

...

Page properties


Discussion thread

...

...

Jira
server ASF JIRA
serverId 5aa69414-a9e9-3523-82ec-879b028fb15b
key FLINK-27919

...

Release 1.17


Motivation

FLIP-27 sources are non-trivial to implement. At the same time, it is frequently required to generate arbitrary events with a "mock" source. Such a requirement arises both for Flink users, in the scope of demo/PoC projects, and for Flink developers when writing tests. The go-to solution for these purposes so far has been to use pre-FLIP-27 APIs and implement data generators as SourceFunctions.
While the new FLIP-27 Source interface introduces important additional functionality, it comes with significant complexity that presents a hurdle for Flink users when implementing drop-in replacements for SourceFunction-based data generators. Meanwhile, SourceFunction is effectively superseded by the Source interface and eventually needs to be deprecated. To fill this gap, this FLIP proposes the introduction of a generic data generator source based on the FLIP-27 API. 

...

Code Block
languagejava
titleDataGeneratorSource
package org.apache.flink.api.connector.source.lib;

/**
 * A data source that produces N events of an arbitrary type in parallel. 
 * This source is useful for testing and for cases that just need a stream of N events of any kind.
 *
 * <p>The source splits the sequence into as many parallel sub-sequences as there are parallel
 * source readers. Each sub-sequence will be produced in order. Consequently, if the parallelism is
 * limited to one, this will produce one sequence in order.
 *
 * <p>This source is always bounded. For very long sequences users may want to consider executing 
 * the application in a streaming manner, because, despite the fact that the produced stream is bounded, 
 * the end bound is pretty far away.
 */

@Public
public class DataGeneratorSource<OUT>                 
		implements Source<
                        OUT,
                        NumberSequenceSource.NumberSequenceSplit,
                        Collection<NumberSequenceSource.NumberSequenceSplit>>,
                ResultTypeQueryable<OUT> {    

    
     /**
     * Creates a new {@code DataGeneratorSource} that produces {@code count} records in
     * parallel.
     *
     * @param sourceReaderFactory The factory for instantiating the readers of type
     *     SourceReader<OUT, NumberSequenceSplit>.
     * @param count The number of events to be produced.
     * @param typeInfo The type information of the returned events.
     */
    public DataGeneratorSource(
            SourceReaderFactory<OUT, NumberSequenceSplit> sourceReaderFactory,
            long count,
            TypeInformation<OUT> typeInfo) {
        this.sourceReaderFactory = checkNotNull(sourceReaderFactory);
        this.typeInfo = checkNotNull(typeInfo);
        this.numberSource = new NumberSequenceSource(0, count);
    }

     /**
     * Creates a new {@code DataGeneratorSource} that produces {@code count} records in
     * parallel.
     *
     * @param generatorFunction The generator function that receives index numbers and translates
     *     them into events of the output type.
     * @param count The number of events to be produced.
     * @param typeInfo The type information of the returned events.
     */
    public DataGeneratorSource(
            GeneratorFunction<Long, OUT> generatorFunction, long count, TypeInformation<OUT> typeInfo) {...}
    

     /**
     * Creates a new {@code DataGeneratorSource} that produces {@code count} records in
     * parallel.
     *
     * @param generatorFunction The generator function that receives index numbers and translates
     *     them into events of the output type.
     * @param count The number of events to be produced.
     * @param sourceRatePerSecond The maximum number of events per second that this generator aims
     *     to produce. This is a target number for the whole source and the individual parallel
     *     source instances automatically adjust their rate based on the {@code
     *     sourceRatePerSecond} and the source parallelism.
     * @param typeInfo The type information of the returned events.
     */
    public DataGeneratorSource(
            GeneratorFunction<Long, OUT> generatorFunction,
            long count,
            double sourceRatePerSecond,
            TypeInformation<OUT> typeInfo) {...}



Where GeneratorFunction supports initialization of class fields via the open() method with access to the local SourceReaderContext.

Code Block
languagejava
titleGeneratorFunction
@Public
public interface GeneratorFunction<T, O> extends Function {

    /**
     * Initialization method for the function. It is called once before the actual working process
     * methods.
     */
    default void open(SourceReaderContext readerContext) throws Exception {}

    /** Tear-down method for the function. */
    default void close() throws Exception {}

    O map(T value) throws Exception; 
}
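For orientation, the open()/map() lifecycle can be sketched without any Flink dependencies. The names below (SimpleGeneratorFunction, PrefixedEventGenerator) are illustrative stand-ins, not part of the proposal:

```java
// Flink-free sketch of the lifecycle described above: open() initializes
// per-reader state once, map() is then invoked for every generated element.
// SimpleGeneratorFunction mimics the shape of the proposed GeneratorFunction.
interface SimpleGeneratorFunction<T, O> {
    default void open() throws Exception {}

    default void close() throws Exception {}

    O map(T value) throws Exception;
}

class PrefixedEventGenerator implements SimpleGeneratorFunction<Long, String> {
    // Stands in for a non-serializable helper that must be created
    // on the reader, not on the client that submits the job.
    private transient StringBuilder prefix;

    @Override
    public void open() {
        prefix = new StringBuilder("Event #");
    }

    @Override
    public String map(Long index) {
        return prefix.toString() + index;
    }
}
```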

A new SourceReaderFactory interface is introduced.

Code Block
languagejava
titleSourceReaderFactory
public interface SourceReaderFactory<OUT, SplitT extends SourceSplit> extends Serializable {
    SourceReader<OUT, SplitT> newSourceReader(SourceReaderContext readerContext);
}

The generator source delegates the SourceReaders' creation to the factory.

Code Block
languagejava
titleDataGeneratorSource
@Public
public class DataGeneratorSource<OUT>
		implements Source<
                        OUT,
                        NumberSequenceSource.NumberSequenceSplit,
                        Collection<NumberSequenceSource.NumberSequenceSplit>>,
                ResultTypeQueryable<OUT> {

    private final SourceReaderFactory<OUT, NumberSequenceSplit> sourceReaderFactory;

    @Override
    public SourceReader<OUT, NumberSequenceSplit> createReader(SourceReaderContext readerContext)
            throws Exception {
        return sourceReaderFactory.newSourceReader(readerContext);
    }
}


Proposed Changes

In order to deliver convenient rate-limiting functionality to the users of the new API, a small addition to the SourceReaderContext is required.

The sum of rates of all parallel readers has to approximate the optional user-defined sourceRatePerSecond parameter. Currently, there is no way for the SourceReaders to acquire the current parallelism of the job they are part of. To overcome this limitation, this FLIP proposes an extension of the SourceReaderContext interface with the currentParallelism() method:


Code Block
languagejava
titleSourceReaderContext
package org.apache.flink.api.connector.source;

/** The class that exposes some context from runtime to the {@link SourceReader}. */
@Public
public interface SourceReaderContext {
	...
	/**
     * Get the current parallelism of this Source.
     *
     * @return the parallelism of the Source.
     */
    int currentParallelism(); 
}

The parallelism can be retrieved in the SourceOperator via the RuntimeContext and can be easily provisioned during the anonymous SourceReaderContext initialization in its initReader() method.
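To make the rate-splitting concrete, the arithmetic boils down to each reader taking an equal share of the global target once the parallelism is known from the context. A minimal, Flink-free sketch (RateSplitter is an illustrative name, not part of the proposal):

```java
// Illustrative sketch: how the global sourceRatePerSecond target can be
// divided among parallel readers once the parallelism is known.
public class RateSplitter {

    /** Computes the rate budget of a single parallel reader instance. */
    public static double perReaderRate(double sourceRatePerSecond, int parallelism) {
        if (parallelism <= 0) {
            throw new IllegalArgumentException("parallelism must be positive");
        }
        // The sum over all parallel readers approximates the global target rate.
        return sourceRatePerSecond / parallelism;
    }
}
```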


With the parallelism accessible via SourceReaderContext, initialization of the rate-limiting data generating readers can be taken care of by the SourceReaderFactories. For example:


Code Block
languagejava
titleGeneratorSourceReaderFactory
public class GeneratorSourceReaderFactory<OUT>
        implements SourceReaderFactory<OUT, NumberSequenceSource.NumberSequenceSplit> {

    public GeneratorSourceReaderFactory(
            GeneratorFunction<Long, OUT> generatorFunction, long sourceRatePerSecond) {...}

    @Override
    public SourceReader<OUT, NumberSequenceSource.NumberSequenceSplit> newSourceReader(
            SourceReaderContext readerContext) {
        if (sourceRatePerSecond > 0) {
            int parallelism = readerContext.currentParallelism();
            RateLimiter rateLimiter = new GuavaRateLimiter(sourceRatePerSecond, parallelism);
            return new RateLimitedSourceReader<>(
                    new GeneratingIteratorSourceReader<>(readerContext, generatorFunction),
                    rateLimiter);
        } else {
            return new GeneratingIteratorSourceReader<>(readerContext, generatorFunction);
        }
    }
}


Where RateLimiter

Code Block
languagejava
titleRateLimiter
/** The interface that can be used to throttle execution of methods. */
interface RateLimiter extends Serializable {

    /**
     * Acquire method is a blocking call that is intended to be used in places where it is required
     * to limit the rate at which results are produced or other functions are called.
     *
     * @return The number of milliseconds this call blocked its caller.
     * @throws InterruptedException The interrupted exception.
     */
    int acquire() throws InterruptedException;
}
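For illustration, a naive limiter that satisfies the acquire() contract above could look as follows. This is a sketch only: the FLIP proposes a Guava-backed GuavaRateLimiter instead, and this sketch handles InterruptedException inline for brevity rather than declaring it:

```java
import java.io.Serializable;

// Illustrative only: a naive fixed-interval limiter with the same acquire()
// semantics as the interface above (blocks, returns ms spent blocked).
public class FixedIntervalRateLimiter implements Serializable {

    private final long intervalMillis;
    private long nextFreeSlotMillis = 0;

    /** @param ratePerSecond the target rate of this single reader instance. */
    public FixedIntervalRateLimiter(double ratePerSecond) {
        this.intervalMillis = (long) (1000 / ratePerSecond);
    }

    /** Blocks until the next permit is available; returns the milliseconds spent blocked. */
    public int acquire() {
        long now = System.currentTimeMillis();
        long waitMillis = Math.max(0, nextFreeSlotMillis - now);
        if (waitMillis > 0) {
            try {
                Thread.sleep(waitMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        nextFreeSlotMillis = Math.max(now, nextFreeSlotMillis) + intervalMillis;
        return (int) waitMillis;
    }
}
```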

---

It is desirable to reuse the functionality of IteratorSourceReader for cases where the input data type is different from the output (IN: Long from the wrapped NumberSequenceSplit, OUT: the result of applying the GeneratorFunction<Long, OUT> provided by the user). For that purpose, the following changes are proposed:

  • A new IteratorSourceReaderBase is introduced, parameterized with generics for both the input and the output data types.
  • All methods apart from pollNext() are "pulled up" from IteratorSourceReader into the *Base class.
  • The IteratorSourceReader API remains the same while implementing IteratorSourceReaderBase with identical input and output types.
  • A new GeneratingIteratorSourceReader is introduced for the case where input and output types differ (the result of applying the GeneratorFunction).
  • GeneratingIteratorSourceReader initializes the GeneratorFunction (if needed) by calling its open() method within its start() method.

Code Block
languagejava
titleIteratorSourceReaderBase
package org.apache.flink.api.connector.source.lib.util;

@Experimental
abstract class IteratorSourceReaderBase<
                E, O, IterT extends Iterator<E>, SplitT extends IteratorSourceSplit<E, IterT>>
        implements SourceReader<O, SplitT> {...}


Reader:

Code Block
languagejava
titleIteratorSourceReader
package org.apache.flink.api.connector.source.lib.util;

@Public
public class IteratorSourceReader<
                E, IterT extends Iterator<E>, SplitT extends IteratorSourceSplit<E, IterT>>
        extends IteratorSourceReaderBase<E, E, IterT, SplitT> {

    public IteratorSourceReader(SourceReaderContext context) {
        super(context);
    }

    @Override
    public InputStatus pollNext(ReaderOutput<E> output) {...}

}


Code Block
languagejava
titleGeneratingIteratorSourceReader
package org.apache.flink.api.connector.source.lib.util;

@Experimental
public class GeneratingIteratorSourceReader<
                E, O, IterT extends Iterator<E>, SplitT extends IteratorSourceSplit<E, IterT>>
        extends IteratorSourceReaderBase<E, O, IterT, SplitT> {

    public GeneratingIteratorSourceReader(
            SourceReaderContext context, GeneratorFunction<E, O> generatorFunction) {...}

    @Override
    public InputStatus pollNext(ReaderOutput<O> output) {...}

    ...
}
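Taken together, the proposed reader hierarchy can be sketched without Flink dependencies. All class names and bodies below are illustrative simplifications of the signatures above:

```java
import java.util.Iterator;
import java.util.function.Function;

// Flink-free sketch of the hierarchy: shared iteration logic lives in the
// base class, subclasses differ only in how elements are converted.
abstract class IteratorReaderBase<E, O> {
    protected final Iterator<E> iterator;

    IteratorReaderBase(Iterator<E> iterator) {
        this.iterator = iterator;
    }

    // Shared logic that the FLIP "pulls up" into the *Base class.
    boolean hasNext() {
        return iterator.hasNext();
    }

    // Only the element conversion differs between the subclasses.
    abstract O pollNext();
}

// Input and output types are the same: elements are emitted as-is.
final class IdentityReader<E> extends IteratorReaderBase<E, E> {
    IdentityReader(Iterator<E> iterator) {
        super(iterator);
    }

    @Override
    E pollNext() {
        return iterator.next();
    }
}

// Input and output types differ: a generator function is applied per element.
final class GeneratingReader<E, O> extends IteratorReaderBase<E, O> {
    private final Function<E, O> generatorFunction;

    GeneratingReader(Iterator<E> iterator, Function<E, O> generatorFunction) {
        super(iterator);
        this.generatorFunction = generatorFunction;
    }

    @Override
    O pollNext() {
        return generatorFunction.apply(iterator.next());
    }
}
```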


RateLimitedSourceReader wraps another SourceReader (delegates to its methods) while rate-limiting the pollNext() calls.

Code Block
languagejava
titleRateLimitedSourceReader
package org.apache.flink.api.connector.source.lib.util;

@Experimental
public class RateLimitedSourceReader<E, SplitT extends SourceSplit>
        implements SourceReader<E, SplitT> {

    private final SourceReader<E, SplitT> sourceReader;
    private final RateLimiter rateLimiter;

    public RateLimitedSourceReader(SourceReader<E, SplitT> sourceReader, RateLimiter rateLimiter) {
        this.sourceReader = sourceReader;
        this.rateLimiter = rateLimiter;
    }

    @Override
    public void start() {
        sourceReader.start();
    }

    @Override
    public InputStatus pollNext(ReaderOutput<E> output) throws Exception {
        rateLimiter.acquire();
        return sourceReader.pollNext(output);
    }
    ...
}

Usage: 

The envisioned usage for functions that do not contain any class fields that need initialization looks like this:

Code Block
languagejava
titleusage
int count = 1000;
int sourceRatePerSecond = 2;
GeneratorFunction<Long, String> generator = index -> "Event from index: " + index;
DataGeneratorSource<String> source = new DataGeneratorSource<>(generator, count, sourceRatePerSecond, Types.STRING);
DataStreamSource<String> watermarked =
        env.fromSource(
                source,
                WatermarkStrategy.forBoundedOutOfOrderness(Duration.ofSeconds(1)),
                "watermarked");

Scenarios where the GeneratorFunction requires initialization of non-serializable fields are supported as follows:

Code Block
languagejava
titleusage
GeneratorFunction<Long, String> generator =
        new GeneratorFunction<Long, String>() {

            transient SourceReaderMetricGroup sourceReaderMetricGroup;

            @Override
            public void open(SourceReaderContext readerContext) {
                sourceReaderMetricGroup = readerContext.metricGroup();
            }

            @Override
            public String map(Long value) {
                return "Generated: >> "
                        + value.toString()
                        + "; local metric group: "
                        + sourceReaderMetricGroup.hashCode();
            }
        };
DataGeneratorSource<String> source = new DataGeneratorSource<>(generator, count, sourceRatePerSecond, Types.STRING);

It is up for discussion if an additional utility method of StreamExecutionEnvironment with default watermarking might also be desirable (similar to env.fromSequence(long from, long to) ).

Compatibility, Deprecation, and Migration Plan

This feature is a stepping stone toward deprecating the SourceFunction API (see this discussion). 

  1. After this feature is introduced, it will be documented and promoted as the recommended way to write data generators.
  2. A list of Flink tests that currently use the SourceFunction API will be compiled and follow-up tickets for migration will be created.

Test Plan

  • Unit tests will be added to verify the behavior of Source's Splits in relation to the SourceReader
  • Integration tests will be added to verify correct functioning with different levels of parallelism

Rejected Alternatives

It is possible to use a NumberSequenceSource followed by a map function to achieve similar results; however, this has two disadvantages:

  • It introduces another level of indirection and is less intuitive to use
  • It does not promote best practices of assigning watermarks (see this discussion)

POC Branch:

https://github.com/apache/flink/compare/master...afedulov:flink:FLINK-27919-generator-source

Remarks:

  • To be able to reuse the existing functionality of NumberSequenceSource, it is required to change the visibility of NumberSequenceSource.CheckpointSerializer from private to package-private. 
