...

In Flink, the well-known way to cut a stream into batches for a single key is the window. In a sense, checkpoints also divide the stream into batches, since they are triggered at a time interval, but this is a less obvious notion than windows, because a checkpoint emphasizes the "point" and the "snapshot" taken at that point. So, faced with two mechanisms for cutting batches out of the stream, which one should we choose? This depends on other considerations.

Here we choose the checkpoint; the reasons are given below.

...

Looking back, the "atomic" nature of a write is guaranteed by the commit operation. But for performance, our write operations are distributed and run in parallel. Therefore, in a distributed scenario, we need a coordination mechanism that commits only after all writes have finished.

This coordination can be achieved by counting on top of the checkpoint mechanism, i.e. the snapshot execution and the checkpoint completion notification: the sink counts the results reported by the write subtasks (e.g. with a parallelism of 12, it expects 12 results). If we implemented this based on windows, we could still use the checkpoint capability, but the whole process would become more complicated, we could not exploit the main advantage of windows (for example, aggregation), and a subtask with no input data would never emit a result to the sink. Relying on checkpoints instead, we can introduce a WriteProcessOperator that emits a (possibly empty) result for every subtask. In this way, the sink receives exactly one result per parallel WriteProcessOperator instance, regardless of data skew.

...

  • source: connects to the Kafka message stream; Kafka data is transformed into HoodieRecord here;
  • instant generate operator (customized): generates a globally unique instant (each Hudi batch must correspond to one instant); its parallelism is 1. Before emitting a new instant, it checks the state of the last instant; if that instant exists and is not yet completed, it waits until a timeout is reached;
  • keyBy: partitions the data with partitionPath as the key to avoid concurrent write operations to the same partition;
  • keyed process: carries the main logic of the write path, including index lookup and file writing. Subtasks that receive no data emit an empty result to the sink;
  • sink: a global commit sink; its parallelism is also 1 and it implements CheckpointListener (a sketch of wiring these operators together follows the list).
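
To make the topology above concrete, here is a minimal sketch of how these operators could be wired together with the Flink DataStream API. The class names RecordConverter, InstantGenerateOperator, KeyedWriteProcessFunction and CommitSink are hypothetical placeholders for the components described above (not existing Hudi classes), and the Kafka settings, parallelism and checkpoint interval are illustrative only.

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.hudi.common.model.HoodieRecord;

public class HudiFlinkWritePipelineSketch {

  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // Every checkpoint defines one write "batch": write in snapshotState, commit after completion.
    env.enableCheckpointing(60_000L);

    Properties kafkaProps = new Properties();
    kafkaProps.setProperty("bootstrap.servers", "localhost:9092");
    kafkaProps.setProperty("group.id", "hudi-writer");

    int writeParallelism = 4; // the commit sink expects exactly one result per write subtask

    DataStream<HoodieRecord> records = env
        .addSource(new FlinkKafkaConsumer<>("input_topic", new SimpleStringSchema(), kafkaProps))
        .map(RecordConverter::toHoodieRecord)              // hypothetical: Kafka message -> HoodieRecord
        .returns(TypeInformation.of(HoodieRecord.class));

    records
        .transform("instant_generate_operator",
            TypeInformation.of(HoodieRecord.class),
            new InstantGenerateOperator())                 // hypothetical operator, sketched further below
        .setParallelism(1)                                 // a single task generates the instant
        .keyBy(HoodieRecord::getPartitionPath)             // same partitionPath never written concurrently
        .process(new KeyedWriteProcessFunction())          // hypothetical: index lookup + file writing
        .setParallelism(writeParallelism)
        .addSink(new CommitSink(writeParallelism))         // hypothetical global commit sink
        .setParallelism(1);

    env.execute("hudi-flink-write-sketch");
  }
}
```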

Next, we briefly introduce the mechanism and general principles of checkpoints. 


Flink's checkpoints are triggered periodically. The CheckpointCoordinator, located in the JobMaster, acts as the coordinator: it triggers all source tasks to drive the actual execution of a checkpoint. Each source task then broadcasts a barrier event to all downstream input channels. These barriers act as markers in the data stream (carrying the checkpoint number) that drive the checkpoint downstream to the sink. After broadcasting the barriers, the source task executes its snapshotState method on a separate thread. Each downstream task, after collecting the barriers from all of its input channels, also executes its snapshotState method.

After executing this method, each task sends an ack message to the JobMaster. This is where the synchronization and coordination around checkpoints come in: when the JobMaster has received all ack messages, it confirms the completion of the current checkpoint and calls back the notifyCheckpointComplete method on all tasks that implement the CheckpointListener interface.


With this understanding of the Flink checkpoint mechanism, we can use the collaboration between snapshotState and notifyCheckpointComplete to implement the write-and-commit process in Flink. We implement a UDF of type KeyedProcessFunction that also implements the CheckpointedFunction interface, so we can customize the snapshotState method and complete the write operation there. Each time the KeyedProcessFunction receives a record, its processElement method is called; in this method we complete the index lookup and the tagLocation process.
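
A minimal sketch of such a UDF follows. The SubtaskWriteResult type is a hypothetical placeholder for one subtask's write result (in practice it would carry Hudi WriteStatus objects), and the actual index lookup, tagLocation and write-client calls are only hinted at in comments.

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.hudi.common.model.HoodieRecord;

/** Hypothetical per-subtask result type; in practice it would carry Hudi WriteStatus objects. */
class SubtaskWriteResult implements Serializable {
  final int recordsWritten;
  SubtaskWriteResult(int recordsWritten) { this.recordsWritten = recordsWritten; }
}

public class KeyedWriteProcessFunction
    extends KeyedProcessFunction<String, HoodieRecord, SubtaskWriteResult>
    implements CheckpointedFunction {

  private transient List<HoodieRecord> buffer;
  private transient Collector<SubtaskWriteResult> out; // captured so snapshotState can emit downstream

  @Override
  public void initializeState(FunctionInitializationContext context) {
    buffer = new ArrayList<>();
  }

  @Override
  public void processElement(HoodieRecord record, Context ctx, Collector<SubtaskWriteResult> collector) {
    this.out = collector;
    // Index lookup and tagLocation would happen here before buffering the record (omitted).
    buffer.add(record);
  }

  @Override
  public void snapshotState(FunctionSnapshotContext context) {
    // Driven by the checkpoint barrier: write the buffered batch under the current instant,
    // then emit this subtask's result (possibly empty) so the sink sees one result per subtask.
    SubtaskWriteResult result = new SubtaskWriteResult(buffer.size()); // real code calls the Hudi write client
    buffer.clear();
    if (out != null) {
      out.collect(result);
    }
    // Note: a subtask that never received any record has not captured a collector; handling that
    // case needs extra care (see the discussion of mocked/empty results above).
  }
}
```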

...

  • This sink will count the number of results it has received; it will not commit until that number equals the parallelism (a sketch of this counting commit sink follows).
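
Here is a minimal sketch of such a counting commit sink. It assumes the hypothetical SubtaskWriteResult type from the previous sketch, and the doCommit/resolveCurrentInstant helpers are hypothetical as well; the real commit would go through the Hudi write client.

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.CheckpointListener;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class CommitSink extends RichSinkFunction<SubtaskWriteResult> implements CheckpointListener {

  private final int upstreamParallelism; // number of write subtasks, each reports exactly one result
  private transient List<SubtaskWriteResult> results;

  public CommitSink(int upstreamParallelism) {
    this.upstreamParallelism = upstreamParallelism;
  }

  @Override
  public void open(Configuration parameters) {
    results = new ArrayList<>();
  }

  @Override
  public void invoke(SubtaskWriteResult result, Context context) {
    results.add(result);
    // Commit only once every write subtask (including those with empty input) has reported,
    // so data skew cannot cause a premature commit.
    if (results.size() == upstreamParallelism) {
      doCommit(resolveCurrentInstant(), results);
      results.clear();
    }
  }

  @Override
  public void notifyCheckpointComplete(long checkpointId) {
    // Called back by the JobMaster after all tasks acknowledged the checkpoint; can be used to
    // verify that the instant belonging to this checkpoint was indeed committed.
  }

  private String resolveCurrentInstant() {
    // hypothetical: read the instant published by the instant generate operator
    return null;
  }

  private void doCommit(String instant, List<SubtaskWriteResult> writeResults) {
    // hypothetical: call the Hudi write client to commit the given instant with these results
  }
}
```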


As mentioned above, the current Flink-based write implementation is very different from the existing Delta Streamer implementation based on Spark RDDs: it is real stream processing, not small-batch processing in a loop. Therefore, besides defining "batch" differently, we also face the problem of how to generate an instant for each batch. Delta Streamer can generate a unique instant per batch because its loop runs sequentially, but in real streaming we must find a way to generate a globally unique instant for each checkpoint and pass it to the downstream. Here we explicitly split this into two steps:

...

...

Firstly, we solve the first problem: how to generate a globally unique instant? We introduce an operator with a parallelism of 1 into the pipeline, which generates a new instant only once the last one has completed (otherwise it waits, making sure there is only one inflight instant at a time), so there are no consistency problems caused by concurrency. Since the timing of our offset saving, write operations and commits is all tied to Flink's checkpoint mechanism, the timing of instant generation should be as well. For this, we extend Flink's operator and override its prepareSnapshotPreBarrier method. This method is executed first, then the barrier is sent downstream, and finally the snapshotState method is executed. This ensures that when the snapshotState method runs downstream, the upstream instant has already been generated.
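
A minimal sketch of such an operator is shown below. The helpers for waiting on the previous instant, creating a new one on the Hudi timeline and publishing it to downstream tasks are hypothetical placeholders, not the actual Hudi implementation.

```java
import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
import org.apache.hudi.common.model.HoodieRecord;

public class InstantGenerateOperator extends AbstractStreamOperator<HoodieRecord>
    implements OneInputStreamOperator<HoodieRecord, HoodieRecord> {

  @Override
  public void processElement(StreamRecord<HoodieRecord> element) {
    output.collect(element); // records pass through unchanged
  }

  @Override
  public void prepareSnapshotPreBarrier(long checkpointId) throws Exception {
    // Runs before the barrier is forwarded downstream, so when downstream tasks later execute
    // snapshotState, the instant for this checkpoint already exists.
    waitForLastInstantCompleted();          // hypothetical: block (with timeout) so only one instant is inflight
    String newInstant = startNewInstant();  // hypothetical: create the instant on the Hudi timeline
    publishInstant(newInstant);             // hypothetical: make it visible to downstream tasks
  }

  private void waitForLastInstantCompleted() { /* omitted */ }

  private String startNewInstant() { return null; /* omitted */ }

  private void publishInstant(String instant) { /* omitted */ }
}
```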

Next, we solve the second problem: how to pass the instant to the downstream. We considered carrying the instant inside the data, which would make use of Flink features such as BroadcastState (mainly connecting two streams: the data stream ingesting from Kafka, and the instant generation stream emitting instant times, so the downstream might receive a record like <HoodieRecord, Tuple2<InstantTime, InstantTime>>). Considering the complexity and the scalability towards other engines, we decided to use an external mechanism to deliver the instant instead. We have conceived two options:

  • Extend TimelineService to provide an instant generation service;
  • With the help of external storage (such as HDFS);

For the first option: Hudi already provides a timeline web service for external timeline queries, but it does not currently offer an instant generation service. If it provided such a service, it would effectively act as a simple "global clock" service, with the synchronization mechanism handled by the client itself.


If we choose this option, the service has to be started before the Flink/Spark job, and the writing process gains a strong dependency on the TimelineService, so we must ensure its reliability. Considering that Hudi is just a library, this option would currently make Hudi a bit heavy.

For the second option, the user specifies a storage path when starting the write job; the instant generate operator writes the instant to this path, and the downstream tasks read it from there. The disadvantage of this solution is that it introduces external storage, although Hudi itself already depends on HDFS or S3.


If we store the instant under Hudi's metadata path, then in a sense it does not depend on external storage; the Hudi write path and other configuration would be passed to all tasks in the open() method.
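
As an illustration of this variant, here is a minimal sketch that persists and reads the instant through a small file under the table's metadata directory. The ".hoodie/.aux/current_instant" location and the class itself are hypothetical, not an existing Hudi convention; downstream tasks would construct it in their open() method from the base path handed over via the job configuration.

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class InstantFileStore {

  private final Path instantFile;
  private final Configuration hadoopConf;

  public InstantFileStore(String hudiBasePath, Configuration hadoopConf) {
    // Hypothetical location under the Hudi metadata path, so no extra storage system is required.
    this.instantFile = new Path(hudiBasePath, ".hoodie/.aux/current_instant");
    this.hadoopConf = hadoopConf;
  }

  /** Called by the instant generate operator after creating a new instant. */
  public void writeCurrentInstant(String instant) throws Exception {
    FileSystem fs = instantFile.getFileSystem(hadoopConf);
    try (FSDataOutputStream out = fs.create(instantFile, true)) { // overwrite the previous instant
      out.write(instant.getBytes(StandardCharsets.UTF_8));
    }
  }

  /** Called by downstream tasks (e.g. from open()) to resolve the instant of the current batch. */
  public String readCurrentInstant() throws Exception {
    FileSystem fs = instantFile.getFileSystem(hadoopConf);
    byte[] bytes = new byte[(int) fs.getFileStatus(instantFile).getLen()];
    try (FSDataInputStream in = fs.open(instantFile)) {
      in.readFully(bytes);
    }
    return new String(bytes, StandardCharsets.UTF_8);
  }
}
```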

The above are some of the problems and thoughts we encountered when trying to implement the first version. Of course, the index storage here still uses HBase. When we consider implementation based on BloomFilter, we may have new considerations.

[draw.io diagram: execution-plan-rfc13]

...