...

  1. During normal execution, Flink will record the states of the JM (ExecutionGraph, OperatorCoordinator, etc) to persistent storage so that we can recover based on these states after a JM crash. We will introduce an event-based method to record the state of JM.
  2. While the JM is down and restarting (generally HA is responsible for restarting the JM), the shuffle service and TMs will retain the partitions related to the target job and keep trying to reconnect.
  3. After the JM restarts, the connections with the shuffle service and TMs will be re-established. Then the JM will recover the job progress based on the previously recorded states and the partitions currently existing in the cluster, and restart the scheduling.

...

  1. ExecutionJobVertexInitializedEvent: This event is responsible for recording the initialization information of an ExecutionJobVertex. Its content contains the decided parallelism of this job vertex and its input information. This event will be triggered and written out when a job vertex is initialized.

  2. ExecutionVertexFinishedEvent: This event is responsible for recording the information of a finished task. Our goal is that all finished tasks don't need to be re-run, so the simple idea is to trigger an event when a task is finished. The content of this event contains:

    1. The state of the finished task/ExecutionVertex, including IO metrics, accumulators, etc. These contents can be easily obtained from the ExecutionGraph.

    2. If the job vertex to which this task belongs has operator coordinators, the states of the operator coordinators also need to be recorded (a sketch of both events follows this list).
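
As a rough illustration, the two events could be shaped as follows. This is only a sketch: the class and field names, as well as the JobEvent base interface, are assumptions made here and not the final API.

import java.io.Serializable;
import java.util.Map;

import org.apache.flink.api.common.accumulators.Accumulator;
import org.apache.flink.runtime.executiongraph.IOMetrics;
import org.apache.flink.runtime.executiongraph.JobVertexInputInfo;
import org.apache.flink.runtime.jobgraph.IntermediateDataSetID;
import org.apache.flink.runtime.jobgraph.JobVertexID;
import org.apache.flink.runtime.jobgraph.OperatorID;
import org.apache.flink.runtime.scheduler.strategy.ExecutionVertexID;

// Assumed base interface for all events written to the JobEventStore.
interface JobEvent extends Serializable {}

/** Records how an ExecutionJobVertex was initialized: decided parallelism and inputs. */
class ExecutionJobVertexInitializedEvent implements JobEvent {
    final JobVertexID jobVertexId;
    final int decidedParallelism;
    final Map<IntermediateDataSetID, JobVertexInputInfo> inputInfos;

    ExecutionJobVertexInitializedEvent(
            JobVertexID jobVertexId,
            int decidedParallelism,
            Map<IntermediateDataSetID, JobVertexInputInfo> inputInfos) {
        this.jobVertexId = jobVertexId;
        this.decidedParallelism = decidedParallelism;
        this.inputInfos = inputInfos;
    }
}

/** Records the state of a finished task, plus coordinator snapshots if present. */
class ExecutionVertexFinishedEvent implements JobEvent {
    final ExecutionVertexID executionVertexId;
    final IOMetrics ioMetrics;
    final Map<String, Accumulator<?, ?>> accumulators;
    final Map<OperatorID, byte[]> operatorCoordinatorSnapshots; // empty if none

    ExecutionVertexFinishedEvent(
            ExecutionVertexID executionVertexId,
            IOMetrics ioMetrics,
            Map<String, Accumulator<?, ?>> accumulators,
            Map<OperatorID, byte[]> operatorCoordinatorSnapshots) {
        this.executionVertexId = executionVertexId;
        this.ioMetrics = ioMetrics;
        this.accumulators = accumulators;
        this.operatorCoordinatorSnapshots = operatorCoordinatorSnapshots;
    }
}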

In order to obtain the state of the operator coordinators, we will enrich the checkpointCoordinator method to let it accept -1 (NO_CHECKPOINT) as the value of checkpointId, to support snapshotting the state of operator coordinators in batch jobs. After the JM crashes, the operator coordinators can be restored from the previously recorded states. In addition to a simple restore (via the resetToCheckpoint method), it also needs to call subtaskReset for the non-finished tasks (which may have been in running state before the JM crash), because these tasks will be reset and re-run after the JM crash.
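
The snapshot/restore interaction described above could look roughly like the following sketch. The helper class and method names are hypothetical; only checkpointCoordinator, resetToCheckpoint and subtaskReset are existing OperatorCoordinator methods, and -1 is the proposed NO_CHECKPOINT sentinel.

import java.util.concurrent.CompletableFuture;
import org.apache.flink.runtime.operators.coordination.OperatorCoordinator;

final class OperatorCoordinatorSnapshotSketch {

    private static final long NO_CHECKPOINT = -1L; // proposed sentinel checkpoint id

    // Take a snapshot outside of a real checkpoint, as proposed for batch jobs.
    static CompletableFuture<byte[]> snapshotForJobRecovery(OperatorCoordinator coordinator)
            throws Exception {
        CompletableFuture<byte[]> snapshot = new CompletableFuture<>();
        coordinator.checkpointCoordinator(NO_CHECKPOINT, snapshot);
        return snapshot;
    }

    // Restore after JM failover: reset to the recorded snapshot, then reset the
    // subtasks that were not finished and therefore will be re-run.
    static void restoreAfterJmFailover(
            OperatorCoordinator coordinator, byte[] snapshotData, int[] nonFinishedSubtasks)
            throws Exception {
        coordinator.resetToCheckpoint(NO_CHECKPOINT, snapshotData);
        for (int subtask : nonFinishedSubtasks) {
            coordinator.subtaskReset(subtask, NO_CHECKPOINT);
        }
    }
}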

Considering that the operator coordinators may have large state, snapshotting them every time an execution vertex finishes may cause large overhead. To solve this problem, we will add a new configuration option "execution.batch.job-recovery.operator-coordinator-snapshot.min-pause" to control the minimum interval between snapshots. When restoring, we will also reconcile the execution job vertex state with the operator coordinator state to keep them consistent. In other words, we will adjust the execution job vertex to its state at the time of the latest operator coordinator snapshot.
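
A minimal sketch of how the min-pause option could throttle snapshots, assuming a simple timestamp check; the class and method names are purely illustrative.

import java.time.Duration;

final class SnapshotThrottler {

    private final long minPauseMillis; // execution.batch.job-recovery.operator-coordinator-snapshot.min-pause
    private long lastSnapshotTimestamp = Long.MIN_VALUE;

    SnapshotThrottler(Duration minPause) {
        this.minPauseMillis = minPause.toMillis();
    }

    // Called each time an execution vertex finishes; returns true only if the
    // minimum pause since the last snapshot has elapsed.
    boolean shouldSnapshot(long nowMillis) {
        if (nowMillis - lastSnapshotTimestamp < minPauseMillis) {
            return false;
        }
        lastSnapshotTimestamp = nowMillis;
        return true;
    }
}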

Persistent JobEventStore

We intend to introduce a persistent JobEventStore to record the JobEvents. The store is based on the file system and has the following features:

  1. To avoid IO operations blocking the JM main thread, the JobEventStore will write each event out in an asynchronous thread.
  2. To avoid frequent IO operations causing great pressure on the external file system, there will be a write buffer inside the JobEventStore. The JobEvents will be written to the buffer first, and then flushed to the external file system when the buffer is full or the flush interval is reached (sketched below). The flush frequency will be controlled by the following 2 configuration options:
    1. job-event.store.write-buffer.size: The size of the write buffer; the content will be flushed to the external file system once it is full.
    2. job-event.store.write-buffer.flush-interval: The flush interval of the write buffer; once this time has passed, the content will be flushed to the external file system.
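
The write path described above might look roughly like the following sketch, assuming a single IO thread and a simple in-memory buffer; the class and field names are made up and the real JobEventStore implementation may differ.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

final class FileSystemJobEventStoreSketch implements AutoCloseable {

    private final OutputStream fileSystemOut; // e.g. a stream to the HA storage directory
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final int writeBufferSize;        // job-event.store.write-buffer.size
    private final ScheduledExecutorService ioExecutor =
            Executors.newSingleThreadScheduledExecutor();

    FileSystemJobEventStoreSketch(OutputStream fileSystemOut, int writeBufferSize, long flushIntervalMillis) {
        this.fileSystemOut = fileSystemOut;
        this.writeBufferSize = writeBufferSize;
        // job-event.store.write-buffer.flush-interval: periodic flush on the IO thread.
        ioExecutor.scheduleWithFixedDelay(
                this::flush, flushIntervalMillis, flushIntervalMillis, TimeUnit.MILLISECONDS);
    }

    // Called from the JM main thread; the actual IO happens on the ioExecutor thread.
    void writeEvent(byte[] serializedEvent) {
        ioExecutor.execute(() -> {
            buffer.write(serializedEvent, 0, serializedEvent.length);
            if (buffer.size() >= writeBufferSize) {
                flush();
            }
        });
    }

    private void flush() {
        try {
            buffer.writeTo(fileSystemOut);
            fileSystemOut.flush();
            buffer.reset();
        } catch (IOException e) {
            throw new RuntimeException("Failed to flush job events", e);
        }
    }

    @Override
    public void close() {
        ioExecutor.execute(this::flush); // flush remaining events on the IO thread
        ioExecutor.shutdown();
    }
}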

...

  1. When it detects that the JM is lost, the TM will fail all tasks belonging to the target job and release the corresponding slots.
  2. If there are partitions belonging to the target job on the TM, the TM should retain the partitions, wait for HA to notify it of the new JM, and try to establish a connection to the JM. We need to register a timeout for this waiting and release the partitions after the timeout. We can reuse the existing configuration option “taskmanager.registration.timeout” here; its default value is 5 minutes.
  3. If there are no partitions belonging to the target job on the TM, keep the same logic as currently (a sketch of this TM-side handling follows this list).
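
A sketch of the TM-side handling described in this list; the helper names are made up and only illustrate the control flow under the stated assumptions.

import java.time.Duration;

final class JobMasterLostHandlerSketch {

    void onJobMasterLost(JobContext job, Duration registrationTimeout) {
        // 1. Fail all tasks of the target job and release their slots.
        job.failAllTasksAndReleaseSlots();

        if (job.hasRetainedPartitions()) {
            // 2. Keep the partitions and wait for HA to announce the new JM;
            //    release them if no connection is established within the timeout
            //    (taskmanager.registration.timeout).
            job.scheduleTimeout(registrationTimeout, job::releaseRetainedPartitions);
            job.waitForNewJobMasterAndReconnect();
        }
        // 3. Otherwise, behave exactly as today; nothing extra to retain.
    }

    // Minimal stand-in for the per-job bookkeeping on the TM; purely illustrative.
    interface JobContext {
        void failAllTasksAndReleaseSlots();
        boolean hasRetainedPartitions();
        void scheduleTimeout(Duration timeout, Runnable action);
        void releaseRetainedPartitions();
        void waitForNewJobMasterAndReconnect();
    }
}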

If other external shuffle services are used, the behavior should be the same as with TM shuffle: when the shuffle service detects the JM crash, it should retain the partitions and wait for the JM to reconnect.

Re-schedule after JM restart

After the JM restarts and becomes leader again, it will wait for a period of time for the TMs to re-establish connections with it. The length of the waiting time is controlled by "execution.batch.job-recovery.previous-worker.recovery.timeout"; only TMs that connect within this period will be accepted, and those that time out will be rejected. Once we have enough partitions (all partitions required to continue running are registered), we can end this wait early and continue to the next step.
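
A minimal sketch of this waiting step, assuming the "all required partitions registered" condition is exposed as a future; names are illustrative only.

import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

final class PreviousWorkerRecoverySketch {

    static void waitForPreviousWorkers(
            CompletableFuture<Void> allRequiredPartitionsRegistered, Duration recoveryTimeout) {
        try {
            // Finish early as soon as every partition required to resume is registered.
            allRequiredPartitionsRegistered.get(recoveryTimeout.toMillis(), TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // execution.batch.job-recovery.previous-worker.recovery.timeout reached:
            // later-arriving TMs are rejected and the missing partitions' producers re-run.
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}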

After re-establishing connections with the TMs, the JM will try to obtain all partitions existing in the cluster through the ShuffleMaster, and re-establish the partition information in the JobMasterPartitionTracker. To do that, we need to add a new method getAllPartitionWithMetrics to ShuffleMaster.

After re-establishing the JobMasterPartitionTracker, the JM begins to replay the JobEvents from the JobEventStore, recovers the execution graph state, and then starts rescheduling based on the execution graph state and the partitions currently existing in the cluster (a sketch of this pass follows the list):

  1. Initialize all ExecutionJobVertices whose parallelism has been decided. We can obtain the initialization information from the replayed events (ExecutionJobVertexInitializedEvent).
  2. According to the information in the JobMasterPartitionTracker, the execution vertices whose produced partitions are all tracked will be marked as finished.
  3. For execution vertices that are not marked as finished, as mentioned above, if the corresponding job vertex has operator coordinators, we need to call subtaskReset for them.
  4. Find all sink/leaf execution vertices in the ExecutionGraph. For each sink/leaf execution vertex in non-finished state, recursively find all its upstream vertices that need to be restarted (those in unfinished state), and then start scheduling based on this set.
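
The re-scheduling pass above could be outlined roughly as follows; the class and method names are illustrative only, not an actual scheduler API.

final class JobRecoverySketch {

    void recoverAndReschedule(RecoveryContext ctx) {
        // 1. Initialize the ExecutionJobVertices whose parallelism was already decided,
        //    based on the replayed ExecutionJobVertexInitializedEvents.
        ctx.initializeDecidedJobVertices();

        // 2. Mark execution vertices whose produced partitions are all still tracked
        //    by the JobMasterPartitionTracker as finished.
        ctx.markVerticesWithTrackedPartitionsFinished();

        // 3. For non-finished vertices whose job vertex has operator coordinators,
        //    call subtaskReset so they can be re-run.
        ctx.resetCoordinatorSubtasksOfNonFinishedVertices();

        // 4. Starting from each non-finished sink/leaf vertex, collect its unfinished
        //    upstream vertices transitively and schedule that set.
        ctx.scheduleUnfinishedVerticesFromSinks();
    }

    // Stand-in for the pieces of scheduler state involved; purely illustrative.
    interface RecoveryContext {
        void initializeDecidedJobVertices();
        void markVerticesWithTrackedPartitionsFinished();
        void resetCoordinatorSubtasksOfNonFinishedVertices();
        void scheduleUnfinishedVerticesFromSinks();
    }
}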

interface ShuffleMaster<T extends ShuffleDescriptor> extends AutoCloseable {

    //… other methods

    /**
     * Get all partitions and their metrics. The metrics mainly include the meta
     * information of the partition (partition bytes, etc).
     *
     * @param jobId ID of the target job
     * @return all partitions belonging to the target job, together with their metrics
     */
    Collection<PartitionWithMetrics> getAllPartitionWithMetrics(JobID jobId);

    interface PartitionWithMetrics {

        ShuffleMetrics getPartitionMetrics();

        ShuffleDescriptor getPartition();
    }

    interface ShuffleMetrics {

        ResultPartitionBytes getPartitionBytes();
    }
}
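
For illustration, the JM-side usage of the new method might look like the following fragment; the variable and method names are made up and the snippet assumes the interfaces above are in scope.

// Illustrative only: rebuild partition bookkeeping after the TMs have reconnected.
static void rebuildPartitionTracking(ShuffleMaster<?> shuffleMaster, JobID jobId) {
    Collection<PartitionWithMetrics> partitions =
            shuffleMaster.getAllPartitionWithMetrics(jobId);
    for (PartitionWithMetrics partitionWithMetrics : partitions) {
        ShuffleDescriptor descriptor = partitionWithMetrics.getPartition();
        // Re-register the descriptor with the JobMasterPartitionTracker; the metrics
        // (e.g. partition bytes) can later feed the adaptive batch scheduler's
        // parallelism decisions for downstream vertices.
    }
}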

...

Only support new source

Currently, the legacy sources (SourceFunction, InputFormat) have already been deprecated, so we intend to only support the new source.

Only work with adaptive batch scheduler

In FLIP-283, the adaptive batch scheduler has become the default scheduler of Flink batch jobs, so we intend to only support working with the adaptive batch scheduler.

When using ApplicationMode, only support single-execute job

As mentioned in the Flink docs, HA in Application Mode is only supported for single-execute applications. Job Recovery relies on HA, so when using ApplicationMode, Job Recovery can only support single-execute applications.

...