...

The Distributed Process is used to complete steps [1, 3]. To achieve step [2], a new SnapshotFutureTask must be developed.

...

Local snapshot

The local snapshot task is an operation that executes on each local node independently. It copies all the persistent user files from the Ignite work directory to the target snapshot directory, with additional machinery to achieve consistency. This task is closely connected with the node checkpointing process because, for instance, cache partition files are only eventually consistent on disk while a checkpoint is in progress and become fully consistent only when the checkpoint ends.

The local snapshot operation on each cluster node is represented by SnapshotFutureTask.

Data to copy to snapshot

The following must be copied to snapshot:

  • cache partition files
  • cache configuration
  • binary meta information
  • marshaller meta information

Base copy strategy

Binary meta, marshaller meta, and cache configurations are still stored on-heap, so it is easy to collect them and keep this persistent user information consistent under the checkpoint write lock (no changes are allowed while it is held).
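A minimal sketch of this idea, using a plain read–write lock in place of Ignite's actual checkpoint lock and a plain map in place of its metadata storage (all names here are illustrative assumptions, not Ignite's internals):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Illustrative sketch: copying on-heap metadata under the checkpoint write lock.
 * The lock and the map are stand-ins, not Ignite's actual internals.
 */
class MetaCollector {
    final ReentrantReadWriteLock checkpointLock = new ReentrantReadWriteLock();
    final Map<Integer, Object> binaryMeta = new HashMap<>();

    /** While the write lock is held no metadata changes are allowed, so a plain copy is consistent. */
    Map<Integer, Object> snapshotBinaryMeta() {
        checkpointLock.writeLock().lock();
        try {
            return new HashMap<>(binaryMeta);
        }
        finally {
            checkpointLock.writeLock().unlock();
        }
    }
}
```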

Another strategy must be used for cache partition files. The checkpoint process writes dirty pages from PageMemory to the cache partition files at the same time as another process copies them to the target directory. Each cache partition file is consistent only at checkpoint end. So, to avoid blocking transactions for a long time during the cluster-wide snapshot, the process must not wait for the checkpoint to end on each node. Instead, the process of copying cache partition files must do the following:

  1. Start copying each partition file to the target directory as is. These copies will contain dirty data due to concurrent checkpoint thread writes.
  2. Collect all dirty pages related to the ongoing checkpoint and the corresponding partition files, and apply them to the copied file right after the copy ends. These dirty pages are written to special new .delta files, one per partition (e.g. part-1.bin together with part-1.bin.delta gives a fully consistent cache partition).
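The merge in step 2 can be sketched as follows, assuming a hypothetical delta record layout of an 8-byte page offset followed by one page of data. This layout, and every name below, is an assumption for illustration, not Ignite's actual delta format:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/**
 * Illustrative sketch of merging a copied partition file with its .delta file.
 * Assumed record layout: 8-byte page offset, then one page of data.
 */
class DeltaMerge {
    static void applyDelta(Path partitionCopy, Path deltaFile, int pageSize) throws IOException {
        try (FileChannel part = FileChannel.open(partitionCopy, StandardOpenOption.WRITE);
             FileChannel delta = FileChannel.open(deltaFile, StandardOpenOption.READ)) {
            ByteBuffer hdr = ByteBuffer.allocate(8);
            ByteBuffer page = ByteBuffer.allocate(pageSize);

            while (delta.read(hdr) == 8) {
                hdr.flip();
                long pageOff = hdr.getLong();

                page.clear();
                while (page.hasRemaining() && delta.read(page) > 0)
                    ; // read one full page from the delta file

                page.flip();
                part.write(page, pageOff); // overwrite the stale page in the copy
                hdr.clear();
            }
        }
    }
}
```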

There are two possible cases while cache partition files are copied concurrently with the checkpoint thread:

  1. The cache partition file is already copied, but the checkpoint has not ended yet – wait until the checkpoint ends, then start merging the cache partition file with its delta.
  2. The current checkpoint has ended, but the cache partition file is still being copied – the next checkpoint process must read the old version of each page and copy it to the delta file before writing its dirty page.

Local snapshot task process

  1. A new checkpoint starts (forced by the node, or a regular one).
  2. Under the checkpoint write lock – fix the cache partition length for each partition (copy from 0 to length).
  3. The task creates new on-heap collections with the marshaller meta and binary meta to copy.
  4. The task starts copying partition files.
  5. The checkpoint thread:
    1. If the checkpoint associated with the task has not finished yet – writes a dirty page to both the original partition file and the delta file.
    2. If the checkpoint associated with the task has finished and the partition file is still being copied – reads the original page from the original partition file and copies it to the delta file prior to the dirty page write.
  6. When a partition file is copied – start merging the copied partition with its delta file.
  7. The task ends when all data has been successfully copied to the target directory and all cache partition files have been merged with their deltas.
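The decision the checkpoint thread makes in step 5 can be sketched as a small decision function. The names below are illustrative assumptions, not Ignite's actual API:

```java
/**
 * Illustrative sketch of the checkpoint thread's page-write decision while a
 * snapshot task is running (steps 5.1 and 5.2 above).
 */
class SnapshotPageWriter {
    enum Action {
        WRITE_TO_PARTITION_AND_DELTA,      // step 5.1
        COPY_OLD_PAGE_TO_DELTA_THEN_WRITE, // step 5.2
        WRITE_TO_PARTITION_ONLY            // snapshot no longer needs this page
    }

    /**
     * @param checkpointFinished the checkpoint associated with the task has ended.
     * @param partitionCopying   the partition file is still being copied by the task.
     */
    static Action onDirtyPage(boolean checkpointFinished, boolean partitionCopying) {
        if (!checkpointFinished)
            return Action.WRITE_TO_PARTITION_AND_DELTA;

        if (partitionCopying)
            return Action.COPY_OLD_PAGE_TO_DELTA_THEN_WRITE;

        return Action.WRITE_TO_PARTITION_ONLY;
    }
}
```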

Cluster-wide snapshot

The cluster-wide snapshot is an operation that executes the local snapshot task on each node. To achieve cluster-wide snapshot consistency, the Partition-Map-Exchange (PME) is reused to briefly block all user transactions.

During the short period while user transactions are blocked, the local snapshot task is started by forcing the checkpoint process. This gives the following guarantees: all current transactions are finished and all new transactions are blocked, and all data from PageMemory is flushed to disk at checkpoint end. This is the short time window in which all cluster data becomes fully consistent on disk.

An overview of the cluster-wide snapshot task steps in terms of the distributed process:

  1. The snapshot distributed process starts a new snapshot operation by sending an initial discovery message.
  2. The distributed process's configured action initiates a new local snapshot task on each cluster node.
  3. The discovery event from the distributed process pushes a new exchange task to the exchange worker to start PME.
  4. While transactions are blocked (see onDoneBeforeTopologyUnlock), each local node forces the checkpoint thread and waits until the corresponding local snapshot task starts.
  5. The distributed process collects completion results from each cluster node:
    1. If there are no execution errors and no baseline node has left the cluster – the snapshot is created successfully.
    2. If there are errors, or some cluster node failed before completing its local results – all local snapshots are reverted on each node and the snapshot fails.
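The completion rule in step 5 can be sketched as follows. This is a hedged sketch with assumed names and a simplified result model, not Ignite's actual API:

```java
import java.util.List;
import java.util.Map;

/**
 * Illustrative sketch of the completion rule: the snapshot succeeds only if
 * every baseline node reported back, and every reported result is a success.
 */
class SnapshotResultCollector {
    /**
     * @param results  nodeId -> local task succeeded, one entry per node that reported back.
     * @param baseline ids of all baseline nodes at snapshot start.
     */
    static boolean snapshotSucceeded(Map<String, Boolean> results, List<String> baseline) {
        for (String nodeId : baseline) {
            Boolean ok = results.get(nodeId);

            if (ok == null)
                return false; // a baseline node left the cluster before reporting its result

            if (!ok)
                return false; // the local snapshot task failed on this node
        }

        return true;
    }
}
```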

Remote snapshot

The internal components must be able to request a consistent (not cluster-wide, but local) snapshot from remote nodes. This process requests the execution of a local snapshot task on the remote node and collects the execution results. The File Transmission protocol is used to transfer files between cluster nodes. The local snapshot task can be reused independently to perform data copying and sending from the remote node to the local node.

The high-level overview looks like:

...