...

  1. Initial state:
    1. No concurrent ConsistentCut process is running.
    2. lastFinishedCutId holds the previous ConsistentCutId, or null.
  2. The user runs a command to create a new incremental snapshot:
    1. The Ignite node initiates a DistributedProcess with a SnapshotOperationRequest message that holds the new ConsistentCutId (the goal is to notify every node in the cluster about the running incremental snapshot).
    2. DistributedProcess fixes the topology version topVer on which ConsistentCut started.
  3. The local process of creating an incremental snapshot can be started by either of two events (whichever happens first):
    1. Receiving SnapshotOperationRequest#ConsistentCutId via DiscoverySPI (by the DistributedProcess).
    2. Receiving ConsistentCutAwareMessage#ConsistentCutId via CommunicationSPI (by transaction messages - Prepare, Finish).
  4. On receiving the ConsistentCutId, every node: 
    1. Checks whether ConsistentCut has already started (ConsistentCut is running) or finished (lastFinishedCutId == id) for this id; skips if it has.
    2. If ConsistentCut is initialized by CommunicationSPI, compares the ConsistentCutAwareMessage#topVer with the local node order:
      1. The local node order equals the topology version at the moment the node joined the cluster.
      2. If the order is higher than the ConsistentCut topVer, the node joined after ConsistentCut started; skips the local start of ConsistentCut on this node.
    3. In the message thread, atomically initializes ConsistentCut (a minimal sketch of this state is shown after this list):
      1. creates a new ConsistentCut future.
      2. creates committingTxs (this collection is filled with transactions in the COMMITTING state; its goal is to not miss transactions. Unlike IgniteTxManager#activeTx, this collection never removes transactions).
      3. starts wrapping outgoing messages into ConsistentCutAwareMessage (which contains the ConsistentCutId).
    4. In the background thread:
      1. Writes a ConsistentCutStartRecord to WAL with the received ConsistentCutId.
      2. Creates a weakly-consistent copy of IgniteTxManager#activeTx and sets listeners on those transactions' tx#finishFuture.
        1. As an optimization, transactions with tx#status == ACTIVE can safely be excluded; such transactions are guaranteed to belong to the After side.
      3. Creates a copy of committingTxs (it contains transactions that might already have been removed from IgniteTxManager#activeTx) and sets listeners on those transactions' tx#finishFuture.
      4. Sets committingTxs to null.
  5. While the DistributedProcess is running, every node wraps outgoing transaction messages (Prepare, Finish) into ConsistentCutAwareMessage (if the transaction has not committed yet on the sender node) or ConsistentCutAwareTxFinishMessage (if the transaction has committed on the sender node); see the message-wrapper sketch after this list. The messages contain:
    1. ConsistentCutId (to trigger ConsistentCut on the remote node, if not triggered yet).
    2. ConsistentCutAwareTxFinishMessage additionally contains txCutId. It is set on the node that commits first (if it is not null, the transaction started committing After ConsistentCut started, otherwise Before):
      1. For 2PC it is the originating node.
      2. For 1PC it is a backup node.
  6. Every node fills committingTxs while ConsistentCut is running and committingTxs is not null:
    1. Every transaction is added into committingTxs right before it is removed from IgniteTxManager#activeTx.
  7. For every received ConsistentCutAwareTxFinishMessage, Ignite marks the related transaction with message#txCutId.
  8. For every listened transaction, a callback is invoked when the transaction finishes (see the classification sketch after this list):
    1. If the transaction state is UNKNOWN or its status is RECOVERY_FINISH, completes ConsistentCut with an exception.
    2. If the transaction is mapped to a higher topology version than the ConsistentCut topVer, puts it into after (topology changed after ConsistentCut started).
    3. If tx#txCutId equals the local ConsistentCutId, puts the transaction into after, otherwise into before.
  9. After every listened transaction has finished:
    1. Writes a ConsistentCutFinishRecord into WAL with the collections (before, after).
    2. Completes the ConsistentCut future.
    3. Note that the node continues to wrap messages even after the local ConsistentCut has finished.
  10. After ConsistentCut finishes, DistributedProcess automatically notifies the initiator node that the local procedure has finished.
  11. After all nodes have finished ConsistentCut, on every node:
    1. Updates lastFinishedCutId with the current id.
    2. The ConsistentCut future becomes null.
    3. Stops wrapping outgoing transaction messages.
  12. The initiator node checks that every node completed correctly.
    1. If any node completed exceptionally, completes the incremental snapshot with an exception.
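
The following is a minimal, hypothetical sketch (plain JDK types only, not the actual Ignite internals) of the per-node state created in step 4.3: the ConsistentCut future, the committingTxs collection, and the "skip if already started or finished" check from step 4.1. Class and field names such as ConsistentCutState and ConsistentCutManager are illustrative assumptions.

import java.util.Set;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical per-cut state created atomically in the message thread (step 4.3). */
class ConsistentCutState {
    /** Id of the cut this state belongs to. */
    final UUID cutId;

    /** Completed once all listened transactions are classified (step 9.2). */
    final CompletableFuture<Void> fut = new CompletableFuture<>();

    /** Transactions observed in COMMITTING state; unlike IgniteTxManager#activeTx it never shrinks. */
    volatile Set<UUID> committingTxs = ConcurrentHashMap.newKeySet();

    ConsistentCutState(UUID cutId) {
        this.cutId = cutId;
    }
}

/** Hypothetical manager holding the running cut and the last finished id (steps 1.2, 4.1, 11.1). */
class ConsistentCutManager {
    private volatile ConsistentCutState cur;     // running ConsistentCut, or null
    private volatile UUID lastFinishedCutId;     // updated in step 11.1

    /** Step 4: invoked from DiscoverySPI or CommunicationSPI with the received id. */
    synchronized void onCutIdReceived(UUID id) {
        // 4.1: skip if this cut is already running or has already finished.
        if ((cur != null && cur.cutId.equals(id)) || id.equals(lastFinishedCutId))
            return;

        // 4.3: atomically create the future and committingTxs; from this point the
        // node also starts wrapping outgoing transaction messages (see next sketch).
        cur = new ConsistentCutState(id);
    }
}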
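
Step 5's wrapping can be pictured with the next hypothetical sketch. It only mirrors the role of ConsistentCutAwareMessage and ConsistentCutAwareTxFinishMessage described above; the real Ignite classes and their fields differ.

import java.io.Serializable;
import java.util.UUID;

/** Hypothetical wrapper piggybacking the cut id on ordinary transaction traffic (step 5). */
class CutAwareMessage implements Serializable {
    final Serializable payload;   // the original Prepare/Finish message
    final UUID cutId;             // triggers ConsistentCut on the receiver (step 3.2)
    final long topVer;            // topology version the cut was started on (step 2.2)

    CutAwareMessage(Serializable payload, UUID cutId, long topVer) {
        this.payload = payload;
        this.cutId = cutId;
        this.topVer = topVer;
    }
}

/** Finish-message variant carrying txCutId, set by the node that commits first (step 5.2). */
class CutAwareTxFinishMessage extends CutAwareMessage {
    final UUID txCutId;           // null => the transaction started committing Before the cut

    CutAwareTxFinishMessage(Serializable payload, UUID cutId, long topVer, UUID txCutId) {
        super(payload, cutId, topVer);
        this.txCutId = txCutId;
    }
}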
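
Finally, the classification callback from step 8 reduces to three checks. The sketch below is again an assumption-laden illustration (TxInfo and its accessor names are made up); it only encodes the decision order 8.1 -> 8.2 -> 8.3.

import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical classifier invoked when a listened transaction finishes (step 8). */
class TxClassifier {
    /** Minimal view of a finished transaction; field names are illustrative. */
    record TxInfo(UUID xid, long mappedTopVer, UUID txCutId, boolean unknownOrRecovered) {}

    final Set<UUID> before = ConcurrentHashMap.newKeySet();
    final Set<UUID> after = ConcurrentHashMap.newKeySet();

    /** @return false if ConsistentCut must be completed exceptionally (step 8.1). */
    boolean onTxFinished(TxInfo tx, UUID localCutId, long cutTopVer) {
        if (tx.unknownOrRecovered())
            return false;                  // 8.1: UNKNOWN state / RECOVERY_FINISH -> fail the cut

        if (tx.mappedTopVer() > cutTopVer)
            after.add(tx.xid());           // 8.2: mapped to a newer topology than the cut's topVer
        else if (localCutId.equals(tx.txCutId()))
            after.add(tx.xid());           // 8.3: committed After the cut on the commit node
        else
            before.add(tx.xid());          // 8.3: otherwise it belongs Before

        return true;
    }
}

Once both collections are complete, the node writes ConsistentCutFinishRecord(before, after) to WAL and completes the future (step 9).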

...