...

  1. Initial state:
    1. No concurrent ConsistentCut process is running.
    2. lastFinishedCutId holds the previous ConsistentCutId, or null.
  2. User starts a command to create a new incremental snapshot:
    1. The Ignite node initiates a DistributedProcess with the discovery message SnapshotOperationRequest that holds the new ConsistentCutId (the goal is to notify every node in the cluster about the running incremental snapshot).
    2. DistributedProcess stores the topology version topVer on which ConsistentCut started.
  3. The local ConsistentCut process can be started by either of two events (whichever happens first):
    1. Receiving the SnapshotOperationRequest#ConsistentCutId via DiscoverySPI (by the DistributedProcess).
    2. Receiving the ConsistentCutAwareMessage#ConsistentCutId via CommunicationSPI (carried by transaction messages - Prepare, Finish).
  4. On receiving the ConsistentCutId, the node starts a local ConsistentCut:
    1. There are two roles a node might play:
      1. ROLE#1 - wraps outgoing messages - for all Ignite nodes: client, baseline and non-baseline server nodes.
      2. ROLE#2 - prepares data to be written to WAL - for baseline nodes only.
    2. Before starting, check (a minimal sketch of these checks is given after this list):
      1. Whether ConsistentCut has already started (ConsistentCut != null) or finished (lastFinishedCutId == id) for this id; skip if it has.
      2. On non-baseline nodes, if ConsistentCut is initiated by CommunicationSPI, compare the ConsistentCutAwareMessage#topVer with the local node order:
        1. The local node order equals the new topVer at the moment the node joined the cluster.
        2. If the order is higher than the ConsistentCut topVer, the node joined after ConsistentCut started. Skip starting ConsistentCut on this node.
    3. ROLE#1 (see the message-wrapping sketch after this list):
      1. Creates a new ConsistentCut future.
        1. If the node is non-baseline (client, non-baseline server), completes it right after creation and notifies the initiator node that the local procedure has finished (via the DistributedProcess protocol).
      2. While ConsistentCut != null, wraps outgoing messages into ConsistentCutAwareMessage. It contains:
        1. ConsistentCutId (to start ConsistentCut on the remote node, if it hasn't started yet).
        2. An additional field txCutId. It is originally set on the node that commits first:
          1. For 2PC it is the originating node.
          2. For 1PC it is the backup node.
        3. If txCutId is null, the transaction started committing before ConsistentCut started; otherwise after.
      3. On receiving a ConsistentCutAwareMessage that commits the transaction (FinishRequest for 2PC, PrepareResponse for 1PC), sets tx#txCutId = message#txCutId.
    4. ROLE#2 - for baseline nodes only (see the ROLE#2 sketches after this list):
      1. In the message thread, atomically initializes ConsistentCut:
        1. Creates a new ConsistentCut future.
        2. Creates an empty collection removedActiveTxs (unlike IgniteTxManager#activeTx, this collection never removes transactions).
      2. In the background thread:
        1. Writes a ConsistentCutStartRecord to WAL with the received ConsistentCutId.
        2. Creates a (weakly consistent) copy of IgniteTxManager#activeTx and sets listeners on those tx#finishFuture.
          1. As an optimization, transactions with tx#status == ACTIVE can safely be excluded: such transactions are guaranteed to belong to the After side.
        3. Creates a copy of removedActiveTxs (it contains transactions that might already have been cleaned from IgniteTxManager#activeTx) and sets listeners on those tx#finishFuture.
        4. Sets removedActiveTxs to null. Transactions concurrently added to removedActiveTxs after this point don't matter: they simply don't land in the before or after set and are excluded from recovery.
      3. In transaction threads, fills removedActiveTxs while ConsistentCut != null and removedActiveTxs != null:
        1. Every transaction is added to removedActiveTxs right before it is removed from IgniteTxManager#activeTx.
      4. For every listened transaction, a callback is invoked when the transaction finishes (see the classification sketch after this list):
        1. If the transaction state is UNKNOWN or its status is RECOVERY_FINISH, completes ConsistentCut with an exception.
        2. If the transaction is mapped to a higher topology version than the ConsistentCut topVer, puts it into after.
        3. If tx#txCutId equals the local ConsistentCutId, puts the transaction into after, otherwise into before.
      5. After every listened transaction has finished:
        1. Writes a ConsistentCutFinishRecord to WAL with the collections (before, after).
        2. Completes the ConsistentCut future.
      6. Notifies the initiator node that the local procedure has finished (via the DistributedProcess protocol).
  5. After all nodes have finished ConsistentCut, on every node:
    1. Updates lastFinishedCutId with the current id.
    2. The ConsistentCut future becomes null.
    3. Stops signing outgoing transaction messages.
  6. The initiator node checks that every node completed successfully.
    1. If any node completed exceptionally, completes the Incremental Snapshot with an exception.
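
A minimal Java sketch of the "before start" checks from step 4.2, assuming hypothetical names (ConsistentCutState, shouldStartLocalCut and its fields are illustration only, not actual Ignite classes):

import java.util.UUID;
import java.util.concurrent.CompletableFuture;

/** Sketch of the checks performed before starting a local ConsistentCut (step 4.2). */
class ConsistentCutState {
    /** Non-null while a local ConsistentCut is running. */
    volatile CompletableFuture<Void> consistentCut;

    /** Id of the last finished ConsistentCut, or null. */
    volatile UUID lastFinishedCutId;

    /** Local node order: equals the topVer at the moment this node joined the cluster. */
    final long localNodeOrder;

    /** Whether this node is part of the baseline topology. */
    final boolean baseline;

    ConsistentCutState(long localNodeOrder, boolean baseline) {
        this.localNodeOrder = localNodeOrder;
        this.baseline = baseline;
    }

    /**
     * @param cutId ConsistentCutId received via DiscoverySPI or CommunicationSPI.
     * @param cutTopVer Topology version on which the cut started (taken from the message).
     * @param fromCommunicationSpi Whether the id arrived inside a ConsistentCutAwareMessage.
     * @return Whether a local ConsistentCut should be started for this id.
     */
    boolean shouldStartLocalCut(UUID cutId, long cutTopVer, boolean fromCommunicationSpi) {
        // Already started or already finished for this id -> skip.
        if (consistentCut != null || cutId.equals(lastFinishedCutId))
            return false;

        // Non-baseline node that joined after the cut started -> skip.
        if (!baseline && fromCommunicationSpi && localNodeOrder > cutTopVer)
            return false;

        return true;
    }
}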
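
A minimal sketch of ROLE#1 (step 4.3): wrapping outgoing transaction messages while a cut is running and applying txCutId on the receiving side. All types here (Role1Sketch, TxMessage, Tx) are hypothetical stand-ins for the real Ignite message and transaction classes:

import java.util.UUID;

class Role1Sketch {
    /** Hypothetical stand-in for an outgoing transaction message. */
    static class TxMessage { }

    /** Hypothetical stand-in for ConsistentCutAwareMessage. */
    static class ConsistentCutAwareMessage extends TxMessage {
        final TxMessage delegate;
        final UUID cutId;   // Id of the running ConsistentCut (starts it on the remote node if needed).
        final UUID txCutId; // Set by the node that commits first (originating node for 2PC, backup for 1PC); null = committing started before the cut.
        final long topVer;  // Topology version on which the cut started.

        ConsistentCutAwareMessage(TxMessage delegate, UUID cutId, UUID txCutId, long topVer) {
            this.delegate = delegate;
            this.cutId = cutId;
            this.txCutId = txCutId;
            this.topVer = topVer;
        }
    }

    /** Hypothetical stand-in for a local transaction. */
    static class Tx {
        volatile UUID txCutId;
    }

    volatile UUID runningCutId;     // Non-null while ConsistentCut != null.
    volatile long runningCutTopVer;

    /** Wraps an outgoing message while a cut is running, otherwise sends it as-is. */
    TxMessage wrapOutgoing(TxMessage msg, UUID txCutId) {
        UUID cutId = runningCutId;

        return cutId == null ? msg : new ConsistentCutAwareMessage(msg, cutId, txCutId, runningCutTopVer);
    }

    /** On a message that commits the transaction (FinishRequest for 2PC, PrepareResponse for 1PC). */
    void onCommittingMessage(Tx tx, ConsistentCutAwareMessage msg) {
        tx.txCutId = msg.txCutId;
    }
}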
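
A minimal sketch of the ROLE#2 initialization and background work (steps 4.4.1 - 4.4.3): the atomic init in the message thread, the transaction-thread hook, and the background thread that writes ConsistentCutStartRecord and collects the transactions to listen to. Names such as Role2InitSketch, activeTxs and walWriteStartRecord are hypothetical; only the ordering of the steps follows the description above:

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.ConcurrentLinkedQueue;

class Role2InitSketch {
    enum TxState { ACTIVE, PREPARING, PREPARED, COMMITTING, COMMITTED, UNKNOWN }

    /** Hypothetical stand-in for a transaction. */
    static class Tx {
        volatile TxState state = TxState.ACTIVE;
    }

    /** Analogue of IgniteTxManager#activeTx: transactions are removed from it when they finish. */
    final Collection<Tx> activeTxs = new ConcurrentLinkedQueue<>();

    /** Transactions removed from activeTxs while the cut is being initialized; never shrinks. */
    volatile Collection<Tx> removedActiveTxs;

    /** Step 4.4.1: in the message thread, atomically initialize the cut. */
    synchronized void initCut() {
        removedActiveTxs = new ConcurrentLinkedQueue<>();
        // ... the ConsistentCut future is created here as well.
    }

    /** Step 4.4.3: transaction threads record a tx right before removing it from activeTxs. */
    void beforeRemoveFromActive(Tx tx) {
        Collection<Tx> removed = removedActiveTxs;

        if (removed != null)
            removed.add(tx);

        activeTxs.remove(tx);
    }

    /** Step 4.4.2: background thread collects the transactions whose finishFuture will be listened to. */
    List<Tx> collectTxsToListen(UUID cutId) {
        walWriteStartRecord(cutId); // ConsistentCutStartRecord.

        List<Tx> toListen = new ArrayList<>();

        // Weakly-consistent copy of activeTxs; ACTIVE transactions are guaranteed to land on the After side.
        for (Tx tx : activeTxs) {
            if (tx.state != TxState.ACTIVE)
                toListen.add(tx);
        }

        // Copy of transactions that may already have left activeTxs.
        toListen.addAll(removedActiveTxs);

        // Stop collecting: transactions removed later are irrelevant for this cut.
        removedActiveTxs = null;

        return toListen;
    }

    /** Placeholder for writing ConsistentCutStartRecord to WAL. */
    void walWriteStartRecord(UUID cutId) {
    }
}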
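
A minimal sketch of the transaction-finish callback (steps 4.4.4 - 4.4.5) that classifies every listened transaction into the before or after set and writes ConsistentCutFinishRecord once all of them have finished. Again, all names are hypothetical stand-ins rather than real Ignite classes:

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;

class Role2ClassifySketch {
    /** Hypothetical snapshot of the per-transaction data needed for classification. */
    static class Tx {
        UUID id;
        UUID txCutId;               // Taken from ConsistentCutAwareMessage on commit; null = committing started before the cut.
        long mappedTopVer;          // Topology version the transaction is mapped to.
        boolean unknownOrRecovered; // State is UNKNOWN or the tx was finished by recovery (RECOVERY_FINISH).
    }

    final UUID cutId;
    final long cutTopVer;

    final Set<UUID> before = new HashSet<>();
    final Set<UUID> after = new HashSet<>();

    final CompletableFuture<Void> cutFuture = new CompletableFuture<>();

    /** Number of listened transactions that have not finished yet. */
    int remaining;

    Role2ClassifySketch(UUID cutId, long cutTopVer, int listenedTxs) {
        this.cutId = cutId;
        this.cutTopVer = cutTopVer;
        this.remaining = listenedTxs;
    }

    /** Callback invoked when a listened transaction finishes. */
    synchronized void onTxFinished(Tx tx) {
        if (tx.unknownOrRecovered) {
            cutFuture.completeExceptionally(new IllegalStateException("Transaction recovered or in UNKNOWN state: " + tx.id));

            return;
        }

        if (tx.mappedTopVer > cutTopVer)
            after.add(tx.id);             // Mapped to a topology newer than the cut.
        else if (cutId.equals(tx.txCutId))
            after.add(tx.id);             // Started committing after the cut.
        else
            before.add(tx.id);            // Started committing before the cut.

        if (--remaining == 0) {
            walWriteFinishRecord(before, after); // ConsistentCutFinishRecord(before, after).

            cutFuture.complete(null);
        }
    }

    /** Placeholder for writing ConsistentCutFinishRecord to WAL. */
    void walWriteFinishRecord(Set<UUID> before, Set<UUID> after) {
    }
}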

...