...

Paper [1] defines a distributed snapshot algorithm. It relies on a few definitions; let's describe them in terms of Ignite:

  1. Message - a transaction message (...FinishRequest for 2PC, ...PrepareResponse for 1PC).
    1. It is guaranteed to be sent after all transaction DataRecords are written into the WAL on the sending node.
  2. Channel - TCP communication connection from one node to another, over which Messages are sent.
  3. ChannelState - for a single channel, the set of Messages that were sent but not yet committed on the receiver.
    1. In Ignite we can think of ChannelState as represented by active transactions in PREPARING, and later, states.
  4. IncrementalSnapshot - on an Ignite node it is represented by 2 WAL records (ConsistentCutStartRecord commits the WAL state, ConsistentCutFinishRecord describes the ChannelState). It guarantees that every node in the cluster includes in the snapshot:
    1. transactions committed before ConsistentCutStartRecord that weren't included into ConsistentCutFinishRecord#after();
    2. transactions committed between ConsistentCutStartRecord and ConsistentCutFinishRecord that were included into ConsistentCutFinishRecord#before().
  5. Marker - a mark that piggybacks on the Message and notifies a node about a running snapshot.
    1. After IS start and before finish, all PrepareRequest and FinishRequest messages are wrapped by ConsistentCutMarkerMessage instead of a regular Message. This is done to notify the target node via the communication channel about the running IS.
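The definitions above can be sketched as minimal Java types. This is a hypothetical illustration only; the class and field names below do not match Ignite internals.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

// Hypothetical sketch of the definitions above (Message, Marker, ChannelState).
public class CutDefinitions {
    /** Marker that piggybacks on a transaction message to announce a running snapshot. */
    record Marker(UUID snapshotId) {}

    /** A transaction message; carries a Marker while a snapshot is running, null otherwise. */
    record Message(long txVersion, Marker marker) {}

    /** ChannelState: messages sent over one channel but not yet committed on the receiver. */
    static class ChannelState {
        final Set<Long> inFlight = new HashSet<>();

        void onSend(Message m)   { inFlight.add(m.txVersion()); }
        void onCommit(Message m) { inFlight.remove(m.txVersion()); }
    }

    public static void main(String[] args) {
        ChannelState ch = new ChannelState();
        Message msg = new Message(1L, new Marker(UUID.randomUUID()));
        ch.onSend(msg);
        System.out.println(ch.inFlight.contains(1L)); // sent, not yet committed -> part of ChannelState
        ch.onCommit(msg);
        System.out.println(ch.inFlight.isEmpty());    // committed -> no longer in ChannelState
    }
}
```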

In terms of Ignite, there are additional definitions:

  1. Consistent Cut - a successful attempt at creating an IncrementalSnapshot.
  2. Inconsistent Cut - a failed attempt at creating an IncrementalSnapshot, due to inability to correctly describe a ChannelState.

Note, ConsistentCut can't guarantee that a specific transaction running concurrently with the algorithm will land before or after the cut; it only guarantees that the set of transactions before (or after) the cut will be the same on each node in the cluster.

...

  1. Initial state:
    1. Ignite WALs are in a consistent state relative to the previous full or incremental snapshot.
    2. Every Ignite node has a local ConsistentCut future equal to null.
    3. An empty collection committingTxs (Set<GridCacheVersion>) whose goal is to track COMMITTING+ transactions that aren't part of IgniteTxManager#activeTx.
  2. An Ignite node initiates an IncrementalSnapshot by starting DistributedProcess with a special message holding a new ConsistentCutMarker.
  3. Every node starts a local snapshot process after receiving the marker message (whether by discovery, or by communication with a transaction message):
    1. Atomically: creates a new ConsistentCut future, creates committingTxs, starts signing outgoing messages with the ConsistentCutMarker.
    2. Writes a ConsistentCutStartRecord to WAL with the received ConsistentCutMarker.
    3. Collects active transactions - the concatenation of IgniteTxManager#activeTx and committingTxs.
    4. Prepares 2 empty collections - before and after the cut (describing the ChannelState).
  4. While the global ConsistentCut is running, every node signs outgoing transaction messages:
    1. Prepare messages are signed with the ConsistentCutMarker (to trigger ConsistentCut on the remote node, if not yet started).
    2. Finish messages are signed with the ConsistentCutMarker (to trigger it as well) and the transaction ConsistentCutMarker (to notify nodes which side of the cut this transaction belongs to).
    3. Finish messages are signed on the node that commits first (near node for 2PC, backup or primary for 1PC).
  5. For every collected active transaction, the node waits for the Finish message to extract the ConsistentCutMarker, and fills the before and after collections:
    1. if the received marker is null or differs from the local one, the transaction is on the before side;
    2. if the received marker equals the local one, the transaction is on the after side.
  6. After all transactions finished:
    1. Writes a ConsistentCutFinishRecord into WAL with the ChannelState (before, after).
    2. Stops filling committingTxs.
    3. Completes the ConsistentCut future, and notifies the node-initiator about finishing the local procedure (with the DistributedProcess protocol).
  7. After all nodes finished ConsistentCut, every node stops signing outgoing transaction messages - the ConsistentCut future becomes null.
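The per-node steps 3-6 above can be sketched as follows. This is a simplified, hypothetical model (UUID stands in for ConsistentCutMarker, long for transaction versions); the real Ignite code paths differ.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

// Hypothetical sketch of a node's local Consistent Cut procedure (steps 3-6).
public class LocalCutSketch {
    final UUID localMarker;
    final Set<Long> before = new HashSet<>();
    final Set<Long> after = new HashSet<>();

    LocalCutSketch(UUID marker) {
        this.localMarker = marker;
        // Step 3.2: a ConsistentCutStartRecord with the marker would be logged to WAL here.
    }

    /** Step 5: classify an active transaction by the marker extracted from its Finish message. */
    void onTxFinish(long txVersion, UUID receivedMarker) {
        if (receivedMarker == null || !receivedMarker.equals(localMarker))
            before.add(txVersion); // sender finished the tx before joining this cut
        else
            after.add(txVersion);  // sender already participates in this cut
    }

    /** Step 6.1: once all collected transactions finished, log the finish record. */
    String finishRecord() {
        return "ConsistentCutFinishRecord(before=" + before + ", after=" + after + ")";
    }

    public static void main(String[] args) {
        UUID marker = UUID.randomUUID();
        LocalCutSketch cut = new LocalCutSketch(marker);
        cut.onTxFinish(1L, null);   // no marker -> before side
        cut.onTxFinish(2L, marker); // same marker -> after side
        System.out.println(cut.finishRecord());
    }
}
```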

Consistent and inconsistent Cuts

A Consistent Cut, in terms of the Ignite implementation, is a cut that correctly finished on all baseline nodes - both ConsistentCutStartRecord and ConsistentCutFinishRecord are written.

"Inconsistent" Cut is such a cut when one or more baseline nodes hasn't wrote ConsistentCutFinishRecord . It's possible in cases:

  1. any errors appeared during processing local Cut.
  2. if a transaction is recovered with transaction recovery protocol (tx.finalizationStatus == RECOVERY_FINISH).
  3. if transaction finished in UNKNOWN state.
  4. baseline topology change, Ignite nodes finishes local Cuts running in this moment, making them inconsistent.
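A cut's usability can thus be decided by scanning the WAL: the start record must eventually be followed by a finish record. A minimal sketch, assuming a simplified record-type enum (not Ignite's actual WALRecord types):

```java
import java.util.List;

// Hypothetical sketch: a cut is consistent on this node only if its start
// record is eventually followed by a finish record in the WAL.
public class CutConsistencyCheck {
    enum RecordType { CUT_START, CUT_FINISH, DATA }

    static boolean isConsistent(List<RecordType> wal) {
        boolean started = false;
        for (RecordType r : wal) {
            if (r == RecordType.CUT_START)
                started = true;
            if (r == RecordType.CUT_FINISH && started)
                return true;
        }
        return false; // node crashed or aborted before writing the finish record
    }

    public static void main(String[] args) {
        System.out.println(isConsistent(List.of(
            RecordType.DATA, RecordType.CUT_START, RecordType.DATA, RecordType.CUT_FINISH))); // true
        System.out.println(isConsistent(List.of(
            RecordType.DATA, RecordType.CUT_START, RecordType.DATA))); // false: no finish record
    }
}
```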

...

Every Ignite node tracks the current ConsistentCutMarker :

Code Block
languagejava
titleConsistentCutMarker
class ConsistentCutMarker {
	UUID id;
}

id is just a unique ConsistentCut ID (assigned on the initiator node).

...

The Ignite transaction protocol includes multiple messages. But only some of them are meaningful for the algorithm - those that change the state of transactions (PREPARED, COMMITTED):

  1. GridNearTxPrepareRequest / GridDhtTxPrepareRequest
  2. GridNearTxPrepareResponse / GridDhtTxPrepareResponse
  3. GridNearTxFinishRequest / GridDhtTxFinishRequest

Also, some messages are required to be signed with the ConsistentCutMarker, to check it on the primary/backup node:

  1. GridNearTxFinishRequest / GridDhtTxFinishRequest
  2. GridNearTxPrepareResponse / GridDhtTxPrepareResponse (for 1PC algorithm).
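The signing step can be sketched as a wrapper applied only while a local cut is running. This is a hypothetical illustration; FinishRequest and MarkedFinishRequest below are stand-ins, not the real GridNearTxFinishRequest / ConsistentCutMarkerMessage classes.

```java
import java.util.UUID;

// Hypothetical sketch of signing outgoing Finish messages while a cut is running.
public class MessageSigning {
    record FinishRequest(long txVersion) {}

    /** Wrapper carrying the cut marker and the transaction's own marker. */
    record MarkedFinishRequest(FinishRequest delegate, UUID cutMarker, UUID txMarker) {}

    /** If the local ConsistentCut future exists (marker non-null), wrap the message. */
    static Object sign(FinishRequest req, UUID runningCutMarker, UUID txMarker) {
        if (runningCutMarker == null)
            return req; // no cut running: send the plain message
        return new MarkedFinishRequest(req, runningCutMarker, txMarker);
    }

    public static void main(String[] args) {
        UUID cut = UUID.randomUUID();
        System.out.println(sign(new FinishRequest(1L), cut, cut) instanceof MarkedFinishRequest);
        System.out.println(sign(new FinishRequest(2L), null, null) instanceof FinishRequest);
    }
}
```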

WAL records

There are 2 records: ConsistentCutStartRecord for the Start event and ConsistentCutFinishRecord for the Finish event.

  • ConsistentCutStartRecord: the record is written to WAL at the moment when the Consistent Cut starts on a local node. It helps to limit the amount of active transactions to check. But there is no strict guarantee that all transactions belonging to the BEFORE side are physically committed before ConsistentCutStartRecord, and vice versa. This is the reason for having ConsistentCutFinishRecord.
  • ConsistentCutFinishRecord: this record is written to WAL after the Consistent Cut stopped analyzing transactions and storing them in a particular bucket (BEFORE or AFTER).

It guarantees that the BEFORE side consists of:
1. transactions committed before ConsistentCutStartRecord that weren't included into ConsistentCutFinishRecord#after();
2. transactions committed between ConsistentCutStartRecord and ConsistentCutFinishRecord that were included into ConsistentCutFinishRecord#before().

It guarantees that the AFTER side consists of:
1. transactions physically committed before ConsistentCutStartRecord that were included into ConsistentCutFinishRecord#after();
2. transactions physically committed after ConsistentCutStartRecord that weren't included into ConsistentCutFinishRecord#before().


Code Block
languagejava
titleConsistentCutRecord
/** */
public class ConsistentCutStartRecord extends WALRecord {
    /** Marker that inits Consistent Cut. */
    private final ConsistentCutMarker marker;
}


/** */
public class ConsistentCutFinishRecord extends WALRecord {
    /**
     * Collections of TXs committed BEFORE the ConsistentCut (sent - received).
     */
    private final Set<GridCacheVersion> before;

    /**
     * Collections of TXs committed AFTER the ConsistentCut (exclude).
     */
    private final Set<GridCacheVersion> after;
}
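The BEFORE-side rules above can be expressed as a small decision function over the two sets stored in ConsistentCutFinishRecord. A minimal sketch, assuming long stands in for GridCacheVersion and the caller knows whether a transaction's commit record precedes ConsistentCutStartRecord in the WAL:

```java
import java.util.Set;

// Hypothetical sketch of applying the BEFORE/AFTER guarantees during WAL replay.
public class ReplayClassifier {
    /**
     * @param committedBeforeStart whether the tx's commit precedes ConsistentCutStartRecord in the WAL.
     * @param inBefore tx versions from ConsistentCutFinishRecord#before().
     * @param inAfter  tx versions from ConsistentCutFinishRecord#after().
     */
    static boolean isOnBeforeSide(long txVer, boolean committedBeforeStart,
                                  Set<Long> inBefore, Set<Long> inAfter) {
        if (committedBeforeStart)
            return !inAfter.contains(txVer); // rule 1: before start, and not in after()
        return inBefore.contains(txVer);     // rule 2: after start, but listed in before()
    }

    public static void main(String[] args) {
        Set<Long> before = Set.of(2L);
        Set<Long> after = Set.of(3L);
        System.out.println(isOnBeforeSide(1L, true, before, after));  // true: rule 1
        System.out.println(isOnBeforeSide(3L, true, before, after));  // false: listed in after()
        System.out.println(isOnBeforeSide(2L, false, before, after)); // true: listed in before()
    }
}
```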

...