

ConsistentCut splits the WAL into two global areas - Before and After. It guarantees that every transaction committed Before is also committed Before on every other node that participated in the transaction. This means Ignite nodes can safely recover themselves to the consistent Before state without any coordination with each other.

The border between the Before and After areas consists of two WAL records - ConsistentCutStartRecord and ConsistentCutFinishRecord. It guarantees that Before consists of:

  1. Transactions committed before ConsistentCutStartRecord that weren't included into ConsistentCutFinishRecord#after().
  2. Transactions committed between ConsistentCutStartRecord and ConsistentCutFinishRecord that were included into ConsistentCutFinishRecord#before().

In the picture below, the Before area consists of the transactions colored yellow, while After is green.

ConsistentCutRecord
/** */
public class ConsistentCutStartRecord extends WALRecord {
    /** Consistent Cut ID. */
    private final UUID cutId;
}


/** */
public class ConsistentCutFinishRecord extends WALRecord {
    /** Consistent Cut ID. */
    private final UUID cutId;

    /** Collection of transactions committed BEFORE. */
    private final Set<GridCacheVersion> before;

    /** Collection of transactions committed AFTER. */
    private final Set<GridCacheVersion> after;
}
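The recovery rules above can be sketched as a small classification routine. This is an illustrative sketch, not Ignite's actual recovery code: transaction ids are simplified to strings (instead of GridCacheVersion), and the class and method names are assumptions.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: reconstruct the Before set during recovery from the two cut
// records' contents. Names here are illustrative, not Ignite's real API.
public class WalRecovery {
    /**
     * A transaction is recovered as Before if:
     *  - it committed before ConsistentCutStartRecord and is absent
     *    from ConsistentCutFinishRecord#after(), or
     *  - it committed between the Start and Finish records and is
     *    present in ConsistentCutFinishRecord#before().
     */
    public static Set<String> beforeSet(List<String> committedBeforeStart,
                                        List<String> committedBetween,
                                        Set<String> finishBefore,
                                        Set<String> finishAfter) {
        Set<String> result = new HashSet<>();

        for (String tx : committedBeforeStart)
            if (!finishAfter.contains(tx))   // rule 1
                result.add(tx);

        for (String tx : committedBetween)
            if (finishBefore.contains(tx))   // rule 2
                result.add(tx);

        return result;
    }
}
```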

Algorithm

  1. Initial state:
    1. No concurrent ConsistentCut process is running.
    2. lastFinishedCutId holds the previous ConsistentCutId, or null.
  2. The user runs a command to create a new incremental snapshot:
    1. An Ignite node initiates a DistributedProcess with a special message that holds the new ConsistentCutId (the goal is to notify every node in the cluster about the running incremental snapshot).
  3. The local process of creating an incremental snapshot can be started by either of two events (whichever happens earlier):
    1. Receiving the ConsistentCutId by discovery.
    2. Receiving the ConsistentCutId in a transaction message (Prepare, Finish).
  4. On receiving the ConsistentCutId, every node:
    1. Checks whether ConsistentCut has already started or finished for this ID, and skips it if it has.
    2. In the message thread, atomically:
      1. Creates a new ConsistentCut future.
      2. Creates committingTxs (the goal is to track COMMITTING transactions that aren't part of IgniteTxManager#activeTx).
      3. Starts signing outgoing messages with the ConsistentCutId.
    3. In the background thread:
      1. Creates a copy of IgniteTxManager#activeTx and sets listeners on those transactions' tx#finishFuture.
      2. Writes a ConsistentCutStartRecord to the WAL with the received ConsistentCutId.
      3. Creates a copy of committingTxs and sets listeners on those transactions' tx#finishFuture.
      4. Sets committingTxs to null.
  5. While the DistributedProcess is running, every node signs outgoing transaction messages:
    1. Prepare and Finish messages are signed with the ConsistentCutId (to trigger ConsistentCut on the remote node, if it hasn't started yet).
    2. Finish messages are additionally signed with txCutId on the node that commits first (see Signing messages below; if txCutId is not null, the transaction started committing After the Consistent Cut):
      1. For 2PC it is the originating node.
      2. For 1PC it is a backup node.
  6. For every received FinishMessage, the node puts the transaction into committingTxs and marks the transaction with the txCutId extracted from the message.
  7. For every tracked transaction, a callback is invoked when the transaction finishes:
    1. If the transaction state is UNKNOWN or its status is RECOVERY_FINISH, the ConsistentCut completes with an exception.
    2. If tx#txCutId equals the local ConsistentCutId, the transaction is put into after; otherwise it is put into before.
  8. After every tracked transaction has finished:
    1. Writes a ConsistentCutFinishRecord into the WAL with the collections (before, after).
    2. Completes the ConsistentCut future.
    3. Note that the node continues to sign messages even after the local ConsistentCut finishes.
  9. After the local ConsistentCut finishes, DistributedProcess automatically notifies the initiator node that the local procedure has finished.
  10. After all nodes have finished ConsistentCut, every node:
    1. Updates lastFinishedCutId with the current ID.
    2. Sets the ConsistentCut future to null.
    3. Stops signing outgoing transaction messages.
  11. The initiator node checks that every node completed correctly:
    1. If any node completed exceptionally, the incremental snapshot completes with an exception.
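Steps 4 and 7 of the algorithm can be condensed into a minimal sketch. The class below is illustrative only (real Ignite code tracks transactions via IgniteTxManager futures); it shows how each finished transaction is assigned to the before or after set by comparing its txCutId with the local cut id.

```java
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch of the per-node cut state, not Ignite's actual classes.
// Transactions are identified by strings instead of GridCacheVersion.
public class ConsistentCutSketch {
    /** The ConsistentCutId this node is currently processing. */
    private final UUID localCutId;

    /** Transactions that committed on the Before side of the cut. */
    final Set<String> before = ConcurrentHashMap.newKeySet();

    /** Transactions that committed on the After side of the cut. */
    final Set<String> after = ConcurrentHashMap.newKeySet();

    ConsistentCutSketch(UUID cutId) {
        this.localCutId = cutId;
    }

    /**
     * Step 7 callback: invoked when a tracked transaction finishes.
     * A null txCutId means the transaction started committing before
     * any cut was observed by the committing node.
     */
    void onTxFinish(String tx, UUID txCutId) {
        if (localCutId.equals(txCutId))
            after.add(tx);   // started committing after the cut
        else
            before.add(tx);  // started committing before the cut
    }
}
```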

Consistent and inconsistent Cuts

A Consistent Cut is a cut that finished correctly on all baseline nodes: both ConsistentCutStartRecord and ConsistentCutFinishRecord are written.

An "inconsistent" cut is a cut during which one or more baseline nodes haven't written a ConsistentCutFinishRecord. This is possible in the following cases:

  1. Any error appeared while processing the local cut.
  2. A transaction was recovered with the transaction recovery protocol (tx.finalizationStatus == RECOVERY_FINISH).
  3. A transaction finished in the UNKNOWN state.
  4. The topology changed.
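Cases 2 and 3 correspond to a single check in the transaction-finish callback (step 7.1 of the algorithm). A hedged sketch follows; the enum and class names mirror the states mentioned above but are illustrative, not Ignite's real types.

```java
// Sketch of the check that makes a local cut inconsistent. The enums
// stand in for Ignite's transaction state and finalization status.
public class CutValidation {
    enum TxState { COMMITTED, ROLLED_BACK, UNKNOWN }

    enum FinalizationStatus { USER_FINISH, RECOVERY_FINISH }

    /**
     * Returns true if this transaction forces the local ConsistentCut
     * to complete exceptionally (the cut becomes inconsistent).
     */
    static boolean breaksCut(TxState state, FinalizationStatus status) {
        return state == TxState.UNKNOWN
            || status == FinalizationStatus.RECOVERY_FINISH;
    }
}
```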

Signing messages

In the picture below, the left side is a diagram of sending transaction messages. Before sending a message, the node checks whether a cut is running. If it is, the node wraps the message; otherwise it sends an ordinary message (PrepareRequest in the example).


The Ignite transaction protocol includes multiple messages, but only those that change the state of a transaction (PREPARED, COMMITTED) are meaningful for the algorithm:

  1. GridNearTxPrepareRequest / GridDhtTxPrepareRequest
  2. GridNearTxPrepareResponse / GridDhtTxPrepareResponse
  3. GridNearTxFinishRequest / GridDhtTxFinishRequest

These messages are wrapped in a ConsistentCutAwareMessage that is prepared right before the message is sent to another node. They use the current ConsistentCutId.

ConsistentCutAwareMessage
class ConsistentCutAwareMessage {
    Message msg;

    UUID cutId;
}
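The wrapping decision described above can be sketched as follows. Message and the wrapper class here are simplified stand-ins for Ignite's actual message classes, and the MessageSigner helper is an assumption.

```java
import java.util.UUID;

// Sketch of the "wrap if a cut is running" check done before sending.
public class MessageSigner {
    /** Simplified stand-in for an Ignite transaction message. */
    static class Message {
        final String name;

        Message(String name) { this.name = name; }
    }

    /** Simplified stand-in for ConsistentCutAwareMessage. */
    static class ConsistentCutAwareMessage extends Message {
        final Message msg;
        final UUID cutId;

        ConsistentCutAwareMessage(Message msg, UUID cutId) {
            super("ConsistentCutAwareMessage");
            this.msg = msg;
            this.cutId = cutId;
        }
    }

    /**
     * If a cut is running (currentCutId != null), wrap the message so the
     * remote node learns the ConsistentCutId; otherwise send it as is.
     */
    static Message sign(Message msg, UUID currentCutId) {
        return currentCutId == null ? msg : new ConsistentCutAwareMessage(msg, currentCutId);
    }
}
```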

Also, some messages have to be signed with an additional ConsistentCutId so that it can be checked on the primary/backup node:

  1. GridNearTxFinishRequest / GridDhtTxFinishRequest
  2. GridNearTxPrepareResponse / GridDhtTxPrepareResponse (for 1PC algorithm).

These messages are wrapped in a ConsistentCutAwareTxFinishMessage that is prepared right before the transaction starts committing on the first committing node. They use the current ConsistentCutId for this field. If the current ConsistentCutId is not null, the transaction started committing after the ConsistentCut started, which means the transaction belongs to the After side.

ConsistentCutAwareTxFinishMessage
class ConsistentCutAwareTransactionFinishMessage extends ConsistentCutAwareMessage {
    @Nullable UUID txCutId;
}
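A minimal sketch of how a receiver could interpret txCutId, assuming the semantics described above; the helper class is illustrative and not part of Ignite.

```java
import java.util.UUID;

// Sketch of the txCutId interpretation rule carried by the finish message.
public class TxCutIdRule {
    /**
     * The first committing node (the originating node for 2PC, a backup
     * for 1PC) stamps the finish message with the cut id that was current
     * when the transaction started committing; null means the transaction
     * started committing before any running cut.
     */
    static boolean belongsToAfterSide(UUID txCutId, UUID localCutId) {
        // A non-null txCutId equal to the local cut id means the
        // transaction started committing after the cut started.
        return txCutId != null && txCutId.equals(localCutId);
    }
}
```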


  1. Ten H. Lai and Tao H. Yang, "On Distributed Snapshots", 29 May 1987.