ConsistentCut splits the WAL into two global areas: Before and After. It guarantees that every transaction committed Before is also committed Before on every other node that participated in the transaction. This means an Ignite node can safely recover itself to the Before state without any coordination with other nodes.

The border between the Before and After areas consists of two WAL records: ConsistentCutStartRecord and ConsistentCutFinishRecord. They guarantee that Before consists of:
1. transactions committed before ConsistentCutStartRecord that are not included in ConsistentCutFinishRecord#after();
2. transactions committed between ConsistentCutStartRecord and ConsistentCutFinishRecord that are included in ConsistentCutFinishRecord#before().
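The two rules above can be expressed as a recovery-time check. The following is an illustrative sketch, not Ignite source: GridCacheVersion is stood in for by a plain long, and the method name is invented.

```java
import java.util.Set;

/** Sketch: deciding whether a committed transaction belongs to the Before side of the cut. */
public class CutSideSketch {
    /**
     * @param committedBeforeStart whether the commit record precedes ConsistentCutStartRecord in the WAL.
     * @param before               stands in for ConsistentCutFinishRecord#before().
     * @param after                stands in for ConsistentCutFinishRecord#after().
     */
    static boolean isBefore(long txVer, boolean committedBeforeStart, Set<Long> before, Set<Long> after) {
        if (committedBeforeStart)
            return !after.contains(txVer);  // rule 1
        return before.contains(txVer);      // rule 2
    }

    public static void main(String[] args) {
        Set<Long> before = Set.of(2L), after = Set.of(3L);
        assert isBefore(1L, true, before, after);   // committed before the start record, not listed in after()
        assert !isBefore(3L, true, before, after);  // committed before the start record, but listed in after()
        assert isBefore(2L, false, before, after);  // committed between the records, listed in before()
        assert !isBefore(4L, false, before, after); // committed between the records, not listed in before()
        System.out.println("ok");
    }
}
```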

In the picture below, the Before area consists of the transactions colored yellow, while After is green.

ConsistentCutRecord
/** */
public class ConsistentCutStartRecord extends WALRecord {
    /** Marker that initiates the Consistent Cut. */
    private final ConsistentCutMarker marker;
}


/** */
public class ConsistentCutFinishRecord extends WALRecord {
    /**
     * Collection of transactions committed BEFORE.
     */
    private final Set<GridCacheVersion> before;

    /**
     * Collection of transactions committed AFTER.
     */
    private final Set<GridCacheVersion> after;
}

Algorithm

  1. Initial state:
    1. No concurrent ConsistentCut process is running.
  2. The user runs a command to create a new incremental snapshot:
    1. The Ignite node initiates a DistributedProcess with a special message holding the new ConsistentCutMarker (the goal is to notify every node in the cluster about the running incremental snapshot).
  3. The creation of an incremental snapshot starts on a node with whichever of two events happens first:
    1. Receiving the ConsistentCutMarker by discovery.
    2. Receiving the ConsistentCutMarker by a transaction message (Prepare, Finish).
  4. On receiving the marker, every node:
    1. Checks whether ConsistentCut has already started for this marker; skips if it has.
    2. Compares the local topVersion with the one received in the marker; skips if they differ.
    3. In the message thread, atomically:
      1. creates a new ConsistentCut future;
      2. creates committingTxs, whose goal is to track COMMITTING+ transactions that aren't part of IgniteTxManager#activeTx;
      3. starts signing outgoing messages with the ConsistentCutMarker.
    4. In a background thread:
      1. Writes a ConsistentCutStartRecord to the WAL with the received ConsistentCutMarker.
      2. Collects active transactions: the concatenation of IgniteTxManager#activeTx and committingTxs.
  5. While the DistributedProcess is alive, every node signs outgoing transaction messages:
    1. Prepare messages are signed with the ConsistentCutMarker (to trigger ConsistentCut on the remote node, if it hasn't started yet).
    2. Finish messages are signed with the ConsistentCutMarker (for the same reason) and with the transaction ConsistentCutMarker (to notify nodes which side of the cut this transaction belongs to).
    3. Finish messages are signed with the transaction ConsistentCutMarker on the node that commits first.
  6. For every collected active transaction, the node starts listening to tx#finishFuture with a callback. The callback is invoked when the transaction finishes:
    1. Checks that the transaction completed in a consistent way (tx#state != UNKNOWN, tx#finalizationStatus != RECOVERY_FINISH). If it didn't, this cut is inconsistent, and the ConsistentCut future completes exceptionally.
    2. If tx#txMarker is null or differs from the local marker, the transaction is put into before.
    3. If tx#txMarker equals the local marker, the transaction is put into after.
  7. After every collected transaction has finished:
    1. Writes a ConsistentCutFinishRecord to the WAL with the collections (before, after).
    2. Stops filling committingTxs.
    3. Completes the ConsistentCut future and notifies the node-initiator that the local procedure has finished (via the DistributedProcess protocol).
  8. After all nodes have finished ConsistentCut:
    1. Every node stops signing outgoing transaction messages.
    2. The ConsistentCut future becomes null.
    3. The Ignite node is now in the initial state again.
  9. The node-initiator checks that every node completed correctly and that topVer hasn't changed since the start:
    1. If any node completed exceptionally, or the topology changed, the incremental snapshot completes with an exception.
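Step 4 above can be sketched as follows. This is an illustrative sketch with invented names, not Ignite source; the real implementation performs step 4.3 atomically in the message thread, for which a synchronized method stands in here.

```java
import java.util.UUID;
import java.util.concurrent.CompletableFuture;

/** Sketch of step 4: how a node reacts to a received ConsistentCutMarker. */
public class CutStartSketch {
    record Marker(UUID id, long topVer) {}

    static long localTopVer = 7;                // this node's current topology version
    static Marker currentMarker;                // marker of the running cut, if any
    static CompletableFuture<Void> cutFuture;   // stands in for the ConsistentCut future

    static synchronized boolean onMarkerReceived(Marker m) {
        if (m.equals(currentMarker))
            return false;                        // 4.1: cut already started for this marker
        if (m.topVer() != localTopVer)
            return false;                        // 4.2: topology version differs, skip
        currentMarker = m;                       // 4.3: start the cut atomically:
        cutFuture = new CompletableFuture<>();   //   create a new ConsistentCut future
        // ... also create committingTxs and start signing outgoing messages (4.3.2, 4.3.3)
        return true;
    }

    public static void main(String[] args) {
        Marker m = new Marker(UUID.randomUUID(), 7);
        assert onMarkerReceived(m);                                  // first receipt starts the cut
        assert !onMarkerReceived(m);                                 // duplicate receipt is ignored
        assert !onMarkerReceived(new Marker(UUID.randomUUID(), 8));  // different topVer is skipped
        System.out.println("ok");
    }
}
```

Because the marker can arrive either by discovery or inside a transaction message, the idempotence check in 4.1 is what makes it safe for the same marker to reach a node through both channels.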

Consistent and inconsistent Cuts

A Consistent Cut is a cut that finished correctly on all baseline nodes: both ConsistentCutStartRecord and ConsistentCutFinishRecord are written.

An "inconsistent" Cut is a cut where one or more baseline nodes haven't written ConsistentCutFinishRecord. This is possible in the following cases:

  1. Any error appeared while processing the local Cut.
  2. A transaction was recovered with the transaction recovery protocol (tx#finalizationStatus == RECOVERY_FINISH).
  3. A transaction finished in the UNKNOWN state.
  4. The topology changed.
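In other words, a node's segment of the cut is usable only if the start record is eventually followed by a finish record in its WAL. A minimal sketch of that check, with WAL records modeled as plain strings (illustrative, not the Ignite WAL API):

```java
import java.util.List;

/** Sketch: a cut is consistent on a node only if the finish record follows the start record. */
public class CutCheckSketch {
    static boolean cutFinished(List<String> wal) {
        int start = wal.indexOf("ConsistentCutStartRecord");
        return start >= 0 && wal.subList(start, wal.size()).contains("ConsistentCutFinishRecord");
    }

    public static void main(String[] args) {
        assert cutFinished(List.of("ConsistentCutStartRecord", "TxRecord", "ConsistentCutFinishRecord"));
        // An error, a recovered/UNKNOWN transaction, or a topology change
        // interrupts the cut before the finish record is written:
        assert !cutFinished(List.of("ConsistentCutStartRecord", "TxRecord"));
        System.out.println("ok");
    }
}
```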

ConsistentCutMarker

Every Ignite node tracks the current ConsistentCutMarker:

ConsistentCutMarker
class ConsistentCutMarker {
    UUID id;

    AffinityTopologyVersion topVer;
}

id is just a unique ConsistentCut ID (assigned on the node-initiator).

topVer is the topology version on the node-initiator before the incremental snapshot starts.
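A sketch of how the initiator builds the marker and how other nodes use topVer (step 4.2 of the algorithm). The names are illustrative, and a plain long stands in for AffinityTopologyVersion:

```java
import java.util.UUID;

/** Sketch: marker creation on the initiator and the topVer check on a receiving node. */
public class MarkerSketch {
    static final class ConsistentCutMarker {
        final UUID id;      // unique cut ID, assigned on the initiator
        final long topVer;  // stands in for AffinityTopologyVersion

        ConsistentCutMarker(UUID id, long topVer) {
            this.id = id;
            this.topVer = topVer;
        }
    }

    public static void main(String[] args) {
        long localTopVer = 5;
        ConsistentCutMarker marker = new ConsistentCutMarker(UUID.randomUUID(), 5);
        // A node joins the cut only if its local topology version matches the marker's.
        assert marker.topVer == localTopVer;
        assert new ConsistentCutMarker(UUID.randomUUID(), 6).topVer != localTopVer;
        System.out.println("ok");
    }
}
```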


Signing messages

The picture below shows, on the left side, a diagram of sending transaction messages. Before sending a message, the node checks whether a cut is running with cutMarker(). If it is, it wraps the message; otherwise it sends the ordinary message (PrepareRequest in the example).
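The send path can be sketched as below. Strings stand in for the real message and marker types, and the names are illustrative:

```java
/** Sketch: wrap an outgoing transaction message only while a cut is running. */
public class SendSketch {
    static String cutMarker; // current cut marker; null when no cut is running

    static String wrapIfNeeded(String msg) {
        return cutMarker == null ? msg : "MarkerMessage(" + msg + ", " + cutMarker + ")";
    }

    public static void main(String[] args) {
        // No cut running: the plain message is sent.
        assert wrapIfNeeded("PrepareRequest").equals("PrepareRequest");

        // Cut running: the message is wrapped with the current marker.
        cutMarker = "cut-1";
        assert wrapIfNeeded("PrepareRequest").equals("MarkerMessage(PrepareRequest, cut-1)");
        System.out.println("ok");
    }
}
```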


The Ignite transaction protocol includes multiple messages, but only some of them are meaningful for the algorithm, i.e. those that change the state of transactions (PREPARED, COMMITTED):

  1. GridNearTxPrepareRequest / GridDhtTxPrepareRequest
  2. GridNearTxPrepareResponse / GridDhtTxPrepareResponse
  3. GridNearTxFinishRequest / GridDhtTxFinishRequest

These messages are wrapped in MarkerMessage, which is prepared right before the message is sent to another node. The current ConsistentCutMarker is used for setting the marker.

ConsistentCutMarkerMessage
class MarkerMessage {
    Message msg;

    ConsistentCutMarker marker;
}

Also, some messages have to be signed with an additional ConsistentCutMarker so that it can be checked on the primary/backup node:

  1. GridNearTxFinishRequest / GridDhtTxFinishRequest
  2. GridNearTxPrepareResponse / GridDhtTxPrepareResponse (for 1PC algorithm).

These messages are wrapped in TransactionFinishMarkerMessage, which is prepared right before the transaction starts committing on the first committing node. The current ConsistentCutMarker is used for setting txMarker. txMarker can be null if the transaction started committing before ConsistentCut started.

ConsistentCutMarkerFinishMessage
class TransactionFinishMarkerMessage extends MarkerMessage {
    @Nullable ConsistentCutMarker txMarker;
}

