
Definitions

Paper [1] defines a distributed snapshot algorithm. It relies on several definitions; let's describe them in terms of Ignite:

  1. Message - transaction message (...FinishRequest for 2PC, ...PrepareResponse for 1PC)
    1. It is guaranteed that it is sent only after all transaction DataRecords are written into the WAL on the sending node.
  2. Channel - TCP communication connection from one node to another, over which Messages are sent.
  3. ChannelState - for a single channel, the set of Messages that have been sent but not yet committed on the receiver.
    1. In Ignite we can think of the ChannelState as represented by active transactions in PREPARING and later states.
  4. IncrementalSnapshot - on an Ignite node it is represented by 2 WAL records (ConsistentCutStartRecord commits the WAL state, ConsistentCutFinishRecord describes the ChannelState). It guarantees that every node in the cluster includes in the snapshot:
    1. transactions committed before ConsistentCutStartRecord that weren't included into ConsistentCutFinishRecord#after();
    2. transactions committed between ConsistentCutStartRecord and ConsistentCutFinishRecord that were included into ConsistentCutFinishRecord#before().
  5. Marker - a mark that piggybacks on the Message and notifies a node about the running snapshot.
    1. After the IS starts and before it finishes, all PrepareRequest and FinishRequest messages are wrapped by ConsistentCutMarkerMessage instead of the regular Message. This is done to notify the target node via the communication channel about the running IS (see the sketch after this list).
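
A minimal sketch of what such a wrapper could look like is shown below. The class name ConsistentCutMarkerMessage comes from the definition above; its fields and accessors are assumptions made for illustration only.

ConsistentCutMarkerMessage (sketch)
/**
 * Sketch of a wrapper that piggybacks the marker on a regular transaction
 * message. Fields and accessors are illustrative assumptions.
 */
class ConsistentCutMarkerMessage {
    /** Original transaction message (e.g. a Prepare or Finish request). */
    private final Object delegate;

    /** Marker of the IncrementalSnapshot running on the sending node. */
    private final ConsistentCutMarker marker;

    ConsistentCutMarkerMessage(Object delegate, ConsistentCutMarker marker) {
        this.delegate = delegate;
        this.marker = marker;
    }

    /** @return Wrapped transaction message. */
    Object delegate() {
        return delegate;
    }

    /** @return Marker used to notify the receiver about the running snapshot. */
    ConsistentCutMarker marker() {
        return marker;
    }
}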

In terms of Ignite, there are additional definitions:

  1. Consistent Cut - a successful attempt at creating an IncrementalSnapshot.
  2. Inconsistent Cut - a failed attempt at creating an IncrementalSnapshot, due to the inability to correctly describe a ChannelState.

Note that Consistent Cut can't guarantee that a specific transaction running concurrently with the algorithm will land before or after the cut; it only guarantees that the set of transactions before (or after) the cut is the same on every node in the cluster.

Algorithm

For the Ignite implementation it's proposed to use a single node to coordinate the algorithm. The user runs a command for creating a new incremental snapshot on a single node:

  1. Initial state:
    1. Ignite WAL is in a consistent state relative to the previous full or incremental snapshot.
    2. Every Ignite node has a local ConsistentCut future equal to null.
    3. An empty collection committingTxs (Set<GridCacheVersion>) whose goal is to track COMMITTING+ transactions that aren't part of IgniteTxManager#activeTx.
  2. The Ignite node initiates an IncrementalSnapshot by starting a DistributedProcess with a special message holding a new ConsistentCutMarker.
  3. Every node starts a local snapshot process after receiving the marker message (whether by discovery, or by communication with a transaction message):
    1. Atomically: creates a new ConsistentCut future, creates committingTxs, starts signing outgoing messages with the ConsistentCutMarker.
    2. Writes a ConsistentCutStartRecord to WAL with the received ConsistentCutMarker.
    3. Collects active transactions - the union of IgniteTxManager#activeTx and committingTxs.
    4. Prepares 2 empty collections - before and after the cut (they describe the ChannelState).
  4. While the global Consistent Cut is running, every node signs outgoing transaction messages:
    1. Prepare messages are signed with the ConsistentCutMarker (to trigger ConsistentCut on the remote node, if not triggered yet).
    2. Finish messages are signed with the ConsistentCutMarker (to trigger the cut) and the transaction ConsistentCutMarker (to notify nodes which side of the cut this transaction belongs to).
    3. Finish messages are signed on the node that commits first (near node for 2PC, backup or primary for 1PC).
  5. For every collected active transaction, the node waits for the Finish message to extract the ConsistentCutMarker and fill the before and after collections (see the sketch after this list):
    1. if the received marker is null or differs from the local one, the transaction is on the before side;
    2. if the received marker equals the local one, the transaction is on the after side.
  6. After all transactions finished:
    1. Writes a ConsistentCutFinishRecord into WAL with the ChannelState (before, after).
    2. Stops filling committingTxs.
    3. Completes the ConsistentCut future and notifies the node-initiator about finishing the local procedure (with the DistributedProcess protocol).
  7. After all nodes finished ConsistentCut, every node stops signing outgoing transaction messages - the ConsistentCut future becomes null.
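
The sketch below illustrates steps 3 and 5 on a single node. Only ConsistentCutMarker, GridCacheVersion and the before/after collections come from the description above; the class, field and method names are assumptions, not the actual Ignite API.

LocalConsistentCut (sketch)
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch of the per-node steps 3 and 5 (names are illustrative assumptions). */
class LocalConsistentCut {
    /** Marker of the running cut; null when no cut is running (step 7 resets it). */
    private volatile ConsistentCutMarker localMarker;

    /** Transactions on the BEFORE side of the cut (part of the ChannelState). */
    private final Set<GridCacheVersion> before = ConcurrentHashMap.newKeySet();

    /** Transactions on the AFTER side of the cut (part of the ChannelState). */
    private final Set<GridCacheVersion> after = ConcurrentHashMap.newKeySet();

    /** Step 3: start the local cut on the first received marker (by discovery or a transaction message). */
    synchronized void onMarkerReceived(ConsistentCutMarker marker) {
        if (localMarker != null)
            return; // The local cut is already running.

        localMarker = marker;

        // Then: write ConsistentCutStartRecord, collect active transactions,
        // start signing outgoing transaction messages with the marker.
    }

    /** Step 5: classify a finished transaction by the marker extracted from its Finish message. */
    void onTxFinished(GridCacheVersion txVer, ConsistentCutMarker rcvdMarker) {
        if (rcvdMarker == null || !rcvdMarker.equals(localMarker))
            before.add(txVer); // The transaction belongs to the BEFORE side.
        else
            after.add(txVer);  // The transaction belongs to the AFTER side.
    }
}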

Consistent and inconsistent Cuts

Consistent Cut, in terms of the Ignite implementation, is a cut that correctly finished on all baseline nodes - both ConsistentCutStartRecord and ConsistentCutFinishRecord are written.

"Inconsistent" Cut is such a cut when one or more baseline nodes hasn't wrote ConsistentCutFinishRecord. It's possible in cases:

  1. any errors appeared during processing local Cut.
  2. if a transaction is recovered with transaction recovery protocol (tx.finalizationStatus == RECOVERY_FINISH).
  3. if transaction finished in UNKNOWN state.
  4. baseline topology change, Ignite nodes finishes local Cuts running in this moment, making them inconsistent.
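
A sketch of the per-transaction check for cases 2 and 3 is shown below. The method and enum names follow the notation used in this section and are assumptions, not the exact Ignite API.

Inconsistency check (sketch)
/**
 * Sketch: a transaction finished by the recovery protocol, or finished in the
 * UNKNOWN state, can't be reliably placed on either side of the cut, so the
 * running local cut is marked inconsistent.
 */
boolean makesCutInconsistent(IgniteInternalTx tx) {
    return tx.finalizationStatus() == FinalizationStatus.RECOVERY_FINISH
        || tx.state() == TransactionState.UNKNOWN;
}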

ConsistentCutMarker

Every Ignite node tracks the current ConsistentCutMarker:

ConsistentCutMarker
class ConsistentCutMarker {
	UUID id;
}

`id` is just a unique ConsistentCut ID (it is assigned on the initiator node).


Signing messages

The Ignite transaction protocol includes multiple messages. But only some of them are meaningful for the algorithm - those that change the state of transactions (PREPARED, COMMITTED):

  1. GridNearTxPrepareRequest / GridDhtTxPrepareRequest
  2. GridNearTxPrepareResponse / GridDhtTxPrepareResponse
  3. GridNearTxFinishRequest / GridDhtTxFinishRequest

Also, some messages have to be signed with the transaction marker, to check it on the primary/backup node (see the sketch after this list):

  1. GridNearTxFinishRequest / GridDhtTxFinishRequest
  2. GridNearTxPrepareResponse / GridDhtTxPrepareResponse (for 1PC algorithm).
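
The sketch below shows which markers could be attached to an outgoing message, combining the two lists above. The message classes are the ones listed in this section; the enum, method name and the onePhaseCommit flag are assumptions for illustration.

Message signing (sketch)
/** Which markers are attached to an outgoing transaction message (sketch). */
enum Signing {
    NONE,        // No cut running, or the message type is not signed.
    CUT_MARKER,  // Cut marker only: triggers the cut on the remote node.
    CUT_AND_TX   // Cut marker plus the transaction marker.
}

Signing signingFor(Object msg, ConsistentCutMarker localMarker, boolean onePhaseCommit) {
    if (localMarker == null)
        return Signing.NONE; // No cut is running: messages are sent unsigned.

    if (msg instanceof GridNearTxFinishRequest || msg instanceof GridDhtTxFinishRequest)
        return Signing.CUT_AND_TX; // Finish messages carry both markers.

    if (msg instanceof GridNearTxPrepareResponse || msg instanceof GridDhtTxPrepareResponse)
        // For 1PC the prepare response plays the role of the finish message.
        return onePhaseCommit ? Signing.CUT_AND_TX : Signing.CUT_MARKER;

    if (msg instanceof GridNearTxPrepareRequest || msg instanceof GridDhtTxPrepareRequest)
        return Signing.CUT_MARKER; // Prepare messages only trigger the cut remotely.

    return Signing.NONE; // Other messages are not signed.
}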

WAL records

There are 2 records: `ConsistentCutStartRecord` for the Start event and `ConsistentCutFinishRecord` for the Finish event.

  • ConsistentCutStartRecord: the record is written to WAL at the moment when CC starts on a local node. It helps to limit the amount of active transactions to check. But there is no strict guarantee that all transactions belonging to the BEFORE side are physically committed before ConsistentCutStartRecord, and vice versa. This is the reason for having ConsistentCutFinishRecord.
  • ConsistentCutFinishRecord: this record is written to WAL after Consistent Cut has stopped analyzing transactions and storing them in a particular bucket (BEFORE or AFTER).

It guarantees that the BEFORE side consists of:
1. transactions physically committed before ConsistentCutStartRecord that weren't included into ConsistentCutFinishRecord#after();
2. transactions physically committed between ConsistentCutStartRecord and ConsistentCutFinishRecord that were included into ConsistentCutFinishRecord#before().

It guarantees that the AFTER side consists of:
1. transactions physically committed before ConsistentCutStartRecord that were included into ConsistentCutFinishRecord#after();
2. transactions physically committed after ConsistentCutStartRecord that weren't included into ConsistentCutFinishRecord#before().

A sketch of how recovery could apply these guarantees follows the record definitions below.


ConsistentCutRecord
/** */
public class ConsistentCutStartRecord extends WALRecord {
	/** Marker that inits Consistent Cut. */
	private final ConsistentCutMarker marker;
}


/** */
public class ConsistentCutFinishRecord extends WALRecord {
    /**
     * Collections of TXs committed BEFORE the ConsistentCut (sent - received).
     */
    private final Set<GridCacheVersion> before;

    /**
     * Collections of TXs committed AFTER the ConsistentCut (exclude).
     */
    private final Set<GridCacheVersion> after;
}
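
Under these guarantees, a recovery procedure could filter committed transactions as sketched below. The committedBeforeStart flag (derived from the transaction's position relative to ConsistentCutStartRecord) and the method itself are assumptions, not the actual recovery code.

Recovery filtering (sketch)
/**
 * Sketch: decide whether a committed transaction belongs to the BEFORE side
 * of the cut, using the sets stored in ConsistentCutFinishRecord.
 */
boolean isOnBeforeSide(GridCacheVersion txVer,
    boolean committedBeforeStart,
    Set<GridCacheVersion> before,
    Set<GridCacheVersion> after
) {
    if (committedBeforeStart)
        // Physically committed before ConsistentCutStartRecord:
        // BEFORE, unless explicitly listed on the AFTER side.
        return !after.contains(txVer);

    // Physically committed after ConsistentCutStartRecord:
    // AFTER, unless explicitly listed on the BEFORE side.
    return before.contains(txVer);
}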

Unstable topology

There are some cases to handle for unstable topology:

  1. Client or non-baseline server node leaves - no need to handle.
  2. Server node leaves:
    1. all nodes finish the locally running ConsistentCut, making it inconsistent.
  3. Server node joins:
    1. all nodes finish the locally running ConsistentCut, making it inconsistent;
    2. the new node checks whether rebalance was required for recovering. If it is required, then handling it is TBD.

TBD: how to avoid inconsistency between data and WAL after rebalance. There are options:

  1. First solution: set the flag ConsistentCutManager#inconsistent to true, and persist this flag within the local MetaStorage.
    1. On receiving a new ConsistentCutVersion, check the flag and raise an exception.
    2. Clean the flag on recovering from a ClusterSnapshot, or after creating a ClusterSnapshot.
    3. + Simple handling in runtime.
    4. - Need to rebalance a single node after PITR with file-based (or other?) rebalance: more time for recovery; file-based rebalance still has unresolved issues(?).

  2. Do not disable WAL during rebalance:
    + WAL is now consistent with data and can be used for PITR after rebalancing.
    - Too many inserts during rebalance may affect rebalance speed and IO utilization.

  3. Automatically create snapshots for rebalancing cache groups:
    + Guarantee of consistency (the snapshot is created on PME, after rebalance).
    + Faster recovery.
    - Complexity of handling - Ignite should provide an additional tool for merging WAL and multiple snapshots created at different times.
    ? Is it possible to create a snapshot only on a single rebalanced node?
    ? Is it possible to sync the current PME (on node join) with starting the snapshot?

  4. Write a special Rebalance record to WAL with a description of the demanded partition:
    1. During restore, read this record and repeat the historical rebalance at this point; after rebalance, resume recovery with the existing WALs.
    2. In case the record describes a full rebalance - stop recovering with WAL and fall back to a full rebalance.
      ? Is it possible to rebalance only specific cache groups, and continue WAL recovery for the others?
      - For historical rebalance during recovery, separate logic is needed for extracting records from the WAL archives of other nodes.

References

  1. Ten H. Lai and Tao H. Yang, "On Distributed Snapshots", 29 May 1987.