Paper [1] defines a distributed snapshot algorithm. It relies on several definitions; let's describe them in terms of Ignite:
- Message - a transaction message (...FinishRequest for 2PC, ...PrepareResponse for 1PC).
- Channel - a TCP communication connection from one node to another, over which Messages are sent.
- ChannelState - for a single channel, the set of Messages that were sent but not yet committed on the receiver.
- IncrementalSnapshot - on an Ignite node it is represented with 2 WAL records (ConsistentCutStartRecord commits the WAL state, ConsistentCutFinishRecord describes the ChannelState). It guarantees that every node in the cluster includes in the snapshot:
  - transactions committed before ConsistentCutStartRecord that weren't included into ConsistentCutFinishRecord#after();
  - transactions committed between ConsistentCutStartRecord and ConsistentCutFinishRecord that were included into ConsistentCutFinishRecord#before().
- Marker - a mark that piggybacks on the Message and notifies a node about a running snapshot. After IS start and before finish, all PrepareRequest and FinishRequest messages are wrapped in a ConsistentCutMarkerMessage instead of a regular Message. This is done to notify the target node via the communication channel about the running IS.
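To make the Marker definition concrete, here is a minimal sketch of such a wrapper message, assuming a simple delegate field; the field names and the Serializable delegate type are assumptions, not the actual ConsistentCutMarkerMessage from Ignite internals:

import java.io.Serializable;

/** Sketch only: a transaction message wrapped together with the snapshot marker. */
public class ConsistentCutMarkerMessage implements Serializable {
    /** The original transaction message (e.g. a FinishRequest). */
    private final Serializable delegate;

    /** Marker of the Incremental Snapshot running on the sender. */
    private final ConsistentCutMarker marker;

    public ConsistentCutMarkerMessage(Serializable delegate, ConsistentCutMarker marker) {
        this.delegate = delegate;
        this.marker = marker;
    }

    public Serializable delegate() { return delegate; }

    public ConsistentCutMarker marker() { return marker; }
}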
In terms of Ignite, there are additional definitions:
- Consistent Cut - a successful attempt at creating an IncrementalSnapshot.
- Inconsistent Cut - a failed attempt at creating an IncrementalSnapshot, due to the inability to correctly describe a ChannelState.

Note that ConsistentCut can't guarantee that a specific transaction running concurrently with the algorithm will land before or after the cut; it only guarantees that the set of transactions before (or after) the cut is the same on each node in the cluster.
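This guarantee can be phrased as a per-transaction property: every transaction lands on the same side of the cut on every node that observed it. A hedged sketch of such a check (a hypothetical test helper, not part of Ignite; UUIDs stand in for node and transaction IDs):

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

/** Hypothetical test helper expressing the set-equality guarantee. */
public class CutConsistencyCheck {
    enum Side { BEFORE, AFTER }

    /**
     * Asserts that every transaction landed on the same side of the cut on
     * every node that observed it (sideByNode: nodeId -> txId -> side).
     */
    static void assertCutIsConsistent(Map<UUID, Map<UUID, Side>> sideByNode) {
        Map<UUID, Side> expected = new HashMap<>();

        for (Map<UUID, Side> nodeSides : sideByNode.values()) {
            for (Map.Entry<UUID, Side> e : nodeSides.entrySet()) {
                Side prev = expected.putIfAbsent(e.getKey(), e.getValue());

                if (prev != null && prev != e.getValue())
                    throw new AssertionError("Tx " + e.getKey() + " crossed the cut");
            }
        }
    }
}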
For the Ignite implementation it's proposed to use a single node to coordinate the algorithm. The user starts a command for creating a new incremental snapshot on that node:

1. Initial state on every node: the ConsistentCut future equals null, and there is an empty collection committingTxs whose goal is to track COMMITTING+ transactions that aren't part of IgniteTxManager#activeTx.
2. The initiator starts a DistributedProcess with a special message that holds the new ConsistentCutMarker.
3. On receiving the marker, every node creates a new ConsistentCut future, creates committingTxs, and starts signing outgoing messages with the ConsistentCutMarker.
4. Every node writes ConsistentCutStartRecord to WAL with the received ConsistentCutMarker.
5. Every node collects active transactions from IgniteTxManager#activeTx and committingTxs (the ChannelState).
6. While ConsistentCut is running, every node signs outgoing transaction messages:
   - Prepare messages are signed with the ConsistentCutMarker (to trigger ConsistentCut on the remote node, if not yet running).
   - Finish messages are signed with the ConsistentCutMarker (to trigger...) and with the transaction's ConsistentCutMarker (to notify nodes which side of the cut this transaction belongs to).
7. For every collected transaction the node checks the received transaction ConsistentCutMarker and fills the before and after collections: a transaction whose marker differs from the local cut marker (or is absent) goes to the before side; a transaction whose marker equals the local one goes to the after side (a sketch of this decision follows below).
8. After every collected transaction has finished, the node writes ConsistentCutFinishRecord into WAL with the ChannelState (before, after) and clears committingTxs.
9. The node completes the ConsistentCut future and notifies the node-initiator about finishing the local procedure (with the DistributedProcess protocol).
10. After all nodes finish the ConsistentCut, every node stops signing outgoing transaction messages - the ConsistentCut future becomes null again.

A Consistent Cut, in terms of the Ignite implementation, is a cut that correctly finished on all baseline nodes - ConsistentCutStartRecord and ConsistentCutFinishRecord are both written.
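For illustration, a hedged sketch of the side decision from step 7; the class and method names are made up (UUIDs stand in for transaction versions), not Ignite internals:

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

/**
 * Illustrative sketch: the transaction marker extracted from the Finish
 * message is compared with the marker of the locally running cut.
 */
public class CutSideDecider {
    private final ConsistentCutMarker localMarker;

    /** Transactions committed BEFORE / AFTER the cut. */
    private final Set<UUID> before = new HashSet<>();
    private final Set<UUID> after = new HashSet<>();

    public CutSideDecider(ConsistentCutMarker localMarker) {
        this.localMarker = localMarker;
    }

    /** @param txMarker Marker extracted from the Finish message; may be null. */
    public void onTxFinished(UUID txId, ConsistentCutMarker txMarker) {
        // A transaction signed with the current cut marker was committed after
        // the cut started on the node that decided its side; an absent or
        // different marker means the transaction belongs to the BEFORE side.
        if (localMarker.equals(txMarker))
            after.add(txId);
        else
            before.add(txId);
    }
}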
"Inconsistent" Cut is such a cut when one or more baseline nodes hasn't wrote ConsistentCutFinishRecord
. It's possible in cases:
tx.finalizationStatus
== RECOVERY_FINISH).Every ignite nodes tracks current ConsistentCutMarker
:
class ConsistentCutMarker {
    UUID id;
}
id is just a unique ConsistentCut ID (it is assigned on the initiator node).
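Fleshed out slightly (constructor and equality; still a sketch, not the actual Ignite class), since the algorithm compares markers to decide which side of the cut a transaction belongs to:

import java.io.Serializable;
import java.util.Objects;
import java.util.UUID;

/** Sketch only: equality over id is what lets nodes compare markers. */
public class ConsistentCutMarker implements Serializable {
    /** Unique ID of the ConsistentCut, assigned on the initiator node. */
    private final UUID id;

    public ConsistentCutMarker(UUID id) {
        this.id = Objects.requireNonNull(id);
    }

    @Override public boolean equals(Object o) {
        return o instanceof ConsistentCutMarker && id.equals(((ConsistentCutMarker)o).id);
    }

    @Override public int hashCode() {
        return id.hashCode();
    }
}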
The Ignite transaction protocol includes multiple messages, but only some of them are meaningful for the algorithm - those that change the state of transactions (PREPARED, COMMITTED):
GridNearTxPrepareRequest / GridDhtTxPrepareRequest
GridNearTxPrepareResponse / GridDhtTxPrepareResponse
GridNearTxFinishRequest / GridDhtTxFinishRequest
Also, some messages have to be signed with the ConsistentCutMarker so that it can be checked on the primary/backup node:
GridNearTxFinishRequest / GridDhtTxFinishRequest
GridNearTxPrepareResponse / GridDhtTxPrepareResponse
(for the 1PC algorithm).

There are 2 records: ConsistentCutStartRecord for the Start event and ConsistentCutFinishRecord for the Finish event.
ConsistentCutStartRecord: this record is written to WAL at the moment when Consistent Cut starts on a local node. It helps to limit the amount of active transactions to check. But there is no strict guarantee that all transactions belonging to the BEFORE side are physically committed before ConsistentCutStartRecord, and vice versa. This is the reason for having ConsistentCutFinishRecord.

ConsistentCutFinishRecord: this record is written to WAL after Consistent Cut has stopped analyzing transactions and storing them in a particular bucket (BEFORE or AFTER). It guarantees that the BEFORE side consists of:
1. transactions committed before ConsistentCutStartRecord that weren't included into ConsistentCutFinishRecord#after();
2. transactions committed between ConsistentCutStartRecord and ConsistentCutFinishRecord that were included into ConsistentCutFinishRecord#before().

It guarantees that the AFTER side consists of:

1. transactions physically committed before ConsistentCutStartRecord that were included into ConsistentCutFinishRecord#after();
2. transactions physically committed after ConsistentCutStartRecord that weren't included into ConsistentCutFinishRecord#before().
/** */
public class ConsistentCutStartRecord extends WALRecord {
    /** Marker that inits Consistent Cut. */
    private final ConsistentCutMarker marker;
}

/** */
public class ConsistentCutFinishRecord extends WALRecord {
    /**
     * Collections of TXs committed BEFORE the ConsistentCut (sent - received).
     */
    private final Set<GridCacheVersion> before;

    /**
     * Collections of TXs committed AFTER the ConsistentCut (exclude).
     */
    private final Set<GridCacheVersion> after;
}
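To illustrate how these two records would be consumed, here is a hedged sketch of deciding during WAL replay whether a committed transaction belongs to the BEFORE side. The helper itself is an assumption; before()/after() mirror the accessors referenced above:

// Illustrative only: a replay-time filter built from the guarantees above.
// 'committedBeforeStartRecord' means the tx's commit record appears in the
// WAL earlier than ConsistentCutStartRecord.
static boolean onBeforeSide(GridCacheVersion txVer,
                            boolean committedBeforeStartRecord,
                            ConsistentCutFinishRecord finish) {
    if (committedBeforeStartRecord)
        // BEFORE guarantee (1): committed before the start record and
        // not excluded via after().
        return !finish.after().contains(txVer);

    // BEFORE guarantee (2): committed between the two records and
    // explicitly included via before().
    return finish.before().contains(txVer);
}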
There are some cases to handle for unstable topology:
TBD: which way to use to avoid inconsistency between data and WAL after rebalance. There are options:
- set ConsistentCutManager#inconsistent to true, and persist this flag within the local MetaStorage.
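A hedged sketch of this flagging option; the metastorage key and the write API shape are assumptions (a stand-in interface), not the actual Ignite metastorage API:

/** Illustrative sketch of marking the local cut inconsistent and persisting the flag. */
public class ConsistentCutManager {
    /** Hypothetical key under which the flag would be persisted. */
    private static final String INCONSISTENT_KEY = "consistent-cut-inconsistent";

    /** Volatile so concurrent cut threads observe the flag immediately. */
    private volatile boolean inconsistent;

    private final MetaStorage metaStorage;

    public ConsistentCutManager(MetaStorage metaStorage) {
        this.metaStorage = metaStorage;
    }

    /** Called when the local node can't correctly describe the ChannelState. */
    public void markInconsistent() {
        inconsistent = true;

        // Persisting the flag lets the node remember across restarts that the
        // last incremental snapshot attempt must not be used for recovery.
        metaStorage.write(INCONSISTENT_KEY, Boolean.TRUE);
    }

    /** Stand-in for the local metastorage. */
    public interface MetaStorage {
        void write(String key, Object value);
    }
}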