ConsistentCut splits the timeline into 2 global areas - BEFORE and AFTER. It guarantees that every transaction committed BEFORE the cut is also committed BEFORE it on every other node. This means that an Ignite node can safely recover itself to this point without any coordination with other nodes.
The algorithm:
1. A node-initiator starts a DistributedProcess with a special message that holds the new ConsistentCutMarker.
2. Every node can receive the ConsistentCutMarker in two ways:
- by discovery (the DistributedProcess message);
- by a transaction message (Prepare, Finish) signed with the marker.
3. On receiving the ConsistentCutMarker a node:
- creates a new ConsistentCut future;
- creates the committingTxs collection (its goal is to track COMMITTING+ transactions that aren't part of IgniteTxManager#activeTx);
- writes ConsistentCutStartRecord to WAL with the received ConsistentCutMarker;
- collects active transactions from IgniteTxManager#activeTx and committingTxs.
4. While the DistributedProcess is alive every node signs output transaction messages:
- Prepare messages are signed with the ConsistentCutMarker (to trigger ConsistentCut on the remote node, if not yet);
- Finish messages are signed with the ConsistentCutMarker (to trigger ConsistentCut on the remote node, if not yet) and with the transaction ConsistentCutMarker (to notify nodes which side of the cut this transaction belongs to);
- the transaction ConsistentCutMarker is set on the node that commits first.
5. For every collected active transaction the node waits for it to finish and prepares the before and after collections: depending on the transaction ConsistentCutMarker, the transaction goes either to the before side or to the after side.
6. After all collected transactions have finished, the node writes ConsistentCutFinishRecord to WAL with these collections (before, after) and clears committingTxs.
7. The node completes the ConsistentCut future and notifies the node-initiator about finishing the local procedure (with the DistributedProcess protocol).
8. After all nodes have finished ConsistentCut, every node stops signing messages - the ConsistentCut future becomes null.

Consistent Cut is a cut that correctly finished on all baseline nodes - both ConsistentCutStartRecord and ConsistentCutFinishRecord are written.
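To make steps 3-4 above more concrete, below is a minimal sketch of how a node could react to a ConsistentCutMarker received by discovery or inside a signed transaction message. It is illustrative only: the class, method and field names are assumptions, not the actual Ignite implementation.

/** Sketch of local ConsistentCut handling (illustrative names, not actual Ignite code). */
class ConsistentCutSketch {
    /** Marker of the cut currently being processed on this node, or null if no cut is running. */
    private volatile ConsistentCutMarker currentMarker;

    /** Tracks COMMITTING+ transactions that are no longer part of IgniteTxManager#activeTx. */
    private final Set<GridCacheVersion> committingTxs = ConcurrentHashMap.newKeySet();

    /** Invoked when a marker arrives by discovery or inside a Prepare/Finish message. */
    synchronized void onMarkerReceived(ConsistentCutMarker marker) {
        if (marker.equals(currentMarker))
            return; // The cut has already started locally.

        currentMarker = marker;

        // Step 3: write ConsistentCutStartRecord with the received marker, then collect
        // active transactions from IgniteTxManager#activeTx and committingTxs and wait
        // for them to finish before writing ConsistentCutFinishRecord (omitted here).
    }

    /** Step 4: outgoing Prepare/Finish messages are signed while the local cut is running. */
    boolean signOutgoingMessages() {
        return currentMarker != null;
    }
}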
"Inconsistent" Cut is such a cut when one or more baseline nodes hasn't wrote ConsistentCutFinishRecord
. It's possible in cases:
tx.finalizationStatus
== RECOVERY_FINISH).Every ignite nodes tracks current ConsistentCutMarker
:
class ConsistentCutMarker { UUID id; }
id is just a unique ConsistentCut ID (it is assigned on the node-initiator).
Ignite transaction protocol includes multiple messages, but only some of them are meaningful for the algorithm - those that change the state of transactions (PREPARED, COMMITTED):
GridNearTxPrepareRequest / GridDhtTxPrepareRequest
GridNearTxPrepareResponse / GridDhtTxPrepareResponse
GridNearTxFinishRequest / GridDhtTxFinishRequest
Also, some messages have to be signed with the ConsistentCutMarker so that it can be checked on the primary/backup node:
GridNearTxFinishRequest / GridDhtTxFinishRequest
GridNearTxPrepareResponse / GridDhtTxPrepareResponse (for the 1PC algorithm).
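For illustration, a signed transaction message can be thought of as the original message plus the marker fields. The wrapper below is a simplified assumption, not the actual Ignite message class:

/** Simplified sketch of a transaction message signed with ConsistentCut markers (illustrative only). */
class ConsistentCutAwareMessageSketch {
    /** The original transaction message (Prepare/Finish request or response). */
    Object delegate;

    /** Latest ConsistentCutMarker known to the sender; triggers ConsistentCut on the receiver if it hasn't started yet. */
    ConsistentCutMarker marker;

    /** Set on Finish messages by the node that commits first; tells the receiver which side of the cut the transaction belongs to. */
    ConsistentCutMarker txMarker;
}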
There are 2 records: ConsistentCutStartRecord for the Start event and ConsistentCutFinishRecord for the Finish event.

ConsistentCutStartRecord: the record is written to WAL at the moment when Consistent Cut starts on a local node. It helps to limit the amount of active transactions to check. But there is no strict guarantee that all transactions belonging to the BEFORE side are physically committed before ConsistentCutStartRecord, and vice versa. This is the reason for having ConsistentCutFinishRecord.

ConsistentCutFinishRecord: the record is written to WAL after Consistent Cut has stopped analyzing transactions and storing them in a particular bucket (BEFORE or AFTER). It guarantees that the BEFORE side consists of:
1. transactions committed before ConsistentCutStartRecord and not included into ConsistentCutFinishRecord#after();
2. transactions committed between ConsistentCutStartRecord and ConsistentCutFinishRecord and included into ConsistentCutFinishRecord#before().

It guarantees that the AFTER side consists of:
1. transactions physically committed before ConsistentCutStartRecord and included into ConsistentCutFinishRecord#after();
2. transactions physically committed after ConsistentCutStartRecord and not included into ConsistentCutFinishRecord#before().
/** */
public class ConsistentCutStartRecord extends WALRecord {
    /** Marker that inits Consistent Cut. */
    private final ConsistentCutMarker marker;
}

/** */
public class ConsistentCutFinishRecord extends WALRecord {
    /** Collection of TXs committed BEFORE the ConsistentCut (sent - received). */
    private final Set<GridCacheVersion> before;

    /** Collection of TXs committed AFTER the ConsistentCut (exclude). */
    private final Set<GridCacheVersion> after;
}
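As an illustration of the guarantees above, a recovery procedure could decide whether a committed transaction belongs to the BEFORE side as follows. This is a hedged sketch with assumed names, not the actual recovery code:

/** Sketch: returns true if a committed transaction belongs to the BEFORE side of the cut (illustrative only). */
static boolean isBeforeCut(
    GridCacheVersion txVer,
    boolean committedBeforeStartRecord, // physically committed before ConsistentCutStartRecord in WAL
    Set<GridCacheVersion> before,       // ConsistentCutFinishRecord#before()
    Set<GridCacheVersion> after         // ConsistentCutFinishRecord#after()
) {
    if (committedBeforeStartRecord)
        return !after.contains(txVer); // BEFORE rule 1 / AFTER rule 1: excluded only if listed in after().

    return before.contains(txVer);     // BEFORE rule 2 / AFTER rule 2: included only if listed in before().
}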
There are some cases to handle for unstable topology.

TBD: which way to use to avoid inconsistency between data and WAL after rebalance. There are options, for example: set ConsistentCutManager#inconsistent to true, and persist this flag within the local MetaStorage.
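A hedged sketch of that option follows, with a simplified stand-in for the local MetaStorage; the key name and the interface below are assumptions, not the actual Ignite API:

/** Sketch of marking the local cut inconsistent and persisting the flag (illustrative only). */
class ConsistentCutManagerSketch {
    /** Hypothetical key under which the flag is stored. */
    private static final String INCONSISTENT_CUT_KEY = "consistent-cut-inconsistent";

    /** In-memory copy of the flag. */
    private volatile boolean inconsistent;

    /** Simplified stand-in for a persistent local metastorage. */
    interface LocalMetaStorage {
        void write(String key, java.io.Serializable val);
        java.io.Serializable read(String key);
    }

    /** Marks the current cut inconsistent (e.g. after rebalance) and persists the flag. */
    void markInconsistent(LocalMetaStorage metaStorage) {
        inconsistent = true;
        metaStorage.write(INCONSISTENT_CUT_KEY, Boolean.TRUE);
    }

    /** Restores the flag on node start. */
    void restore(LocalMetaStorage metaStorage) {
        Boolean stored = (Boolean)metaStorage.read(INCONSISTENT_CUT_KEY);
        inconsistent = stored != null && stored;
    }
}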