Paper [1] defines ConsistentCut, a distributed snapshot algorithm. It uses some definitions; let's describe them in terms of Ignite:

* ConsistentCut - an algorithm that splits the WAL into 2 global areas - Before and After. It guarantees that every transaction committed Before is also committed Before on every other node that participated in the transaction. It means that Ignite nodes can safely recover themselves to the consistent Before state without any coordination with each other. The border between the Before and After areas consists of two WAL records - ConsistentCutStartRecord and ConsistentCutFinishRecord.
* Message - a transaction message (...FinishRequest for 2PC, ...PrepareResponse for 1PC).
* Channel - a TCP communication connection from one node to another, over which Messages are sent.
* ChannelState - for a single channel, the set of Messages that were sent but not yet committed on the receiver.
* IncrementalSnapshot - on an Ignite node it is represented by 2 WAL records (ConsistentCutStartRecord commits the WAL state, ConsistentCutFinishRecord describes the ChannelState). It guarantees that every node in the cluster includes in the snapshot:
  * transactions committed before ConsistentCutStartRecord that weren't included into ConsistentCutFinishRecord#after();
  * transactions committed between ConsistentCutStartRecord and ConsistentCutFinishRecord that were included into ConsistentCutFinishRecord#before().
* Marker - a mark that piggybacks on a Message and notifies a node about a running snapshot. After IS start and before finish, all PrepareRequest and FinishRequest messages are wrapped in ConsistentCutMarkerMessage instead of the regular Message. This is done to notify the target node via the communication channel about the running IS.
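The central guarantee above can be illustrated with a toy model (hypothetical names, not Ignite code): a cut is consistent iff every transaction lands on the same side of the cut on every node that participated in it.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the ConsistentCut invariant (hypothetical names, not Ignite code).
public class CutInvariant {
    /**
     * Each node reports, per transaction, whether it committed Before the cut.
     * The cut is consistent iff every transaction is on the same side on all nodes.
     */
    public static boolean isConsistent(List<Map<String, Boolean>> nodes) {
        Map<String, Boolean> side = new HashMap<>();
        for (Map<String, Boolean> node : nodes) {
            for (Map.Entry<String, Boolean> e : node.entrySet()) {
                Boolean prev = side.putIfAbsent(e.getKey(), e.getValue());
                if (prev != null && !prev.equals(e.getValue()))
                    return false; // Same tx is Before on one node and After on another.
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Boolean> node1 = Map.of("txA", true, "txB", false);
        Map<String, Boolean> node2 = Map.of("txA", true, "txB", false);
        Map<String, Boolean> node3 = Map.of("txA", false, "txB", false); // txA crossed the cut.

        System.out.println(isConsistent(List.of(node1, node2))); // prints true
        System.out.println(isConsistent(List.of(node1, node3))); // prints false
    }
}
```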
In terms of Ignite, there are additional definitions:

* Consistent Cut - a successful attempt at creating an IncrementalSnapshot.
* Inconsistent Cut - a failed attempt at creating an IncrementalSnapshot, due to inability to correctly describe a ChannelState.

Note that ConsistentCut can't guarantee which side of the cut a specific transaction that runs concurrently with the algorithm will land on; it only guarantees that the set of transactions before (or after) the cut is the same on each node in the cluster.
...
In the picture below, the Before area consists of the transactions colored yellow, while the After area is green.
```java
/** */
public class ConsistentCutStartRecord extends WALRecord {
    /** Consistent Cut ID. */
    private final UUID cutId;
}

/** */
public class ConsistentCutFinishRecord extends WALRecord {
    /** Consistent Cut ID. */
    private final UUID cutId;

    /** Collection of transactions committed BEFORE. */
    private final Set<GridCacheVersion> before;

    /** Collection of transactions committed AFTER. */
    private final Set<GridCacheVersion> after;
}
```
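A sketch of how the two sets in ConsistentCutFinishRecord could drive recovery, following the IncrementalSnapshot guarantee from the definitions above (hypothetical helper names, not the actual Ignite recovery code):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Selects transactions for an incremental snapshot from a WAL segment, per the guarantee:
// txs committed before ConsistentCutStartRecord that are NOT in finish#after(), plus txs
// committed between Start and Finish that ARE in finish#before(). Hypothetical structure.
public class SnapshotFilter {
    public static Set<String> snapshotTxs(List<String> committedBeforeStart,
                                          List<String> committedBetweenStartAndFinish,
                                          Set<String> before, Set<String> after) {
        Set<String> res = new HashSet<>();

        for (String tx : committedBeforeStart)
            if (!after.contains(tx))
                res.add(tx); // Committed before the Start record and not pushed to After.

        for (String tx : committedBetweenStartAndFinish)
            if (before.contains(tx))
                res.add(tx); // Committed after the Start record but assigned to Before.

        return res;
    }

    public static void main(String[] args) {
        Set<String> res = snapshotTxs(
            List.of("tx1", "tx2"),     // physically committed before the Start record
            List.of("tx3", "tx4"),     // physically committed between Start and Finish
            Set.of("tx3"),             // finish#before()
            Set.of("tx2", "tx4"));     // finish#after()

        System.out.println(res);       // tx1 and tx3 land in the snapshot
    }
}
```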
The picture below illustrates the steps of the algorithm on a single node:

1. Initial state - no ConsistentCut is running:
   * lastFinishedCutId holds the previous ConsistentCutId, or null;
   * the ConsistentCut future equals null.
2. The initiator node starts a DistributedProcess with a SnapshotOperationRequest message that holds the new ConsistentCutId (the goal is to notify every node in the cluster about the running incremental snapshot).
3. A node learns the ConsistentCutId from one of two events (whichever happens first):
   * receiving SnapshotOperationRequest#ConsistentCutId by DiscoverySPI (by the DistributedProcess);
   * receiving ConsistentCutAwareMessage#ConsistentCutId by CommunicationSPI (by transaction messages - Prepare, Finish).
4. On receiving the ConsistentCutId, the node starts a local ConsistentCut:
   * checks whether ConsistentCut has already started (ConsistentCut != null) or finished (lastFinishedCutId == id) for this id, and skips if it has;
   * creates the ConsistentCut future and creates committingTxs, whose goal is to track COMMITTING+ transactions that aren't part of IgniteTxManager#activeTx. Unlike IgniteTxManager#activeTx, this collection doesn't remove transactions: while ConsistentCut != null, a transaction is added to it right before it is removed from IgniteTxManager#activeTx;
   * writes ConsistentCutStartRecord to WAL with the received ConsistentCutId;
   * collects active transactions - the union of IgniteTxManager#activeTx and committingTxs - and sets listeners on their tx#finishFuture. Transactions with tx#status == ACTIVE are skipped: it's guaranteed that such transactions belong to the After side.
5. While the global ConsistentCut is running (ConsistentCut != null), every node wraps outgoing transaction messages into ConsistentCutAwareMessage, which contains:
   * the ConsistentCutId (to trigger ConsistentCut on the remote node, if it hasn't started there yet);
   * the transaction txCutId (to notify nodes which side of the cut this transaction belongs to). The ConsistentCutAwareMessage that makes a transaction committed (FinishRequest for 2PC, PrepareResponse for 1PC) sets tx#txCutId = message#txCutId.
6. For every listened transaction, on completion of tx#finishFuture the node checks tx#txCutId and fills the before and after collections:
   * if tx#txCutId equals null, the transaction started committing Before the Consistent Cut started - it goes to the before side;
   * if tx#txCutId equals the local ConsistentCutId, the transaction goes to the after side, otherwise to the before side.
7. After every listened transaction has finished, the node writes ConsistentCutFinishRecord into WAL with the ChannelState (the before and after collections).
8. Clean-up: committingTxs is set to null, lastFinishedCutId is set, and the ConsistentCut future completes and becomes null. Transactions concurrently added to committingTxs after the check are not a concern: they just don't land in the before or after set and will be excluded from recovery.

Consistent Cut, in terms of the Ignite implementation, is a cut that correctly finished on all baseline nodes - ConsistentCutStartRecord and ConsistentCutFinishRecord are written.
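The start/skip check from the steps above can be sketched as follows (a hypothetical, simplified class; the real implementation operates on the ConsistentCut future and WAL records):

```java
import java.util.UUID;

// Sketch of the start/skip check: a node starts a local cut only if it is neither
// currently running nor already finished for this id. Hypothetical, simplified.
public class CutIdHandler {
    private volatile UUID runningCutId;      // stands in for the ConsistentCut future
    private volatile UUID lastFinishedCutId;

    /** Returns true if a new local ConsistentCut was started for the given id. */
    public synchronized boolean handleConsistentCutId(UUID id) {
        if (id.equals(runningCutId) || id.equals(lastFinishedCutId))
            return false; // Already started or finished for this id - skip.

        runningCutId = id;
        // ... write ConsistentCutStartRecord, create committingTxs, set tx listeners ...
        return true;
    }

    /** Completes the running cut: the id is remembered, the "future" becomes null. */
    public synchronized void finish() {
        lastFinishedCutId = runningCutId;
        runningCutId = null;
    }

    public static void main(String[] args) {
        CutIdHandler h = new CutIdHandler();
        UUID id = UUID.randomUUID();
        System.out.println(h.handleConsistentCutId(id)); // prints true (cut starts)
        System.out.println(h.handleConsistentCutId(id)); // prints false (already running)
        h.finish();
        System.out.println(h.handleConsistentCutId(id)); // prints false (already finished)
    }
}
```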
...
tx.finalizationStatus == RECOVERY_FINISH).

Every Ignite node tracks the current ConsistentCutMarker:

```java
class ConsistentCutMarker {
    UUID id;
}
```

id is just a unique ConsistentCut ID (it is assigned on the initiator node).
...
The Ignite transaction protocol includes multiple messages, but only some of them are meaningful for the algorithm - those that change the state of transactions (PREPARED, COMMITTED):

* GridNearTxPrepareRequest / GridDhtTxPrepareRequest
* GridNearTxPrepareResponse / GridDhtTxPrepareResponse
* GridNearTxFinishRequest / GridDhtTxFinishRequest

Those messages are wrapped in ConsistentCutAwareMessage, which is prepared right before sending the message to another node. They use the current ConsistentCutId.
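The wrapping step can be sketched like this (TxMessage and CutAwareMessage are hypothetical minimal stand-ins; the real ConsistentCutAwareMessage also carries txCutId and topVer):

```java
import java.util.UUID;

// Sketch of wrapping a transaction message right before sending, while a cut is running.
// Hypothetical stand-in types, not the actual Ignite classes.
public class MessageWrapping {
    public record TxMessage(String type) {}

    public record CutAwareMessage(TxMessage msg, UUID cutId) {}

    /** Wraps the message only if a ConsistentCut is currently running on the sender. */
    public static Object maybeWrap(TxMessage msg, UUID currentCutId) {
        return currentCutId == null ? msg : new CutAwareMessage(msg, currentCutId);
    }

    public static void main(String[] args) {
        TxMessage msg = new TxMessage("GridNearTxFinishRequest");
        System.out.println(maybeWrap(msg, null));              // no cut -> sent as-is
        System.out.println(maybeWrap(msg, UUID.randomUUID())); // cut running -> wrapped
    }
}
```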
Also, some messages must be signed with the ConsistentCutMarker combined with an additional ConsistentCutId, so it can be checked on the primary/backup node:

* GridNearTxFinishRequest / GridDhtTxFinishRequest
* GridNearTxPrepareResponse / GridDhtTxPrepareResponse (for the 1PC algorithm)

There are 2 records: ConsistentCutStartRecord for the Start event and ConsistentCutFinishRecord for the Finish event.
* ConsistentCutStartRecord: the record is written to WAL at the moment when CC starts on a local node. It helps to limit the amount of active transactions to check. But there is no strict guarantee that all transactions belonging to the BEFORE side are physically committed before ConsistentCutStartRecord, and vice versa. This is the reason for having ConsistentCutFinishRecord.
* ConsistentCutFinishRecord: this record is written to WAL after Consistent Cut has stopped analyzing transactions and storing them in a particular bucket (BEFORE or AFTER).

...
Those messages are filled with a txCutId that is set right before the transaction starts committing, on the first committing node. The current ConsistentCutId is used for this setting. If the current ConsistentCutId is not null, then the transaction starts committing after ConsistentCut started, which means this transaction belongs to the After side.
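This rule can be sketched as follows (hypothetical names; the real code sets the id on the transaction object):

```java
import java.util.UUID;

// Minimal sketch of the txCutId rule: the id is snapshotted from the node's current
// ConsistentCutId when the transaction starts committing; null means the Before side,
// non-null means the After side of that cut. Hypothetical names.
public class TxCutIdRule {
    /** Snapshot the current cut id at the moment the transaction starts committing. */
    public static UUID assignTxCutId(UUID currentCutId) {
        return currentCutId;
    }

    /** Which side of the cut the transaction belongs to, given its txCutId. */
    public static String side(UUID txCutId) {
        return txCutId == null ? "BEFORE" : "AFTER";
    }

    public static void main(String[] args) {
        System.out.println(side(assignTxCutId(null)));              // no cut running -> BEFORE
        System.out.println(side(assignTxCutId(UUID.randomUUID()))); // cut running -> AFTER
    }
}
```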
```java
class ConsistentCutAwareMessage {
    /** Original transaction message. */
    Message msg;

    /** Consistent Cut ID. */
    UUID cutId;

    /** Consistent Cut ID after which transaction committed. */
    @Nullable UUID txCutId;

    /** Cluster topology version on which Consistent Cut started. */
    long topVer;
}
```
A new field is added to IgniteInternalTx:

```java
class IgniteInternalTx {
    /**
     * @param id ID of {@link ConsistentCut} AFTER which this transaction was committed,
     *           {@code null} if the transaction committed BEFORE.
     */
    public void cutId(@Nullable UUID id);
}
```

There are some cases to handle for unstable topology:

TBD: Which ways to use to avoid inconsistency between data and WAL after rebalance. There are options:

...
```java
// Class is responsible for managing all stuff related to Consistent Cut.
// It's an entrypoint for transaction threads to check a running consistent cut.
class ConsistentCutManager extends GridCacheSharedManagerAdapter {
    /** Current Consistent Cut. All transaction threads wrap outgoing messages if this field is not null. */
    volatile @Nullable ConsistentCut cut;

    /** Entrypoint for handling a received new Consistent Cut ID. */
    void handleConsistentCutId(UUID id);
}
```
```java
class ConsistentCut extends GridFutureAdapter<WALPointer> {
    Set<GridCacheVersion> beforeCut;

    Set<GridCacheVersion> afterCut;

    Set<IgniteInternalFuture<IgniteInternalTx>> removedActive;
}
```
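A sketch of the final classification step that fills beforeCut and afterCut once all listened transactions finish (hypothetical, simplified; the real code operates on Ignite tx futures):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Sketch of splitting finished transactions into "before"/"after" sets relative to the
// local cut id, mirroring the beforeCut/afterCut fields above. Hypothetical types.
public class CutCompletion {
    public record Tx(String id, UUID txCutId) {}

    /** Splits finished transactions into "before" and "after" sets for the local cut. */
    public static Map<String, Set<String>> split(List<Tx> finished, UUID localCutId) {
        Set<String> before = new HashSet<>(), after = new HashSet<>();

        for (Tx tx : finished) {
            // txCutId equal to the local cut id -> After side; null or a different id -> Before.
            if (localCutId.equals(tx.txCutId()))
                after.add(tx.id());
            else
                before.add(tx.id());
        }

        return Map.of("before", before, "after", after);
    }

    public static void main(String[] args) {
        UUID cut = UUID.randomUUID();
        List<Tx> txs = List.of(new Tx("t1", null), new Tx("t2", cut));
        System.out.println(split(txs, cut)); // t1 goes before the cut, t2 after
    }
}
```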