
Problem Description And Test Case

In the Ignite 1.x implementation, reads performed outside of a transaction (such as getAll() or SQL SELECT) do not respect transaction boundaries. The problem is two-fold. First, local node transaction visibility is not atomic with respect to a multi-entry read: a committed entry version becomes visible immediately after the entry is updated. Second, there is no visible-version coordination when a read involves multiple nodes. Thus, even if local transaction visibility is made atomic, the issue is not solved.

The problem can be easily described using a test case. Let's say we have a bank system with a fixed number of accounts and we continuously run random money transfers between random pairs of accounts. In this case the sum of account balances is a system invariant and must be the same for any getAll() or SQL query.

General Approach Overview

The main idea is that every node should store not only the current (last) entry value, but also some number of previous values, in order to allow consistent distributed reads. To do this, we introduce a separate node role - the transaction version coordinator - which is responsible for assigning a monotonically growing transaction version as well as tracking the versions of in-progress transactions and in-progress reads. The last committed transaction ID and the IDs of pending transactions define the versions that should be visible to any subsequent read. The IDs of pending reads define the value versions that are no longer needed and can be discarded.
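The visibility rule above can be sketched as follows. This is a minimal illustration only; the class and field names are assumptions, not Ignite's actual API:

```java
import java.util.Set;

/** Minimal sketch of snapshot-based visibility; names are illustrative, not Ignite's API. */
public class MvccSnapshotSketch {
    final long lastCommitted;   // highest committed tx version at snapshot time
    final Set<Long> activeTxs;  // versions of transactions still in progress at snapshot time

    MvccSnapshotSketch(long lastCommitted, Set<Long> activeTxs) {
        this.lastCommitted = lastCommitted;
        this.activeTxs = activeTxs;
    }

    /** A row version is visible iff its tx committed before the snapshot was taken. */
    boolean isVisible(long rowVer) {
        return rowVer <= lastCommitted && !activeTxs.contains(rowVer);
    }
}
```

A reader created with such a snapshot sees the same committed state for its whole duration, regardless of concurrent commits.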

Version Coordinator(s)

In the initial version of distributed MVCC we will use a single transaction coordinator that defines the global transaction order for all transactions in the cluster. The coordinator may be a dedicated node in the cluster. Upon version coordinator failure a new coordinator is elected in such a way that it starts assigning versions (tx XIDs as well) that are guaranteed to be greater than all previously assigned transaction versions. This is achieved by using two longs: a coordinator version, which is the topology major version, and a counter starting from zero. To be able to restore all tx states on cluster restart or coordinator failure, a special structure (TxLog) is introduced. TxLog is a table (persistent when persistence is enabled) which maps each XID to its transaction state (active, preparing, committed, aborted, etc.).
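The two-long version described above can be sketched as a comparable pair; the class and field names are illustrative assumptions:

```java
/**
 * Sketch of the two-long MVCC version: a coordinator version (topology major
 * version at coordinator election) plus a counter that restarts from zero
 * under each new coordinator. Names are illustrative, not Ignite's API.
 */
public class MvccVersionSketch implements Comparable<MvccVersionSketch> {
    final long crdVer; // coordinator version: topology major version at election time
    final long cntr;   // monotonically growing counter assigned by the coordinator

    MvccVersionSketch(long crdVer, long cntr) {
        this.crdVer = crdVer;
        this.cntr = cntr;
    }

    /** Versions issued by a newer coordinator always compare greater, so monotonicity survives failover. */
    @Override public int compareTo(MvccVersionSketch o) {
        int cmp = Long.compare(crdVer, o.crdVer);
        return cmp != 0 ? cmp : Long.compare(cntr, o.cntr);
    }
}
```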

Only the MVCC coordinator has the whole table; other nodes keep the TxLog subset related to node-local transactions.

On MVCC coordinator failure the new coordinator collects and merges the TxLog subsets from all other nodes before it starts serving requests. During this time MVCC counters can be neither assigned nor acknowledged, so all new and committing transactions wait until the operation is completed.

Internal Data Structures Changes

The BTree leaf structure is changed as follows:

|           key part          |       |         |        |
|-----------------------------|  xid  |  flags  |  link  |
| cache_id | hash | ver | cid |       |         |        |

 

cache_id - cache ID if the cache is part of a cache group
hash - key hash
ver - XID of the transaction that created the row
xid - XID of the transaction that holds a lock on the row
cid - operation counter: the number of the operation within the transaction that changed this row
flags - allow a fast check of whether the row is visible
link - link to the data

Rows with the same key are placed from newest to oldest.
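The ordering can be expressed as a comparator: key fields first, then version descending so that the newest version of a key is encountered first. The class and field names below are illustrative assumptions:

```java
import java.util.Comparator;

/** Sketch of the leaf ordering above: same-key rows sorted by 'ver' descending (newest first). */
public class LeafRowOrderSketch {
    static class Row {
        final int cacheId; final int hash; final long ver;
        Row(int cacheId, int hash, long ver) { this.cacheId = cacheId; this.hash = hash; this.ver = ver; }
    }

    // Key part first (cache_id, hash), then version descending within the same key.
    static final Comparator<Row> LEAF_ORDER = Comparator
        .comparingInt((Row r) -> r.cacheId)
        .thenComparingInt(r -> r.hash)
        .thenComparing(Comparator.comparingLong((Row r) -> r.ver).reversed());
}
```

This ordering is what lets a scan stop at the first visible version of a key and skip the rest.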

The index BTree leaf structure is changed as follows:

|     key part     |       |
|------------------| flags |
| link | ver | cid |       |

 

link - link to the data
ver - XID of the transaction that created the row
cid - operation counter: the number of the operation within the transaction that changed this row
flags - allow a fast check of whether the row is visible

The data row payload structure is changed as follows:

|              |         |         |           |          |           |             |             |             |
| payload size | xid_min | xid_max | next_link | cache_id | key_bytes | value_bytes | row_version | expire_time |
|              |         |         |           |          |           |             |             |             |

 

xid_min - XID of the transaction that created this row.
xid_max - XID of the transaction that updated this row, or NA if this is the last row version (used during secondary index scans).

The other fields are self-explanatory.

Locks

During DML or SELECT FOR UPDATE, a tx acquires locks one by one.

If a row is locked by another tx, the current tx saves its context (the cursor and the current position in it) and registers itself as a tx state listener. As soon as the previous tx is committed or rolled back, it fires an event, which means that all locks acquired by that tx have been released. The tx waiting on the locked row is then notified and continues acquiring locks.

TxLog is used to determine the lock state: if the tx whose XID equals the row's 'xid' field (see the BTree leaf structure) is active, the row is locked by that tx. All newly created rows have the 'xid' field equal to the 'ver' field. Since, as described above, rows with the same key are placed from newest to oldest, the lock state can be determined by checking only the first (newest) version of the row.
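The lock check above reduces to a single TxLog lookup. A minimal sketch, where the TxState enum and the TxLog map are assumed helpers rather than Ignite API:

```java
import java.util.Map;

/** Sketch of the lock check above; TxState and the TxLog map are assumed helpers, not Ignite API. */
public class LockCheckSketch {
    enum TxState { ACTIVE, PREPARING, PREPARED, COMMITTED, ABORTED }

    /**
     * The newest version of a row carries the locking tx in its 'xid' field;
     * the row is locked iff that tx is still ACTIVE according to TxLog.
     */
    static boolean isLocked(long rowXid, Map<Long, TxState> txLog) {
        return txLog.get(rowXid) == TxState.ACTIVE;
    }
}
```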

Transactional Protocol Changes

Commit Protocol Changes

Commit

  1. When a tx is started, a new version is assigned and the MVCC coordinator adds a local TxLog record with the XID and the ACTIVE flag.
  2. The first change request to a data node within the transaction produces a local TxLog record with the XID and the ACTIVE flag at that data node.
  3. At the commit stage each tx node adds a local TxLog record with the XID and the PREPARED flag and sends an acknowledgement to the TX coordinator.
  4. The TX coordinator sends a tx committed message to the MVCC coordinator node.
  5. The MVCC coordinator adds a TxLog record with the XID and the COMMITTED flag; all the changes become visible.
  6. The MVCC coordinator sends a commit acknowledged message to the participants; all tx data nodes mark the tx as COMMITTED and all resources are released.

Note: since the commit acknowledgement is processed asynchronously, a tx which is not active in the tx snapshot but is in the PREPARED state in the local TxLog (when encountered during a read operation) is considered committed and is marked as COMMITTED.
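The note above can be expressed as a small state-resolution function; the class and enum below are illustrative assumptions:

```java
/** Sketch of the note above: resolving the effective state of a locally PREPARED tx. */
public class PreparedStateSketch {
    enum TxState { ACTIVE, PREPARED, COMMITTED, ABORTED }

    /**
     * The commit acknowledgement is asynchronous, so a tx that is PREPARED in the
     * local TxLog but no longer active in the tx snapshot has in fact committed;
     * a reader may treat it (and mark it) as COMMITTED.
     */
    static TxState effectiveState(TxState local, boolean activeInSnapshot) {
        if (local == TxState.PREPARED && !activeInSnapshot)
            return TxState.COMMITTED;
        return local;
    }
}
```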

An error during commit

  1. When a tx is started, a new version is assigned and the MVCC coordinator adds a local TxLog record with the XID and the ACTIVE flag.
  2. The first change request to a data node within the transaction produces a local TxLog record with the XID and the ACTIVE flag at that data node.
  3. At the commit stage each tx node adds a local TxLog record with the XID and the PREPARED flag and sends an acknowledgement to the TX coordinator.
  4. If at least one participant does not confirm the commit, the TX coordinator sends a rollback message to each participant.
  5. Each tx node adds a local TxLog record with the XID and the ABORTED flag and sends an acknowledgement to the TX coordinator.
  6. The TX coordinator sends a tx rolled back message to the MVCC coordinator node.
  7. The MVCC coordinator adds a TxLog record with the XID and the ABORTED flag.
  8. The MVCC coordinator sends a rollback acknowledged message to the participants; all resources are released.

Rollback

  1. When a tx is started, a new version is assigned and the MVCC coordinator adds a local TxLog record with the XID and the ACTIVE flag.
  2. The first change request to a data node within the transaction produces a local TxLog record with the XID and the ACTIVE flag at that data node.
  3. At the rollback stage each tx node adds a local TxLog record with the XID and the ABORTED flag and sends an acknowledgement to the TX coordinator.
  4. The TX coordinator sends a tx rolled back message to the MVCC coordinator node.
  5. The MVCC coordinator adds a TxLog record with the XID and the ABORTED flag.
  6. The MVCC coordinator sends a rollback acknowledged message to the participants; all resources are released.

Recovery Protocol Changes

There are several participant roles:

  • MVCC coordinator
  • TX coordinator
  • Primary data node
  • Backup datanode

Each participant may have several roles at the same time.

The recovery steps for each type of participant are:

On MVCC coordinator failure:

  1. A new coordinator is elected (the oldest server node; additional filters may apply).
  2. During the exchange each node sends its TxLog.
  3. The new coordinator merges all the TxLog chunks and checks the local state of each tx. If data nodes have conflicting states, the following rules are used:
    1. If at least one node has the tx in the ABORTED state, a tx rollback message is sent to all data nodes and the whole tx is marked as ABORTED.
    2. If at least one node has the tx in the COMMITTED state, the whole tx is marked as COMMITTED and a commit acknowledged message is sent to all data nodes.
    3. If all data nodes have the tx in the PREPARED state, the whole tx is marked as COMMITTED and a commit acknowledged message is sent to all data nodes.
    4. A tx cannot be in the COMMITTED and ABORTED states at the same time on different nodes. If a node cannot mark a PREPARED tx as COMMITTED, that node has to be forcibly stopped.
  4. After the merge is done, the coordinator starts processing version requests.
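The conflict-resolution rules can be sketched as a merge function. The rule numbers in the comments match the list above; the fallback for states not covered by the rules (e.g. a node still ACTIVE) is an assumption on my part:

```java
import java.util.Collection;

/** Sketch of the TxLog merge rules above; rule numbers in comments match the list. */
public class TxStateMergeSketch {
    enum TxState { ACTIVE, PREPARED, COMMITTED, ABORTED }

    static TxState merge(Collection<TxState> nodeStates) {
        if (nodeStates.contains(TxState.ABORTED))   // rule 1: any ABORTED -> roll back everywhere
            return TxState.ABORTED;
        if (nodeStates.contains(TxState.COMMITTED)) // rule 2: any COMMITTED -> commit everywhere
            return TxState.COMMITTED;
        if (nodeStates.stream().allMatch(s -> s == TxState.PREPARED)) // rule 3: all PREPARED -> commit
            return TxState.COMMITTED;
        // Not covered by the stated rules (e.g. some node still ACTIVE): assumed to roll back.
        return TxState.ABORTED;
    }
}
```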

On TX coordinator failure:

  1. A new coordinator is elected (the oldest server tx data node becomes the TX coordinator).
    1. If the oldest server tx data node has already finished the tx and released its resources, nothing happens (this means the MVCC coordinator either started acknowledging or failed, and the tx will be recovered during MVCC coordinator recovery).
  2. The new coordinator checks the other nodes:
    1. If at least one node has already finished the tx and released its resources, it does nothing (this means the MVCC coordinator either started acknowledging or failed, and the tx will be recovered during MVCC coordinator recovery).
  3. In case the new coordinator has the transaction in the ACTIVE state:
    1. A tx rollback message is sent to all tx data nodes.
    2. A tx rolled back message is sent to the MVCC coordinator node.
    3. The MVCC coordinator sends a rollback acknowledged message to the participants; all resources are released.
  4. In case the new coordinator has the transaction in the COMMITTED state:
    1. A tx committed message is sent to the MVCC coordinator node.
    2. The MVCC coordinator sends a commit acknowledged message to the participants; all tx data nodes mark the tx as COMMITTED and all resources are released.
  5. In case the new coordinator has the transaction in the ABORTED state:
    1. A tx rollback message is sent to all tx data nodes.
    2. A tx rolled back message is sent to the MVCC coordinator node.
    3. The MVCC coordinator sends a rollback acknowledged message to the participants; all resources are released.
  6. In case the new coordinator has the transaction in the PREPARED state:
    1. The new coordinator checks all tx data nodes.
    2. In case all participants have the tx in the PREPARED state, or at least one tx data node has the tx in the COMMITTED state:
      1. A tx committed message is sent to the MVCC coordinator node.
      2. The MVCC coordinator sends a commit acknowledged message to the participants; all tx data nodes mark the tx as COMMITTED and all resources are released.
    3. In case at least one data node has the tx in the ABORTED state:
      1. A tx rollback message is sent to all tx data nodes.
      2. A tx rolled back message is sent to the MVCC coordinator node.
      3. The MVCC coordinator sends a rollback acknowledged message to the participants; all resources are released.
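The new TX coordinator's decision in cases 3-6 above can be condensed into one function; the class, enums and method names are illustrative assumptions:

```java
import java.util.Collection;

/** Sketch of the new TX coordinator's decision (cases 3-6 above); names are illustrative. */
public class TxCrdRecoverySketch {
    enum TxState { ACTIVE, PREPARED, COMMITTED, ABORTED }
    enum Decision { COMMIT, ROLLBACK }

    static Decision decide(TxState ownState, Collection<TxState> dataNodeStates) {
        switch (ownState) {
            case ACTIVE:
            case ABORTED:
                return Decision.ROLLBACK;                 // cases 3 and 5
            case COMMITTED:
                return Decision.COMMIT;                   // case 4
            case PREPARED:                                // case 6: poll the data nodes
                if (dataNodeStates.contains(TxState.ABORTED))
                    return Decision.ROLLBACK;             // case 6.3
                return Decision.COMMIT;                   // case 6.2: all PREPARED or one COMMITTED
        }
        throw new IllegalStateException("Unexpected state: " + ownState);
    }
}
```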

On primary data node failure

If the primary node fails during an update, we may apply a strategy and either retry the statement ignoring the previous changes (using cid) or roll back the tx.

If the primary node fails during prepare, we check whether any partitions have been lost and continue the commit procedure if all partitions are still available.

On loss of partition

We add a special meta record (say, TxDataLostRecord) containing the tx, the node ID and the list of lost partitions.

This means that we cannot clean out tx info with an XID higher than the lowest XID from the list.

The transaction during which the partition was lost will be aborted, or committed if the PREPARE stage has already been completed.

The transaction will be fully finished on partition return, or manually (by deleting the corresponding TxDataLostRecord) if the partition is lost permanently.

During further rebalances, in addition to rows, nodes send TxDataLostRecords and tx state records so that the corresponding transactions can be finished properly when a failed node rejoins.

On tx datanode rejoin

If there is a TxDataLostRecord for a rejoining node, the partitions from the record will not be invalidated or rebalanced.

The corresponding transactions are finished on all nodes (using the tx->partitions map from the tx metadata), and the TxDataLostRecords are deleted.

Read (getAll and SQL)

Each read operation outside an active transaction creates a special read-only transaction and uses its tx snapshot for version filtering.

Each read operation within an active READ_COMMITTED transaction creates a special read-only transaction and uses its tx snapshot for version filtering.

Each read operation within an active REPEATABLE_READ transaction uses the transaction's own snapshot for version filtering.

During a get operation, the first item passing the MVCC filter is returned.

During secondary index scans the 'ver' field of the tree item is checked first. If the row version is visible (the row was added by the current tx or by a committed tx), the 'xid_max' field of the referenced data row is checked: the row is considered visible if it is the last version of the row ('xid_max' is NA), or the 'xid_max' tx is ACTIVE or ABORTED, or 'xid_max' is higher than the reader's assigned version.

During primary index scans the 'ver' field of the tree item is checked; the first item passing the MVCC filter is returned, and all following (older) versions of the row are skipped.
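The xid_max part of the secondary-index check can be sketched as follows. The NA sentinel value and the TxState enum are assumptions for illustration; the real encoding may differ:

```java
/** Sketch of the secondary-index 'xid_max' check above; NA sentinel and TxState are assumptions. */
public class IndexScanVisibilitySketch {
    enum TxState { ACTIVE, PREPARED, COMMITTED, ABORTED }

    static final long NA = -1L; // assumed sentinel for "no newer version exists"

    /** Called only after the 'ver' check passed, i.e. the row itself is visible to the reader. */
    static boolean xidMaxAllows(long xidMax, TxState xidMaxState, long readerVer) {
        if (xidMax == NA)
            return true;  // last version of the row
        if (xidMaxState == TxState.ACTIVE || xidMaxState == TxState.ABORTED)
            return true;  // the overwriting tx has not committed (or never will)
        return xidMax > readerVer; // overwritten only after the reader's version was assigned
    }
}
```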

Update (putAll and DML)

An update consists of the following steps:

  1. obtain a lock (write the current tx version into the 'xid' field)
  2. add a row with the new version
  3. delete aborted versions of the row, if any (may be omitted for performance reasons)
  4. update xid_max of the previous committed row, if any
  5. delete previous committed versions (less than or equal to the cleanup version of the tx snapshot), if any (may be omitted for performance reasons)
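Steps 2, 4 and 5 can be illustrated with a toy in-memory version chain (newest version at the head, matching the leaf ordering described earlier). This is purely illustrative; it ignores locking, aborted versions and on-page layout:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Toy version chain illustrating update steps 2, 4 and 5 above; newest version at the head. */
public class VersionChainSketch {
    static class Version {
        final long ver;     // 'ver': tx that created this version
        long xidMax = -1;   // 'xid_max': tx that overwrote it, -1 (NA) for the newest version
        Version(long ver) { this.ver = ver; }
    }

    final Deque<Version> chain = new ArrayDeque<>(); // head = newest version

    /** Step 2: prepend the new version; step 4: stamp xid_max on the previous head. */
    void update(long txVer) {
        Version prev = chain.peekFirst();
        chain.addFirst(new Version(txVer));
        if (prev != null)
            prev.xidMax = txVer;
    }

    /** Step 5: drop old versions at or below the cleanup version (the newest is always kept). */
    void cleanup(long cleanupVer) {
        Version newest = chain.peekFirst();
        chain.removeIf(v -> v != newest && v.ver <= cleanupVer);
    }
}
```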

Cleanup of old versions

 
