
This section elaborates on the locking modes and isolation levels supported in Ignite. A high-level overview is given in the Ignite technical documentation.

Pessimistic Locking

In multi-user applications, different users can modify the same data simultaneously. To deal with reads and updates of the same data sets happening in parallel, transactional subsystems of products such as Ignite implement optimistic and pessimistic locking. In the pessimistic mode, an application acquires locks for all the data it plans to change and applies the changes only after all the locks are held exclusively, while in the optimistic mode lock acquisition is postponed to a later phase, when the transaction is already being committed.

The time at which locks are acquired also depends on the isolation level. Let's start by reviewing the isolation levels in conjunction with the pessimistic mode.

 

In the pessimistic & read committed mode, the locks are acquired before the changes brought by write operations (such as put or putAll) are applied, as shown below:

 
Picture 6.
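For illustration, here is a minimal sketch of such a transaction using Ignite's Java transactions API; the cache name "myCache" and the key-value pair are hypothetical:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;

    import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED;

    public class PessimisticTxExample {
        public static void run(Ignite ignite) {
            IgniteCache<Integer, String> cache = ignite.cache("myCache");

            try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, READ_COMMITTED)) {
                // The lock for key 1 is acquired here, before the update is applied.
                cache.put(1, "value");

                // Locks are released once the commit completes.
                tx.commit();
            }
        }
    }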

However, Ignite never obtains locks for read operations (such as get or getAll) in this mode, which might be inappropriate for some use cases. To ensure that the locks are acquired prior to every read or write operation, use either the repeatable read or the serializable isolation level in Ignite:
Picture 7.
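A sketch of the same update under pessimistic & repeatable read, reusing the ignite and cache variables from the previous example (with TransactionIsolation.REPEATABLE_READ statically imported); here the first read already takes the lock:

    try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
        // get() acquires the lock on key 1 and holds it until the transaction ends.
        String current = cache.get(1);

        // The lock is already owned, so the write does not need to re-acquire it.
        cache.put(1, current + "-updated");

        tx.commit();
    }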
 

The pessimistic mode holds locks until the transaction finishes, which prevents other transactions from accessing the locked data. Optimistic transactions in Ignite can increase the throughput of an application by lowering contention among transactions, since lock acquisition is moved to a later phase.

Optimistic Locking

In optimistic transactions, locks are acquired on primary nodes during the "prepare" phase, then promoted to backup nodes and released once the transaction is committed. Depending on the isolation level, if Ignite detects that the version of an entry has changed since the time it was requested by a transaction, the transaction fails at the "prepare" phase, and it is up to the application to decide whether to restart it. This is exactly how optimistic & serializable transactions (aka deadlock-free transactions) work in Ignite:


Picture 8.
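A sketch of the typical retry loop around an optimistic & serializable transaction (same hypothetical ignite and cache as above). If a concurrent transaction changes the version of an entry read here, commit() throws TransactionOptimisticException at the "prepare" phase:

    import org.apache.ignite.transactions.TransactionOptimisticException;

    import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
    import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

    while (true) {
        try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
            // The entry's version is remembered; no lock is taken at this point.
            String value = cache.get(1);

            cache.put(1, value + "-updated");

            // Locks are acquired and versions are validated during "prepare".
            tx.commit();

            break; // Commit succeeded.
        } catch (TransactionOptimisticException e) {
            // The entry changed concurrently and the transaction failed at the
            // "prepare" phase; it is up to the application to retry, as done here.
        }
    }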

On the other hand, repeatable read and read committed optimistic transactions never check whether the version of an entry has changed. These modes might bring extra performance benefits but do not give any atomicity guarantees and, thus, are rarely used in practice:

 

Picture 9.
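For comparison, a sketch of an optimistic & read committed transaction (same hypothetical setup); since entry versions are never validated, there is no TransactionOptimisticException to handle:

    try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, READ_COMMITTED)) {
        // Applied at commit time; concurrent modifications of key 1 are not detected.
        cache.put(1, "value");
        tx.commit();
    }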

Transaction Lifecycle

Now let's review the entire lifecycle of a transaction in Ignite. For now, it's assumed that the cluster is stable and no outages happen.

 

 
Picture 10.

The transaction starts once the txStart method is called by the application (step 1), which results in the creation (step 2) of a structure for transaction context management (aka IgniteInternalTx). In addition to that, the following happens on the near node side:
 
  • a unique transaction identifier is generated;

  • the start time of the transaction is recorded;

  • current topology version/state is recorded;

  • etc.

After that, the transaction status is set to "active", and Ignite starts executing the read/write operations that are part of the transaction, following the rules of either the optimistic or the pessimistic mode and the specific isolation level.
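The user-visible side of these steps can be sketched as follows (same hypothetical setup); Transaction.state() and Transaction.xid() expose the status and the unique identifier mentioned above:

    import org.apache.ignite.transactions.TransactionState;

    try (Transaction tx = ignite.transactions().txStart()) {
        // Steps 1-2: the transaction context is created and the status is "active".
        assert tx.state() == TransactionState.ACTIVE;
        System.out.println("Started tx " + tx.xid() + " in state " + tx.state());

        cache.put(1, "value");

        tx.commit(); // Covered in the "Transaction Commit" section below.
        assert tx.state() == TransactionState.COMMITTED;
    }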

Transaction Commit

 

When the application executes the tx.commit() method (step 9 in Picture 10 above), the near node (the transaction coordinator) initiates the 2-phase commit protocol by preparing the "prepare" message and enclosing information about the transaction's context in it.

As a part of the "prepare" phase, every primary node receives information about all updated or new key-value pairs and about the order in which the locks have to be acquired (the latter depends on the combination of locking mode & isolation level).

The primary nodes, in their turn, perform the following in response to the "prepare" message:

 
  • check that the cluster topology version recorded in the transaction's context matches the current topology version;

  • obtain all the required locks;

  • create a DHT context for the transaction and store all the necessary data therein;

  • depending on the cache configuration parameters, wait or skip waiting until the backup nodes confirm that the "prepare" phase is over (see the configuration sketch after this list);

  • inform the near node that it's time to execute the "commit" phase.
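
Whether the primary nodes wait for the backups is controlled by the cache's write synchronization mode; a minimal configuration sketch (the cache name is hypothetical):

    import org.apache.ignite.cache.CacheWriteSynchronizationMode;
    import org.apache.ignite.configuration.CacheConfiguration;

    CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");

    // FULL_SYNC: the primary waits for backup acknowledgments before responding;
    // PRIMARY_SYNC: responses are sent without waiting for the backups.
    cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);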

After that, the near node sends the "commit" message, waits for an acknowledgment, and moves the transaction to the "committed" status.