
...

Consistency - a property that moves a database from one consistent state to another after the transaction finishes. The meaning of the consistent state is defined by a user.

...

Interactive transaction - a transaction whose operation set is not known a priori. It can be aborted at any time if not committed yet.

...

Here we take into account the isolation property of a transaction. The strongest isolation level is known to be Serializable, implying that all transactions pretend to execute sequentially. This is very convenient for users because it prevents hidden data corruption and security issues. The price for this may be reduced throughput/latency due to the increased overhead of the CC protocol. An additional option is to allow a user to choose a weaker isolation level, like SNAPSHOT. The ultimate goal is to implement Serializability without sacrificing performance too much, having Serializable as the default isolation level.

...

Such transactions can be used to build analytical reports, which can take minutes, without affecting (and being affected by) the concurrent OLTP load. Any SQL select-for-read query is naturally mapped to this type of transaction. Such transactions can also read snapshot data in the past, at some timestamp. This is also known as HTAP.

Score: 3

Consistent replica reads

A handy feature for load balancing, especially in conjunction with the previous one.

Score: 3

Unlimited or very large transaction size

Some databases limit the number and total size of records enlisted in a transaction, because they buffer temporary uncommitted read or written records. This is not convenient for a user.

...

Score: 1

Data loss tolerance

It's essential to know how many node failures we can tolerate before declaring unavailability due to temporary data loss (or full unavailability, in the case of an in-memory deployment). This is known as k-safety; for example, a majority-based replica group of 2k + 1 nodes tolerates k failures. More is better.

Score: 2

High-level observations

Looking at the evaluation, it's easy to notice that our protocol design is of the “one size fits all” type. This is an intentional choice, because Apache Ignite is intended to be a general-purpose database designed for commodity hardware, working “fine” out of the box in typical cases. Of course, there are cases where specialized solutions would work better. Additional optimizations and tuning capabilities can be introduced later.

Let’s define the key points of the design. It’s necessary to have:

  1. Interactive transactions
  2. Read-only (long-running) queries, which can execute on replicas
  3. Serializable isolation
  4. Unlimited (or very large) transaction size

The first requirement rules out deterministic protocols like Calvin, because they need to know the transaction read-write set in advance (or require an expensive reconnaissance step).

...

The third requirement implies a CC protocol that allows serializable schedules.

...

The system also has to be horizontally scalable. To achieve scalability, the data will be partitioned using a hash or range partitioning method. The exact partitioning method is not essential for the purposes of this document. We treat a partition here as a synonym for a shard. Each partition is assigned to a cluster node and replicated to a predefined number of replicas to achieve high availability. Adding more nodes increases the number of partitions in the cluster (or reduces the number of partitions per node in the case of static partitioning), thus increasing scalability.

...

The performance aspect is not a central goal in the proposed design. We just need a "good" level of performance at the beginning and a great level of usability. We can optimize later, following the step-by-step improvement of the product.

It turns out we want a Google Spanner clone. It seems it was designed with similar goals in mind. Other notable Spanner clones are CockroachDB and Yugabyte.

Table store

The transaction protocol describes the interaction of nodes forming a transactional table store - an (indexed) distributed record store supporting operations with ACID guarantees.

...

  • A transaction is started.
  • Some operations are enlisted into the transaction.
  • A transaction is committed or aborted by the user, or it is aborted by the system because of a serialization conflict or lease expiration and must be retried.

Native API

The API entry point is the IgniteTransactions facade:

...

To enlist an operation into a transaction, the Transaction instance must be passed to the corresponding table store methods. Each method accepts a transaction argument. Consider, for example, a method for updating a tuple:

...

Multiple operations accepting the same transaction instance are executed atomically within the passed transaction.
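
For illustration, here is a minimal usage sketch in the spirit of the API above. It is an assumption, not a settled API: the method names (begin, get, upsert, commit, rollback) and the "accounts" table are made up for the example.

    import org.apache.ignite.Ignite;
    import org.apache.ignite.table.RecordView;
    import org.apache.ignite.table.Tuple;
    import org.apache.ignite.tx.Transaction;

    class TransferExample {
        // Moves money between two accounts atomically: either both updates
        // commit together or neither becomes visible.
        static void transfer(Ignite ignite, long fromId, long toId, double amount) {
            Transaction tx = ignite.transactions().begin();
            try {
                RecordView<Tuple> accounts = ignite.tables().table("accounts").recordView();
                // Reads and writes are enlisted by passing the same tx instance.
                Tuple from = accounts.get(tx, Tuple.create().set("id", fromId));
                Tuple to = accounts.get(tx, Tuple.create().set("id", toId));
                accounts.upsert(tx, from.set("balance", from.doubleValue("balance") - amount));
                accounts.upsert(tx, to.set("balance", to.doubleValue("balance") + amount));
                tx.commit(); // all enlisted operations become visible atomically
            } catch (RuntimeException e) {
                tx.rollback(); // e.g. on a serialization conflict; the caller may retry
                throw e;
            }
        }
    }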

...

Defines how the table data (including indexes) is split between data nodes. A correct partitioning scheme is essential for scalable cluster workloads. TBD partitioning IEP ref, add diagrams

Partition key

A partition (or affinity) key is a set of attributes used to compute a partition. For example, a prefix set of a primary key can be used to calculate a partition. A table partition key matches its primary index partition key. A secondary index is not required to have a partition key - such indexes are implicitly partitioned (as table rows).

...

Data is assigned to nodes using some kind of hash function, calculated over a set of attributes. There are several approaches to hash partitioning with different trade-offs (consistent vs. rendezvous).
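
As a simple illustration (not the actual Ignite affinity function), a modulo-based variant might look like the sketch below. Note that plain modulo forces heavy rebalancing when the partition count changes, which is exactly why consistent or rendezvous hashing is often preferred.

    // A minimal sketch of modulo-based hash partitioning; the bit-spreading
    // step and the partition count are illustrative choices.
    class HashPartitioner {
        static int partitionFor(Object affinityKey, int partitions) {
            int h = affinityKey.hashCode();
            // Mix the high bits into the low bits, then take a non-negative modulo.
            return Math.floorMod(h ^ (h >>> 16), partitions);
        }
    }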

Range partitioning

The key space is split into range partitions (ranges), according to predefined rules (statically) or dynamically. Each range is mapped to a data node and represents a sorted map. A range search tree is used to locate the node owning a key. An ORDER BY over such a table is just traversing the search tree from left to right and concatenating the ranges.
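
A minimal sketch of such a range lookup, assuming statically defined split points (the bounds and node names below are made up for illustration; a sorted map stands in for the range search tree):

    import java.util.NavigableMap;
    import java.util.TreeMap;

    class RangeRouter {
        // Maps each range's inclusive lower bound to the node owning the range.
        private final NavigableMap<Long, String> ranges = new TreeMap<>();

        RangeRouter() {
            ranges.put(Long.MIN_VALUE, "node-1"); // (-inf .. 1000)
            ranges.put(1_000L, "node-2");         // [1000 .. 2000)
            ranges.put(2_000L, "node-3");         // [2000 .. +inf)
        }

        // floorEntry finds the greatest lower bound <= key, i.e. the range
        // (and thus the node) owning the key. An ORDER BY scan simply visits
        // the entries of this map from left to right.
        String nodeFor(long key) {
            return ranges.floorEntry(key).getValue();
        }
    }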

...

Data items are considered co-located if they have the same partition key. For example, two tables are co-located if their primary indexes have the same partition key.

Note that a table and an index can reside on the same data node but still not be co-located.

If an index has no explicit partition key, its data is partitioned implicitly, in the same way as by the PK partition key.

Sub-partitioning

...

We will rely on a replication protocol to store data consistently on multiple replicas. Essentially, a replication protocol provides transactional semantics for a single-partition update. We will build a cross-partition transactional protocol on top of the replication protocol, as suggested in IEP-61.

The actual protocol type is not so important, because we don’t want to depend on specific protocol features - that would break the abstraction.

...

  • Majority-based protocols
    • This class of protocols requires a majority of nodes to respond in order to commit a write, making it naturally tolerant to failures, provided that a majority is responsive.
  • Membership-based protocols
    • Protocols in this class require all operational nodes in the replica group to acknowledge each write. Membership-based protocols are supported by a reliable membership (RM) service, based on majority replication.

...

CC is responsible for controlling the schedules of RO and RW transactions so that they are serializable. It defines the types of schedules allowed for concurrent transactions and is a key point of the protocol.

...

Example 1. Assume we have three transactions:

T1 = r1[x] r1[z] w1[x], T2 = r2[y] r2[z] w2[y], T3 = w3[x] r3[y] w3[z]

...

We assume each transaction is committed. What can we tell about the serializability of S1 and S2? Recall the definition of a serializable schedule: to be serializable, it must be equivalent to some serial execution order of the transactions T1, T2, and T3.

Two actions on the same data object by different transactions conflict if at least one of them is a write. The three anomalous situations can be described in terms of when the actions of two transactions T1 and T2 conflict with each other: in a write-read (WR) conflict, T2 reads a data object previously written by T1; we define read-write (RW) and write-write (WW) conflicts similarly. These conflicts can cause anomalies like dirty reads, unrepeatable reads, lost updates, and others.

S1 is obviously serial: it corresponds to the execution sequence T3, T2, T1. For S2, it is not that obvious whether it is serializable or not. To prove that it is, we should find an equivalent serial schedule. We can attempt to swap non-conflicting operations (preserving the order of conflicting ones) until an equivalent serial schedule is produced.

...

Schedules are called conflict equivalent if they can be converted one into another by swapping non-conflicting operations, which do not affect the execution outcome. This also means they have the same order of conflicting operations. A schedule is called conflict serializable if it’s conflict equivalent to a serial schedule. Every conflict serializable schedule is serializable, but not vice versa. This class of schedules is known as CSR.

...

  • A node for each committed transaction in S.
  • An arc from Ti to Tj if an action of Ti precedes and conflicts with one of Tj's actions.

A schedule S is conflict serializable if and only if its precedence graph is acyclic. An equivalent serial schedule in this case is given by any topological sort over the precedence graph.
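
To make the criterion concrete, here is a small self-contained sketch that builds the precedence graph for a schedule and extracts an equivalent serial order via a topological sort. The Op encoding of a schedule is an illustrative assumption; the example data is S1 from Example 1.

    import java.util.*;

    class PrecedenceGraphDemo {
        // An operation: transaction id, kind ('r' or 'w'), and the data object.
        record Op(int tx, char kind, char obj) {}

        // Returns an equivalent serial order of transaction ids, or empty if
        // the precedence graph has a cycle (not conflict serializable).
        static Optional<List<Integer>> serialOrder(List<Op> s) {
            Map<Integer, Set<Integer>> arcs = new HashMap<>();
            Map<Integer, Integer> inDeg = new HashMap<>();
            for (Op op : s) {
                arcs.putIfAbsent(op.tx(), new HashSet<>());
                inDeg.putIfAbsent(op.tx(), 0);
            }
            // Arc Ti -> Tj for every conflicting pair where Ti's action precedes
            // Tj's: same object, different transactions, at least one write.
            for (int i = 0; i < s.size(); i++) {
                for (int j = i + 1; j < s.size(); j++) {
                    Op a = s.get(i), b = s.get(j);
                    boolean conflict = a.tx() != b.tx() && a.obj() == b.obj()
                            && (a.kind() == 'w' || b.kind() == 'w');
                    if (conflict && arcs.get(a.tx()).add(b.tx())) {
                        inDeg.merge(b.tx(), 1, Integer::sum);
                    }
                }
            }
            // Kahn's topological sort: consumes every node iff the graph is acyclic.
            Deque<Integer> ready = new ArrayDeque<>();
            inDeg.forEach((tx, d) -> { if (d == 0) ready.add(tx); });
            List<Integer> order = new ArrayList<>();
            while (!ready.isEmpty()) {
                int tx = ready.poll();
                order.add(tx);
                for (int next : arcs.get(tx)) {
                    if (inDeg.merge(next, -1, Integer::sum) == 0) {
                        ready.add(next);
                    }
                }
            }
            return order.size() == inDeg.size() ? Optional.of(order) : Optional.empty();
        }

        public static void main(String[] args) {
            // S1 from Example 1: T3, T2, T1 executed back to back.
            List<Op> s1 = List.of(
                    new Op(3, 'w', 'x'), new Op(3, 'r', 'y'), new Op(3, 'w', 'z'),
                    new Op(2, 'r', 'y'), new Op(2, 'r', 'z'), new Op(2, 'w', 'y'),
                    new Op(1, 'r', 'x'), new Op(1, 'r', 'z'), new Op(1, 'w', 'x'));
            System.out.println(serialOrder(s1)); // prints a serial order, e.g. Optional[[3, 1, 2]]
        }
    }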

...