...

Accord
In order to exploit the above innovations, we must select a timestamp protocol. To ensure reads are scalable we must also derive dependencies, which leaves Caesar as the only candidate in the literature. Unfortunately Caesar has suboptimal characteristics: ACKs to older transactions may be delayed by newer ones (in theory indefinitely), and three round-trips may be required. Fortunately this can be remedied. Caesar appears to suffer this fate because it assembles a precise set of dependencies, so that only one possible set of dependencies may be committed (and perhaps also because of its recovery protocol). Accord instead assembles an inconsistent set of dependencies. By inconsistent we mean that the set may differ between coordinators, and that different sets of dependencies may be committed at different replicas. We only require that each of these sets is a superset of those transactions that may be committed with a lower timestamp.

Preliminaries
We use hybrid logical clocks that are globally unique; that is, each replica has its own unique id, which is appended to each logical clock value.
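
As a rough illustration (the class and field names below are assumptions for this sketch, not Accord's actual types), such a timestamp can be modelled as a wall-clock component and a logical counter, with the replica's unique id as the final tie-breaker:

Code Block
Timestamps (illustrative Java sketch)
// Illustrative only: a hypothetical globally unique HLC timestamp.
public record Timestamp(long realTime, int logical, int nodeId)
        implements Comparable<Timestamp>
{
    @Override
    public int compareTo(Timestamp that)
    {
        // Order by the wall-clock component, then the logical counter, then the
        // replica's unique id as a final tie-breaker, so that no two replicas
        // ever produce equal timestamps.
        int c = Long.compare(this.realTime, that.realTime);
        if (c == 0) c = Integer.compare(this.logical, that.logical);
        if (c == 0) c = Integer.compare(this.nodeId, that.nodeId);
        return c;
    }
}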

Fast Path
A coordinator C proposes a timestamp t0 to at least a quorum of a fast path electorate. If t0 is larger than all timestamps witnessed for all prior conflicting transactions, t0 is accepted by a replica. If a fast path quorum of responses accept, the transaction is agreed to execute at t0. Replicas respond with the set of transactions they have witnessed that may execute with a lower timestamp, i.e. those with a lower t0.

...

Code Block
Fast Path
Coordinator C: 
    Send PreAccept(X, t0) to replicas of all shards
Replica R: 
    if (have witnessed a newer conflicting timestamp) then 
        t = some new higher timestamp issued by R
    else
        t = t0
    PreAccepted[X] = true
    Reply (t, deps = {conflicting transactions where t0 < t})
Coordinator C (with at least a simple quorum from each shard):
    If (a fast-path quorum of responses from each shard had t = t0) then
        send Commit(X, t0, t0, union of all deps)
        go to Execution
    ...
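
To make the replica-side fast path check concrete, the following is a minimal Java sketch, assuming the illustrative Timestamp type from Preliminaries; the class, field and method names are hypothetical, not Accord's actual API:

Code Block
Fast Path replica check (illustrative Java sketch)
import java.util.*;

// Illustrative only: replica-side PreAccept handling.
class PreAcceptHandler
{
    // Conflicting transactions this replica has witnessed, keyed by transaction
    // id, with the t0 each was originally proposed at.
    final Map<String, Timestamp> witnessed = new HashMap<>();
    final TimestampSource clock; // source of new, strictly higher timestamps

    PreAcceptHandler(TimestampSource clock) { this.clock = clock; }

    PreAcceptReply preAccept(String txnId, Timestamp t0)
    {
        // Accept t0 only if it is larger than every timestamp witnessed for a
        // prior conflicting transaction; otherwise issue a higher timestamp t.
        Timestamp maxWitnessed = witnessed.values().stream()
                                          .max(Comparator.naturalOrder())
                                          .orElse(null);
        Timestamp t = (maxWitnessed == null || t0.compareTo(maxWitnessed) > 0)
                      ? t0 : clock.next(maxWitnessed);

        // Dependencies: conflicting transactions witnessed with a lower t0 than
        // the timestamp we reply with.
        Set<String> deps = new HashSet<>();
        witnessed.forEach((id, otherT0) -> { if (otherT0.compareTo(t) < 0) deps.add(id); });

        witnessed.put(txnId, t0); // record that this transaction has now been witnessed
        return new PreAcceptReply(t, deps);
    }

    record PreAcceptReply(Timestamp t, Set<String> deps) {}
    interface TimestampSource { Timestamp next(Timestamp atLeast); }
}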

Slow Path
If a replica refuses to accept t0, it responds with a higher t than any it has witnessed. If C fails to reach fast path consensus it takes the highest t it witnessed from its responses, which constitutes a simple Lamport clock value imposing a valid total order. This value is proposed to at least a simple majority of nodes, along with the union of the dependencies received in the PreAccept phase. This round’s purpose is only to durably record which of the Lamport clock values that might be derived was selected (multiple valid values might be obtained depending on which responses C received), so that if C crashes a recovery coordinator will pick the same timestamp. The dependencies are included in the proposal solely to facilitate recovery of other transactions that may be incomplete; they are stored on each replica to inform decisions at recovery. As a result, replicas always accept the proposal (unless a newer ballot has been issued by a recovery coordinator to take over the transaction), and once a majority have accepted it the timestamp is durably decided. Replicas respond with a new set of dependencies containing those transactions they have witnessed with a lower t0 than the t they received. The coordinator discards the dependencies it received on the fast path and uses these new dependencies for execution.

Code Block
Slow Path
Coordinator C: 
    ... else
        t = maximum t from responses
        send Accept(X, t0, t, deps) to replicas of all shards
Replica R receiving Accept(X, t0, t, deps): 
    Accepted[X] = true
    Reply (deps = {conflicting transactions where t0 < t})
Coordinator C (with a simple quorum from each shard):
    send Commit(X, t0, t, union of all deps)
    go to Execution
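
The coordinator's choice between the two paths might be sketched as follows for a single shard (illustrative only, with hypothetical names). Note that on the slow path the PreAccept dependencies are attached to the Accept round only to aid recovery; the dependencies used for execution come from the Accept replies:

Code Block
Coordinator decision (illustrative Java sketch)
import java.util.*;

// Illustrative only: choosing between the fast and slow path from the
// PreAccept replies of a single shard.
class PreAcceptOutcome
{
    record Reply(Timestamp t, Set<String> deps) {}

    // fastPath: commit at executeAt = t0 immediately.
    // !fastPath: propose executeAt (the highest witnessed t, a Lamport clock
    // value) in an Accept round; preAcceptDeps are attached to that Accept only
    // to aid recovery, and the execution deps come from the Accept replies.
    record Decision(boolean fastPath, Timestamp executeAt, Set<String> preAcceptDeps) {}

    static Decision decide(Timestamp t0, List<Reply> replies, int fastQuorumSize)
    {
        Set<String> deps = new HashSet<>();
        int acceptedT0 = 0;
        Timestamp max = t0;
        for (Reply r : replies)
        {
            deps.addAll(r.deps());
            if (r.t().equals(t0)) acceptedT0++;
            else if (r.t().compareTo(max) > 0) max = r.t();
        }
        boolean fastPath = acceptedT0 >= fastQuorumSize;
        return new Decision(fastPath, fastPath ? t0 : max, deps);
    }
}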

Execution
The union of all dependencies received during consensus is derived, then t is disseminated via Commit; simultaneously a Read is issued by C to a member of each participating shard (preferably in the same DC), with those dependencies known to participate on that shard attached. This replica waits for all dependencies to be committed, then filters out those that are assigned a later t. The remaining dependencies are waited on until they have executed and their results have been applied on this replica, before the read is evaluated and returned to the coordinator. C combines these responses to compute an update and a client response, which are then disseminated via Apply to all replicas and returned to the client, respectively.

Code Block

Execution
Replica R receiving Commit(X, t0, t, deps):
    Committed[X] = true
Coordinator C: 
    send a read to one or more (preferably local) replicas of each shard 
        (containing those deps that apply on the shard)
Replica R receiving Read(X, t, deps): 
    Wait for deps to be committed
    Wait for deps with a lower t to be applied locally
    Reply with result of read
Coordinator C (with a response from each shard):
    result = execute(read responses)
    send Apply(result) to all replicas of each shard
    send result to client
Replica R receiving Apply(X, t, deps, result):
    Wait for deps to be committed
    Wait for deps with a lower t to be applied locally
    Apply result locally
    Applied[X] = true
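
The dependency wait a replica performs before serving a Read or applying a result might look roughly like the sketch below, again assuming the illustrative Timestamp type; TxnState and its futures are hypothetical stand-ins for the replica's local per-transaction state:

Code Block
Dependency wait (illustrative Java sketch)
import java.util.*;
import java.util.concurrent.*;

// Illustrative only: the wait a replica performs before serving a Read or
// applying a result.
class DependencyWait
{
    record TxnState(CompletableFuture<Timestamp> committed, CompletableFuture<Void> applied) {}

    // Block until every dependency that executes before t has been applied locally.
    static void awaitDependencies(Timestamp t, Collection<TxnState> deps)
            throws ExecutionException, InterruptedException
    {
        List<CompletableFuture<Void>> mustApply = new ArrayList<>();
        for (TxnState dep : deps)
        {
            // 1. Wait for the dependency to commit, fixing its execution timestamp.
            Timestamp depExecuteAt = dep.committed().get();
            // 2. Dependencies committed with a later t execute after us and can be
            //    ignored; the remainder must be applied locally before we proceed.
            if (depExecuteAt.compareTo(t) < 0)
                mustApply.add(dep.applied());
        }
        CompletableFuture.allOf(mustApply.toArray(new CompletableFuture[0])).get();
    }
}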

Recovery
Recovery of a transaction X is not terribly complicated. It is performed by a replica R that has witnessed X. Replicas are contacted for their knowledge of the transaction. If they do not already know of X, they first perform the same steps as they would in the fast path to record its arrival time. Each replica then consults its local state for any transactions it has recorded that would have witnessed X if X had reached fast path consensus. If any are present that did not witness X, this fact is returned to the coordinator. Any transactions with a lower t0 that have been proposed a higher t are also returned, alongside the replica’s other state for X.

R assembles a simple quorum and picks the highest step to complete; that is, if it sees an Apply, Execute, Commit or Slow Path round, it picks them up in that order of precedence. Otherwise it must decide whether to propose t0, or some t it found from the majority it contacted, in a new Slow Path round. The simple intuition here is that either it is safe to propose t0 or we know that we did not reach fast path consensus; this is determined by the replica responses indicating whether they have witnessed a newer transaction that did not witness X. In that case we know X did not reach a fast path quorum, or else it would have been witnessed. If we witness no such transaction, then it is safe to propose t0, as every newer transaction knows of X and is waiting for it to commit. The only edge case is a transaction with a lower t0 that is proposing a newer t but has not yet committed; if we get to this point, we must wait for these transactions to commit before retrying.


Code Block
Recovery
Coordinator C: 
    send Recover(X) to replicas of each shard
Replica R receiving Recover(X): 
    Ensure X is PreAccepted; if only PreAccepted compute deps
    Wait = {Accepted transactions with lower t0 but higher t than X}
    Superseding = {Accepted transactions with higher t0 that did not witness X}
                ∪ {Committed transactions with higher t than t0 of X, that did not witness X}
    Reply (local state for X, Wait, Superseding)
Coordinator C (with a quorum of responses from each shard):
    If (any R in responses had X as Applied, Committed, or Accepted) then continue the state machine from there
    Otherwise 
        If (any shard has insufficient R where t = t0 to have taken the fast path) then propose the highest t of any response on the Slow Path
        If (any R.Superseding is non-empty) then propose the highest t in any response on the Slow Path
        If (any R.Wait is non-empty) then wait for them to commit and retry
        Otherwise propose t0 on the Slow Path
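
The recovery coordinator's decision could be sketched as follows for a single shard, following the precedence above; the per-shard fast-quorum feasibility check is collapsed into a precomputed flag, and all names are hypothetical rather than Accord's actual API:

Code Block
Recovery decision (illustrative Java sketch)
import java.util.*;

// Illustrative only: deciding how to complete transaction X from a quorum of
// Recover replies for a single shard.
class RecoveryDecision
{
    enum Status { PREACCEPTED, ACCEPTED, COMMITTED, APPLIED }

    record Reply(Status status, Timestamp t, boolean superseding, boolean mustWait) {}
    record Outcome(String action, Timestamp proposeAt) {}

    static Outcome decide(Timestamp t0, List<Reply> replies, boolean fastQuorumPossible)
    {
        // 1. If any replica has progressed past PreAccept, continue the state
        //    machine from the most advanced state witnessed.
        Reply best = replies.stream()
                            .max(Comparator.comparingInt((Reply r) -> r.status().ordinal()))
                            .orElseThrow();
        if (best.status() != Status.PREACCEPTED)
            return new Outcome("resume from " + best.status(), best.t());

        Timestamp maxT = replies.stream().map(Reply::t)
                                .max(Comparator.naturalOrder()).orElse(t0);
        // 2. If the original coordinator cannot have reached a fast-path quorum
        //    (too few replies with t = t0, or a newer conflicting transaction did
        //    not witness X), propose the highest witnessed t on the slow path.
        if (!fastQuorumPossible || replies.stream().anyMatch(Reply::superseding))
            return new Outcome("slow path", maxT);
        // 3. A transaction with a lower t0 is still deciding a higher t: wait for
        //    it to commit, then retry recovery.
        if (replies.stream().anyMatch(Reply::mustWait))
            return new Outcome("wait and retry", null);
        // 4. Otherwise t0 may have been agreed on the fast path; propose it.
        return new Outcome("slow path", t0);
    }
}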

Test Plan

This work will be tested extensively using a combination of: isolated randomised testing and verification of the standalone library using Jepsen.io tooling; dedicated linearizability and serializability verification of the standalone library; and similar facilities in situ in Cassandra, using the Simulator facility arriving with CEP-10.

...
