ID | IEP-95
Author |
Sponsor |
Created |
Status | DRAFT
Motivation
Ignite distributes table data across cluster nodes; every row belongs to a specific node. Currently, clients are not aware of this distribution, which may result in additional network calls. For example, a client calls server A to read a row, and server A has to forward the call to server B, where the row is actually stored.
In the optimal scenario, the client knows that the row is stored on server B and makes a direct call there.
Description
Currently, clients can already establish connections to multiple server nodes. The handshake response includes the node ID and name.
Update client implementation:
- Retrieve and maintain an up-to-date partition assignment: an array of node IDs, where the Nth element is the leader node ID for partition N.
- Use the same logic as the server to compute the row key hash (see Row#colocationHash).
- Calculate the partition number as rowKeyHash % partitionCount.
- Get the node ID for that partition from the partition assignment.
- If a connection to the resulting node exists, perform a direct call. Otherwise, use the default connection.
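The routing steps above (including the transaction exception below) can be sketched as follows. This is an illustrative sketch only: the class and method names are hypothetical, not the actual Ignite client API, and Math.floorMod is used so a negative hash does not produce a negative partition index.

```java
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch of client-side partition-aware routing.
// Names and types are illustrative, not the real Ignite client API.
public class PartitionAwarenessRouter {
    private final UUID[] assignment;             // Nth element = leader node ID for partition N.
    private final Map<UUID, Object> connections; // Open connections, keyed by node ID.
    private final Object defaultConnection;      // Fallback connection.

    public PartitionAwarenessRouter(UUID[] assignment, Map<UUID, Object> connections, Object defaultConnection) {
        this.assignment = assignment;
        this.connections = connections;
        this.defaultConnection = defaultConnection;
    }

    /** Picks the connection for a row, falling back to the default connection. */
    public Object connectionFor(int rowKeyHash, Object tx) {
        // A transaction is tied to a specific node and connection:
        // skip partition awareness when the user provides a transaction.
        if (tx != null)
            return defaultConnection;

        // Partition number = rowKeyHash % partitionCount (floorMod avoids negative indexes).
        int partition = Math.floorMod(rowKeyHash, assignment.length);

        // Node ID for that partition from the assignment array.
        UUID nodeId = assignment[partition];

        // Direct call if a connection exists, default connection otherwise.
        Object conn = connections.get(nodeId);
        return conn != null ? conn : defaultConnection;
    }
}
```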
Exceptions:
- A transaction belongs to a specific node and client connection. When a non-null transaction is provided by the user, the partition awareness logic is skipped.
Protocol Changes
- Add a new op to get the partition assignment for a table.
- Add a response flag to track assignment changes.
- Include ColocationKeys in ClientSchema.
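One possible use of the proposed response flag is sketched below: the client checks the flag on every response and lazily refreshes its cached assignment before the next routed request. The flag bit position, class, and method names here are assumptions for illustration, not the final protocol.

```java
// Hypothetical sketch of assignment-change tracking via a response flag.
// The flag bit and refresh mechanics are assumptions, not the final protocol.
public class AssignmentTracker {
    /** Assumed bit in the response flags indicating the assignment changed. */
    static final int ASSIGNMENT_CHANGED_FLAG = 1;

    private volatile boolean refreshNeeded;

    /** Called for every server response; marks the cached assignment as stale when the flag is set. */
    public void onResponse(int responseFlags) {
        if ((responseFlags & ASSIGNMENT_CHANGED_FLAG) != 0)
            refreshNeeded = true;
    }

    /** Before routing the next request, the client reloads the assignment if it is stale. */
    public boolean consumeRefreshNeeded() {
        if (!refreshNeeded)
            return false;

        refreshNeeded = false;
        return true;
    }
}
```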
Tracking Assignment Changes
There are three potential ways to track assignment changes.
Discussion Links
// TODO dev list link
// TODO: PR link
Tickets