ID | IEP-95
Author |
Sponsor |
Created |
Status | DRAFT
Motivation
Ignite distributes table data across cluster nodes: every row belongs to a specific node. Currently, clients are not aware of this distribution, which can cause additional network calls. For example, when a client calls server A to read a row that is stored on server B, server A has to forward the call to server B.
In the optimal scenario, the client knows that the row is stored on server B and makes a direct call there.
Description
Clients can already establish connections to multiple server nodes. The handshake response includes the node ID and name.
Update the client implementation:
- Retrieve and maintain an up-to-date partition assignment: an array of node IDs where the Nth element is the leader node ID for partition N.
- Compute the row key hash using the same logic as the server (see Row#colocationHash).
- Calculate the partition number as rowKeyHash % partitionCount.
- Look up the node ID in the partition assignment array by partition number (see the first step).
- If a connection to the resulting node exists, perform a direct call; otherwise, use the default connection.
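The steps above can be sketched as follows. All names here (routeNode, the connection map, node IDs) are hypothetical illustrations, not the actual Ignite client API; Math.floorMod stands in for whatever normalization the real hash-to-partition mapping uses.

```java
import java.util.List;
import java.util.Map;

// A minimal sketch of client-side partition-aware routing, assuming the
// partition assignment is an array of leader node IDs indexed by partition.
public class PartitionAwareRouting {
    /** Picks the target node for a row, falling back to the default connection. */
    static String routeNode(int rowKeyHash,
                            List<String> assignment,
                            Map<String, Boolean> connections,
                            String defaultNode) {
        int partitionCount = assignment.size();
        // Partition number = rowKeyHash % partitionCount (kept non-negative).
        int partition = Math.floorMod(rowKeyHash, partitionCount);
        // The Nth element of the assignment is the leader node ID for partition N.
        String leader = assignment.get(partition);
        // Direct call only if a connection to that node already exists;
        // otherwise use the default connection.
        return connections.getOrDefault(leader, false) ? leader : defaultNode;
    }

    public static void main(String[] args) {
        List<String> assignment = List.of("node-a", "node-b", "node-c");
        Map<String, Boolean> connections = Map.of("node-a", true, "node-b", true);
        System.out.println(routeNode(7, assignment, connections, "node-a")); // partition 1 -> node-b
        System.out.println(routeNode(2, assignment, connections, "node-a")); // node-c not connected -> node-a
    }
}
```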
Exceptions:
- A transaction belongs to a specific node and client connection. When a non-null transaction is provided by the user, partition awareness logic is skipped.
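A minimal sketch of this exception, assuming hypothetical names: a transaction is pinned to the node and connection where it started, so any operation inside it must bypass partition-aware routing.

```java
// Hypothetical illustration, not the actual Ignite client API: when a
// transaction is present, its pinned node wins over the partition-aware choice.
public class TxRouting {
    static String chooseNode(String txNode, String partitionAwareNode) {
        // Non-null transaction node takes precedence; otherwise route by partition.
        return txNode != null ? txNode : partitionAwareNode;
    }

    public static void main(String[] args) {
        System.out.println(chooseNode("node-a", "node-b")); // tx present -> node-a
        System.out.println(chooseNode(null, "node-b"));     // no tx -> node-b
    }
}
```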
Protocol Changes
- Add a new op to get the partition assignment for a table.
- Add a response flag to track assignment changes.
- Include colocation keys in ClientSchema.
Tracking Assignment Changes
There are three potential ways to keep partition assignment up-to-date on the client:
- Response flag. Every server response includes a flags field, and the server sets a flag when the assignment has changed since the last response. It is then up to the client to retrieve the updated assignment when needed. This mechanism is used in Ignite 2.x.
Pros: low overhead, no extra network traffic.
Cons: idle clients do not get the update.
- Server → client notification. As soon as the assignment changes, the server sends a message to all clients.
Pros: immediate update for all clients.
Cons: increased network traffic and server load; some clients may not need the update at all (not all APIs require it).
- PrimaryReplicaMissException (suggested in the ticket comments).
Pros: no protocol changes.
Cons: a retry is required on every replica miss (complicated and inefficient), and exceptions are used for control flow.
The first approach is battle-tested and appears to be the most suitable.
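The chosen response-flag mechanism can be sketched as below. The flag bit value and the method names are assumptions for illustration only; the real protocol defines its own flags layout.

```java
// Sketch of lazy assignment refresh driven by a response flag: the server
// marks responses when the assignment changed, and the client re-fetches the
// assignment before its next partition-aware call rather than immediately.
public class AssignmentTracker {
    static final int ASSIGNMENT_CHANGED_FLAG = 0x1; // hypothetical bit position

    private boolean assignmentStale;

    /** Called for every server response; marks the cached assignment stale if flagged. */
    void onResponse(int flags) {
        if ((flags & ASSIGNMENT_CHANGED_FLAG) != 0)
            assignmentStale = true;
    }

    /** Checked before a partition-aware call; true means re-fetch the assignment first. */
    boolean shouldRefresh() {
        if (!assignmentStale)
            return false;
        assignmentStale = false; // refresh is now being performed
        return true;
    }

    public static void main(String[] args) {
        AssignmentTracker tracker = new AssignmentTracker();
        tracker.onResponse(0);
        System.out.println(tracker.shouldRefresh()); // false: nothing changed
        tracker.onResponse(ASSIGNMENT_CHANGED_FLAG);
        System.out.println(tracker.shouldRefresh()); // true: fetch new assignment
        System.out.println(tracker.shouldRefresh()); // false: already refreshed
    }
}
```

Note how idle clients never reach `shouldRefresh()`, which is exactly the con listed above: without any request/response traffic, the stale flag is never observed.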
Discussion Links
Tickets