
Clustering and Partitioning

Picture 1.

A cluster in Ignite is a set of interconnected nodes: server nodes (Node 0, Node 1, etc. in Picture 1) are organized in a logical ring, and client nodes (applications) connect to them. Both server and client nodes can execute the full range of APIs supported by Ignite; the main difference between the two is that client nodes do not store data.

Data stored on the server nodes is represented as key-value pairs. Each pair belongs to a specific partition, and each partition belongs to an individual Ignite cache, as shown in Picture 2:
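For illustration, the key-to-partition mapping can be inspected at runtime through Ignite's affinity API. The sketch below assumes an already-started `Ignite` instance and a cache named `myCache` (both names are illustrative):

```java
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

// Sketch: resolve the partition and the primary node for a given key.
// Assumes `ignite` is a running Ignite instance and "myCache" exists.
Affinity<Integer> aff = ignite.affinity("myCache");

int part = aff.partition(42);               // partition that key 42 maps to
ClusterNode primary = aff.mapKeyToNode(42); // node holding the primary copy
```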

Picture 2.


To ensure data consistency and provide high availability, server nodes can store both primary and backup copies of data. Every partition always has exactly one primary copy in the cluster, holding all of the partition's key-value pairs, and zero or more backup copies of the same partition, depending on the configuration parameters.
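The number of backup copies is set per cache. A minimal configuration sketch (the cache name `myCache` is illustrative):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: a partitioned cache with one backup copy per partition.
CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");
cfg.setCacheMode(CacheMode.PARTITIONED);
cfg.setBackups(1); // one backup copy in addition to the primary

IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
```

With `setBackups(0)` (the default) a node failure loses that node's primary partitions; with one or more backups, the cluster promotes a backup copy to primary instead.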

Each cluster node, whether server or client, is aware of all primary and backup copies of every partition. This information is collected and broadcast to all nodes by the coordinator (the oldest server node) via internal partition map exchange messages.

All data-related requests and operations (get, put, SQL, etc.) go to primary partitions, except for some read operations when CacheConfiguration.readFromBackup is enabled. For update operations (put, INSERT, UPDATE), Ignite ensures that both the primary and backup copies are updated and remain in a consistent state.
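Reading from backups is toggled on the cache configuration. A sketch (cache name illustrative):

```java
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: allow reads to be served from a backup copy held on the local
// node instead of always routing to the primary partition.
CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");
cfg.setBackups(1);
cfg.setReadFromBackup(true); // serve gets from local backups when possible
```

This can reduce network hops for read-heavy workloads, at the cost of potentially observing a backup that has not yet applied the latest update in relaxed consistency modes.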

