...

  1. Start-up script `start-node.sh` creates ClusterMain, the program entry point of a cluster node;
  2. ClusterMain creates a MetaClusterServer, which receives MetaGroup RPCs from internal_meta_port;
  3. MetaClusterServer checks its configuration with other nodes, ensuring that more than half of the nodes have a consistent configuration;
  4. MetaClusterServer initializes the underlying IoTDB;
  5. MetaClusterServer creates a MetaMember, which is initialized as an ELECTOR, handles MetaGroup RPCs, and manages a partition table, and a Coordinator, which coordinates non-query requests;
  6. MetaMember tries to load the partition table from the local storage if it exists;
  7. MetaMember creates its MetaHeartbeatThread;
  8. MetaHeartbeatThread sends election requests to other cluster nodes;
  9. A quorum of the MetaGroup agrees with the election and sends responses to MetaClusterServer;
  10. MetaClusterServer lets MetaMember handle these responses;
  11. MetaMember gathers the responses and confirms that it has become a LEADER (a sketch of this quorum counting follows the list), then creates a partition table if there is none;
  12. MetaMember creates DataClusterServer, which receives DataGroup RPCs from internal_data_port;
  13. DataClusterServer creates DataMembers according to the partition table and the replication number k; k DataMembers will be created, one for each DataGroup the node is in;
  14. DataMembers establish their own DataHeartbeatThreads, and by following similar procedures, they become FOLLOWERs or LEADERs;
  15. MetaMember creates ClientServer, which receives requests from clients, so by now, the node is ready to serve.
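
Steps 8-11 amount to a Raft-style election round: the elector counts agreements and claims leadership once more than half of the MetaGroup, itself included, has agreed. Below is a minimal Java sketch of that quorum logic only; `ElectionSketch`, `NodeCharacter`, and `sendElectionRequest` are hypothetical stand-ins, not the actual cluster classes.

```java
import java.util.List;

public class ElectionSketch {

  enum NodeCharacter { ELECTOR, FOLLOWER, LEADER }

  // Runs one election round: ask every other node for a vote and
  // become LEADER once more than half of the whole group agrees.
  static NodeCharacter runElection(List<String> otherNodes, long term) {
    int agreements = 1;                      // the elector votes for itself
    int groupSize = otherNodes.size() + 1;
    for (String node : otherNodes) {
      if (sendElectionRequest(node, term)) { // placeholder for the RPC of step 8
        agreements++;
      }
      if (agreements > groupSize / 2) {      // steps 9-11: quorum reached
        return NodeCharacter.LEADER;
      }
    }
    // No quorum: stay an ELECTOR and retry with a higher term later.
    return NodeCharacter.ELECTOR;
  }

  static boolean sendElectionRequest(String node, long term) {
    // Placeholder: the real node sends an RPC on internal_meta_port
    // and inspects the response.
    return true;
  }

  public static void main(String[] args) {
    System.out.println(runElection(List.of("node-2", "node-3"), 1L));
  }
}
```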

...

  1. Start-up script `start-node.sh` creates ClusterMain, the program entry point of a cluster node;
  2. ClusterMain creates a MetaClusterServer, which receives MetaGroup RPCs from internal_meta_port;
  3. MetaClusterServer checks its configuration with other nodes, ensuring that more than half of the nodes have a consistent configuration;
  4. MetaClusterServer initializes the underlying IoTDB;
  5. MetaClusterServer creates a MetaMember, which is initialized as an ELECTOR, handles MetaGroup RPCs, and manages a partition table, and a Coordinator, which coordinates non-query requests;
  6. MetaMember tries to load the partition table from the local storage if it exists;
  7. MetaMember creates its MetaHeartbeatThread;
  8. The leader of MetaGroup sends a heartbeat to MetaClusterServer;
  9. MetaClusterServer lets MetaMember handle the heartbeat;
  10. MetaMember becomes a FOLLOWER, then updates its partition table from the heartbeat if the heartbeat provides a newer one (see the sketch after this list);
  11. MetaMember creates DataClusterServer, which receives DataGroup RPCs from internal_data_port;
  12. DataClusterServer creates DataMembers according to the partition table and the replication number k; k DataMembers will be created, one for each DataGroup the node is in;
  13. DataMembers establish their own DataHeartbeatThreads, and by following similar procedures, they become FOLLOWERs or LEADERs;
  14. MetaMember creates ClientServer, which receives requests from clients, so by now, the node is ready to serve.
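
Steps 8-10 are the follower-side counterpart of the election above. The sketch below shows one way a leader heartbeat could be handled, assuming a version number is enough to decide which partition table is newer; every class and field name here is a hypothetical placeholder.

```java
public class HeartbeatSketch {

  static class PartitionTable {
    long version; // assumption: a newer table has a higher version
  }

  static class HeartbeatRequest {
    long term;
    PartitionTable leaderTable; // table shipped with the leader's heartbeat
  }

  long localTerm = 0;
  String character = "ELECTOR";
  PartitionTable localTable = new PartitionTable();

  // Steps 8-10: a valid heartbeat proves a leader exists, so the node
  // steps down to FOLLOWER and adopts the leader's table if it is newer.
  void handleHeartbeat(HeartbeatRequest request) {
    if (request.term < localTerm) {
      return; // stale heartbeat from an old leader, ignore it
    }
    localTerm = request.term;
    character = "FOLLOWER";
    if (request.leaderTable != null
        && request.leaderTable.version > localTable.version) {
      localTable = request.leaderTable;
    }
  }

  public static void main(String[] args) {
    HeartbeatSketch node = new HeartbeatSketch();
    HeartbeatRequest hb = new HeartbeatRequest();
    hb.term = 1;
    node.handleHeartbeat(hb);
    System.out.println(node.character); // FOLLOWER
  }
}
```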

...

  1. A client sends a request to ClientServer;
  2. ClientServer reads the request from a socket and lets Coordinator handle it;
  3. Coordinator lets MetaMember handle the request;
  4. MetaMember creates a log for the operation and appends it to its RaftLogManager;
  5. MetaMember sends the log to its followers;
  6. When MetaMember gathers enough responses from the followers, it commits the log through its RaftLogManager (see the sketch after this list);
  7. Depending on what the operation is, its RaftLogManager applies the log to the underlying IoTDB or the partition table;
  8. The result of the operation is returned to the client.
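
Steps 4-7 follow the usual Raft write path: append locally, replicate, commit on a quorum, apply. Below is a minimal sketch of that path; `Log`, `appendToFollower`, and `apply` are placeholders for the real log type, replication RPC, and RaftLogManager calls.

```java
import java.util.List;

public class ReplicationSketch {

  record Log(long index, long term, String operation) {}

  // Steps 4-7: append locally, replicate to followers, commit on a
  // quorum, then apply the committed log.
  static boolean process(Log log, List<String> followers) {
    int accepted = 1;                        // the leader holds the log
    int groupSize = followers.size() + 1;
    for (String follower : followers) {
      if (appendToFollower(follower, log)) { // placeholder AppendEntry RPC
        accepted++;
      }
    }
    if (accepted > groupSize / 2) {          // step 6: quorum reached
      apply(log);                            // step 7
      return true;
    }
    return false;
  }

  static boolean appendToFollower(String follower, Log log) {
    return true; // placeholder: a real follower may reject stale logs
  }

  static void apply(Log log) {
    // Placeholder: apply to the underlying IoTDB or the partition table,
    // depending on the operation the log carries.
  }

  public static void main(String[] args) {
    System.out.println(process(new Log(1, 1, "createTimeseries"),
        List.of("node-2", "node-3")));
  }
}
```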

...

Data in the cluster module means both timeseries schemas and timeseries data, since both are partitioned into multiple DataGroups rather than stored globally; a coordinator is therefore needed to find the right nodes that should store the corresponding data. As the partition table is the sole data structure that supports such routing, Coordinator needs the help of its owner, MetaMember. Fig.5 shows the whole procedure of cluster data related operations.
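
As a rough illustration of what such routing can look like, the sketch below maps a storage group and time partition to a slot and looks the slot up in a partition table. The slot count, hash formula, and names are illustrative assumptions, not the actual IoTDB implementation.

```java
import java.util.HashMap;
import java.util.Map;

public class RoutingSketch {

  static final int SLOT_NUM = 10000; // assumed fixed slot space

  // The partition table maps each slot to the DataGroup that owns it.
  // Hashing a storage group name plus a time partition into a fixed
  // slot space makes routing deterministic on every coordinator.
  static int route(Map<Integer, Integer> slotToGroup,
                   String storageGroup, long timePartition) {
    int slot = Math.floorMod(
        (storageGroup + "-" + timePartition).hashCode(), SLOT_NUM);
    return slotToGroup.get(slot);
  }

  public static void main(String[] args) {
    Map<Integer, Integer> table = new HashMap<>();
    for (int slot = 0; slot < SLOT_NUM; slot++) {
      table.put(slot, slot % 3); // 3 DataGroups, slots assigned round-robin
    }
    System.out.println(route(table, "root.sg1", 42L));
  }
}
```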

...

  1. A client sends a request to the coordinator's ClientServer;
  2. ClientServer parses the request and lets Coordinator handle it;
  3. Coordinator routes the request with the help of its MetaMember;
  4. Coordinator sends the request to the DataGroup(s) that should process it; the request may be split before sending to each DataGroup (see the sketch after this list);
  5. The receivers process the request and return their responses to Coordinator;
  6. Coordinator merges the results and returns them to the client.
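
A minimal sketch of the split-and-merge in steps 4-6: rows are grouped by the DataGroup that owns them, each sub-request is forwarded, and the responses are collected for the client. `routeToGroup` and `sendToGroup` are hypothetical stand-ins for the real partition-table lookup and RPC.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SplitAndForwardSketch {

  // Steps 4-6: split a batched request by owning DataGroup, forward
  // each sub-request, then merge the responses for the client.
  static List<String> handle(List<String> rows) {
    Map<Integer, List<String>> perGroup = new HashMap<>();
    for (String row : rows) {
      perGroup.computeIfAbsent(routeToGroup(row), g -> new ArrayList<>())
              .add(row);
    }
    List<String> responses = new ArrayList<>();
    for (Map.Entry<Integer, List<String>> entry : perGroup.entrySet()) {
      responses.add(sendToGroup(entry.getKey(), entry.getValue()));
    }
    return responses;
  }

  static int routeToGroup(String row) {
    return Math.floorMod(row.hashCode(), 3); // placeholder partition-table lookup
  }

  static String sendToGroup(int group, List<String> subRequest) {
    return "ack from group " + group;        // placeholder RPC to the DataGroup
  }

  public static void main(String[] args) {
    System.out.println(handle(List.of("root.sg1.d1", "root.sg2.d1", "root.sg1.d2")));
  }
}
```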

...

  1. A client sends a request to the leader's ClientServer;
  2. ClientServer parses the request and lets Coordinator handle it;
  3. Coordinator routes the request with the help of its MetaMember;
  4. Finding out that the local node should process the request, Coordinator forwards the request to the local DataClusterServer (see the sketch after this list);
  5. DataClusterServer finds the associated DataMember that should process it;
  6. DataMember creates a log for the request and appends it to its RaftLogManager;
  7. DataMember sends the log to other nodes in its DataGroup;
  8. After enough replicas accept the log, DataMember commits it through its RaftLogManager;
  9. RaftLogManager then applies the log to the underlying IoTDB;
  10. The result is returned to the client.
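
The local fast path in steps 3-5 can be sketched as below: when the routed DataGroup contains this very node, the request goes to the local DataMember rather than over the network. All names here are hypothetical placeholders, not the actual cluster classes.

```java
import java.util.Map;
import java.util.Set;

public class LocalDispatchSketch {

  interface DataMember {
    String process(String request); // replicates a log, then applies it locally
  }

  // Steps 3-5: if the routed DataGroup contains this node, hand the
  // request to the local DataMember instead of sending a remote RPC.
  static String dispatch(String request, int targetGroup, String localNodeId,
                         Set<String> groupMembers,
                         Map<Integer, DataMember> localMembers) {
    if (groupMembers.contains(localNodeId)) {
      return localMembers.get(targetGroup).process(request);
    }
    return forward(request, targetGroup);
  }

  static String forward(String request, int targetGroup) {
    return "forwarded"; // placeholder for the remote path
  }

  public static void main(String[] args) {
    Map<Integer, DataMember> local = Map.of(0, req -> "processed locally: " + req);
    System.out.println(
        dispatch("insert", 0, "node-1", Set.of("node-1", "node-2"), local));
  }
}
```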

...