ID | IEP-77
Author | Aleksandr Polovtsev
Sponsor |
Created |
Status | DRAFT
When building a cluster of Ignite nodes, users need to be able to establish some restrictions on the member nodes based on cluster invariants in order to avoid breaking the consistency of the cluster. Such restrictions may include: having the same product version across the cluster, having consistent table and memory configurations, enforcing a particular cluster state.
This document describes the process of a new node joining a cluster, which includes a validation step where a set of rules is applied to determine whether the incoming node is able to enter the current topology. These rules may include node-local information (e.g. node version) as well as cluster-wide information (e.g. data encryption algorithm), which means that the validation component may require access to the Meta Storage (it is assumed that the Meta Storage contains the consistent cluster-wide information, unless some other mechanism is proposed). The problem is that, according to the Node Lifecycle description, a cluster can exist in a "zombie" state, during which the Meta Storage is unavailable. This means that the validation process can be split into 2 steps:
Apart from the 2-step validation, there are also the following questions that need to be addressed:
The local validation approach requires the joining node to retrieve some information from a random node or the Meta Storage and to decide, based on that information, whether to join the cluster.
This approach has the following pros and cons:
The remote validation approach requires the joining node to send some information about itself to a remote node, which then decides whether to allow the new node to join.
This approach has the following pros and cons:
Discussion needed: At the time of writing this document, it is assumed that the validation protocol is going to be remote.
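To make the remote approach concrete, it can be sketched as a simple request/response exchange: the joining node sends metadata about itself, and an existing node answers with an accept/reject decision. The message shapes and class names below are illustrative assumptions, not the actual Ignite 3 protocol.

```java
// Illustrative sketch of remote validation. The joining node sends its
// metadata; an existing node decides whether to admit it.
// All types here are assumptions made for this sketch.

record JoinRequest(String nodeName, String productVersion) {}

record JoinResponse(boolean accepted, String reason) {}

class JoinCoordinator {
    private final String clusterVersion;

    JoinCoordinator(String clusterVersion) {
        this.clusterVersion = clusterVersion;
    }

    /** Runs on the remote side: decides whether the joining node may enter. */
    JoinResponse handleJoin(JoinRequest request) {
        // Example rule from this document: product version must match.
        if (!clusterVersion.equals(request.productVersion())) {
            return new JoinResponse(false,
                    "Version mismatch: cluster is " + clusterVersion
                            + ", node is " + request.productVersion());
        }
        return new JoinResponse(true, null);
    }
}

public class RemoteValidationSketch {
    public static void main(String[] args) {
        JoinCoordinator coordinator = new JoinCoordinator("3.0.0");
        System.out.println(coordinator.handleJoin(new JoinRequest("node-1", "3.0.0")).accepted());
        System.out.println(coordinator.handleJoin(new JoinRequest("node-2", "2.9.1")).accepted());
    }
}
```

Note that in this model the joining node never needs read access to cluster state, which matches the stated preference for remote validation.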
The "init" command is supposed to move the cluster from the "zombie" state into the "active" state. It is expected to have the following characteristics (note that the "init" command had not been specified at the time of writing, so all statements are approximate and may change in the future):
The following process is proposed as the join protocol.
The current TopologyService will be renamed to NetworkTopologyService. It is proposed to extend this service with validation handlers that will validate joining nodes on the network level.
/**
 * Class for working with the cluster topology on the network level.
 */
public interface NetworkTopologyService {
    /** This topology member. */
    ClusterNode localMember();

    /** All topology members. */
    Collection<ClusterNode> allMembers();

    /** Handlers for topology events (join, leave). */
    void addEventHandler(TopologyEventHandler handler);

    /** Returns a member by a network address. */
    @Nullable ClusterNode getByAddress(NetworkAddress addr);

    /** Handlers for validating a joining node. */
    void addValidationHandler(TopologyValidationHandler handler);
}
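As an illustration, a network-level validation handler might reject a joining node whose product version differs from the local node's. The shape of TopologyValidationHandler is not specified in this document, so the functional interface and the minimal ClusterNode stand-in below are assumptions made purely for the sketch.

```java
import java.util.Objects;

// Minimal stand-ins for illustration only; not the real Ignite classes.
record ClusterNode(String name, String version) {}

/** Assumed handler shape: returns null on success, or a rejection reason. */
@FunctionalInterface
interface TopologyValidationHandler {
    String validate(ClusterNode joiningNode);
}

public class VersionCheckExample {
    /** Creates a handler that only admits nodes with the given product version. */
    static TopologyValidationHandler sameVersionAs(String localVersion) {
        return node -> Objects.equals(localVersion, node.version())
                ? null
                : "Version mismatch: expected " + localVersion + ", got " + node.version();
    }

    public static void main(String[] args) {
        TopologyValidationHandler handler = sameVersionAs("3.0.0");
        // A matching node passes; a mismatched one gets a rejection reason.
        System.out.println(handler.validate(new ClusterNode("node-1", "3.0.0")));
        System.out.println(handler.validate(new ClusterNode("node-2", "2.9.1")));
    }
}
```

Such a handler would be registered via addValidationHandler and consulted before the node is admitted to the network topology.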
The new service will have the same API, but will work on top of the Meta Storage, and will provide methods to work with the list of validated nodes. In addition to that, it will perform the validation of incoming nodes against the Meta Storage, based on the registered validation handlers.
/**
 * Class for working with the cluster topology on the Meta Storage level.
 * Only fully validated nodes are allowed to be present in such topology.
 */
public interface TopologyService {
    /** This topology member. */
    ClusterNode localMember();

    /** All topology members. */
    Collection<ClusterNode> allMembers();

    /** Handlers for topology events (join, leave). */
    void addEventHandler(TopologyEventHandler handler);

    /** Returns a member by a network address. */
    @Nullable ClusterNode getByAddress(NetworkAddress addr);

    /** Handlers for validating a joining node. */
    void addValidationHandler(TopologyValidationHandler handler);
}
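To make the Meta-Storage-based validation step concrete, the sketch below checks a cluster-wide property (the data encryption algorithm mentioned earlier) against the joining node's configuration. A plain Map stands in for the Meta Storage, whose real read API is out of scope here; the key name and method are illustrative assumptions.

```java
import java.util.Map;
import java.util.Objects;

// Illustrative Meta-Storage-level check. A Map stands in for the Meta
// Storage; the key name "encryption.algorithm" is an assumption.
public class MetaStorageValidationSketch {
    /**
     * Validates that the joining node's encryption algorithm matches the
     * cluster-wide value. Returns null when the node is valid, or a
     * human-readable rejection reason otherwise.
     */
    static String validateEncryption(Map<String, String> metaStorage, String nodeAlgorithm) {
        String clusterAlgorithm = metaStorage.get("encryption.algorithm");
        return Objects.equals(clusterAlgorithm, nodeAlgorithm)
                ? null
                : "Encryption algorithm mismatch: cluster uses " + clusterAlgorithm
                        + ", node uses " + nodeAlgorithm;
    }

    public static void main(String[] args) {
        Map<String, String> metaStorage = Map.of("encryption.algorithm", "AES-256");
        // A matching node passes; a mismatched one gets a rejection reason.
        System.out.println(validateEncryption(metaStorage, "AES-256"));
        System.out.println(validateEncryption(metaStorage, "AES-128"));
    }
}
```

Because this check reads the Meta Storage, it can only run once the cluster has left the "zombie" state, which is why it belongs to the second validation step.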