ID | IEP-73 |
---|---|
Author | |
Sponsor | |
Created | |
Status | DRAFT |
In order to unblock business-related logic (atomic protocol, transactions, table management, etc.) above the service-related logic (network, discovery protocol through meta storage, etc.), it is necessary to specify the component communication flow and initialization logic. Node startup is a natural entry point for these high-level needs. Thus, node startup should provide:
From a bird's eye view, the set of the components and their connections may look like this:
where:
A few more words about component responsibilities and inner flow:
Vault is responsible for handling local keys, including distributed projections. During initialization, VaultManager checks whether there is any configuration within Vault's PDS; if not, it uses the customer's bootstrap configuration, if provided. The bootstrap configuration goes through the local configuration manager.
```java
// Vault Component startup.
VaultManager vaultMgr = new VaultManager();

boolean cfgBootstrappedFromPds = vaultMgr.bootstrapped();

List<RootKey<?, ?>> rootKeys = new ArrayList<>(Collections.singletonList(NetworkConfiguration.KEY));

List<ConfigurationStorage> configurationStorages =
    new ArrayList<>(Collections.singletonList(new LocalConfigurationStorage(vaultMgr)));

// Bootstrap local configuration manager.
ConfigurationManager locConfigurationMgr = new ConfigurationManager(rootKeys, configurationStorages);

if (!cfgBootstrappedFromPds)
    try {
        locConfigurationMgr.bootstrap(jsonStrBootstrapCfg);
    }
    catch (Exception e) {
        log.warn("Unable to parse user-specific configuration, default configuration will be used", e);
    }
else if (jsonStrBootstrapCfg != null)
    log.warn("User-specific configuration will be ignored because vault was bootstrapped with PDS configuration");
```
Manager | Depends On | Used By |
---|---|---|
VaultManager | - | LocalConfigurationManager |
LocalConfigurationManager | VaultManager | NetworkManager |
Once the local configuration manager, with the Vault underneath it, is ready, it is possible to instantiate the network manager.
```java
NetworkView netConfigurationView =
    locConfigurationMgr.configurationRegistry().getConfiguration(NetworkConfiguration.KEY).value();

// Network startup.
Network net = new Network(
    new ScaleCubeNetworkClusterFactory(
        localMemberName,
        netConfigurationView.port(),
        Arrays.asList(netConfigurationView.networkMembersNames()),
        new ScaleCubeMemberResolver()));

NetworkCluster netMember = net.start();
```
Manager | Depends On | Used By |
---|---|---|
NetworkManager | LocalConfigurationManager | RaftManager, MetaStorageManager |
After the network member is started, the Raft Manager is instantiated. The Raft Manager is responsible for handling the life cycle of Raft servers and services.
```java
// Raft Component startup.
Loza raftMgr = new Loza(netMember);
```
Manager | Depends On | Used By |
---|---|---|
RaftManager | NetworkManager | MetaStorageManager |
Now it's possible to instantiate the MetaStorage Manager and the Configuration Manager, which will handle both local and distributed properties.
```java
// MetaStorage Component startup.
MetaStorageManager metaStorageMgr = new MetaStorageManager(
    netMember,
    raftMgr,
    locConfigurationMgr
);

// Here distributed configuration keys are registered.
configurationStorages.add(new DistributedConfigurationStorage(metaStorageMgr));

// Start configuration manager.
ConfigurationManager configurationMgr = new ConfigurationManager(rootKeys, configurationStorages);
```
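The configuration manager above is backed by two storages: the vault-backed local one and the meta-storage-backed distributed one. The routing idea can be illustrated with a minimal self-contained sketch; all class and method names below are illustrative, not the actual Ignite 3 API.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of a configuration manager routing keys to one of several
// storages. Names are illustrative; the real Ignite 3 API differs.
public class ConfigRoutingSketch {
    interface Storage {
        String read(String key);
        void write(String key, String value);
    }

    static class MapStorage implements Storage {
        private final Map<String, String> data = new HashMap<>();
        @Override public String read(String key) { return data.get(key); }
        @Override public void write(String key, String value) { data.put(key, value); }
    }

    // Routes keys to the local or the distributed storage by a key prefix
    // (a simplification of how storages own configuration roots).
    static class ConfigManager {
        private final Storage local;
        private final Storage distributed;

        ConfigManager(Storage local, Storage distributed) {
            this.local = local;
            this.distributed = distributed;
        }

        private Storage storageFor(String key) {
            return key.startsWith("local.") ? local : distributed;
        }

        String get(String key) { return storageFor(key).read(key); }
        void set(String key, String value) { storageFor(key).write(key, value); }
    }

    public static void main(String[] args) {
        ConfigManager mgr = new ConfigManager(new MapStorage(), new MapStorage());

        mgr.set("local.network.port", "3344"); // goes to the vault-backed storage
        mgr.set("table.partitions", "1024");   // goes to the meta-storage-backed storage

        System.out.println(mgr.get("local.network.port")); // 3344
        System.out.println(mgr.get("table.partitions"));   // 1024
    }
}
```

The point of the sketch is that a single `ConfigurationManager` front end can be started before all of its storages exist, as the code above does by appending the distributed storage once MetaStorageManager is up.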
Manager | Depends On | Used By |
---|---|---|
MetaStorageManager | NetworkManager, RaftManager, LocalConfigurationManager | ConfigurationManager, BaselineManager, AffinityManager, TableManager |
ConfigurationManager | VaultManager, MetaStorageManager | BaselineManager, AffinityManager, SchemaManager, TableManager |
At this point it's possible to start the business logic components, such as the Baseline Manager, Affinity Manager, Schema Manager and Table Manager. The exact set of such components is not fixed yet.
```java
// Baseline manager startup.
BaselineManager baselineMgr = new BaselineManager(configurationMgr, metaStorageMgr, netMember);

// Affinity manager startup.
AffinityManager affinityMgr = new AffinityManager(configurationMgr, metaStorageMgr, baselineMgr);

// Schema manager startup.
SchemaManager schemaManager = new SchemaManager(configurationMgr);

// Distributed table manager startup.
TableManager distributedTblMgr = new TableManagerImpl(
    configurationMgr,
    netMember,
    metaStorageMgr,
    affinityMgr,
    schemaManager);

// Rest manager also goes here.
```
Manager | Depends On | Used By |
---|---|---|
BaselineManager | ConfigurationManager, MetaStorageManager, NetworkManager | AffinityManager, in order to retrieve the current baseline. |
AffinityManager | ConfigurationManager, MetaStorageManager, BaselineManager | TableManager, directly or indirectly through the corresponding private distributed affinityAssignment key. |
SchemaManager | ConfigurationManager | TableManager, in order to handle corresponding schema changes. |
TableManager | ConfigurationManager, NetworkManager, MetaStorageManager, AffinityManager, SchemaManager | IgniteImpl |
Finally, it's possible to deploy the registered watches and create IgniteImpl, which will inject the top-level managers in order to provide table and data manipulation logic to the user.
```java
// Deploy all registered watches because all components are ready and have registered their listeners.
metaStorageMgr.deployWatches();

return new IgniteImpl(configurationMgr, distributedTblMgr);
```
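Why `deployWatches()` must run last can be shown with a small self-contained model (names are illustrative, not the actual MetaStorageManager API): events that arrive before deployment are buffered, so a listener registered late in startup still observes every update once watches go live.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal model of deferred watch deployment; illustrative names only.
public class WatchDeploySketch {
    private final List<Consumer<String>> listeners = new ArrayList<>();
    private final List<String> buffered = new ArrayList<>();
    private boolean deployed;

    // Components register their listeners during startup.
    void registerWatch(Consumer<String> listener) {
        listeners.add(listener);
    }

    // Events arriving before deployment are buffered instead of dropped.
    void onEvent(String event) {
        if (deployed)
            listeners.forEach(l -> l.accept(event));
        else
            buffered.add(event);
    }

    // Called once all components are ready: replay buffered events, then go live.
    void deployWatches() {
        deployed = true;
        buffered.forEach(e -> listeners.forEach(l -> l.accept(e)));
        buffered.clear();
    }

    public static void main(String[] args) {
        WatchDeploySketch metaStorage = new WatchDeploySketch();
        List<String> seen = new ArrayList<>();

        metaStorage.registerWatch(seen::add);    // e.g. a table manager's listener
        metaStorage.onEvent("affinity.updated"); // arrives before deployment
        metaStorage.deployWatches();             // replay: the listener sees the event

        System.out.println(seen); // [affinity.updated]
    }
}
```

Deploying watches any earlier would let events race against components that have not registered their listeners yet; the buffering in the model makes that ordering constraint explicit.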
In general, a node is a collaboration of the components mentioned above. In order to satisfy the needs of:
it makes sense to describe the component flow in more detail. Here it is:
Historical here reflects a certain similarity with the historical rebalance design. It is worth mentioning that the main problem with upgrading components to a state newer than the node's state is the impossibility of doing consistent reads of the components beneath them. In other words, if TableManager starts from appliedRevision 10 and SchemaManager is already at appliedRevision 20, schemaManager.getSchema(tableId) will return the schema for revision 20, which is not what TableManager expects while processing table updates for revision 11. In order to provide consistent reads, the requested component could analyze the caller's context and either recalculate the requested data based on the caller's applied revision or return previously cached historical data. In any case, this logic seems non-trivial and might be implemented later as a sort of optimization.
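The cached-historical-data option mentioned above can be sketched with a self-contained model (illustrative names only, not the actual SchemaManager API): the component keeps a history of values keyed by the revision in which they appeared and serves each read as of the caller's applied revision rather than its own.

```java
import java.util.TreeMap;

// Minimal model of revision-aware reads; names are illustrative.
public class RevisionedReadSketch {
    // History of schema versions keyed by the revision they appeared in.
    private final TreeMap<Long, String> history = new TreeMap<>();

    void put(long revision, String schema) {
        history.put(revision, schema);
    }

    // Returns the value as of the caller's applied revision:
    // the newest entry with revision <= callerRevision.
    String getAsOf(long callerRevision) {
        var e = history.floorEntry(callerRevision);
        return e == null ? null : e.getValue();
    }

    public static void main(String[] args) {
        RevisionedReadSketch schemaMgr = new RevisionedReadSketch();
        schemaMgr.put(10, "schema-v1");
        schemaMgr.put(20, "schema-v2"); // SchemaManager is already at revision 20

        // A caller processing an update for revision 11 still gets the
        // schema that was current at that revision.
        System.out.println(schemaMgr.getAsOf(11)); // schema-v1
        System.out.println(schemaMgr.getAsOf(20)); // schema-v2
    }
}
```

A real implementation would also need to bound the history (e.g. trim entries older than the minimum applied revision across components), which is part of why the document calls this logic non-trivial.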
Another solution to satisfy component update that will preserve consistent cross components reads will include:
// N/A
https://github.com/apache/ignite-3/tree/main/modules/runner#readme
Umbrella Ticket:
Initial Implementation: