

IDIEP-73

Author:
Sponsor:
Created:
Status: DRAFT

Motivation

In order to unblock business-related logic (atomic protocol, transactions, table management, etc.) above the service-related logic (network, discovery protocol through meta storage, etc.), it's required to specify the components' communication flow and initialization logic. It seems natural to choose node startup as an entry point for such high-level needs. Thus, node startup should provide:

  • control over component initialization;
  • control over component communication channels.

Description

From a bird's eye view, the set of the components and their connections may look like this:

[Components and connections diagram]

where:

  • The number in front of a component's name shows the order in which components are initialized, so the very first component to be initialized during node startup is the Vault. There are a few components that should be instantiated before node startup, cli and ignite-runner, but they are out of the scope of the node startup process.
  • Arrows show direct method calls. For example, the Affinity component could retrieve the baseline from the Baseline component using some sort of baseline() method. In order to reduce clutter a bit, two explicit groups of arrows are introduced:
    • Green arrows show direct method calls of the DMS component.
    • Blue arrows show direct method calls of the Configuration component.
  • There's also an upward communication flow through the listeners/watches mechanism; however, within the scope of the alpha 2 release it's only possible to listen to Vault, MetaStorage and Configuration updates (see the sketch after this list).
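
For illustration, a component could subscribe to such updates roughly as follows. This is a minimal sketch of the upward flow: the listen()/newValue() names are assumptions used for illustration, not a final API.

Listening to configuration updates (illustrative)
// Subscribe to network configuration updates instead of polling.
// Assumption: configuration properties expose a listen() method that
// accepts a callback receiving the update context and returning a future.
locConfigurationMgr.configurationRegistry()
    .getConfiguration(NetworkConfiguration.KEY)
    .listen(ctx -> {
        // React to the updated network configuration view.
        NetworkView updated = ctx.newValue();

        return CompletableFuture.completedFuture(null);
    });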

Few more words about components responsibilities and inner flow:

VaultManager and LocalConfigurationManager

The Vault is responsible for handling local keys, including distributed projections. During initialization, the VaultManager checks whether there is any configuration within the Vault's PDS; if not, it uses the customer's bootstrap configuration, if provided. The bootstrap configuration goes through the local configuration manager.

Vault and Local Configuration Manager
// Vault Component startup.
VaultManager vaultMgr = new VaultManager();

boolean cfgBootstrappedFromPds = vaultMgr.bootstrapped();

List<RootKey<?, ?>> rootKeys = new ArrayList<>(Collections.singletonList(NetworkConfiguration.KEY));

List<ConfigurationStorage> configurationStorages =
	new ArrayList<>(Collections.singletonList(new LocalConfigurationStorage(vaultMgr)));

// Bootstrap local configuration manager.
ConfigurationManager locConfigurationMgr = new ConfigurationManager(rootKeys, configurationStorages);

if (!cfgBootstrappedFromPds)
	try {
    	locConfigurationMgr.bootstrap(jsonStrBootstrapCfg);
    }
    catch (Exception e) {
    	log.warn("Unable to parse user-specific configuration, default configuration will be used", e);
    }
else if (jsonStrBootstrapCfg != null)
	log.warn("User-specific configuration will be ignored because the vault was bootstrapped with PDS configuration");
Manager: VaultManager
Depends on: —
Used by:
  • LocalConfigurationManager in order to store local configuration and update it consistently through listeners.
  • MetaStorageManager in order to commit processed DMS watch notifications atomically with the corresponding applied revision.

Manager: LocalConfigurationManager
Depends on: VaultManager
Used by:
  • NetworkManager in order to bootstrap itself with network configuration, including a sort of IPFinder, and to handle corresponding configuration changes.
  • MetaStorageManager in order to handle meta storage group changes.
  • ConfigurationManager, indirectly through LocalConfigurationStorage, for the purposes of handling local configuration changes.

NetworkManager

Since the local configuration manager, with the vault underneath it, is now ready, it's possible to instantiate the network manager.

Network Manager
NetworkView netConfigurationView =
	locConfigurationMgr.configurationRegistry().getConfiguration(NetworkConfiguration.KEY).value();

// Network startup.
Network net = new Network(
	new ScaleCubeNetworkClusterFactory(
    	localMemberName,
        netConfigurationView.port(),
        Arrays.asList(netConfigurationView.networkMembersNames()),
        new ScaleCubeMemberResolver()));

NetworkCluster netMember = net.start();
Manager: NetworkManager
Depends on: LocalConfigurationManager
Used by:
  • MetaStorageManager in order to handle the cluster init message.
  • RaftManager in order to handle RaftGroupClientService requests and for the purposes of inner raft group communication.
  • BaselineManager in order to retrieve information about current network members.

RaftManager (Loza)

After the network member is started, the Raft Manager is instantiated. The Raft Manager is responsible for handling the life cycle of raft servers and services; a usage sketch follows the startup snippet below.

RaftManager
// Raft Component startup.
Loza raftMgr = new Loza(netMember);
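
To make that life cycle concrete, a consumer such as the MetaStorage manager could ask the Raft Manager for a raft group and release it later. This is a sketch only: prepareRaftGroup()/stopRaftGroup(), their signatures and MetaStorageListener are assumptions for illustration.

Raft group life cycle (illustrative)
// Start (or join) a raft group and obtain a service to talk to it.
// Assumption: Loza exposes prepare/stop methods along these lines.
RaftGroupService metaStorageSvc = raftMgr.prepareRaftGroup(
    "metastorage_group",             // Raft group id.
    metaStorageMembers,              // Network members hosting the group.
    () -> new MetaStorageListener()  // State machine (listener) supplier.
);

// ... run commands against the group through metaStorageSvc ...

// Release the group on node stop.
raftMgr.stopRaftGroup("metastorage_group", metaStorageMembers);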
Manager: RaftManager
Depends on: NetworkManager
Used by:
  • MetaStorageManager in order to instantiate and handle the distributed MetaStorage raft group.
  • TableManager in order to instantiate and handle partitioned/ranged raft groups.

MetaStorageManager and ConfigurationManager

Now it's possible to instantiate the MetaStorage Manager and the Configuration Manager that will handle both local and distributed properties.

MetaStorage Manager and Configuration Manager
// MetaStorage Component startup.
MetaStorageManager metaStorageMgr = new MetaStorageManager(
	netMember,
    raftMgr,
    locConfigurationMgr
);

// Here distributed configuration keys are registered.
configurationStorages.add(new DistributedConfigurationStorage(metaStorageMgr));


// Start configuration manager.
ConfigurationManager configurationMgr = new ConfigurationManager(rootKeys, configurationStorages);
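
Once the distributed storage is registered, components read distributed properties through the same registry API that was used for the network configuration above. A short sketch; SomeDistributedConfiguration and its view are hypothetical example names.

Reading a distributed property (illustrative)
// Mirrors the local configuration lookup shown earlier; the
// configuration key below is a hypothetical example.
SomeDistributedView distributedView = configurationMgr
    .configurationRegistry()
    .getConfiguration(SomeDistributedConfiguration.KEY)
    .value();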
Manager: MetaStorageManager
Depends on: VaultManager, NetworkManager, RaftManager, LocalConfigurationManager
Used by:
  • ConfigurationManager in order to store and handle distributed configuration changes.
  • BaselineManager in order to watch private distributed keys, because ConfigurationManager handles only public keys.
  • AffinityManager for the same purposes.
  • Probably SchemaManager for the same purposes.
  • TableManager for the same purposes.

Manager: ConfigurationManager
Depends on: LocalConfigurationManager, MetaStorageManager
Used by:
  • BaselineManager in order to watch public keys.
  • AffinityManager for the same purposes.
  • Probably SchemaManager for the same purposes.
  • TableManager for the same purposes.
  • IgnitionImpl

Business logic components: BaselineManager, AffinityManager, SchemaManager, TableManager, etc.

At this point it's possible to start business logic components like the Baseline Manager, Affinity Manager, Schema Manager and Table Manager. The exact set of such components is undefined.

Top Level Managers
// Baseline manager startup.
BaselineManager baselineMgr = new BaselineManager(configurationMgr, metaStorageMgr, netMember);

// Affinity manager startup.
AffinityManager affinityMgr = new AffinityManager(configurationMgr, metaStorageMgr, baselineMgr);

SchemaManager schemaManager = new SchemaManager(configurationMgr);

// Distributed table manager startup.
TableManager distributedTblMgr = new TableManagerImpl(
	configurationMgr,
    netMember,
    metaStorageMgr,
	affinityMgr,
    schemaManager);


// The REST manager also goes here.
Manager: BaselineManager
Depends on: ConfigurationManager, MetaStorageManager, NetworkManager
Used by:
  • AffinityManager in order to retrieve the current baseline.

Manager: AffinityManager
Depends on: ConfigurationManager, MetaStorageManager, BaselineManager
Used by:
  • TableManager, directly or indirectly through the corresponding private distributed affinityAssignment key.

Manager: SchemaManager
Depends on: ConfigurationManager, probably MetaStorageManager
Used by:
  • TableManager in order to handle corresponding schema changes.

Manager: TableManager
Depends on: ConfigurationManager, MetaStorageManager, NetworkManager, AffinityManager, SchemaManager
Used by:
  • IgnitionImpl

Deploying watches and preparing IgnitionImpl

Finally, it's possible to deploy the registered watches and create the IgniteImpl that will inject the top-level managers in order to provide table and data manipulation logic to the user.

Deploy registered watches and create IgniteImpl
// Deploy all registered watches because all components are ready and have registered their listeners.
metaStorageMgr.deployWatches();

return new IgniteImpl(configurationMgr, distributedTblMgr);
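
From the user's perspective, the returned instance is then consumed through the public Ignite API. A sketch, assuming the public tables() facade:

Using the created instance (illustrative)
// Table and data manipulation goes through the injected managers
// behind the public API; the tables() facade is assumed here.
Ignite ignite = new IgniteImpl(configurationMgr, distributedTblMgr);

List<Table> tables = ignite.tables().tables();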

Risks and Assumptions

// N/A

Discussion Links

http://apache-ignite-developers.2346864.n4.nabble.com/Terms-clarification-and-modules-splitting-logic-td52026.html#a52058

Reference Links

https://github.com/apache/ignite-3/tree/main/modules/runner#readme

Tickets

Umbrella Ticket:

Initial Implementation:

