
ID: IEP-17
Author: Denis Mekhanikov
Sponsor: Anton Vinogradov
Created:
Status: ACTIVE


Motivation

The current service deployment procedure depends on an internal replicated cache. Each service deployment is a distributed transaction on this cache. This procedure has proven to be deadlock-prone on unstable topology.

Also, the current implementation does not pass service deployment results back to the deploying node. IgniteServices#deploy* methods return even before Service#init() starts executing, so there is no way to know whether a service was deployed successfully or not. Even if an exception is thrown from the Service#init() method, the deploying side still considers the service successfully deployed.

It is also impossible to have data-free server nodes that are responsible only for running services and compute tasks, because the system cache is always present on all server nodes.

Currently, when a service implementation or configuration changes, existing instances cannot be redeployed without manual undeployment. GridServiceProcessor only has access to the serialized representation of services, so it cannot tell whether anything has changed since the previous deployment.

Description

This section contains a description of the proposed service deployment protocol.

Discovery-based deployment

To make the service deployment process more reliable on unstable topology and to avoid stuck deployments, which are possible in the current architecture, service deployment should be based on distribution of custom discovery messages.

Successful scenario

Deployment starts with sending a custom discovery message that notifies all nodes in the cluster about the ongoing deployment. This message contains the serialized service instance and its configuration. It is delivered to the coordinator node first, which calculates the service deployment assignments and adds this information to the message. During the following round trip of this message, nodes save the assignment information to a local storage, and the nodes chosen to deploy the services perform initialization asynchronously in a dedicated thread pool.
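A minimal sketch of what such a deployment message might carry is shown below. The class and field names (ServiceDeploymentMessage, assignments) are assumptions made for illustration, not the actual internal types:

import java.io.Serializable;
import java.util.Map;
import java.util.UUID;

// Illustrative sketch only; names and fields are hypothetical, not actual Ignite internals.
public class ServiceDeploymentMessage implements Serializable {
    /** Service name used as the deployment key. */
    private final String svcName;

    /** Serialized service instance provided by the deploying node. */
    private final byte[] svcBytes;

    /** Serialized service configuration. */
    private final byte[] cfgBytes;

    /** Filled in by the coordinator: node ID -> number of instances to deploy on that node. */
    private Map<UUID, Integer> assignments;

    public ServiceDeploymentMessage(String svcName, byte[] svcBytes, byte[] cfgBytes) {
        this.svcName = svcName;
        this.svcBytes = svcBytes;
        this.cfgBytes = cfgBytes;
    }

    public void assignments(Map<UUID, Integer> assignments) {
        this.assignments = assignments;
    }

    public Map<UUID, Integer> assignments() {
        return assignments;
    }
}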

Once a node finishes the deployment procedure and the Service#init() method execution, it connects to the coordinator using the communication SPI and sends the deployment result to it, i.e. either an acknowledgement of successful deployment or a serialized exception.
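The per-node result sent to the coordinator over the communication SPI can be equally small. Again, the class name and fields below are assumptions made for illustration:

import java.io.Serializable;
import java.util.UUID;

// Illustrative sketch: either an acknowledgement of success or the serialized initialization error.
public class ServiceDeploymentResult implements Serializable {
    private final String svcName;
    private final UUID nodeId;

    /** Null on success; otherwise the exception thrown from Service#init(). */
    private final Throwable err;

    public ServiceDeploymentResult(String svcName, UUID nodeId, Throwable err) {
        this.svcName = svcName;
        this.nodeId = nodeId;
        this.err = err;
    }

    public boolean success() {
        return err == null;
    }

    public Throwable error() {
        return err;
    }
}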

Once all deployment results are collected, the coordinator sends another discovery message notifying all nodes about the successful deployment. This is the moment when the deployment futures are completed and control is returned from IgniteServices#deploy* methods. The Service#execute() method also starts its work when the message about successful deployment arrives.

Failure during deployment

There are three types of errors that should be handled correctly.

  • Error during service initialization on a node included in the assignment. In this case the problematic node sends the failure details to the coordinator over the communication protocol. Once the coordinator receives the failure details, it recalculates the assignments and, if needed, sends another discovery message with the updated assignments. All suitable nodes should be retried in turn. If all nodes suitable for deployment fail to deploy a service, then the coordinator sends a discovery message containing information about the failure or a partial deployment (a rough coordinator-side sketch follows this list).
  • Failure of a node included in the assignment. This situation triggers recalculation of the service deployment assignments. The coordinator node sends another discovery message with the new set of assignments in it, if needed.
  • Coordinator failure. This situation is processed in a similar way to the previous one. The only difference is that the nodes should resend their deployment results to the new coordinator.
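The following is a rough sketch of how the coordinator might react to these failures, assuming it keeps the set of remaining candidate nodes and the collected failures in memory. All names are illustrative and do not correspond to actual Ignite code:

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Illustrative coordinator-side logic only.
class DeploymentCoordinator {
    /** Nodes that are still allowed to host the service. */
    private final Set<UUID> candidates = new HashSet<>();

    /** Initialization failures collected so far, per node. */
    private final Map<UUID, Throwable> failures = new HashMap<>();

    /** Called when a node reports that Service#init() threw an exception. */
    void onInitFailure(UUID nodeId, Throwable err) {
        failures.put(nodeId, err);
        candidates.remove(nodeId);

        if (candidates.isEmpty()) {
            // Nobody left to try: send a discovery message describing the failure
            // (or the partial deployment), so that the deployment futures can complete.
        }
        else {
            // Recalculate the assignments over the remaining candidates and send
            // another discovery message with the updated assignments.
        }
    }

    /** Called when a node from the assignments leaves the topology. */
    void onNodeLeft(UUID nodeId) {
        candidates.remove(nodeId);
        // Recalculate the assignments and send a reassignment message, if needed.
    }
}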

Deployment results

There are three possible outcomes of IgniteServices#deploy* method execution:

  • All services have been deployed successfully. In this case the method simply returns normally.
  • None of the services have been deployed. In this case a ServiceDeploymentException is thrown, containing information about the deployment failure.
  • Some of the assigned services have been deployed. In this case a PartialServiceDeploymentException is thrown, containing information about the failed deployments and the number of deployed service instances. There should be a policy for processing such outcomes, or an easy way to cancel the partially deployed services (see the caller-side sketch after this list).
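From the caller's point of view, handling these outcomes could look like the sketch below. PartialServiceDeploymentException is the exception proposed by this IEP and does not exist yet; MyServiceImpl and the cancel-on-partial-failure policy are also assumptions made for illustration:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.services.ServiceConfiguration;
import org.apache.ignite.services.ServiceDeploymentException;

public class DeployExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ServiceConfiguration cfg = new ServiceConfiguration()
                .setName("myService")
                .setService(new MyServiceImpl()) // Hypothetical Service implementation.
                .setTotalCount(2);

            try {
                ignite.services().deploy(cfg);
                // Outcome 1: all instances were deployed, the call returned normally.
            }
            catch (PartialServiceDeploymentException e) {
                // Outcome 3: only some instances were deployed. One possible policy
                // is to cancel the partially deployed service and fail fast.
                ignite.services().cancel("myService");
                throw e;
            }
            catch (ServiceDeploymentException e) {
                // Outcome 2: nothing was deployed; the exception carries the failure details.
                throw e;
            }
        }
    }
}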

Deployment of each service instance should also trigger a system event containing information about the service and the node where the deployment took place. The same should be done for deployment failures.

These events shouldn't be triggered for services that are not yet considered deployed in the cluster. Events about initial deployments should be triggered only after the discovery message about successful deployment is sent.
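For example, a node could subscribe to these events through the regular events API. The event type constants below (EVT_SERVICE_DEPLOYED, EVT_SERVICE_DEPLOYMENT_FAILED) do not exist yet; they are placeholders for whatever constants the implementation adds, and, as with the existing events, they would have to be enabled via IgniteConfiguration#setIncludeEventTypes:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.events.Event;
import org.apache.ignite.lang.IgnitePredicate;

public class ServiceEventsExample {
    // Placeholder values; the real constants would be added to EventType by this IEP.
    private static final int EVT_SERVICE_DEPLOYED = 1_000_001;
    private static final int EVT_SERVICE_DEPLOYMENT_FAILED = 1_000_002;

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        ignite.events().localListen((IgnitePredicate<Event>)evt -> {
            // The event is expected to carry the service name and the node
            // where the deployment (or the failure) took place.
            System.out.println("Service deployment event: " + evt);

            return true; // Keep listening.
        }, EVT_SERVICE_DEPLOYED, EVT_SERVICE_DEPLOYMENT_FAILED);
    }
}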

Deployment on new nodes

When a new node joins, it receives information about the existing service assignments in the data attached to the initial discovery messages. Information about ongoing deployments should also be included in the discovery data.

If the new node is included in the assignments, it should start the deployment procedure asynchronously.

If assignment recalculation is needed, the coordinator performs it and sends a reassignment message.

If a node joins during a service deployment and is suitable for the deployment, then the new node should start it. The coordinator should detect such a node and wait for the result from it as well.

Reducing the number of messages

Not all discovery events or deployment failures require assignment recalculation.

Services with a configuration like (maxPerCluster = 0, maxPerNode > 0) should lead to the creation of only one assignment. It should look like (eachNode = N), or something similar.

When a new node joins the topology, or an initialization failure happens, assignment recalculation shouldn't be triggered for such services.
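In terms of the existing public API, such a configuration corresponds to totalCount = 0 (the maxPerCluster above, meaning no cluster-wide limit) and a positive maxPerNodeCount (the maxPerNode above). For example, assuming a hypothetical MyServiceImpl service class:

import org.apache.ignite.services.ServiceConfiguration;

// A configuration that should map to a single (eachNode = N) assignment
// instead of a per-node assignment map.
ServiceConfiguration cfg = new ServiceConfiguration()
    .setName("perNodeService")
    .setService(new MyServiceImpl()) // Hypothetical Service implementation.
    .setTotalCount(0)                // maxPerCluster = 0: no cluster-wide limit.
    .setMaxPerNodeCount(2);          // maxPerNode = 2: two instances on every suitable node.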

TODO: How should other nodes know which nodes succeeded in service deployment if the assignment looks like (eachNode = N)? Should they listen to system events about service deployment?

Information about already deployed services should also be included in the discovery data. Otherwise a node that has just joined the cluster won't be able to tell which nodes have which services.

Service cancellation

The IgniteServices#cancel() method triggers sending of a discovery message containing information about the services that are being cancelled.

Each node should call the Service#cancel() method on the affected services and undeploy them. All ongoing deployments should also be interrupted.
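Usage on the caller side stays the same as with the existing public API:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class CancelExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Triggers the cancellation discovery message for a single service...
        ignite.services().cancel("myService");

        // ...or for all deployed services.
        ignite.services().cancelAll();
    }
}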

Hot redeployment

It should be possible to update a service implementation without downtime. Using the Deployment SPI should solve this problem.

The service processor should subscribe to class deployments and restart the corresponding services when their classes change.

The basic usage scenario involves enabling UriDeploymentSpi and updating the JAR files containing the implementation classes. This leads to cancellation and redeployment of the existing services, which implies that services should be ready for sudden cancellation. The documentation should explain this fact with examples.
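A minimal configuration of the existing UriDeploymentSpi that scans a local directory for updated JAR files could look like the sketch below; the directory path is just an example:

import java.util.Collections;

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.deployment.uri.UriDeploymentSpi;

public class HotRedeploymentExample {
    public static void main(String[] args) {
        UriDeploymentSpi deploymentSpi = new UriDeploymentSpi();

        // Example location only: a directory that is periodically scanned for updated packages.
        deploymentSpi.setUriList(Collections.singletonList("file:///opt/ignite/services"));

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDeploymentSpi(deploymentSpi);

        Ignition.start(cfg);
    }
}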

To make redeployment with an updated class possible, a service's properties and its class should be separated. ServiceConfiguration should contain the following properties (see the sketch after this list):

  • String serviceClassName – the name of the service implementation class.
  • Map<String, Object> properties – properties that a service can use during initialization and operation. The properties should be included in the ServiceContext object.
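A sketch of the resulting configuration shape is shown below. The class is purely illustrative and only demonstrates the two proposed properties that replace ServiceConfiguration#service:

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the proposed configuration shape; not the actual API.
public class ServiceConfigurationSketch {
    /** Name of the service implementation class; instantiated via its empty constructor. */
    private String serviceClassName;

    /** Arbitrary properties made available to the service through the ServiceContext. */
    private Map<String, Object> properties = new HashMap<>();

    public ServiceConfigurationSketch setServiceClassName(String serviceClassName) {
        this.serviceClassName = serviceClassName;
        return this;
    }

    public ServiceConfigurationSketch setProperties(Map<String, Object> properties) {
        this.properties = properties;
        return this;
    }

    public String getServiceClassName() {
        return serviceClassName;
    }

    public Map<String, Object> getProperties() {
        return properties;
    }
}

A deploying node would instantiate the class by name through the empty constructor and pass the properties to Service#init() via the ServiceContext.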

It will also help the service processor distinguish between different service configurations when only the properties change.

The ServiceConfiguration#service property should be removed.

Service classes should have an empty constructor that will be used by the deploying nodes.

A possible point of improvement here is to start redeployment with a random delay to avoid denial of service on the whole cluster.

Risks and Assumptions

These changes will break compatibility with previous versions of Apache Ignite completely.

Also, there will be no way to preserve services between cluster restarts, even though such a possibility currently exists.

Further work

There are still some flaws in the current service grid design that are not covered in this IEP.

Moving existing services to new nodes

Suppose we have one server node, and we deploy several cluster-singleton services on it.

If more nodes are added, the service instances won't be moved to them, so the single original node will keep serving all the requests. This situation should be solved by proper service rebalancing.

The changes described in this IEP don't interfere with service rebalancing in any way, so it can be done as a separate task after the proposed changes are implemented.

Service persistence

In the previous design, services could be persisted to disk along with other data in the system caches.

Since the utility cache will no longer be used after the proposed changes, there will be no way to preserve deployed services between cluster restarts.

If we decide to keep this feature, we should implement persistence of service configurations. A way to configure this behaviour should also be developed.

Discussion Links

Service grid redesign: http://apache-ignite-developers.2346864.n4.nabble.com/Service-grid-redesign-td28521.html

Service versioning: http://apache-ignite-developers.2346864.n4.nabble.com/Service-versioning-td20858.html

Tickets
