
IDIEP-17
Author: Denis Mekhanikov
Sponsor:
Created:
Status: DRAFT


Motivation

The current service deployment procedure depends on an internal replicated cache: each service deployment is a distributed transaction on this cache. This procedure has proved to be deadlock-prone on an unstable topology.

Also, the current implementation doesn't pass service deployment results back to the deploying node. IgniteServices#deploy* methods return even before Service#init() starts executing, so there is no way to know whether a service was deployed successfully. Even if Service#init() throws an exception, the deploying side considers the service successfully deployed anyway.

It is also impossible to have data-free server nodes that are only responsible for running services and compute tasks, because the system cache is always present on all server nodes.

Currently, when a service implementation or configuration changes, existing instances cannot be redeployed without manual undeployment. GridServiceProcessor has access only to the serialized representation of services, so it can't tell whether anything has changed since the previous deployment.

Description

This section contains a description of the proposed service deployment protocol.

Discovery-based deployment

To make the service deployment process more reliable on an unstable topology and to avoid the stuck deployments possible in the current architecture, service deployment should be based on the distribution of custom discovery messages.

Successful scenario

Deployment starts with sending a custom discovery message that notifies all nodes in the cluster about the ongoing deployment. This message contains the serialized service instance and its configuration. It is delivered to the coordinator node first, which calculates the service deployment assignments and adds this information to the message. During the following round-trip of this message, nodes save the service deployment assignments to a local storage, and the nodes chosen to deploy the services do so asynchronously in a dedicated thread pool.
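As an illustration, the coordinator's assignment calculation could look like the following minimal sketch. All names here are hypothetical, not Ignite internals; the key point is that the node order must be deterministic, so a new coordinator recalculating assignments after a failure arrives at the same result.

```java
import java.util.*;

// Illustrative sketch of spreading service instances across nodes:
// round-robin over the deterministically sorted node list.
public class AssignmentSketch {
    /** Maps each node id to the number of service instances it should run. */
    static Map<String, Integer> assign(List<String> nodeIds, int totalCnt) {
        List<String> sorted = new ArrayList<>(nodeIds);
        Collections.sort(sorted); // deterministic order on every node

        Map<String, Integer> assignments = new TreeMap<>();

        for (int i = 0; i < totalCnt; i++)
            assignments.merge(sorted.get(i % sorted.size()), 1, Integer::sum);

        return assignments;
    }

    public static void main(String[] args) {
        // 5 instances over 3 nodes.
        System.out.println(assign(Arrays.asList("nodeB", "nodeA", "nodeC"), 5));
        // prints {nodeA=2, nodeB=2, nodeC=1}
    }
}
```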

Once a node finishes the deployment procedure and the Service#init() method execution, it connects to the coordinator using the communication SPI and sends the deployment result to it: either an acknowledgement of successful deployment or a serialized exception.

Once all deployment results are collected, the coordinator sends another discovery message notifying all nodes about the successful deployment. This is the moment when deployment futures are completed and control is returned from IgniteServices#deploy* methods. The Service#execute() method also starts its work on arrival of the successful deployment message.
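The result-collection step on the coordinator could be sketched as follows (hypothetical names, not actual Ignite code): the deployment future completes only when every assigned node has reported, and completes exceptionally if any node reported an error.

```java
import java.util.*;
import java.util.concurrent.*;

// Illustrative sketch of collecting per-node deployment results
// on the coordinator before announcing the deployment outcome.
public class ResultCollector {
    private final Set<String> pending = ConcurrentHashMap.newKeySet();
    private final List<Throwable> errors = new CopyOnWriteArrayList<>();
    private final CompletableFuture<Void> deployFut = new CompletableFuture<>();

    ResultCollector(Collection<String> assignedNodes) {
        pending.addAll(assignedNodes);
    }

    /** Called when a node sends its deployment result over the communication SPI. */
    void onResult(String nodeId, Throwable err) {
        if (err != null)
            errors.add(err);

        if (pending.remove(nodeId) && pending.isEmpty()) {
            if (errors.isEmpty())
                deployFut.complete(null); // would trigger the "deployed" discovery message
            else
                deployFut.completeExceptionally(errors.get(0));
        }
    }

    CompletableFuture<Void> future() { return deployFut; }
}
```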

Failure during deployment

There are three types of errors that should be handled correctly.

  • Error during service initialization on a node included in the assignment. In this case the problematic node sends the failure details to the coordinator over the communication protocol. Once the coordinator receives the failure details, it sends a discovery message containing this information to all nodes, so the deploying methods can throw a corresponding exception.
  • Failure of a node included in the assignment. This situation triggers recalculation of the service deployment assignments: the coordinator node sends another discovery message with a new set of assignments. If a node has already initialized a service that is absent from the new assignment set, that service should be cancelled.
  • Coordinator failure. This situation is processed similarly to the previous one. The only difference is that the nodes should resend their deployment results to the new coordinator.
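A node's reaction to a reassignment message can be sketched as follows (illustrative names only): each node compares the new assignment with what it is running locally and cancels the surplus instances.

```java
import java.util.*;

// Illustrative sketch: on reassignment, a node cancels the local
// service instances that the new assignments no longer give it.
public class ReassignSketch {
    /** @return the number of local instances to cancel under the new assignments. */
    static int toCancel(String localNodeId, int runningLocally, Map<String, Integer> newAssignments) {
        int target = newAssignments.getOrDefault(localNodeId, 0);
        return Math.max(0, runningLocally - target);
    }

    public static void main(String[] args) {
        // nodeB is absent from the new assignment set, so all its instances go.
        Map<String, Integer> assignments = Map.of("nodeA", 1);
        System.out.println(toCancel("nodeB", 2, assignments)); // prints 2
    }
}
```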

Deployment on new nodes [In progress]

When a new node connects to an existing cluster, all needed services are deployed and initialised on it by the time it is accepted into the topology.

This is what happens when a new node joins the cluster:

  1. the connecting node sends a TcpDiscoveryJoinRequestMessage;
  2. the coordinator recalculates service assignments and attaches them to the subsequent TcpDiscoveryNodeAddedMessage;
  3. the connecting node receives the assignments, initialises all needed services and, on completion, sends a confirmation to the coordinator over communication;
  4. the coordinator sends TcpDiscoveryNodeAddFinishedMessage only when it receives confirmation of the deployed services from the joining node.
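The essential constraint in the steps above is the ordering: TcpDiscoveryNodeAddFinishedMessage must not be sent before the joining node's confirmation. A tiny sketch of that check (message names abbreviated, purely illustrative):

```java
import java.util.*;

// Illustrative sketch of the join-time ordering constraint: the coordinator
// may finish the node-add sequence only after the joining node has confirmed
// that all required services are initialised.
public class JoinSequenceSketch {
    enum Msg { JOIN_REQUEST, NODE_ADDED, SERVICES_CONFIRMED, NODE_ADD_FINISHED }

    /** @return true if the message sequence respects the protocol ordering. */
    static boolean validOrder(List<Msg> seq) {
        int confirmed = seq.indexOf(Msg.SERVICES_CONFIRMED);
        int finished = seq.indexOf(Msg.NODE_ADD_FINISHED);

        // NODE_ADD_FINISHED requires a prior confirmation from the joining node.
        return confirmed >= 0 && finished > confirmed;
    }

    public static void main(String[] args) {
        List<Msg> ok = List.of(Msg.JOIN_REQUEST, Msg.NODE_ADDED,
            Msg.SERVICES_CONFIRMED, Msg.NODE_ADD_FINISHED);
        System.out.println(validOrder(ok)); // prints true
    }
}
```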

Hot redeployment

It should be possible to update a service implementation without downtime. Employing the Deployment SPI should solve this problem.

The service processor should subscribe to class deployments and restart the corresponding services when their classes change.

The basic usage scenario involves enabling UriDeploymentSpi and updating the JAR files containing the implementation classes. This leads to cancellation and redeployment of the existing services, which implies that services should be ready for sudden cancellation. The documentation should explain this fact with examples.
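The restart-on-class-change idea might look like the following sketch. This is an illustration of the intended behaviour only; it is not the actual Deployment SPI API, and the listener callback, version tracking and lifecycle interface are all hypothetical.

```java
import java.util.*;
import java.util.concurrent.*;

// Illustrative sketch of hot redeployment: track a version per service class
// and restart (cancel + init) instances whose class version has changed.
public class HotRedeploySketch {
    /** Hypothetical minimal service lifecycle, standing in for Ignite's Service. */
    interface ServiceLifecycle {
        void cancel();
        void init();
    }

    private final Map<String, Integer> deployedVersions = new ConcurrentHashMap<>();

    /** Hypothetical callback invoked when a class deployment is (re)registered. */
    void onClassDeployed(String serviceClass, int version, ServiceLifecycle svc) {
        Integer prev = deployedVersions.put(serviceClass, version);

        if (prev != null && prev != version) {
            svc.cancel(); // services must tolerate sudden cancellation
            svc.init();   // restart with the freshly deployed class
        }
    }
}
```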

Risks and Assumptions

These changes will completely break compatibility with previous versions of Apache Ignite.

Also, there will be no way to preserve services between cluster restarts, even though such a possibility currently exists.

Further work

There are still some flaws in the current service grid design that are not covered in this IEP.

Moving existing services to new nodes

Suppose we have one server node and deploy several cluster-singleton services on it.

If you add more nodes, service instances won't be moved to them, so the single node will keep serving all the requests. This situation should be solved by proper service rebalancing.

The changes described in this IEP don't interfere with service rebalancing in any way, so it can be done as a separate task after the proposed changes are implemented.

Service persistence

In the previous design, services could be persisted to disk along with other data in the system caches.

Since the utility cache will be abandoned after the proposed changes, there will be no way to preserve deployed services between cluster restarts.

If we decide to keep this feature, we should implement storing of service configurations. A way to configure this behaviour should also be developed.

Discussion Links

Service grid redesign: http://apache-ignite-developers.2346864.n4.nabble.com/Service-grid-redesign-td28521.html

Service versioning: http://apache-ignite-developers.2346864.n4.nabble.com/Service-versioning-td20858.html

Tickets
