ID | IEP-17 |
Author | Denis Mekhanikov |
Sponsor | |
Created | |
Status | DRAFT |
The current service deployment procedure depends on an internal replicated cache. Each service deployment is a distributed transaction on this cache. This procedure has proved to be deadlock-prone on an unstable topology.
Also, the current implementation doesn't pass service deployment results back to the deploying node. IgniteServices#deploy* methods return even before Service#init() starts executing, so there is no way to know whether a service was deployed successfully or not. Even if Service#init() throws an exception, the deploying side considers the service successfully deployed anyway.
It is also impossible to have data-free server nodes that are responsible only for running services and compute tasks, because the system cache is always present on all server nodes.
Currently, when a service's implementation or configuration changes, existing instances can't be redeployed without manual undeployment. GridServiceProcessor has access only to the serialized representation of services, so it can't tell whether anything has changed since the previous deployment.
This section contains a description of the proposed service deployment protocol.
To make the service deployment process more reliable on an unstable topology and to avoid the stuck deployments possible in the current architecture, service deployment should be based on the distribution of custom discovery messages.
Deployment starts with sending a custom discovery message that notifies all nodes in the cluster about the ongoing deployment. This message contains the serialized service instance and its configuration. It is delivered first to the coordinator node, which calculates the service deployment assignments and adds this information to the message. During the following round-trip of this message, nodes save the deployment assignments to a local storage, and the nodes chosen to deploy the services perform initialisation asynchronously in a dedicated thread pool.
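The assignment calculation on the coordinator could look roughly like the sketch below. It is plain Java with no Ignite internals; the round-robin policy and the total-count/max-per-node parameters are assumptions modelled after ServiceConfiguration, and the class itself is hypothetical.

```java
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.UUID;

/** Hypothetical round-robin assignment calculator the coordinator might use. */
public class ServiceAssignmentCalculator {
    /**
     * Distributes {@code totalCnt} service instances across the given nodes,
     * at most {@code maxPerNode} per node (0 means unlimited).
     *
     * @return Map from node ID to the number of instances that node must deploy.
     */
    public static Map<UUID, Integer> calculate(Collection<UUID> nodes, int totalCnt, int maxPerNode) {
        Map<UUID, Integer> assignments = new LinkedHashMap<>();

        for (UUID node : nodes)
            assignments.put(node, 0);

        int remaining = totalCnt;

        // Round-robin until all instances are placed or every node is full.
        while (remaining > 0) {
            boolean placed = false;

            for (UUID node : nodes) {
                if (remaining == 0)
                    break;

                int cur = assignments.get(node);

                if (maxPerNode == 0 || cur < maxPerNode) {
                    assignments.put(node, cur + 1);
                    remaining--;
                    placed = true;
                }
            }

            if (!placed)
                break; // All nodes are at their per-node limit.
        }

        return assignments;
    }
}
```

For example, five instances with a per-node limit of two across three nodes would be assigned as 2, 2 and 1.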
Once a node finishes the deployment procedure and Service#init() execution, it connects to the coordinator over the communication SPI and sends the deployment result to it: either an acknowledgement of successful deployment or a serialized exception.
Once all deployment results are collected, the coordinator sends another discovery message notifying all nodes about the successful deployment. This is the moment when deployment futures are completed and control is returned from IgniteServices#deploy* methods. Service#execute() also starts its work when the successful-deployment message arrives.
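The result-collection step on the coordinator can be sketched as follows. This is a simplified, hypothetical helper (not the actual implementation): it tracks which assigned nodes have reported and completes a future once every result has arrived, at which point the coordinator would send the final discovery message.

```java
import java.util.Collection;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical collector the coordinator might use to gather per-node deployment results. */
public class DeploymentResultCollector {
    /** Nodes that were assigned the service and haven't reported yet. */
    private final Set<UUID> pending = ConcurrentHashMap.newKeySet();

    /** Per-node deployment errors (empty map means full success). */
    private final Map<UUID, Throwable> errors = new ConcurrentHashMap<>();

    /** Completed once every assigned node has reported. */
    private final CompletableFuture<Map<UUID, Throwable>> fut = new CompletableFuture<>();

    public DeploymentResultCollector(Collection<UUID> assignedNodes) {
        pending.addAll(assignedNodes);
    }

    /** Called when a node reports success ({@code err == null}) or a serialized exception. */
    public void onResult(UUID nodeId, Throwable err) {
        if (err != null)
            errors.put(nodeId, err);

        if (pending.remove(nodeId) && pending.isEmpty())
            fut.complete(errors); // All results collected: send the final discovery message.
    }

    /** Future mapping failed nodes to their deployment errors. */
    public CompletableFuture<Map<UUID, Throwable>> future() {
        return fut;
    }
}
```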
There are three types of errors that should be handled correctly.
When a new node connects to the existing cluster, all needed services are deployed and initialised on it by the time it is accepted into the topology.
This is what happens when a new node joins the cluster:
It should be possible to update a service implementation without downtime. Employing the Deployment SPI should solve this problem.
The service processor should subscribe to class deployments and restart the corresponding services when their classes change.
The basic usage scenario involves enabling UriDeploymentSpi and updating the JAR files containing the implementation classes. This leads to cancellation and redeployment of the existing services. It implies that services should be ready for sudden cancellation. The documentation should explain this fact with examples.
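A minimal configuration sketch for this scenario, assuming the standard UriDeploymentSpi from the ignite-urideploy module; the directory path is illustrative:

```java
import java.util.Collections;

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.deployment.uri.UriDeploymentSpi;

// Watch a local directory for deployment units; replacing a JAR there
// would cancel and redeploy the services whose classes it contains.
IgniteConfiguration cfg = new IgniteConfiguration();

UriDeploymentSpi deploymentSpi = new UriDeploymentSpi();
deploymentSpi.setUriList(Collections.singletonList("file:///opt/ignite/services"));

cfg.setDeploymentSpi(deploymentSpi);
```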
These changes will completely break compatibility with previous versions of Apache Ignite.
Also, there will be no way to preserve services between cluster restarts, even though this is currently possible.
There are still some flaws in the current service grid design that are not covered in this IEP.
Suppose we have one server node, and we deploy several cluster-singleton services on it. If more nodes are added, the service instances won't be moved to them, so the single node will keep serving all the requests. This situation should be solved by proper service rebalancing.
The changes described in this IEP don't interfere with service rebalancing in any way, so it can be done as a separate task after the proposed changes are implemented.
In the previous design, services could be persisted to disk along with other data in the system caches.
Since the utility cache will be abandoned after the proposed changes, there will be no way to preserve deployed services between cluster restarts.
If we decide to keep this feature, we should implement storing of the service configurations. A way to configure this behaviour should also be developed.
Service grid redesign: http://apache-ignite-developers.2346864.n4.nabble.com/Service-grid-redesign-td28521.html
Service versioning: http://apache-ignite-developers.2346864.n4.nabble.com/Service-versioning-td20858.html