Problem Statement
Currently, when a developer wants to change the configuration of a cluster (say, create a region, destroy an index, or update an async event queue) and have the change persisted in the cluster configuration for incoming servers, there is no public API to do so. They must replicate the effort of the equivalent gfsh command to achieve the same effect. It would be useful to expose what these commands do through a public API.
Product Goals
The developer should be able to:
- Save a configuration to the Cluster Management Service without having to restart the servers
- Obtain the cluster management service from a cache when calling from a client or a server
- Pass a config object to the cluster management service
- Use CRUD operations to manage config objects
User Goals
Create a more modular product to allow for easy extension and integration.
The beneficiaries of this work are those who want to change the configuration of the cluster (create/destroy regions, indices, gateway receivers/senders, etc.) and have those changes replicated on all the applicable servers and persisted in the configuration persistence service for newly joining servers. This includes developers working on different parts of the code, such as Spring Data for Apache Geode, Lucene index queries, storage for the JDBC connector, and other Geode developers.
What We Have Now:
Our admin REST API "sort of" already serves this purpose, but it has these shortcomings:
- It's not a public API
- The API is restricted to the operations implemented as gfsh commands, as the argument to the API is a gfsh command string.
- Each command does similar things, yet commands may not be consistent with each other.
Below is a diagram of the current state of things:
Given the current state of the commands, it is not easy to extract a common interface for them all, and developers do not want to use gfsh command strings as a "makeshift" API to invoke the commands. We need a unified interface and a unified workflow for all the commands.
Proposal
We propose a new Cluster Management Service (CMS) which has two responsibilities:
- Update the runtime configuration of servers (if any are running)
- Persist configuration (if enabled)
The CMS API is exposed as a new endpoint as part of "Admin REST APIs", accepting configuration objects (JSON) that need to be applied to the cluster. CMS adheres to the standard REST semantics, so users can use POST, PATCH, DELETE and GET to create, update, delete or read, respectively. The API returns a JSON body that contains a message describing the result along with standard HTTP status codes.
Create API
| API | Status Code | Response Body |
|---|---|---|
| **Endpoint:** `http://locator:8080/geode/v2/regions/Foo`<br>**Method:** POST<br>**Headers:** `security-username: user1`<br>`security-password: password1`<br>**Body:** `{ "regionConfig": { "refId": "REPLICATE" } }` | 201 | Success Response<br>`{ "Metadata": { "Url": "/geode/v2/regions/Foo" } }` |
| | 304 | Success Response<br>`{ "message": "Region /Foo already exists" }` |
| | 400 | Error Response<br>`{ "message": "Region type is a required parameter" }` |
| | 401 | Error Response<br>`{ "message": "Missing authentication credential header(s)" }` |
| | 403 | Error Response<br>`{ "message": "User1 not authorized for DATA:MANAGE" }` |
| | 500 | Error Response<br>`{ "message": "Failed to create region /Foo because of <reason>" }` |
Note that the CREATE endpoint is idempotent – i.e. it should be a NOOP if the region already exists.
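To make these semantics concrete, here is a toy, in-memory sketch of the idempotent create/update/delete flow with the status codes above. All names here (`ConfigResult`, `InMemoryCms`, and the map-backed store) are illustrative assumptions, not the proposed implementation; real calls would go through the REST endpoint or the ClusterManagementService API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: models the CRUD + idempotency semantics
// of the proposed endpoints with a plain in-memory map.
public class CmsSketch {

    /** Result of a management call: an HTTP-style status code plus a message. */
    static final class ConfigResult {
        final int statusCode;
        final String message;
        ConfigResult(int statusCode, String message) {
            this.statusCode = statusCode;
            this.message = message;
        }
    }

    /** The unified CRUD surface every configuration element would share. */
    interface ClusterManagementService {
        ConfigResult create(String name, String config); // POST
        ConfigResult update(String name, String config); // PATCH
        ConfigResult delete(String name);                // DELETE
        String get(String name);                         // GET; null if absent
    }

    /** Toy stand-in for "apply to servers + persist the configuration". */
    static final class InMemoryCms implements ClusterManagementService {
        private final Map<String, String> store = new LinkedHashMap<>();

        public ConfigResult create(String name, String config) {
            if (store.containsKey(name)) {
                // Idempotent create: a duplicate is a NOOP, reported as 304.
                return new ConfigResult(304, "Region /" + name + " already exists");
            }
            store.put(name, config);
            return new ConfigResult(201, "/geode/v2/regions/" + name);
        }

        public ConfigResult update(String name, String config) {
            if (!store.containsKey(name)) {
                return new ConfigResult(404, "Region with name '/" + name + "' does not exist");
            }
            store.put(name, config);
            return new ConfigResult(200, "/geode/v2/regions/" + name);
        }

        public ConfigResult delete(String name) {
            if (store.remove(name) == null) {
                return new ConfigResult(404, "Region with name '/" + name + "' does not exist");
            }
            return new ConfigResult(204, "");
        }

        public String get(String name) {
            return store.get(name);
        }
    }

    public static void main(String[] args) {
        ClusterManagementService cms = new InMemoryCms();
        System.out.println(cms.create("Foo", "{\"refId\":\"REPLICATE\"}").statusCode); // 201
        System.out.println(cms.create("Foo", "{\"refId\":\"REPLICATE\"}").statusCode); // 304
        System.out.println(cms.delete("Foo").statusCode);                              // 204
    }
}
```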
Get API
| API | Status Code | Response Body |
|---|---|---|
| **Endpoint:** `http://locator:8080/geode/v2/regions`<br>**Method:** GET<br>**Headers:** `security-username: user1`<br>`security-password: password1` | 200 | Success Response<br>`{ "Total_results": 10, "Regions": [ { "Name": "Foo", "Url": "/geode/v2/regions/Foo" }, ... ] }` |
| | 401 | Error Response<br>`{ "message": "Missing authentication credential header(s)" }` |
| | 403 | Error Response<br>`{ "message": "User1 not authorized for DATA:MANAGE" }` |
| API | Status Code | Response Body |
|---|---|---|
| **Endpoint:** `http://locator:8080/geode/v2/regions/Foo`<br>**Method:** GET<br>**Headers:** `security-username: user1`<br>`security-password: password1` | 200 | Success Response<br>`{ "Name": "Foo", "Data_Policy": "partition", "Hosting_Members": [ "s1", "s2", "s3" ], "Size": 0, "Indices": [ { "Id": 111, "Url": "/geode/v2/regions/Foo/index/111" } ] }` |
| | 401 | Error Response<br>`{ "message": "Missing authentication credential header(s)" }` |
| | 403 | Error Response<br>`{ "message": "User1 not authorized for DATA:MANAGE" }` |
| | 404 | Error Response<br>`{ "message": "Region with name '/Foo' does not exist" }` |
Update API
| API | Status Code | Response Body |
|---|---|---|
| **Endpoint:** `http://locator:8080/geode/v2/regions/Foo`<br>**Method:** PATCH<br>**Headers:** `security-username: user1`<br>`security-password: password1`<br>**Body:** `{ "regionConfig": { "enable_subscription": true } }` | 200 | Success Response<br>`{ "Metadata": { "Url": "/geode/v2/regions/Foo" } }` |
| | 400 | Error Response<br>`{ "message": "Invalid parameter specified" }` |
| | 401 | Error Response<br>`{ "message": "Missing authentication credential header(s)" }` |
| | 403 | Error Response<br>`{ "message": "User1 not authorized for DATA:MANAGE" }` |
| | 404 | Error Response<br>`{ "message": "Region with name '/Foo' does not exist" }` |
| | 500 | Error Response<br>`{ "message": "Failed to update region /Foo because of <reason>" }` |
Delete API
| API | Status Code | Response Body |
|---|---|---|
| **Endpoint:** `http://locator:8080/geode/v2/regions/Foo`<br>**Method:** DELETE<br>**Headers:** `security-username: user1`<br>`security-password: password1` | 204 | `<Successful deletion>` |
| | 401 | Error Response<br>`{ "message": "Missing authentication credential header(s)" }` |
| | 403 | Error Response<br>`{ "message": "User1 not authorized for DATA:MANAGE" }` |
| | 404 | Error Response<br>`{ "message": "Region with name '/Foo' does not exist" }` |
| | 410 | Error Response<br>`{ "message": "Region with name /Foo has already been deleted" }` |
| | 500 | Error Response<br>`{ "message": "Failed to delete region /Foo because of <reason>" }` |
Let's look at some code to see how users can use this service. The example below shows how to create a region using CMS.
Curl (any standard REST client)
```shell
curl http://locator.host:8080/geode/v2/regions/Foo -X POST \
  -d '{ "regionConfig": { "refId": "REPLICATE" } }'
```
On Client
```java
public class MyApp {
  public static void main(String[] args) {
    // 1. Get the service from the cache
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("127.0.0.1", 10334)
        .create();
    ClusterManagementService cms = cache.getClusterManagementService();

    // 2. Create the config object; these are JAXB-generated POJOs
    RegionConfig regionConfig = new RegionConfig();
    regionConfig.setRefId("REPLICATE");

    // 3. Invoke create, update, delete or get depending on what you want to do.
    // create(regionName, config) returns a ConfigResult or throws an exception
    ConfigResult result = cms.createRegion("Foo", regionConfig);
  }
}
```
On Server
Here's how one can use CMS on a server.
```java
public class MyFunction implements Function<String> {
  @Override
  public void execute(FunctionContext context) {
    // 1. Get the service from the cache
    Cache cache = context.getCache();
    ClusterManagementService cms = cache.getClusterManagementService();

    // 2. Create the config object; these are JAXB-generated POJOs
    RegionConfig regionConfig = new RegionConfig();
    regionConfig.setRefId("REPLICATE");

    // 3. Invoke create, update, delete or get depending on what you want to do.
    // create(regionName, config) returns a ConfigResult or throws an exception
    ConfigResult result = cms.createRegion("Foo", regionConfig);
  }
}
```
Behind the scenes
Following the Configuration Persistence Service effort, we already have a set of configuration objects derived from the cache XML schema. These serve as the common objects the developer uses to describe the desired configuration. The developer then asks the cluster management service to apply it, either on the cache (creating the real thing on an existing cache) or in the configuration persistence service (persisting the configuration itself for incoming servers).
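Schematically, such a configuration element is just a POJO with getters and setters mirroring the XML schema. The class below is a simplified, hand-written stand-in for the generated `RegionConfig`; the field names and the `toJson()` helper are illustrative assumptions, not the generated API.

```java
// Simplified, hand-written stand-in for a JAXB-generated configuration
// element. The real RegionConfig is generated from the cache XML schema;
// the fields and toJson() helper here are illustrative only.
public class RegionConfigSketch {

    static class RegionConfig {
        private String name;
        private String refId; // e.g. the "REPLICATE" shortcut from the schema

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getRefId() { return refId; }
        public void setRefId(String refId) { this.refId = refId; }

        /** Serialize into the shape the REST endpoint above accepts. */
        String toJson() {
            return "{ \"regionConfig\": { \"refId\": \"" + refId + "\" } }";
        }
    }

    public static void main(String[] args) {
        RegionConfig config = new RegionConfig();
        config.setName("Foo");
        config.setRefId("REPLICATE");
        System.out.println(config.toJson()); // { "regionConfig": { "refId": "REPLICATE" } }
    }
}
```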
Pros and Cons:
Pros:
- A common interface to call either on the locator/server/client side
- A common workflow to enforce behavior consistency
- Modularized implementation. A configuration object needs to implement the additional interfaces in order to be used in this API. This allows us to add functionality gradually, per functional group.
Cons:
- Existing gfsh commands need to be refactored to use this API as well; otherwise we would have duplicate implementations, or different behaviors between this API and the gfsh commands.
- When refactoring gfsh commands, some commands' behaviors will change if they are to strictly follow this workflow, unless we add additional APIs for specific configuration objects.
Migration Strategy:
Our current commands use numerous options to configure their behavior. We will follow these steps to refactor the commands.
- Combine all the command options into one configuration object inside the command itself.
- Have the command execution call the public API if the command conforms to the new workflow. In this step, the config objects need to implement the ClusterConfigElement interface.
- If the command can't use the common workflow, make a special method in the API for that specific configuration object. (We need to evaluate carefully - we don't want to make too many exceptions to the common workflow.)
The above work can be divided into functional groups so that different groups can share the workload.
Once all the commands are converted to use the ClusterManagementService API, each command class can be reduced to a facade that collects the options and their values, builds the config object, and calls into the API. At that point, the command objects can exist only on the gfsh client.
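The facade shape described above might look roughly like the sketch below. `CreateRegionCommand`, its option handling, and the stubbed service are hypothetical, shown only to illustrate "collect options, build config, call the API"; the real commands and service signatures may differ.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical facade: a gfsh command reduced to option collection,
// config building, and a single call into the management API.
public class CommandFacadeSketch {

    /** Stand-in for the real service; only the call shape matters here. */
    interface ClusterManagementService {
        String createRegion(String name, Map<String, String> regionConfig);
    }

    /** What a command class could shrink to: no distributed logic of its own. */
    static class CreateRegionCommand {
        private final ClusterManagementService cms;

        CreateRegionCommand(ClusterManagementService cms) {
            this.cms = cms;
        }

        String execute(String regionName, String type) {
            // 1. Collect the gfsh options into a single config object.
            Map<String, String> regionConfig = new HashMap<>();
            regionConfig.put("refId", type);
            // 2. Delegate everything else to the management service.
            return cms.createRegion(regionName, regionConfig);
        }
    }

    public static void main(String[] args) {
        // Stub service that just reports what it was asked to do.
        ClusterManagementService cms =
            (name, config) -> "created /" + name + " as " + config.get("refId");
        String result = new CreateRegionCommand(cms).execute("Foo", "REPLICATE");
        System.out.println(result); // created /Foo as REPLICATE
    }
}
```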
The end architecture would look like this:
Project Milestones
- API is clearly defined
- All commands are converted to use this API
- Command classes exist only on the gfsh client. The GfshHttpInvoker uses the REST API to call the ClusterManagementService with the configuration objects directly.