Top-Level Goal
The top-level goal is a single API for managing cluster configuration.
The beneficiaries of this work are developers who want to change the configuration of a cluster, say, create a region, destroy an index, or update an async event queue (or create/destroy gateway receivers and senders), and have these changes replicated on all the applicable servers and persisted in the cluster configuration for incoming servers. Today there is no public API for this: they must replicate the effort of the equivalent gfsh command to achieve the same effect. It would be nice if we could expose what these commands do as a public API. In addition to developers building Geode-based applications, the target user group includes developers working on different parts of the Geode code, such as Spring Data for Apache Geode, queries for the Lucene index, or storage for the JDBC connector.
Problem Statement
In the current implementation:
- Most cluster configuration tasks are possible, but only by coordinating XML-based configuration files, properties files, and gfsh commands.
- Many of the desired outcomes are achievable through multiple paths.
- Establishing a consistent configuration and persisting it across the cluster is difficult, sometimes impossible.
Product Goals
The developer should be able to:
- Create regions/indices on the fly.
- Persist the configuration and apply it to the cluster (when a new node joins, it has the config; when a server restarts, it has the config).
- Obtain a consistent view of the current configuration.
- Apply the same change to the cluster in the same way.
- Change the configuration in one place.
- Obtain this configuration without being on the cluster.
Proposed Solution
The proposed solution includes:
- Address the multiple-path issue by presenting a single public API for configuring the cluster, including such tasks as creating a region, destroying an index, or updating an async event queue.
- Provide a means to persist the change in the cluster configuration.
- Save a configuration to the Cluster Management Service without having to restart the servers
- Obtain the cluster management service from a cache when calling from a client or a server
- Pass a config object to the cluster management service
- Use CRUD operations to manage config objects
This solution should meet the following requirements:
- The user is authenticated and authorized for each API call, based on the resource he/she is trying to access (enforced by the SecurityManager, with finer-grained permissions).
- The product becomes more modular, allowing for easy extension and integration.
- Users can call the API from either the client side or the server side.
- The outcome (behavior) is the same on both client and server: each change affects the whole cluster, and each operation is idempotent.
What We Have Now
Our admin REST API "sort of" already serves this purpose, but it has these shortcomings:
- It's not a public API
- The API is restricted to the operations implemented as gfsh commands, as the argument to the API is a gfsh command string.
- Each command does similar things, yet commands may not be consistent with each other.
Below is a diagram of the current state of things:
[Gliffy diagram]
Given the current state of the commands, it's not easy to extract a common interface for all of them. And developers do not want to use gfsh command strings as a "makeshift" API to call into the commands. We need a unified interface and a unified workflow for all the commands.
Proposal
We propose a new Cluster Management Service (CMS) which has two responsibilities:
- Update the runtime configuration of servers (if any are running)
- Persist the configuration (if Cluster Configuration is enabled)
Note that in order to use this API, Cluster Configuration needs to be enabled.
[Gliffy diagram]
The CMS API is exposed as a new endpoint as part of "Admin REST APIs", accepting configuration objects (JSON) that need to be applied to the cluster. CMS adheres to the standard REST semantics, so users can use POST, PATCH, DELETE and GET to create, update, delete or read, respectively. The API returns a JSON body that contains a message describing the result along with standard HTTP status codes.
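As a sketch of these REST semantics, the four CRUD operations map onto HTTP requests as follows. The helper class below is hypothetical (only the endpoint paths and the `security-username`/`security-password` headers come from this proposal), built on the JDK's own `java.net.http` types:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Hypothetical helper showing how CMS CRUD operations map to HTTP verbs.
// The base URL and security headers follow the examples in this proposal.
public class CmsRequests {
    static final String BASE = "http://locator:8080/geode/v2";

    static HttpRequest.Builder base(String path) {
        return HttpRequest.newBuilder(URI.create(BASE + path))
                .header("security-username", "user1")
                .header("security-password", "password1");
    }

    // POST creates a config object (idempotent, per this proposal)
    static HttpRequest create(String region, String json) {
        return base("/regions/" + region)
                .POST(HttpRequest.BodyPublishers.ofString(json)).build();
    }

    // PATCH updates an existing config object
    static HttpRequest update(String region, String json) {
        return base("/regions/" + region)
                .method("PATCH", HttpRequest.BodyPublishers.ofString(json)).build();
    }

    // DELETE removes a config object
    static HttpRequest delete(String region) {
        return base("/regions/" + region).DELETE().build();
    }

    // GET reads one config object (or the collection, without a name)
    static HttpRequest get(String region) {
        return base("/regions/" + region).GET().build();
    }

    public static void main(String[] args) {
        HttpRequest r = create("Foo", "{\"regionConfig\":{\"refId\":\"REPLICATE\"}}");
        System.out.println(r.method() + " " + r.uri());
        // → POST http://locator:8080/geode/v2/regions/Foo
    }
}
```

The requests are only built here, not sent; a real client would pass them to an `HttpClient` and inspect the status code and JSON body described below.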
Create API
Endpoint: http://locator:8080/geode/v2/regions/Foo
Method: POST
Headers:
security-username: user1
security-password: password1
Body:
{
    "regionConfig": {
        "refId": "REPLICATE"
    }
}
Success:

{
    "Metadata": {
        "Url": "/geode/v2/regions/Foo"
    }
}

Region already exists:

{
    "message": "Region /Foo already exists"
}

Missing required parameter:

{
    "message": "Region type is a required parameter"
}

Missing credentials:

{
    "message": "Missing authentication credential header(s)"
}

Not authorized:

{
    "message": "User1 not authorized for DATA:MANAGE"
}

Internal failure:

{
    "message": "Failed to create region /Foo because of <reason>"
}
Note that the CREATE endpoint is idempotent – i.e. it should be a NOOP if the region already exists.
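A minimal in-memory sketch of that idempotency rule (the `RegionStore` class and its methods are illustrative only; the real service would consult the persisted cluster configuration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the NOOP-if-exists rule for the create endpoint.
// "RegionStore" is a hypothetical stand-in for the persisted configuration.
public class RegionStore {
    private final Map<String, String> regions = new ConcurrentHashMap<>();

    // Returns a message mirroring the responses documented above.
    public String create(String name, String refId) {
        String existing = regions.putIfAbsent(name, refId);
        if (existing != null) {
            return "Region /" + name + " already exists"; // NOOP, not an error
        }
        return "Region /" + name + " created";
    }

    public static void main(String[] args) {
        RegionStore store = new RegionStore();
        System.out.println(store.create("Foo", "REPLICATE")); // Region /Foo created
        System.out.println(store.create("Foo", "REPLICATE")); // Region /Foo already exists
    }
}
```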
Get API
Endpoint: http://locator:8080/geode/v2/regions
Method: GET
Headers:
security-username: user1
security-password: password1
200
{
    "Total_results": 10,
    "Regions" : [
        {
            "Name": "Foo",
            "Url": "/geode/v2/regions/Foo"
        },
        ...
    ]
}
Missing credentials:

{
    "message": "Missing authentication credential header(s)"
}

Not authorized:

{
    "message": "User1 not authorized for DATA:MANAGE"
}
Endpoint: http://locator:8080/geode/v2/regions/Foo
Method: GET
Headers:
security-username: user1
security-password: password1
200
{
    "Name": "Foo",
    "Data_Policy": "partition",
    "Hosting_Members": [
        "s1",
        "s2",
        "s3"
    ],
    "Size": 0,
    "Indices": [
        {
            "Id": 111,
            "Url": "/geode/v2/regions/Customer/index/111"
        }
    ]
}
Missing credentials:

{
    "message": "Missing authentication credential header(s)"
}

Not authorized:

{
    "message": "User1 not authorized for DATA:MANAGE"
}

Region not found:

{
    "message": "Region with name '/Foo' does not exist"
}
Update API
Endpoint: http://locator:8080/geode/v2/regions/Foo
Method: PATCH
Headers:
security-username: user1
security-password: password1
Body:
{
    "regionConfig": {
        "enable_subscription": true
    }
}
200
{
    "Metadata": {
        "Url": "/geode/v2/regions/Foo"
    }
}

Invalid parameter:

{
    "message": "Invalid parameter specified"
}

Missing credentials:

{
    "message": "Missing authentication credential header(s)"
}

Not authorized:

{
    "message": "User1 not authorized for DATA:MANAGE"
}

Region not found:

{
    "message": "Region with name '/Foo' does not exist"
}

Internal failure:

{
    "message": "Failed to update region /Foo because of <reason>"
}
Delete API
Endpoint: http://locator:8080/geode/v2/regions/Foo
Method: DELETE
Headers:
security-username: user1
security-password: password1
204
<Successful deletion>
Missing credentials:

{
    "message": "Missing authentication credential header(s)"
}

Not authorized:

{
    "message": "User1 not authorized for DATA:MANAGE"
}

Region not found:

{
    "message": "Region with name '/Foo' does not exist"
}

Region already deleted:

{
    "message": "Region with name /Foo has already been deleted"
}

Internal failure:

{
    "message": "Failed to delete region /Foo because of <reason>"
}
Let's look at some code to see how users can use this service. The below example shows how to create a region using CMS.
Curl (any standard REST client)
curl http://locator.host:8080/geode/v2/regions/Foo -XPOST \
  -H "security-username: user1" \
  -H "security-password: password1" \
  -d '
{
    "regionConfig": {
        "refId" : "REPLICATE"
    }
}'
On Client
public class MyApp {
    public static void main(String[] args) {
        // 1. Get the service from the cache
        ClientCache cache = new ClientCacheFactory().addPoolLocator("127.0.0.1", 10334).create();
        ClusterManagementService cms = cache.getClusterManagementService();

        // 2. Create the config object; these are JAXB-generated POJOs
        RegionConfig regionConfig = new RegionConfig();
        regionConfig.setRefId("REPLICATE");

        // 3. Invoke create, update, delete or get depending on what you want to do.
        // createRegion(regionName, config) returns a ConfigResult or throws an exception
        ConfigResult result = cms.createRegion("Foo", regionConfig);
    }
}
On Server
Here's how one can use CMS on a server.
public class MyFunction implements Function<String> {
    @Override
    public void execute(FunctionContext context) {
        // 1. Get the service from the cache
        Cache cache = context.getCache();
        ClusterManagementService cms = cache.getClusterManagementService();

        // 2. Create the config object; these are JAXB-generated POJOs
        RegionConfig regionConfig = new RegionConfig();
        regionConfig.setRefId("REPLICATE");

        // 3. Invoke create, update, delete or get depending on what you want to do.
        // createRegion(regionName, config) returns a ConfigResult or throws an exception
        ConfigResult result = cms.createRegion("Foo", regionConfig);
    }
}
Behind the scenes
Following the effort described in Configuration Persistence Service, we already have a set of configuration objects derived from the cache XML schema. These serve as the common objects the developer uses to describe the desired configuration. The developer then asks the cluster management service to apply it, either on the cache (creating the real thing on an existing cache) or on the configuration persistence service (persisting the configuration itself).
How does it work
On the locator side, the configuration service framework will just handle the workflow. It's up to each individual ClusterConfigElement to implement how it needs to be persisted and applied.
[Gliffy diagram]
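The per-element contract described above could look roughly like this. The interface name `ClusterConfigElement` comes from this proposal; the two methods sketched here are assumptions about its shape, not the actual Geode signatures:

```java
// ClusterConfigElement is named in this proposal; the methods shown here
// are assumed for illustration, not the real Geode interface.
public interface ClusterConfigElement {
    // Apply the change to a running server's cache (runtime configuration).
    String applyToRuntime();

    // Merge the change into the persisted cluster configuration.
    String persist();
}

// Example element: a region definition that knows how to do both steps.
class RegionElement implements ClusterConfigElement {
    private final String name;
    private final String refId;

    RegionElement(String name, String refId) {
        this.name = name;
        this.refId = refId;
    }

    @Override
    public String applyToRuntime() {
        return "create region " + name + " (" + refId + ") on this server";
    }

    @Override
    public String persist() {
        return "add <region name=\"" + name + "\"/> to cluster configuration";
    }
}
```

The framework never needs to know what a region (or index, or queue) is; it only drives each element through these two steps.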
This is what happens inside the LocatorClusterManagementService for a create operation:
[Gliffy diagram]
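The create path can also be sketched in code. Everything below is illustrative (the class, method, and step ordering are inferred from this proposal, not taken from the Geode source):

```java
import java.util.List;

// Illustrative-only sketch of the create workflow inside
// LocatorClusterManagementService; every identifier here is hypothetical.
public class LocatorCreateSketch {
    public static String create(String user, boolean authorized,
                                boolean alreadyExists, List<String> runningServers) {
        // 1. Authorize the caller for the resource being changed.
        if (!authorized) {
            return user + " not authorized for DATA:MANAGE";
        }
        // 2. Idempotency check against the persisted cluster configuration.
        if (alreadyExists) {
            return "Region /Foo already exists"; // NOOP success
        }
        // 3. Delegate to the element to apply the change on each running server.
        for (String server : runningServers) {
            System.out.println("applying on " + server);
        }
        // 4. Delegate to the element to persist for future members.
        return "Region /Foo created";
    }
}
```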
This is what happens inside the LocatorClusterManagementService for a list operation:
[Gliffy diagram]
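The list path is simpler because it is read-only. A sketch under the same caveats (all names illustrative): the service authorizes the caller, then projects the persisted configuration into the summary documents shown in the GET examples above.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative-only sketch of the list workflow: no runtime changes,
// just a read of the persisted configuration.
public class LocatorListSketch {
    public static List<Map<String, String>> listRegions(List<String> persistedRegions) {
        return persistedRegions.stream()
                .map(name -> Map.of("Name", name, "Url", "/geode/v2/regions/" + name))
                .collect(Collectors.toList());
    }
}
```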
Pros and Cons:
Pros:
- A common interface to call either on the locator/server/client side
- A common workflow to enforce behavior consistency
- Modularized implementation. A configuration object needs to implement the additional interfaces in order to be used in this API. This allows us to add functionality gradually, per functional group.
Cons:
- Existing gfsh commands need to be refactored to use this API as well, otherwise we would have duplicate implementations, or have different behaviors between this API and gfsh commands.
- When refactoring gfsh commands, some commands' behaviors will change if they want to strictly follow this workflow, unless we add additional APIs for specific configuration objects.
Migration Strategy:
Our current commands use numerous options to configure their behavior. We will have to follow these steps to refactor the commands:
- Combine all the command options into one configuration object inside the command itself.
- Have the command execution call the public API if the command conforms to the new workflow. In this step, the config objects need to implement ClusterConfigElement.
- If the command can't use the common workflow, make a special method in the API for that specific configuration object. (We need to evaluate carefully - we don't want to make too many exceptions to the common workflow.)
The above work can be divided into functional groups so that different groups can share the workload.
Once all the commands are converted using the ClusterManagementService API, each command class can be reduced to a facade that collects the options and their values, builds the config object and calls into the API. At this point, the command objects can exist only on the gfsh client.
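Under that end state, a command class reduces to roughly the following shape. All of the classes below are illustrative local stand-ins, not the actual Geode types (in particular, this `RegionConfig` and `ClusterManagementService` are trimmed-down sketches of the ones used elsewhere in this proposal):

```java
// Illustrative stand-in for the JAXB-generated config object.
class RegionConfig {
    String name;
    String refId;
}

// Illustrative stand-in for the management service API.
interface ClusterManagementService {
    String createRegion(String name, RegionConfig config);
}

// The command is now a facade: it collects gfsh option values, builds the
// config object, and makes exactly one call into the public API.
public class CreateRegionCommand {
    private final ClusterManagementService cms;

    public CreateRegionCommand(ClusterManagementService cms) {
        this.cms = cms;
    }

    public String execute(String name, String type) {
        RegionConfig config = new RegionConfig();
        config.name = name;
        config.refId = type;
        return cms.createRegion(name, config);
    }
}
```

Because the facade holds no cluster logic of its own, it can live solely on the gfsh client, as the milestones below require.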
The end architecture would look like this:
[Gliffy diagram]
Project Milestones
- API is clearly defined
- All commands are converted to use this API
- Command classes exist only on the gfsh client. The GfshHttpInvoker uses the REST API to call the ClusterManagementService with the configuration objects directly.