...
```java
ConfigurationManagementService cms = clientCache.getConfigurationManagementService();

// These are JAXB generated configuration objects
RegionConfig regionConfig = new RegionConfig();
regionConfig.setName("Foo");
regionConfig.setRefId("REPLICATE");

// create(config, memberOrGroup, ifNotExists) returns a ProxyRegionConfigResult or throws an exception
ConfigResult result = cms.create(regionConfig, null, true);
```
On Server
Here's how one can use CMS on the server side:
```java
public class MyFunction implements Function<String> {
  @Override
  public void execute(FunctionContext context) {
    // 1. Get the service from the cache
    Cache cache = context.getCache();
    ConfigurationManagementService cms = cache.getConfigurationManagementService();

    // 2. Create the config object; these are just JAXB generated POJOs
    RegionConfig regionConfig = new RegionConfig();
    regionConfig.setName("Foo");
    regionConfig.setRefId("REPLICATE");

    // 3. Invoke create, update, delete or get depending on what you want to do.
    // create(config, memberOrGroup, ifNotExists) returns a RegionConfigResult or throws an exception
    ConfigResult result = cms.create(regionConfig, null, true);
  }
}
```
(Gliffy diagram)
Following the effort in Configuration Persistence Service, we already have a set of configuration objects derived from the cache XML schema. These would serve as the common objects that developers use: configure the config instance first, then ask the cluster management service to persist it, either on the cache (create the real thing on an existing cache) or in the configuration persistence service (persist the configuration itself).
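As a sketch, such a JAXB-generated configuration object is just a POJO mirroring an element of the cache XML schema. The shape below is illustrative only; the field and method names are assumptions, not the actual generated code:

```java
// Illustrative shape of a JAXB-generated config object. The real generated
// code mirrors the cache XML schema; the fields here are assumptions.
public class RegionConfig {
    private String name;   // the "name" attribute of the region element
    private String refId;  // the "refid" attribute, e.g. "REPLICATE"

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getRefId() { return refId; }
    public void setRefId(String refId) { this.refId = refId; }
}
```

Because these are plain POJOs, the same object can be handed to the service to create the real region or to persist the configuration itself.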
(Gliffy diagram)
Pros and Cons:
Pros:
- A common interface to call on the locator, server, or client side.
- A common workflow to enforce behavior consistency.
- A modularized implementation. A configuration object needs to implement the additional interfaces in order to be used in this API, which allows us to add functionality gradually, per functional group.
Cons:
- Existing gfsh commands need to be refactored to use this API as well; otherwise we would have duplicate implementations, or different behaviors between this API and the gfsh commands.
- When refactoring gfsh commands, some commands' behavior will change if they are to strictly follow this workflow, unless we add additional APIs for specific configuration objects.
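To illustrate the modularization point above: one plausible shape is a small opt-in interface that each configuration object implements before it can be used in the API. Only the name ClusterConfigElement comes from this proposal; the `getId()` method is an assumption for illustration:

```java
// Hypothetical opt-in interface. ClusterConfigElement is named in this
// proposal; the getId() method is an assumption for illustration.
interface ClusterConfigElement {
    String getId(); // identifies this element within the cluster configuration
}

// A config object opts into the common workflow by implementing the interface.
public class RegionConfig implements ClusterConfigElement {
    private String name;

    public void setName(String name) { this.name = name; }

    @Override
    public String getId() { return name; }
}
```

Config objects that do not yet implement the interface simply cannot be passed to the API, which is what lets functionality be added gradually, one functional group at a time.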
Migration Strategy:
Our current commands use numerous command options to configure their behavior. We will follow these steps to refactor the commands:
- Combine all the command options into one configuration object inside the command itself.
- Have the command execution call the public API if the command conforms to the new workflow. In this step, the config object needs to implement ClusterConfigElement.
- If a command can't use the common workflow, add a special method in the API for that specific configuration object. (We need to evaluate this carefully; we don't want to make too many exceptions to the common workflow.)
The above work can be divided into functional groups, with different groups sharing the workload.
Once all the commands are converted to use the ClusterManagementService API, the command classes are just facades that collect the option values, build the config object, and call into the API. At that point, the command classes need only exist on the gfsh client.
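A sketch of what such a facade could look like. The stub types below stand in for the proposed API so the example is self-contained; apart from ClusterManagementService, RegionConfig, and ConfigResult (which appear in this proposal), all names and signatures are assumptions:

```java
// Minimal stubs standing in for the proposed API, so the sketch compiles.
class RegionConfig {
    private String name, refId;
    public void setName(String name) { this.name = name; }
    public void setRefId(String refId) { this.refId = refId; }
    public String getName() { return name; }
}

class ConfigResult {
    public final String status;
    public ConfigResult(String status) { this.status = status; }
}

interface ClusterManagementService {
    ConfigResult create(RegionConfig config, String memberOrGroup, boolean ifNotExists);
}

// Hypothetical refactored command: a pure facade that collects the option
// values, builds the config object, and delegates to the common API.
public class CreateRegionCommand {
    private final ClusterManagementService cms;

    public CreateRegionCommand(ClusterManagementService cms) {
        this.cms = cms;
    }

    public ConfigResult execute(String name, String refId, String group, boolean ifNotExists) {
        RegionConfig config = new RegionConfig();
        config.setName(name);
        config.setRefId(refId);
        return cms.create(config, group, ifNotExists);
    }
}
```

Because the command holds no server-side logic of its own, it can live entirely on the gfsh client and talk to the service over whatever transport the invoker provides.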
The end architecture would look like this:
(Gliffy diagram)
Project Milestones
- The API is clearly defined.
- All commands are converted to use this API.
- Command classes exist only on the gfsh client. The GfshHttpInvoker uses the REST API to call the ClusterConfigurationService with the configuration objects directly.