
...

  1. Represent the widely distributed nature of a typical SOA so that SCA presents a cross-enterprise description of assembled components
  2. Policy: support policy matching where components require particular resources and hence particular, and separate, nodes
  3. HA/load balancing/performance scenarios where a single component appears on multiple nodes
  4. Load balancing/performance scenarios where the domain is spread across multiple nodes (same as 1 & 2 I believe)
  5. Dynamic wiring/registry-based service location, i.e. the SCA binding is called upon to automatically locate services based on registry entries (overlaps with all of the above)

Terminology

SCADomain, Composite, Component, Service, Reference - as described in the SCA specifications. Note that a Domain may span multiple runtime nodes. A Composite may also span multiple runtime nodes.

Distributed Domain

An SCA Domain (complete runtime configuration) that is "distributed over a series of interconnected runtime nodes".

Runtime

The logical container for one or more SCA Domains containing components. A runtime groups together one or more (distributed) runtime nodes.

Node

Provides an environment inside which SCA component instances execute. A node is an operating system process, separate from other nodes. It may be as simple as a single Java VM or take a more scalable and reliable form, such as a compute cluster.

Each node must be capable of supporting at least:

  • one implementation type
  • one binding type (which may be restricted to binding.sca)

A runtime node must be able to expose the service endpoints required by the components it runs. It must be able to support the client technology for the references of associated components.

Domain Node

The part of a Distributed Domain that runs on a Node.

Component Instance

The running component that services requests. A single component definition in an SCDL file may give rise to one or more component instances depending on how the component is scoped (using the @Scope annotation).
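For illustration, a minimal sketch of a composite-scoped implementation using the SCA Java annotations (the AddService interface and class name are borrowed from the calculator sample for illustration):

import org.osoa.sca.annotations.Scope;

// One instance is shared by all requests within the composite; with the
// default STATELESS scope the runtime may create a new instance per request.
@Scope("COMPOSITE")
public class AddServiceImpl implements AddService {
    public double add(double n1, double n2) {
        return n1 + n2;
    }
}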

See http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Terminology

Scoping The Distribution Problem

There are many existing technologies that deal with managing compute nodes and job scheduling, so it's probably safe to start by ignoring the issue of how the system picks the processors on which runtime nodes will run (1). The runtime management of the nodes themselves is therefore out of scope.

There are also many technologies that provide scalable, robust and/or high performance service hosting solutions, so we can also ignore the issue of how component instances are actually constructed as the runtime representation of components deployed to a runtime (3). For example, if a JVM clustering solution is chosen to implement a node then we assume that local method calls within that cluster will be handled by the clustering technology and no special action is required. If a higher level clustering technology is in operation, where each node in the cluster runs part of the domain and a component can be mapped to multiple nodes, then integration with the runtime is required; the most natural integration point is the SCA binding, which must interact with the clustering technology in order to locate target component services.

So the initial area of consideration is how the components of a domain are associated with runtime nodes (2).

[Diagram: scoping the distribution problem]

Cardinality

[Diagram: domain and node cardinality]
In the non-distributed case a single runtime node loads all contributions and runs all components.
In the distributed case a Domain may span many nodes.

...

Answer: Yes. Multiple Domains can run on the same runtime. It is up to the runtime implementation to ensure that appropriate partitioning is achieved, since SCA Domains are intended to be isolated (for example, a reference in one domain cannot directly reference a service in another domain through its SCA component and service names).

Scenario - Simple Distributed

...

Components

[Diagram: components in the simple distributed scenario]

Scenario - Web Application Cluster

In this basic scenario a number of composites are started across nodes from the command line and, once they are all started, messages are sent through the application.

Demonstrates: the SCA binding and service resolution within the domain.

Scenario - Standalone Node

Composites are added to the node through the node API and the node is started.

Demonstrates: resolution of wires across composites within a single node.

Scenario - Nodes Connected To A Domain

Composites are added through the node API and each node is started.

Demonstrates: compilation of a domain view of the application as composites are started on nodes.

Scenario - Nodes Running in a Web App

Nodes started in web apps run the composites from those applications and register them with the domain.

Demonstrates: compilation of a domain view of the application as web apps are run.

Scenario - Virtual Node

A node is associated with a domain that doesn't have a Tuscany runtime.

Demonstrates: Ability of Tuscany domain to include components/services that are not running on an SCA runtime.

Scenario - Domain Adding Nodes

A node is started and becomes part of the domain, ready to run composites. A more specific scenario is one where a distributed domain is used to support an application within a web application configured as a cluster.

Managing The Distributed Domain

The logical view of how the different parts of the solution communicate is shown below.

[Diagram: logical view of domain and node communication]

Messages - the application messages that flow between configured components. Messages will flow over bindings described explicitly in the assembly model or across the default binding used when no explicit binding is specified.

Configuration - in the distributed domain, configuration is shared across the nodes with which the domain is associated. This includes information about contributed resources, running components and their endpoints, and domain configuration items such as base URLs.

Events - as the domain runs interesting events will occur, for example, a node fails and is restarted, meaning that a set of endpoints changes.

...

Based on the calculator scenario we can imagine the following.


Interfaces

Node

  • start(nodeUri)
  • stop()
  • joinDomain(domainUri)
  • domainNodeConfigurationChange(domainUri)

ServiceDiscovery

  • findServiceEndpoint(domainUri, serviceName)
  • registerServiceEndpoint(domainUri, serviceName, url)

DomainNode

  • createDomainNode(domainUri, nodeUri)
  • startDomainNode(domainUri, nodeUri)
  • stopDomainNode(domainUri, nodeUri)

BaseUriMap

  • setBaseUri(domainUri, nodeUri, protocol, uri)
  • getBaseUri(domainUri, nodeUri, protocol)

ComponentMap

  • addComponent(componentName, domainUri, nodeUri)
  • removeComponent(componentName, domainUri, nodeUri)
  • getComponents(domainUri, nodeUri)

ContributionManager

  • addContribution(domainUri, contributionUri)
  • removeContribution(domainUri, contributionUri)

ComponentManager - a version is already defined in host embedded

  • startComponent(domainUri, componentUri)
  • stopComponent(domainUri, componentUri)

Distributed Domain

  • getDomainNodeConfiguration(domainUri, nodeUri)
  • registerNode(domainUri, nodeUri)

Event (I expect there is some specified interface we can use here)

  • logEvent(domainUri,event)
  • getEventLog(domainUri)

Walkthrough

1. Running a node

  • Run a node exe giving it a node uri and a domain to join.
  • The node exe will embed and start a Tuscany runtime.
  • Node service is exposed
  • Node will discover where the distributed domain is running
    • in a file based scenario configuration is available locally
    • discovery can be hardcoded if required.
  • Create and start a domain node

2. Running the distributed domain

  • Start the distributed domain exe
    • Note the distributed domain may only exist in configuration files on disc and in this case no separate exe is required
  • Gather together the domain configuration
    • base uris
    • Added contributions
    • Components added to nodes
  • Nodes will join the domain as they are started
  • Provide domain node configuration to a node on request
  • If the configuration changes notify each (affected) node

3. Node initial configuration

  • Requests configuration for this domain node
  • Configuration is supplied in the form of
    • base uris
    • contributions to load
    • components to activate
  • Contributions are loaded
    • Gives rise to endpoints being registered with the distributed domain

4. Starting a domain node

  • Domain node is activated
    • currently gives rise to all domain components starting

5. Starting a component

  • Start a named component

6. Stopping a component

  • Stop a named component

7. Stopping a domain node

  • Domain node is stopped
    • all running components are stopped

8. Updating node configuration

  • Distributed domain notifies all (affected) nodes
  • Node retrieves domain node configuration (updates)
    • New/updated/removed contributions
    • Added/removed components
    • Currently incremental domain updates are not fully supported, so we will have to go with wholesale reconfiguration
  • Domain node is stopped
  • Contributions are reprocessed
  • Domain node is restarted

9. Choosing a component instance

Assigning components to nodes defines the endpoints for a component's services. The distributed domain uses this information to create default bindings for cross-node wires. If a component is assigned to multiple nodes then the runtime is responsible for selecting the appropriate node based on scope, the conversational status of the target component, and unspecified goals such as load balancing.
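As a sketch of one possible selection policy (round-robin is an assumption here, not specified behaviour; conversational targets would instead be pinned to the node holding the conversation state):

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinNodeSelector {
    private final AtomicInteger next = new AtomicInteger();

    // nodes: the nodes the target component has been assigned to
    public String select(List<String> nodes) {
        // floorMod keeps the index valid even if the counter wraps around
        return nodes.get(Math.floorMod(next.getAndIncrement(), nodes.size()));
    }
}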

10. Events and Stats

  • The hierarchy of components in the distributed domain
  • The components running on a node
  • Events/logs for the distributed domain or for a node

11. Node failure

  • A failed node is restarted and reconfigures itself from the distributed domain
    • any in-flight requests are lost
    • any ongoing conversations are lost unless they have been persisted by the runtime
    • endpoints are re-registered when node restarts
  • A failed node can be restarted in a different place
    • Base uri configuration must be adjusted to take account of new location.
    • endpoints are re-registered when node restarts

12. Distributed domain failure

  • Nodes remain running in isolation.
    • periodically trying to rediscover the distributed domain
  • Restart the distributed domain
  • Nodes should eventually rediscover it

SCA Binding

The SCABinding is the default binding used within an SCA assembly. In the single-VM runtime case it implies local connections. In the distributed runtime case it hides all of the complexity of ensuring that components wired across nodes are able to communicate.

When a message oriented binding is used here we benefit from the abstract nature of the endpoints, i.e. queues can be created given a runtimeId/serviceId and messages can be targeted at these queues without knowledge of where the message consumers physically are.

When a point-to-point protocol is used a physical endpoint is required, so a registry mapping SCA-bound services to endpoints is needed to allow the SCA binding to find the appropriate target. This registry can either be static, i.e. derived from the base URLs given in the domain topology configuration, or dynamic, i.e. set up at runtime.
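A minimal sketch of what the dynamic variant could look like (the in-memory map design is an assumption for illustration, not Tuscany code):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class InMemoryEndpointRegistry {
    // key: domainUri + "/" + serviceName; value: physical endpoint URL
    private final Map<String, String> endpoints = new ConcurrentHashMap<String, String>();

    public void registerServiceEndpoint(String domainUri, String serviceName, String url) {
        endpoints.put(domainUri + "/" + serviceName, url);
    }

    public String findServiceEndpoint(String domainUri, String serviceName) {
        // null means not registered (yet); a static fallback could be derived
        // from the base URLs in the domain topology configuration
        return endpoints.get(domainUri + "/" + serviceName);
    }
}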

Within the same domain/runtime multiple technologies may be required to implement the SCA binding as messages pass between different runtime node implementations.

Modelling The Distributed Domain

Using information from the SCA Assembly specification and the implied requirements of a distributed runtime, we can determine what data is required to configure and control the distributed SCADomain.


SCADomain
  Name (DomainA)
  BaseURI
  Domain Level Composite
    Component (ComponentA)
      implementation
        composite
      Service
      Reference
    Installed Contributions
    Initial Package
    Contribution (file system, jar, zip etc)
      URI (ContributionA)
      /META-INF/
        sca-contribution.xml
          deployable (composite QName)
          import (namespace, location)
          export (namespace)
        sca-contribution-generated.xml
          deployable (composite QName)
          import (namespace, location)
          export (namespace)
        deployables
          *.composite
      *.composite
        URI
        Component (ComponentA)
          Service
          Reference
      Other Resources
        URI
      Dependent Contributions
        Contribution snapshot
      Deployment-time Composites
        *.composite

Over and above the contributed information we need to associate components with runtime nodes.

Runtime
  name (runtimeA)
  Node
    name (nodeA)
    DomainA
      scheme http://localhost:8080/abcd
      scheme https://localhost:442/abcd
      ComponentA

We know how SCDL is used to represent the application composites. We can view the runtime node configuration as a new set of components, interfaces, services and references. In SCA terms we can consider that each node implements a system composite that provides the service interfaces required to manage the node, for example.


<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="nodeA">
    <component name="ComponentRegistry">
        <implementation.java class="org.apache.tuscany.sca.distributed.node.impl.DefaultComponentRegistry"/>
    </component>
</composite>

Having this means that we can expose our local component registry using any bindings that Tuscany supports. Imagine that our component registry has an interface that allows us to call:

getComponentNode
setComponentNode
etc.

Then we might choose to initialise the registry with the following type of information.


<runtime>
    <node name="nodeA">
        <schema name="http" baseURL="http://localhost:80" />
        <schema name="https" baseURL="https://localhost:443" />
        <component name="CalculatorServiceComponent" />
    </node>
    <node name="nodeB">
        <schema name="http" baseURL="http://localhost:81"/>
        <schema name="https" baseURL="https://localhost:444" />
        <component name="AddServiceComponent"/>
    </node>
    <node name="nodeC">
        <schema name="http" baseURL="http://localhost:81"/>
        <schema name="https" baseURL="https://localhost:444" />
        <component name="SubtractServiceComponent"/>
    </node>
</runtime>

Of course we can read this configuration locally from a file, have it delivered via a service interface or retrieve it via a reference.
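For the file case, a minimal sketch of reading the configuration above with StAX (the class is hypothetical; it simply records which node each component is assigned to and each node's base URLs):

import java.io.FileInputStream;
import java.util.HashMap;
import java.util.Map;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class RuntimeConfigReader {
    // componentName -> nodeName
    public final Map<String, String> componentToNode = new HashMap<String, String>();
    // nodeName -> (scheme name -> baseURL)
    public final Map<String, Map<String, String>> nodeBaseUrls =
            new HashMap<String, Map<String, String>>();

    public void read(String fileName) throws Exception {
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream(fileName));
        String currentNode = null;
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                String element = reader.getLocalName();
                if ("node".equals(element)) {
                    currentNode = reader.getAttributeValue(null, "name");
                    nodeBaseUrls.put(currentNode, new HashMap<String, String>());
                } else if ("schema".equals(element)) {
                    nodeBaseUrls.get(currentNode).put(
                            reader.getAttributeValue(null, "name"),
                            reader.getAttributeValue(null, "baseURL"));
                } else if ("component".equals(element)) {
                    componentToNode.put(reader.getAttributeValue(null, "name"), currentNode);
                }
            }
        }
        reader.close();
    }
}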

To Do

SCA Binding

Currently the code uses JMS to implement the default remote SCA binding. The remote SCA binding is used when the system finds that two components that are wired together locally are deployed to separate Nodes. As an alternative it would be good to support web services here also and have this fit in with the new SCA binding mechanism that Simon Nash has been working on.

To make a web services SCA binding work we need an EndpointLookup interface so that components out there in the distributed domain can locate other components that they are wired to.
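A possible shape for that interface (a sketch; the names and signatures are assumptions, not an actual Tuscany SPI):

public interface EndpointLookup {
    // returns the physical URL at which the named service can be called,
    // or null if no endpoint is currently registered
    String findEndpoint(String domainUri, String serviceName);

    // called when a node activates a service so that other nodes can find it
    void registerEndpoint(String domainUri, String serviceName, String url);
}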

Node

Currently each node runs in isolation and starts a local SCA domain configured from .topology and .composite files. The Node interfaces now need to be implemented so that this information can be provided remotely and so that the node can expose remotely accessible management interfaces, for example.

The domain management interface that Ant has recently added may help us shape this. Also Sebastien's work to allow local domains to be modified more dynamically should help make this work.

Domain Node

Provide the link between the node and the domain node in order to deliver configuration updates, control messages and events.

Distributed Domain

Provide some centralized control by implementing the distributed domain interfaces.

WebApps

It would be useful to have a simple query application along the lines of what is currently in distribution/webapp.

References

...

we can take a general view of how the domain organizes running applications.

[Diagram: general view of the domain organizing running applications]

However there are a number of specific configurations to consider which affect the way that configuration and events are distributed.

Domain Driven

[Diagram: domain driven configuration]

Node Driven

[Diagram: node driven configuration]

Stand Alone Node

[Diagram: stand alone node configuration]

Remote Domain Control

[Diagram: remote domain control configuration]

APIs

SCADomainFactory

SCADomain

  • public void start() throws DomainException;
  • public void stop() throws DomainException;
  • public String getURI();
  • public void addContribution(String uri, URL url) throws DomainException;
  • public void removeContribution(String uri) throws DomainException;
  • public void addToDomainLevelComposite(QName compositeQName) throws DomainException;
  • public void removeFromDomainLevelComposite(QName compositeQName) throws DomainException;
  • public void addDeploymentComposite(ContributionURI, CompositeXML) throws DomainException;
  • public void startComposite(QName qname) throws DomainException;
  • public void stopComposite(QName qname) throws DomainException;
  • public <B, R extends CallableReference<B>> R cast(B target) throws IllegalArgumentException;
  • public <B> B getService(Class<B> businessInterface, String serviceName);
  • public <B> ServiceReference<B> getServiceReference(Class<B> businessInterface, String referenceName);
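A hedged usage sketch of this API (exception handling omitted; the contribution location, composite QName and component name are assumptions borrowed from the calculator sample):

import java.io.File;
import javax.xml.namespace.QName;

public class DomainBootstrap {
    public static void main(String[] args) throws Exception {
        SCADomainFactory domainFactory = SCADomainFactory.newInstance();
        SCADomain domain = domainFactory.createSCADomain(null); // default domain URL

        domain.start();
        domain.addContribution("calculator",
                new File("target/sample-calculator.jar").toURI().toURL());
        domain.startComposite(new QName("http://sample", "Calculator"));

        // call a service running somewhere in the (possibly distributed) domain
        CalculatorService calculator =
                domain.getService(CalculatorService.class, "CalculatorServiceComponent");
        System.out.println("3 + 2 = " + calculator.add(3, 2));

        domain.stop();
    }
}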

SCANodeFactory

SCANode

  • public String getURI();
  • public SCADomain getDomain();
  • public void addContribution(String uri, URL url) throws DomainException;
  • public void removeContribution(String uri) throws DomainException;
  • public void addToDomainLevelComposite(QName compositeQName) throws DomainException;
  • public void removeFromDomainLevelComposite(QName compositeQName) throws DomainException;
  • public void startComposite(QName composite) throws NodeException;
  • public void stopComposite(QName composite) throws NodeException;
  • public void start() throws NodeException;
  • public void stop() throws NodeException;
  • public void destroy() throws NodeException;
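And a similar sketch for a standalone node (same assumptions as above; passing null, null requests the default node URL with no domain connection, as in the interactions table below):

import java.io.File;
import javax.xml.namespace.QName;

public class NodeBootstrap {
    public static void main(String[] args) throws Exception {
        SCANodeFactory nodeFactory = SCANodeFactory.newInstance();
        // null, null: use the default node URL and don't connect to a domain
        SCANode node = nodeFactory.createSCANode(null, null);

        node.addContribution("calculator",
                new File("target/sample-calculator.jar").toURI().toURL());
        node.startComposite(new QName("http://sample", "Calculator"));
        node.start();

        // ... the node now services requests for the started composite ...

        node.stop();
        node.destroy();
    }
}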

SPIs

NodeEvents (node to domain)

  • public String registerNode(String nodeURI, String nodeURL);
  • public String removeNode(String nodeURI);
  • public void registerContribution(String nodeURI, String contributionURI, String contributionURL);
  • public void unregisterContribution(String contributionURI);
  • public String registerServiceEndpoint(String domainUri, String nodeUri, String serviceName, String bindingName, String URL);
  • public String removeServiceEndpoint(String domainUri, String nodeUri, String serviceName, String bindingName);
  • public String findServiceEndpoint(String domainUri, String serviceName, String bindingName);
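A sketch of the sequence a node might drive through this SPI when it starts, given a reference to it in a nodeEvents variable (the URIs, service name and binding name are illustrative assumptions):

// the node announces itself and the endpoints of the services it runs
nodeEvents.registerNode("nodeA", "http://localhost:8080");
nodeEvents.registerServiceEndpoint("http://localhost:9999", "nodeA",
        "AddServiceComponent/AddService", "binding.ws",
        "http://localhost:8080/AddServiceComponent/AddService");

// later, the SCA binding on another node resolves a cross-node wire
String endpoint = nodeEvents.findServiceEndpoint("http://localhost:9999",
        "AddServiceComponent/AddService", "binding.ws");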

NodeManagement (domain to node)

  • public String getURI();
  • public void addContribution(String contributionURI, String contributionURL);
  • public void deployComposite(String compositeName);
  • public void start();
  • public void stop();

Interactions

The cells of this interaction table are mostly still to be filled in.

Action | Domain | Domain Proxy | Node | Notes
------ | ------ | ------------ | ---- | -----
Starting Node standalone | | | SCANodeFactory nodeFactory = SCANodeFactory.newInstance(); SCANode node = nodeFactory.createSCANode(null, null); | use default node URL and don't connect to a domain
Starting Domain | SCADomainFactory domainFactory = SCADomainFactory.newInstance(); SCADomain domain = domainFactory.createSCADomain(null); | | | use default domain URL on this machine
Starting Node to connect to domain | | | |
Starting Domain proxy standalone | | | |
Add contribution to domain | | | |
Add contribution to node | | | |
Remove contribution from domain | | | |
Remove contribution from node | | | |
Update contribution in domain | | | |
Update contribution in node | | | |
Start domain | | | |
Stop domain | | | |
Start node | | | |
Stop node | | | |
Start contribution at node | | | |
Stop contribution at node | | | |
Start composite at domain | | | |
Stop composite at domain | | | |
Start composite at node | | | |
Stop composite at node | | | |
Get service from domain | | | |
Get service from domain proxy | | | |
Get service from node | | | |

Load Balancing

Reliability and Failover