
Introduction

Apache ACE consists of a number of components, which can be deployed more or less independently.

Overview

ACE consists of several components, distributed over a number of nodes in the network. The image above shows the most elaborate situation: five nodes, each running its own part of ACE's functionality.

The system has the following nodes:

  • Client A client system provides functionality for altering the configuration of the deployment process.
  • Server The server contains all metadata necessary for provisioning, and acts as a hub for logging information.
  • Relay A relay server keeps a copy of the server's metadata and can provide a target with all the functionality a 'regular' server offers. This allows for offline provisioning; relay servers can also be chained.
  • Target The target is the system to which software is to be provisioned.
  • OBR The OBR contains the actual software to be provisioned.

The nodes together have the following components:

  • Deployment See below.
  • Logging See below.
  • Identification Each target needs a unique name; the Identification components provide this functionality.
  • Discovery Targets and relays need to find the next server 'upstream' from the current node.
  • Configuration ACE uses its own configuration mechanism, which reads configuration files and puts that information into Config Admin (see the sketch after this list).
  • Scheduling Some components need scheduled actions to support logging and provisioning; the Scheduling component provides this.
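
To illustrate the Configuration component: because the information from ACE's configuration files ends up in Config Admin, a component typically receives it through the standard OSGi ManagedService interface. The sketch below shows that pattern; the PID and property name are made up for illustration and are not actual ACE identifiers.

    import java.util.Dictionary;
    import java.util.Hashtable;

    import org.osgi.framework.BundleContext;
    import org.osgi.framework.Constants;
    import org.osgi.service.cm.ConfigurationException;
    import org.osgi.service.cm.ManagedService;

    // Receives the properties that ACE's configuration mechanism reads from
    // its configuration files and pushes into Config Admin.
    public class ServerUrlConfig implements ManagedService {

        private volatile String serverUrl;

        @Override
        public void updated(Dictionary<String, ?> properties) throws ConfigurationException {
            if (properties == null) {
                return; // configuration was deleted
            }
            Object url = properties.get("serverURL"); // property name is an assumption
            if (url == null) {
                throw new ConfigurationException("serverURL", "missing value");
            }
            serverUrl = url.toString();
        }

        public String getServerUrl() {
            return serverUrl;
        }

        // Registration, typically done in a bundle activator.
        public static void register(BundleContext context) {
            Dictionary<String, Object> props = new Hashtable<>();
            props.put(Constants.SERVICE_PID, "com.example.serverurl"); // hypothetical PID
            context.registerService(ManagedService.class.getName(), new ServerUrlConfig(), props);
        }
    }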

Given this information, it may also be useful to compare this overview with the Software Architecture.

Alternative topologies

TODO create more professional looking images

The three topologies below are just examples; you can take (almost) any component and drop it anywhere in your network, since all components are intended to be distributed.

Single server

In the case of a single server, the OBR and the required server components are deployed on one system, and all target nodes connect to that one. This is the scenario that Getting Started with the File Based Server uses.
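
In this topology every target needs to discover that single server. The snippet below is a minimal sketch of the idea behind the Discovery component in such a setup, assuming a fixed server URL; the Discovery interface shown here is a simplified stand-in, not necessarily ACE's actual API.

    import java.net.MalformedURLException;
    import java.net.URL;

    // Simplified stand-in for ACE's Discovery function: every target is
    // configured with the address of the one and only server.
    interface Discovery {
        URL discover();
    }

    class FixedUrlDiscovery implements Discovery {
        private final URL serverUrl;

        FixedUrlDiscovery(String serverUrl) throws MalformedURLException {
            this.serverUrl = new URL(serverUrl);
        }

        @Override
        public URL discover() {
            return serverUrl; // always point at the single server
        }
    }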

Offline

This scenario can be used by a service engineer who takes a laptop to a machine that needs to be serviced. Assuming that these devices do not have internet connectivity (for cost or security reasons), the engineer loads the laptop with the most recent state from the server (this happens automatically), and later connects it to the machine. The machine will get its new software and upload its logs to the service laptop; the logs will later be synced back to the server.

Firewalled

In this scenario, the administration is done on the 'big bad internet', but the server on the internet only contains metadata. The actual code is located in the OBRs behind the firewall; here, we also see the distribution possibilities.

Provisioning

Software provisioning is the main task of ACE. The provisioning components are responsible for determining the set of bundles to be deployed to a given target, creating a deployment package, and feeding that package to the Deployment Admin on the target.
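
The last step relies on the standard OSGi Deployment Admin service running on the target. A minimal sketch of that hand-off, assuming the deployment package has already been located on the server (the URL passed in is hypothetical):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URL;

    import org.osgi.service.deploymentadmin.DeploymentAdmin;
    import org.osgi.service.deploymentadmin.DeploymentException;
    import org.osgi.service.deploymentadmin.DeploymentPackage;

    public class DeploymentExample {

        // Feed a deployment package, streamed from the (hypothetical) server URL,
        // to the target's Deployment Admin service.
        public DeploymentPackage install(DeploymentAdmin admin, URL packageUrl)
                throws IOException, DeploymentException {
            try (InputStream in = packageUrl.openStream()) {
                // Installs the package, or updates it if an older version is present.
                return admin.installDeploymentPackage(in);
            }
        }
    }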

Components

TODO show the components of the provisioning mechanism, and where they operate. Remember to point out that the gateway contains separate bundles, but there is also an integrated management agent.

Further reading

TODO create a child page that describes the provisioning part in more detail

Logging and reporting

ACE contains a logging and reporting facility. This mechanism allows a target to send logging information back to the server, and allows arbitrary key-value pairs to be stored.

The logging mechanism can be extended with extra logging information. The basic implementation logs framework events (e.g., starting and stopping of bundles) and deployment events.
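
As a sketch of what adding your own logging information could look like, the snippet below logs an application-specific event as a set of key-value pairs. The Log interface and event type used here are simplified assumptions, not necessarily the exact ACE API.

    import java.util.Dictionary;
    import java.util.Hashtable;

    public class CustomLogging {

        // Simplified stand-in for the gateway's Log service (see the component
        // list below); the real interface may differ.
        interface Log {
            void log(int type, Dictionary<String, String> properties);
        }

        static final int TYPE_TEMPERATURE_ALERT = 5001; // hypothetical custom event type

        // Log an application-specific event as arbitrary key-value pairs.
        public void reportTemperature(Log log, double celsius) {
            Dictionary<String, String> props = new Hashtable<>();
            props.put("sensor", "cpu");
            props.put("celsius", Double.toString(celsius));
            log.log(TYPE_TEMPERATURE_ALERT, props);
        }
    }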

Components

  • o.a.a.server.log The server's log bundle handles access to the logs on the server, and provides a servlet that is used for synchronizing logs between a target (or relay) and the server.
  • o.a.a.server.log.store Stores the log on the file system; this functionality is separated out so a storage mechanism other than the file system can be used.
  • o.a.a.server.log.task This task is provided to the o.a.a.scheduler so the relay server can synchronize its logs with the 'master' server.
  • o.a.a.log Contains general interfaces for the gateway's logs.
  • o.a.a.log.listener Listens for framework events and logs those to the gateway log (see the sketch after this list).
  • o.a.a.gateway.log Provides a Log service to which events can be logged.
  • o.a.a.gateway.log.store Stores the logs on the gateway until they can be synchronized.
  • o.a.a.scheduler The scheduler runs a task at a configurable interval. It is used by both the gateway and the relay to synchronize the log.
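
To illustrate the listener component mentioned in the list above: it essentially translates standard OSGi framework callbacks into log entries. A minimal sketch of that idea, again using a simplified stand-in for the Log service:

    import java.util.Dictionary;
    import java.util.Hashtable;

    import org.osgi.framework.BundleContext;
    import org.osgi.framework.BundleEvent;
    import org.osgi.framework.BundleListener;

    // Sketch of what a listener like o.a.a.log.listener does: turn framework
    // events into log entries. The Log interface here is a simplified assumption.
    public class BundleEventLogger implements BundleListener {

        interface Log {
            void log(int type, Dictionary<String, String> properties);
        }

        static final int TYPE_FRAMEWORK_EVENT = 1; // hypothetical event type

        private final Log log;

        public BundleEventLogger(BundleContext context, Log log) {
            this.log = log;
            context.addBundleListener(this); // start receiving framework events
        }

        @Override
        public void bundleChanged(BundleEvent event) {
            Dictionary<String, String> props = new Hashtable<>();
            props.put("bundle", String.valueOf(event.getBundle().getSymbolicName()));
            props.put("eventType", Integer.toString(event.getType()));
            log.log(TYPE_FRAMEWORK_EVENT, props);
        }
    }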

Further reading

TODO create a child page that shows what the logging is about, and how to add your own logging to the mechanism.
