Info
title: under reconsideration

This design, though valid, ignores the rising use of tools like OASIS CAMP and TOSCA, as well as the more proprietary format of Terraform. Embedding or co-installing Apache Brooklyn with CloudStack to create application landscapes seems more appropriate.

Table of Contents

Introduction

ApplicationClusters (or AppC, pronounced 'appz') are an attempt to make orchestrating bigger application landscapes easier in a vanilla Apache CloudStack install.

Services like Kubernetes, Cloud Foundry and DBaaS require integration support from the underlying CloudStack installation. This support includes grouping VMs, scaling and monitoring. Rather than changing ACS every time a new service needs to be supported, a generic framework has to be developed.

As an example, container technologies are gaining quite a momentum and are changing the way applications are traditionally deployed in public and private clouds. Growing interest in microservices-based architectures is also fostering the adoption of container technologies. Much like cloud orchestration platforms enable the provisioning of VMs and adjacent services, container orchestration platforms like Kubernetes [3], Docker Swarm [1] and Mesos [2] are emerging to enable the orchestration of containers. Container orchestration platforms can typically run anywhere and be used to provision containers. A popular choice has been to run containers on IaaS-provisioned VMs. AWS and GCE provide native functionality to launch containers, abstracting away the underlying consumption of VMs. A container orchestration platform can be provisioned on top of CloudStack using development tools (see [6]), but this is not an out-of-the-box solution. Given the momentum of container technologies, microservices etc., it makes sense to provide native functionality in CloudStack that is available out of the box for users.

Another example is DBaaS installations. These have a different set of roles than the container services mentioned above, with a different number of nodes in each role. Those two usually have only two roles, but SDN solutions, for instance, might have three: switch, control-plane and configuration machines.

Apache CloudStack should not involve itself with how virtual machines are used, though CloudStack plugins might be written that configure sets of VMs for certain uses (like Kubernetes in [8]). The intention of this functionality is to provide the organisation of sets of VMs with roles, to be used as a single application, be it a container cluster, a database or an SDN facility.

Purpose

The purpose of this document is to present the functional requirements for supporting generic VM cluster service functionality in CloudStack.

Glossary

Node - a VM in CloudStack

Application cluster - a managed group of VMs in CloudStack

DBaaS - Database as a Service

IaaS - Infrastructure as a service

PaaS - Platform as a service

Functional specification

Application Cluster

The CloudStack VM cluster service shall introduce the notion of an application cluster. An 'application cluster' shall be a first-class CloudStack entity that is a composite of existing CloudStack entities like virtual machines, networks, network rules etc.

The application cluster service shall stitch together cluster resources. Any enhancement or plugin can call it and then further deploy the chosen cluster application, like a manager and nodes for Kubernetes, Mesos or Docker Swarm, in order to provide that manager's service type (like AWS ECS [4] or Google Container Engine [5]) to CloudStack users.

Cluster life-cycle management

The application cluster service shall provide the following application cluster life-cycle operations.

  • create application cluster: provisions cluster resources and brings the cluster into an operationally ready state. Resource provisioning shall be the responsibility of the caller, which can act according to the cluster manager used. All cluster VMs shall be launched into a network dedicated to the cluster. The API endpoint of the cluster manager can be exposed by the caller by creating a port forwarding rule on the source NAT IP of the network dedicated to the cluster.
  • delete application cluster: destroys all the resources provisioned for the application cluster. After deletion, no operations can be performed on the application cluster.
  • start application cluster: starts the cluster's VMs and, if necessary, the network.
  • stop application cluster: shuts down all the resources consumed by the application cluster. The user can start the cluster at a later point with the start operation.
  • recover application cluster: due to possible faults (like VMs that were stopped due to failures, or a malfunctioning cluster manager) an application cluster can end up in the Alert state. Recover is used to revive the application cluster to a sane running state. In the initial version this just tries to restore the correct number of VMs per role; in later versions callbacks for (re-)provisioning may be added.
  • cluster resizing (scale-in/out): increases or decreases the size of the cluster on a per-role basis. This functionality adheres to the same limitations as stated above under recovering.
  • list application clusters: lists all the application clusters

provisioning service orchestrator

The provisioning of the service itself is out of scope for the application cluster. A calling plugin or external tool adds value by performing, as part of its creation plan, any setting up of a control plane for the service type that was chosen. How a service will be set up depends on the chosen service type; a hypothetical plugin hook is sketched below.
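
To make the division of responsibilities concrete, the sketch below shows a hypothetical callback interface that a plugin or external orchestrator could implement; the interface and method names are assumptions for illustration and not part of this specification.

Code Block
languagejava
// Hypothetical extension point (not part of this specification): a plugin
// implementing this interface could be invoked by the application cluster
// service to set up the control plane of the chosen service type
// (Kubernetes, Mesos, Docker Swarm, a database, ...).
public interface ApplicationClusterProvisioner {

    // called once all VMs of all roles are running; the plugin deploys the
    // service-specific control plane and may return a public API endpoint
    String onClusterCreated(String clusterUuid);

    // called after recovery restored the desired number of VMs per role,
    // so the plugin can re-join replacement nodes into the service
    void onClusterRecovered(String clusterUuid);

    // called before the cluster resources are expunged, for service-level cleanup
    void onClusterDeleted(String clusterUuid);
}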

Design

API changes

The following APIs shall be introduced with the application cluster (an illustrative way of assembling a createApplicationCluster call is sketched after the list):

  • createApplicationCluster
    • name: name of the application cluster
    • description: description of application cluster
    • type: service type - Kubernetes, CloudFoundry, Mesos etc
    • zoneid: uuid of the zone in which application cluster will be provisioned
    • a list of
      • role: the name for this type of VM
      • priority: used for the starting order; lower numbers will be started sooner. By default the position in the list (times ten) will be used.
      • serviceofferingid: service offering with which cluster VMs of this role shall be provisioned
      • template: the template to use for VMs of this role
      • count: size of the cluster or number of VMs of this role to be provisioned
    • accountname: account for which application cluster shall be created
    • domainid: domain of the account for which application cluster shall be created
    • networkid: uuid of the network into which the application cluster VMs will be provisioned. If not specified, the cluster service shall provision a new isolated network using the default isolated network offering with source NAT service.
  • deleteApplicationCluster
    • id: uuid of application cluster
  • startApplicationCluster
    • id: uuid of application cluster
  • stopApplicationCluster
    • id: uuid of application cluster
  • increaseRoleCount
    • id: uuid of application cluster
    • role: the name for the type of node to be added
  • decreaseRoleCount
    • id: uuid of application cluster
    • role: the name of the role for which to remove a node
  • listApplicationClusters
    • id: uuid of application cluster
    • name: (part of) the name of the clusters
  • listClusterNodes
    • id: uuid of application cluster
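
For illustration, the snippet below assembles the parameters of a createApplicationCluster call for a two-role cluster. The indexed "roles[n]." notation for passing the role list, and all uuid values, are assumptions made for this example; the final API may serialize the role list differently.

Code Block
languagejava
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: assembling createApplicationCluster parameters for a
// cluster with one 'master' role VM and three 'node' role VMs.
public class CreateApplicationClusterExample {
    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("name", "team-a-k8s");
        params.put("description", "kubernetes cluster for team A");
        params.put("type", "Kubernetes");
        params.put("zoneid", "zone-uuid");                         // placeholder uuid
        // role 1: a single master, started first (lower priority value)
        params.put("roles[0].role", "master");
        params.put("roles[0].priority", "10");
        params.put("roles[0].serviceofferingid", "offering-uuid"); // placeholder uuid
        params.put("roles[0].template", "coreos-template-uuid");   // placeholder uuid
        params.put("roles[0].count", "1");
        // role 2: three worker nodes, started after the master
        params.put("roles[1].role", "node");
        params.put("roles[1].priority", "20");
        params.put("roles[1].serviceofferingid", "offering-uuid");
        params.put("roles[1].template", "coreos-template-uuid");
        params.put("roles[1].count", "3");
        params.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}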

 

A new response, 'applicationclusterresponse', shall be added with the details below (an illustrative shape of the response is sketched after the list):

  • name
  • description
  • zoneid
  • list of
    • role
    • priority
    • serviceofferingid
    • templateid
    • size
  • networkid

    suggested k8s extension response field:
  • endpoint: URL of the application cluster manager API server endpoint
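
As a rough illustration of how the per-role entries nest inside the response, it could be shaped like the plain Java classes below; class and field names are assumptions, and the annotations used by CloudStack's actual response serialization are omitted.

Code Block
languagejava
import java.util.List;

// Illustrative shape of 'applicationclusterresponse'; not the actual implementation.
public class ApplicationClusterResponse {
    public String name;
    public String description;
    public String zoneid;
    public List<RoleEntry> roles;   // one entry per role in the cluster
    public String networkid;
    public String endpoint;         // suggested k8s extension: manager API server URL

    // nested per-role entry
    public static class RoleEntry {
        public String role;
        public int priority;
        public String serviceofferingid;
        public String templateid;
        public int size;
    }
}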

Life cycle operations

Each of the life cycle operations is a workflow resulting in either provisioning or deleting multiple CloudStack resources. There is no guarantee that the workflow of a life cycle operation will succeed, due to the lack of a two-phase-commit model (resource reservation followed by provisioning semantics). There is also no guarantee of a rollback succeeding. For instance, while provisioning a cluster of 10 VMs, the deployment may run out of capacity after provisioning the first five VMs. In that case the provisioned VMs can be destroyed as a rollback action, but there can be cases where deleting a provisioned VM is temporarily not possible, for instance when a host is disconnected. So it is not possible to achieve strong consistency, and this will not be a focus in this phase of the development.

The approach below is followed while performing life cycle operations:

  • A best effort will be made to bring the cluster up to spec. If this fails, it will be retried indefinitely.
  • If deployment fails it is the responsibility of the user to stop and destroy the cluster.

The state machine below reflects how an application cluster's state transitions for each of the life cycle operations.

Gliffy Diagram
name: application cluster life cycle
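
As a rough textual companion to the diagram, the enum below encodes the states named in this document; the diagram remains authoritative, and any transition not explicitly described in the text is an assumption.

Code Block
languagejava
import java.util.EnumSet;
import java.util.Set;

// Illustrative encoding of the application cluster states named in this document.
public enum ApplicationClusterState {
    STARTING, RUNNING, STOPPED, ALERT, EXPUNGING, EXPUNGE, DESTROYED;

    // plausible follow-up states per the text; the Gliffy diagram is authoritative
    public Set<ApplicationClusterState> nextStates() {
        switch (this) {
            case STARTING:  return EnumSet.of(RUNNING, EXPUNGING);                // start succeeded, or failed start being cleaned up
            case RUNNING:   return EnumSet.of(STOPPED, ALERT);                    // stopped by user, or drift detected by the sync task
            case STOPPED:   return EnumSet.of(STARTING, EXPUNGING);               // started again, or deleted
            case ALERT:     return EnumSet.of(RUNNING, EXPUNGING);                // recovered, or deleted
            case EXPUNGING: return EnumSet.of(DESTROYED, EXPUNGE);                // cleanup done, or cleanup blocked
            case EXPUNGE:   return EnumSet.of(EXPUNGING);                         // garbage collector retries cleanup
            default:        return EnumSet.noneOf(ApplicationClusterState.class); // DESTROYED is terminal
        }
    }
}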

Garbage collection

Garbage collection shall be implemented as a background task that cleans up the resources of an application cluster. The following are cases where cluster resources are freed up:

  • Starting an application cluster fails, resulting in clean-up of the provisioned resources (Starting → Expunging → Destroyed)
  • Deleting an application cluster (Stopped → Expunging → Destroyed and Alert → Expunging → Destroyed)

If there are failures in cleaning up resources and the clean-up cannot proceed, the state of the application cluster is marked as 'Expunge' instead of 'Expunging'. The garbage collector will periodically loop through the list of application clusters in the 'Expunge' state and try to free the resources held by the application cluster.
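
A minimal sketch of how such a background collector could look is given below; the DAO and helper types are hypothetical and exist only to make the sketch self-contained.

Code Block
languagejava
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative garbage collector loop for clusters stuck in the 'Expunge' state.
public class ApplicationClusterGarbageCollector implements Runnable {

    private final ApplicationClusterDao clusterDao;   // hypothetical DAO
    private final ClusterResourceCleaner cleaner;     // hypothetical cleanup helper

    public ApplicationClusterGarbageCollector(ApplicationClusterDao dao, ClusterResourceCleaner cleaner) {
        this.clusterDao = dao;
        this.cleaner = cleaner;
    }

    @Override
    public void run() {
        // pick up clusters whose earlier cleanup attempt failed ('Expunge' state)
        List<ApplicationCluster> stuck = clusterDao.listByState("Expunge");
        for (ApplicationCluster cluster : stuck) {
            boolean freed = cleaner.tryReleaseResources(cluster);   // destroy VMs, network, rules
            clusterDao.updateState(cluster, freed ? "Destroyed" : "Expunge");
        }
    }

    public static void schedule(ApplicationClusterGarbageCollector gc, long intervalSeconds) {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        pool.scheduleWithFixedDelay(gc, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }
}

// hypothetical collaborators, declared only to keep the sketch self-contained
interface ApplicationClusterDao {
    List<ApplicationCluster> listByState(String state);
    void updateState(ApplicationCluster cluster, String newState);
}
interface ClusterResourceCleaner {
    boolean tryReleaseResources(ApplicationCluster cluster);
}
interface ApplicationCluster {
    String getUuid();
}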

Cluster state synchronization

The state of the application cluster is the 'desired state' of the cluster as intended by the user, or the system's logical view of the application cluster. However, there are various scenarios where the desired state of the application cluster is not in sync with the state that can be inferred from the actual physical infrastructure. For example, consider an application cluster in the 'Running' state with a cluster size of 10 VMs, all running. Due to host failures, some of the VMs may get stopped at a later point. The desired state of the application cluster is still a cluster with 10 VMs running and operationally ready, but the state at the resource layer is different. So we need a mechanism to ensure that:

  • the cluster is in the desired state at the resource/infrastructure layer, which could mean provisioning new VMs or deleting VMs in the cluster to ensure the desired state of the application cluster
  • conversely, when reconciliation cannot happen, the state of the cluster is reflected accordingly, so that it can be recovered at a later point

The following mechanism will be implemented:

  • A state 'Alert' will be maintained to indicate that the application cluster is not in its desired state.
  • A state synchronization background task will run periodically to infer whether the cluster is in its desired state. If not, the cluster will be marked as being in the Alert state.
  • A recovery action tries to recover the cluster.

State transitions in the FSM where an application cluster ends up in the 'Alert' state (the drift check is sketched after the list below):

  • failure in the middle of a scale-in/out, resulting in a cluster size (number of VMs) not equal to the expected size
  • failure in stopping a cluster, leaving some VMs in the running state
  • a difference of states as detected by the state synchronization thread
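
The essence of the synchronization check is a comparison between the desired node count per role and the number of VMs actually running per role. The helper below is an illustrative, self-contained sketch of that comparison; the method and map shapes are assumptions, not the actual implementation.

Code Block
languagejava
import java.util.Map;

// Rough sketch of the drift check performed by the state synchronization task.
public final class ClusterStateSync {

    private ClusterStateSync() { }

    // returns true when every role has exactly the desired number of running VMs
    public static boolean inDesiredState(Map<String, Integer> desiredPerRole,
                                         Map<String, Integer> runningPerRole) {
        for (Map.Entry<String, Integer> desired : desiredPerRole.entrySet()) {
            int running = runningPerRole.getOrDefault(desired.getKey(), 0);
            if (running != desired.getValue()) {
                return false;                    // scale-in/out failure or stopped VMs
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // example: the master role should have 1 VM, the node role 3; only 2 nodes are running
        Map<String, Integer> desired = Map.of("master", 1, "node", 3);
        Map<String, Integer> running = Map.of("master", 1, "node", 2);
        System.out.println(inDesiredState(desired, running) ? "Running" : "Alert");
    }
}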

example provisioning kubernetes container cluster manager

A CoreOS template shall be used to provision the container cluster VMs. Setting up a cluster VM as a Kubernetes master or node is done through a cloud-config script [7] in CoreOS. CloudStack shall pass the necessary cloud-config script as base64-encoded user data. Once the CoreOS instances are launched by CloudStack, the cloud-config data passed as user data makes them self-configure as Kubernetes master and node VMs. A minimal sketch of the user-data encoding step follows.
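
The snippet below shows only the base64 encoding of a cloud-config script for use as user data; the cloud-config content is a trivial placeholder, not a working Kubernetes bootstrap configuration (see [7] for that).

Code Block
languagejava
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Minimal sketch: encode a cloud-config script as base64 user data for a CoreOS VM.
public class CloudConfigUserData {
    public static void main(String[] args) {
        String cloudConfig =
                "#cloud-config\n" +
                "hostname: k8s-node\n";   // placeholder; real bootstrap config comes from [7]
        String userData = Base64.getEncoder()
                .encodeToString(cloudConfig.getBytes(StandardCharsets.UTF_8));
        // 'userData' would be supplied with the deployment of each cluster VM
        System.out.println(userData);
    }
}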

schema changes

 

Code Block
languagesql
CREATE TABLE IF NOT EXISTS `cloud`.`application_cluster` (
    `id` bigint unsigned NOT NULL auto_increment COMMENT 'id',
    `uuid` varchar(40),
    `name` varchar(255) NOT NULL,
    `description` varchar(4096) COMMENT 'display text for this application cluster',
    `zone_id` bigint unsigned NOT NULL COMMENT 'zone id',
    `network_id` bigint unsigned COMMENT 'network this application cluster uses',
    `account_id` bigint unsigned NOT NULL COMMENT 'owner of this cluster',
    `domain_id` bigint unsigned NOT NULL COMMENT 'owner of this cluster',
    `state` char(32) NOT NULL COMMENT 'current state of this cluster',
    `key_pair` varchar(40),
    `created` datetime NOT NULL COMMENT 'date created',
    `removed` datetime COMMENT 'date removed if not null',
    `gc` tinyint unsigned NOT NULL DEFAULT 1 COMMENT 'gc this application cluster or not',
    `network_cleanup` tinyint unsigned NOT NULL DEFAULT 1 COMMENT 'true if the network needs to be cleaned up on deletion of the application cluster. Should be false if the user specified a network for the cluster',
    CONSTRAINT `fk_cluster__zone_id` FOREIGN KEY `fk_cluster__zone_id` (`zone_id`) REFERENCES `data_center` (`id`) ON DELETE CASCADE,
    CONSTRAINT `fk_cluster__network_id` FOREIGN KEY `fk_cluster__network_id`(`network_id`) REFERENCES `networks`(`id`) ON DELETE CASCADE,
    PRIMARY KEY(`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `cloud`.`application_cluster_role` (
    `id` bigint unsigned NOT NULL auto_increment COMMENT 'id',
    `cluster_id` bigint unsigned NOT NULL COMMENT 'cluster id',
    `name` varchar(255) NOT NULL COMMENT 'role name',
    `service_offering_id` bigint unsigned COMMENT 'service offering id for the cluster VM',
    `template_id` bigint unsigned COMMENT 'vm_template.id',
    `node_count` bigint NOT NULL default '0',
    PRIMARY KEY(`id`),
    CONSTRAINT `fk_cluster__service_offering_id` FOREIGN KEY `fk_cluster__service_offering_id` (`service_offering_id`) REFERENCES `service_offering`(`id`) ON DELETE CASCADE,
    CONSTRAINT `fk_cluster__template_id` FOREIGN KEY `fk_cluster__template_id`(`template_id`) REFERENCES `vm_template`(`id`) ON DELETE CASCADE,
    CONSTRAINT `application_cluster_role_cluster__id` FOREIGN KEY `application_cluster_role_cluster__id`(`cluster_id`) REFERENCES `application_cluster`(`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

 
CREATE TABLE IF NOT EXISTS `cloud`.`application_cluster_role_vm_map` (
    `id` bigint unsigned NOT NULL auto_increment COMMENT 'id',
    `role_id` bigint unsigned NOT NULL COMMENT 'role id',
    `vm_id` bigint unsigned NOT NULL COMMENT 'vm id',
    PRIMARY KEY(`id`),
    CONSTRAINT `application_cluster_role_vm_map_cluster_role__id` FOREIGN KEY `application_cluster_role_vm_map_cluster_role__id`(`role_id`) REFERENCES `application_cluster_role`(`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

 
CREATE TABLE IF NOT EXISTS `cloud`.`application_cluster_details` (
    `id` bigint unsigned NOT NULL auto_increment COMMENT 'id',
    `cluster_id` bigint unsigned NOT NULL COMMENT 'cluster id',
    `key` varchar(255) NOT NULL,
    `value` text,
    PRIMARY KEY(`id`),
    CONSTRAINT `application_cluster_details_cluster__id` FOREIGN KEY `application_cluster_details_cluster__id`(`cluster_id`) REFERENCES `application_cluster`(`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
 
CREATE TABLE IF NOT EXISTS `cloud`.`application_cluster_role_details` (
    `id` bigint unsigned NOT NULL auto_increment COMMENT 'id',
    `role_id` bigint unsigned NOT NULL COMMENT 'role id',
    `key` varchar(255) NOT NULL,
    `value` text,
    PRIMARY KEY(`id`),
    CONSTRAINT `application_cluster_role_details_role__id` FOREIGN KEY `application_cluster_role_details_role__id`(`role_id`) REFERENCES `application_cluster_role`(`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Code Block
languagejava
// example details for a cluster used as a k8s container cluster:
enum {
  `username`,
  `password`,
  `registry_username`,
  `registry_password`,
  `registry_url`,
  `registry_email`,
  `endpoint`,          // url endpoint of the application cluster manager api access
  `console_endpoint`,  // url for the application cluster manager dashboard
  `cores`,             // number of cores
  `memory`             // total memory
};

 

References 

[1] https://www.docker.com/products/docker-swarm

[2] https://mesosphere.github.io/marathon/

[3] https://kubernetes.io

[4] https://aws.amazon.com/ecs/

[5] https://cloud.google.com/container-engine/

[6] https://cloudierthanthou.wordpress.com/2015/10/23/apache-mesos-and-kubernetes-on-apache-cloudstack/

[7] https://github.com/kubernetes/kubernetes/tree/master/cluster/rackspace/cloud-config

[8] https://github.com/shapeblue/ccs