Overview

Target

  • Hyper-V Server 2012
  • WMI for VMM control

Strategy

  • CloudStack Agent Model
  • Business logic in ServerResource accesses WMI

Support basic commands

  • VM lifecycle
    • Local disk creation from template
    • VM creation, start/stop, and destruction
  • VMM monitoring
    • Host, VM, and storage stats returned to the CloudStack server (see the host-stats sketch after this list)
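
As a hedged illustration of host stats collection, the C# fragment below reads CPU load and free memory from the standard CIM classes Win32_Processor and Win32_OperatingSystem; how the agent would package these figures into a CloudStack answer is not shown.

    using System;
    using System.Management;

    class HostStats
    {
        static void Main()
        {
            // Per-socket CPU utilisation from the default CIM namespace.
            var cpu = new ManagementObjectSearcher(
                @"root\cimv2", "SELECT LoadPercentage FROM Win32_Processor");
            foreach (ManagementObject p in cpu.Get())
                Console.WriteLine("CPU load: {0}%", p["LoadPercentage"]);

            // Free physical memory, reported by WMI in kilobytes.
            var os = new ManagementObjectSearcher(
                @"root\cimv2", "SELECT FreePhysicalMemory FROM Win32_OperatingSystem");
            foreach (ManagementObject o in os.Get())
                Console.WriteLine("Free memory: {0} KB", o["FreePhysicalMemory"]);
        }
    }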

VM network isolation using VLANs

Basic template management

Expand networking support

  • Flat networking with Security Groups for VM isolation

Volume service

  • Volume management independent of VMs

Expand VM management

  • Migration

Shared storage

Native SystemVMs

  • Console access
    • RDP sessions to Hyper-V VMs

Architecture

  • Reuse KVM-style Agent Container
    • Java-based agent executes on hypervisor
    • Call out to WMI
  • Use existing SystemVMs
  • Local Primary Storage
    • Local folder
  • Secondary storage accessed as NFS/SMB via Windows Server 2012
    • The free-license Hyper-V Server 2012 has no NFS client
    • The admin manually mounts secondary storage so that it appears as a local folder

Design changes

Adopt Javelin Storage model

  • Avoid NFS requirements

Agent model

  • CloudStack agent model.
  • C#-based agent that makes WMI calls for operations on the hypervisor.
  • The agent accepts JSON requests and carries out the corresponding operations on the hypervisor (see the sketch below).
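
A minimal sketch of such a dispatch loop in C# follows; every type and method name here is hypothetical, and the real agent's wire format and handler registry may differ.

    // Hypothetical sketch: route a named CloudStack command, delivered as
    // JSON, to a handler that performs the WMI work and returns a JSON answer.
    using System;
    using System.Collections.Generic;

    class AgentDispatcher
    {
        readonly Dictionary<string, Func<string, string>> handlers =
            new Dictionary<string, Func<string, string>>();

        public void Register(string commandName, Func<string, string> handler)
        {
            handlers[commandName] = handler;
        }

        // jsonRequest is the serialized Command sent by the management server;
        // the returned string is the serialized Answer.
        public string Dispatch(string commandName, string jsonRequest)
        {
            Func<string, string> handler;
            if (!handlers.TryGetValue(commandName, out handler))
                return "{\"result\":false,\"details\":\"unsupported command\"}";
            return handler(jsonRequest);
        }
    }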

V2 WMI API

  • Hyper-V Server 2012 R2 supports only the root\virtualization\v2 namespace, so the agent's WMI calls will use that namespace (see the sketch below).
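
For illustration, a short C# fragment that connects to the v2 namespace and lists VMs; Msvm_ComputerSystem is the standard Hyper-V WMI class for the host and its VMs, while the program structure around it is only a sketch.

    using System;
    using System.Management;  // requires a reference to System.Management.dll

    class ListVms
    {
        static void Main()
        {
            // The agent targets the v2 namespace only; the old
            // root\virtualization (v1) namespace is absent on 2012 R2.
            var scope = new ManagementScope(@"root\virtualization\v2");
            scope.Connect();

            // Msvm_ComputerSystem rows with Caption 'Virtual Machine' are VMs;
            // the remaining row represents the host itself.
            var query = new ObjectQuery(
                "SELECT * FROM Msvm_ComputerSystem WHERE Caption = 'Virtual Machine'");
            using (var searcher = new ManagementObjectSearcher(scope, query))
            {
                foreach (ManagementObject vm in searcher.Get())
                    Console.WriteLine("{0} ({1})", vm["ElementName"], vm["Name"]);
            }
        }
    }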

Misc Design Notes

Local Storage

  • A volume's UUID corresponds to its file name on disk; only this UUID is persisted on the Hyper-V server (see the sketch below).
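
A sketch of that convention, assuming volumes are VHDX files in an admin-configured local folder; the folder path and the .vhdx extension are assumptions for illustration.

    using System;
    using System.IO;

    static class VolumePaths
    {
        // Illustrative pool folder; the real path is pool configuration.
        const string PrimaryStorageFolder = @"C:\PrimaryStorage";

        // A volume's on-disk file name is its UUID; nothing else about the
        // volume is persisted on the host.
        public static string PathForVolume(Guid volumeUuid)
        {
            return Path.Combine(PrimaryStorageFolder, volumeUuid + ".vhdx");
        }
    }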

Misc QA Notes

Test Plan

Unit tests

  • Each test corresponds to one or more Command objects sent to a ServerResource
  • No unit tests for server-side objects (Discoverer or HypervisorGuru)
  • Tests written to work only on Hyper-V 2012

Scope

  • Only pure Hyper-V zones will be supported; no mixed zones.
  • Support for SMB/CIFS as primary and secondary storage.
  • VM Compute
    • Start, stop, reboot, destroy
    • Migrate (live)
    • Service offerings; scale-up is allowed on a stopped VM
    • Console access
    • SSH keys, user data
    • Create VM from template
    • Create VM from ISO
    • Attach/detach ISO
    • User-provided internal name
  • Storage
    • Primary storage
      • Shared Storage (SMB)
      • Local Storage
    • Root & data volumes – local and shared storage
    • Add, delete, attach & detach volumes
    • Secondary storage (SMB). Single secondary storage per zone.
  • Network
    • VLANs (isolated, shared, mgmt.)
    • External device support? NS, F5, SRX, Juniper – both isolated and shared networks
    • All VR services supported: DNS, DHCP, LB, PF, StaticNAT, SourceNAT, NetworkACL, UserData, VPN
    • Dedicate IP ranges and public VLANs to an account
    • Restart (destroy/recreate) routers and system VMs, restart networks – all cases
    • Different network configurations:
      • Storage on one NIC, management on another, guest on another, etc.
      • Management and guest on one, storage on another
      • All on one network
    • L4-L7 services in shared networks
    • Multiple IP ranges (restarts and DNS should pick up the specified ranges…)
    • Persistent network
  • Host tags
  • Storage tags

Background

Original Feature Spec

Introduction

CloudStack has a significant advantage in its ability to support multiple hypervisor types within the same zone. CloudStack implements a component model in which “plugins” can be loaded for each kind of hypervisor supported. These plugins can be implemented independently, and plugins exist for Xen, VMware, and OVM, but not for Hyper-V.

Purpose

This functional specification describes the requirements for supporting the Hyper-V hypervisor. Hyper-V support goes beyond mere hypervisor control. The specification allows for storage models typical of a Hyper-V deployment, and it notes required changes to the system VMs.

References

Docs:

Hyper-V Server 2012 licensing http://blogs.technet.com/b/keithmayer/archive/2012/09/07/getting-started-with-hyper-v-server-2012-hyperv-virtualization-itpro.aspx#.UJbNRGeYJUk
Hyper-V SMB 3.0 support http://blogs.technet.com/b/josebda/archive/2012/08/26/updated-links-on-windows-server-2012-file-server-and-smb-3-0.aspx
Hyper-V’s WMI V2 API http://msdn.microsoft.com/en-us/library/hh850319%28v=vs.85%29.aspx
Hyper-V’s PowerShell API http://technet.microsoft.com/en-us/library/hh848559.aspx
Hyper-V’s supported guest OSes http://technet.microsoft.com/en-GB/library/hh831531.aspx
Anatomy of a CloudStack Plugin http://www.slideshare.net/gavin_lee/cloud-stack-overview/60
Overview of CloudStack development https://cwiki.apache.org/CLOUDSTACK/index.html#Index-

Projects:

Document History

V0.1 – 2012-11-04

Glossary

See Architecture and Design description.
See also Feature Specifications.

Feature Specifications

Feature Description

A hypervisor plugin to control Hyper-V Server 2012 (V3.0) must be produced to support the full range of CloudStack hypervisor operations. A plugin able to control Hyper-V Server 2012 can also control a Windows Server 2012 with the Hyper-V role activated, as Hyper-V Server 2012 is a subset of Windows Server 2012 functionality. Hyper-V is controlled via a WMI API or PowerShell scripting. PowerShell scripts are built on WMI API calls. Neither WMI nor PowerShell script execution is addressed by existing plugins. Therefore, the focus of this feature is a new hypervisor plugin that can translate CloudStack operations to WMI and/or PowerShell.
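
As a hedged illustration of that translation, the C# fragment below starts a VM via WMI. Msvm_ComputerSystem and its RequestStateChange method belong to Hyper-V's WMI API; the wrapper class and the absence of error handling are illustrative simplifications.

    using System;
    using System.Management;

    class StartVm
    {
        // RequestedState 2 ("Enabled") powers a VM on; 3 ("Disabled")
        // powers it off.
        const ushort EnabledState = 2;

        static void Start(string vmName)
        {
            var scope = new ManagementScope(@"root\virtualization\v2");
            scope.Connect();
            var query = new ObjectQuery(
                "SELECT * FROM Msvm_ComputerSystem WHERE ElementName = '" + vmName + "'");
            using (var searcher = new ManagementObjectSearcher(scope, query))
            {
                foreach (ManagementObject vm in searcher.Get())
                {
                    // RequestStateChange is asynchronous; production code would
                    // poll the returned job object until it completes.
                    var args = vm.GetMethodParameters("RequestStateChange");
                    args["RequestedState"] = EnabledState;
                    vm.InvokeMethod("RequestStateChange", args, null);
                }
            }
        }
    }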

Hyper-V support will include the ability to use SMB shares for primary storage and for what is currently referred to as secondary storage. To coincide with Hyper-V 2012 (V3.0), Microsoft upgraded SMB to provide robust support for hosting VM disk images: the virtual disk for a Hyper-V 3.0 VM can now sit on an SMB 3.0 share. Therefore, a new storage type will be added to CloudStack.

The existing set of networking offerings will be available. The virtual router system VM can run on a Hyper-V host, albeit using a different Linux distribution than the current system VM. This VM delivers the advanced networking services. The security group feature of basic networking will be implemented on the Hyper-V host. Therefore, Hyper-V should not introduce any limits to CloudStack's networking.

A console proxy system VM is required that can deal with RDP. Hyper-V provides access to each VM’s frame buffer via an RDP server. For example, the console tool vmconnect.exe can access the console of a Linux VM running on a remote Hyper-V host even when that VM lacks Linux Integration Services. As with the virtual router, the console proxy VM has to run a version of Linux supported by Hyper-V. Therefore, a new console proxy system VM is required.

The existing user experience is left unchanged. Besides retaining the existing hypervisor feature set, Hyper-V will be configured using the existing workflows. The CloudStack GUI will need only minor updates to distinguish settings specific to Hyper-V; for example, a VHDX disk type will have to be added along with an SMB primary storage type. Therefore, no changes to the GUI or its wizards will be required beyond these minor additions.

Not Supported

Hyper-V Server 2012 does not include an NFS client feature. Therefore, the NFS option will not be supported for primary storage on this hypervisor type.

SCVMM 2012 is not used for hypervisor control.

Supportability Characteristics

Work in progress

Configuration Characteristics

Work in progress

Quality Risks

Work in progress

Deployment Requirements

Work in progress (upgrade / install characteristics here)

Localisation / internationalisation specifications

Security Specifications

Work in progress (authentication discussed here)

Architecture and Design description

This spec relies on the CloudStack plugin model, the agent bus, and knowledge of the upcoming “Javelin” storage architecture to make implementing Hyper-V support manageable. The plugin model breaks the implementation task into well-encapsulated pieces. The Message Bus makes remote access to the Hyper-V host seamless to plugins, which allows non-functional requirements to be deferred and possibly avoided. “Javelin” storage removes limits imposed by the existing secondary storage model. These architectural elements allow basic requirements to be implemented immediately while a development model for system VMs is worked out.

Only architecture that makes adding hypervisor support manageable is of interest. The hypervisor support should be divided from the overall CloudStack such that development and testing can be independent of the CloudStack management server. Non-functional details associated with deployment and maintenance of hypervisor control code should be minimized or avoided altogether. Finally, any future simplifications of the current design should be adopted. Therefore, this section only mentions architecture known to simplify feature development.

The plugin model allows independent development of the feature by dividing the device-specific code from the management server. First, the plugins are compiled independently and loaded into CloudStack according to a configuration file. The plugin need only implement the desired plugin API interfaces. The details of the management server are hidden. Second, the plugins themselves encapsulate device-specific code in a ServerResource. These divisions are visible in the figure “Anatomy of a Plugin” below. The divisions within a plugin allow the development effort to concentrate on creating a ServerResource suitable for Hyper-V. The remainder of the Hyper-V plugin would reuse code from an existing hypervisor plugin. Therefore, there must be a plugin API in place for each area we wish to change.

[Figure: “Anatomy of a Plugin”]

The Message Bus binds the two halves of a plugin together. ‘Message Bus’ refers to a basket of activities involved in setting up and carrying out communication between ServerResources and their Plugin counterparts in the CloudStack management server. The Message Bus includes a protocol for establishing communications with the management server, management of the resulting TCP connection, and an agreed network format for transmitting request and response objects. The bus’ specification is embodied by its implementation, which is in two parts. In the management server, an Agent Manager listens for connections and transmits command objects on behalf of Plugins. At the receiving end is an Agent, currently written in Java. The Agent loads a ServerResource, initiates the connection to the management server, and routes requests and responses to and from the ServerResource. Therefore, the Message Bus provides a logical abstraction for communications between the two halves of a Plugin.

This Message Bus allows us to defer decisions on non-functional requirements. Non-functional requirements arise in conjunction with local deployment of a ServerResource. In this case, ServerResource installation, upgrade support, and native implementation of an agent have to be considered. Fortunately, whether an Agent executes on the management server itself or on a host in the data center is of no consequence to the ServerResource. The handshaking carried out by the Agent will be the same in both cases. The Hyper-V host can be controlled remotely via the WS-Management SOAP interface, or it can be controlled via local WMI or PowerShell calls, as shown in the figure “Hypervisor Control: Direct or Agent” below. Therefore, the Message Bus’ design allows non-functional requirements to be avoided in at least the initial phases of plugin implementation.

[Figure: “Hypervisor Control: Direct or Agent”]

To avoid the limits of Secondary Storage, SMB 3.0 support for template and snapshot storage should be delayed until project Javelin is available. The HypervisorGuru API used for hypervisor control does not include operations for secondary storage control, nor is secondary storage modeled with another Plugin API. Instead, the server types supported by Secondary Storage are tightly coupled to the CloudStack kernel code. However, a more flexible approach to write once, read many (WORM) storage will be introduced with project Javelin. This model introduces new abstractions to decouple image transfer from specific file sharing protocols such as NFS. It will also allow WORM storage to be manipulated via plugins. Therefore, secondary storage will change too significantly to be properly discussed at this point.

This architecture allows CloudStack commands to be implemented starting immediately. The Message Bus agent implements the infrastructure for starting and passing messages to a ServerResource. The implementation work for a ServerResource can concentrate on command objects immediately and independently of the management server. The management-server side of a plugin can use an existing plugin as a starting point, as the device-specific details are largely contained in the ServerResource. Where system VM services are required for management server operation, these can be obtained by surrogate. In this case, the surrogate is a cluster of an already supported hypervisor type. This cluster will execute the required system VM. Therefore, the architecture takes care of providing the infrastructure that would otherwise delay work on basic features.

The ability to create plugins independent of the overall system buys time while the programming model for system VMs is determined. Splitting the system VM from its O/S and providing alternative console proxy VMs need to be better addressed in the CloudStack architecture documentation. Alternatively, a project to make system VM modification clearer should be undertaken. In either case, the missing architectural details are not an impediment to plugin development.

To summarise, the plugin model allows Hyper-V support to be broken into independently implemented pieces. The Message Bus provides sufficient infrastructure to allow development to concentrate on hypervisor commands and not infrastructure. As a result, work on basic features can begin before the rigid secondary storage model is replaced by the “Javelin” architecture. Likewise, a programming model for system VM features can be created while plugin development is carried on.