Bug Reference
https://issues.apache.org/jira/browse/CLOUDSTACK-657
Branch
master, 4.1.0
Introduction
VMware Distributed Switch is an aggregation of per-host virtual switches presented and controlled as a single distributed switch through vCenter Server at the Datacenter level. vDS abstracts configuration of individual virtual switches and enables centralized provisioning, administration, and monitoring.
vDS is an integral component of vCenter. Hence native vDS support makes sense for wider and larger deployments of CloudStack over vSphere.
Each Standard vSwitch represents an independent point of configuration that needs to be managed and monitored. The management of virtual networks required by instances in the cloud is tedious when virtual networks have to span across large number of hosts. Using distributed vSwitch (vDS) simplifies the configuration and monitoring.
Being standalone implementations, standard vSwitches do not provide any support for virtual machine mobility. A component is therefore needed to ensure that the network configurations on the source and destination virtual switches are consistent, so that a migrated VM can operate without breaking connectivity or network policies. In particular, during migration of a VM across hosts, the peer switches need to be kept in sync. With a distributed vSwitch, however, the vCenter Server updates the vSwitch modules on the hosts in the cluster accordingly during vMotion.
Purpose
This is the functional specification of the feature "CloudStack integration with VMware dvSwitch", tracked as Jira ID CLOUDSTACK-657.
References
Document History
| Author | Description | Date |
|--------|-------------|------|
| Sateesh Chodapuneedi | Initial Revision | 12/31/2012 |
Glossary
- dvSwitch / vDS - VMware vNetwork Distributed Virtual Switch.
- vSwitch - VMware vNetwork Standard Virtual Switch.
- dvPort - Distributed Virtual Port (member of dvPortGroup).
- dvPortGroup - Distributed Virtual Port Group
Feature Specifications
This feature enables CloudStack to configure and manage virtual networks over VMware distributed vSwitch (dvSwitch) instances in the datacenter of the managed cluster.
- CloudStack does the following:
- Create dvPortGroup over designated dvSwitch
- Modify dvPortGroup over designated dvSwitch
- Delete dvPortGroup over designated dvSwitch
- CloudStack doesn't do the following:
- Create dvSwitch
- Add host to dvSwitch
- Dynamic migration of virtual adapters of existing VMs across different types of virtual switches, in scenarios where co-existence of multiple types of virtual switches (Cisco Nexus 1000v, VMware standard vSwitch, and dvSwitch) is possible. Instead, this is left to the administrator to decide.
- Configuration of PVLAN
- Configuration of dvPort mirroring
- Configuration of user defined network resource pools for Network I/O Control (NIOC)
- quality risks (test guidelines)
- functional
- Live migration of VM
- Deployment of virtual router
- Deployment of VM
- non-functional: performance, scalability, stability, overload scenarios, etc.
- Large number of VMs and isolated networks need to be tested for performance specific results.
- negative usage scenarios - NA
- what are the audit events
- All virtual network orchestration events
- VM migration events
- graceful failure and recovery scenarios
- possible fallback or workaround route if the feature does not work as expected, if such workarounds exist
- If some guest network doesn't work correctly, or if CloudStack fails to create a guest network required by a VM, then the administrator can (re)configure the dvPortGroup corresponding to the respective network in CloudStack.
- If guest network instantiation fails due to lack of network resources, then the administrator is expected to ensure that more resources/capacity are employed. E.g. if the dvPorts in a dvPortGroup are exhausted, no more VMs can join that network; the administrator can re-configure the dvPortGroup to increase the number of dvPorts and accommodate more VMs in that guest network. This applies to vSphere 4.1 only; from vSphere 5.0 onwards, auto-expand support provisions dvPorts automatically as required.
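The sizing concern above for vSphere 4.1, where a dvPortGroup must be created with a fixed number of dvPorts, can be sketched as a small helper. This is illustrative only; the function name and the headroom policy are assumptions, not part of CloudStack.

```python
# Hypothetical sizing helper for vSphere 4.1, where dvPorts must be
# pre-allocated per dvPortGroup (no autoExpand support).

def required_dvports(expected_vms, headroom_fraction=0.2, minimum=16):
    """Estimate how many dvPorts to provision for a guest network.

    Each VM NIC in the network consumes one dvPort, so we size for the
    expected VM count plus headroom to delay manual re-configuration.
    """
    if expected_vms < 0:
        raise ValueError("expected_vms must be non-negative")
    sized = expected_vms + int(expected_vms * headroom_fraction)
    return max(sized, minimum)
```

For example, `required_dvports(100)` returns 120, leaving 20 spare ports before the administrator has to re-configure the dvPortGroup.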
- If many dvPortGroups are created and cleanup doesn't happen as expected, then administrator needs to check for unused dvPortGroups on dvSwitch and do clean up using vCenter UI.
- if feature depends on other run-time environment related requirements, provide sanity check list for support people to run
- Basic sanity testing over dvSwitch can be done, e.g. a ping from one VM to another VM in the same network. Make sure VLAN configuration is done in that network and verify whether isolation is achieved.
- Verify if traffic shaping policy configured is applied and working.
- explain configuration characteristics:
- configuration parameters or files introduced/changed
- New configuration parameter "vmware.use.dvswitch" of type Boolean. Possible values are "true" or "false". Default value is "false".
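As a sketch, the flag could be toggled through CloudStack's updateConfiguration API. The request below only builds the query string; credential handling and request signing with the API/secret key pair are omitted, and the exact parameter set is illustrative.

```python
# Sketch of toggling the new flag via the updateConfiguration API call.
from urllib.parse import urlencode

def build_update_configuration_query(name, value):
    """Build the query string for an updateConfiguration API call."""
    params = {
        "command": "updateConfiguration",
        "name": name,       # configuration parameter name
        "value": value,     # new value, as a string
        "response": "json",
    }
    return urlencode(params)

query = build_update_configuration_query("vmware.use.dvswitch", "true")
```

In a real deployment the signed request would be sent to the management server's API endpoint; only the parameter construction is shown here.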
- branding parameters or files introduced/changed - NA
- highlight parameters for performance tweaking - NA
- highlight how installation/upgrade scenarios change
- deployment requirements (fresh install vs. upgrade) if any
- VMware dvSwitch must be already created/configured in the vCenter datacenter deployment.
- All the host/cluster resources should be added to the dvSwitch before adding the cluster to a CloudStack pod.
- interoperability and compatibility requirements:
- Hypervisors - VMware vSphere 4.1 or later
- list localization and internationalization specifications
- UI changes in "Add Cluster" wizard. See the section "UI Flow".
- explain the impact and possible upgrade/migration solution introduced by the feature
- explain performance & scalability implications when feature is used from small scale to large scale
- In case of vSphere 4.1, a dvPortGroup needs to be created with a specific number of dvPorts. In large-scale deployments, optimal use of dvPorts may not be possible due to this pre-allocation. In case of vSphere 5.0, the autoExpand feature automatically increments the number of dvPorts as needed.
- Network switches (including the vSwitch in ESXi host) keep a distinct forwarding table for each VLAN; this could lead to an increased overhead in packet forwarding when a considerable number of isolated networks, each one with a significant number of virtual machines, is configured in a data centre.
- explain marketing specifications
- Supporting VMware dvSwitch enables CloudStack to provide better monitoring and simpler administration of the virtual network infrastructure in the cloud.
- Configuration and management of several vSwitches across large deployments is tedious; dvSwitch reduces this burden.
- Seamless network vMotion support
- Better traffic shaping and efficient network bandwidth utilization is possible.
- explain levels or types of users communities of this feature (e.g. admin, user, etc)
- admin - Administrators would be target audience for this feature as this is at infrastructure level.
Use cases
1. There is a datacenter running vSphere clusters that use dvSwitches for virtual networking. Migrate those servers into the CloudStack cloud; CloudStack should be able to manage virtual networks over dvSwitches seamlessly.
2. Virtual network orchestration during VM lifecycle operations in cloud should use the dvSwitch designated for specified traffic. This includes configuration/re-configuration of distributed virtual port groups associated with the VM over the designated dvSwitch.
3. Live migration of VM within cluster. The traffic shaping policies and port statistics should be intact even after migration to another host within that cluster.
Supportability characteristics
Logging
All virtual network orchestration activities involving dvSwitch would be logged at different log levels in management server log file.
- INFO (all the successful operations)
- ERROR (all exceptions/failures)
- DEBUG (all other checks)
Debugging/Monitoring
In addition to looking at the management server logs, administrators can look up the following,
- vCenter logs for analysis.
- Warnings and Alerts associated with specific cluster in vCenter
- dvPort status (to see if port is configured correctly and is active or not) displayed in network configuration screen of vSphere native/web client UI.
Architecture and Design description
CloudStack reads physical network traffic labels to determine the designated virtual switches to use for virtual network orchestration. The virtual switch type is also added to the existing list of custom properties of a cluster, enabling a cluster-level override for the virtual switch type. A cluster-level option takes precedence over the virtual switch type specified in the zone-level physical traffic label.
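The label-reading and override logic above can be sketched as follows. The assumed label form `"<switch name>,<vlan id>,<switch type>"` and the default values are illustrative of the idea, not a normative specification of CloudStack's label syntax.

```python
# Minimal parser for a VMware physical traffic label of the assumed form
# "<switch name>[,<vlan id>[,<switch type>]]", e.g. "dvSwitch0,,vmwaredvs".

def parse_traffic_label(label, default_type="vmwaresvs"):
    parts = [p.strip() for p in label.split(",")]
    name = parts[0] if parts[0] else "vSwitch0"
    vlan = parts[1] if len(parts) > 1 and parts[1] else None
    vswitch_type = parts[2] if len(parts) > 2 and parts[2] else default_type
    return {"name": name, "vlan": vlan, "type": vswitch_type}

def effective_switch_type(cluster_level_type, label):
    # A cluster-level setting takes precedence over the zone-level label.
    return cluster_level_type or parse_traffic_label(label)["type"]
```

For example, with a cluster-level override of `"nexusdvs"`, `effective_switch_type("nexusdvs", "dvSwitch0,,vmwaredvs")` resolves to the cluster's choice rather than the label's.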
- Highlight architectural patterns being used (queues, async/sync, state machines, etc) - N/A
- Talk about main algorithms used
- Isolated network configuration using VLAN over dvSwitch. CloudStack manages dvPortGroup configured with a designated VLAN ID.
- The scenarios to be covered are,
- Adding a host/compute resource to a pod/cluster. Create the necessary CloudStack-managed virtual networks on the designated dvSwitch.
- Live migration
- All VM lifecycle operations that might need instantiation of a guest network, etc.
- Network operations such as network creation, network destruction, etc.
- Port binding would be static binding.
- Performance implications: what are the improvements or risks introduced to capacity, response time, resource usage and other relevant KPIs
- In vSphere 5.0, we will use "AutoExpand" support to configure dvPorts per dvPortGroup. This ensures that we don't pre-allocate unnecessary dvPorts.
- Packages that encapsulates the code changes,
- core package (VmwareManager, VmwareResource)
- server package
- vmware-base package (mo and util packages)
Web Services APIs
Changes to existing web services APIs - AddClusterCmd
Adding an optional parameter named 'virtualSwitchType', which can have the value 'vmwaresvs', 'vmwaredvs', or 'nexusdvs'.
New APIs introduced - N/A
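An AddClusterCmd call carrying the new virtual switch type parameter might look like the sketch below. The parameter set is trimmed for brevity (required parameters such as zone, pod, and vCenter URL are omitted), and request signing is left out; only the construction of the query is shown.

```python
# Sketch of an AddClusterCmd call carrying the optional virtual switch
# type parameter. Other required parameters and signing are omitted.
from urllib.parse import urlencode

def build_add_cluster_query(cluster_name, vswitch_type):
    params = {
        "command": "addCluster",
        "clustername": cluster_name,
        "hypervisor": "VMware",
        "vswitchtype": vswitch_type,  # 'vmwaresvs' | 'vmwaredvs' | 'nexusdvs'
        "response": "json",
    }
    return urlencode(params)
```

For example, `build_add_cluster_query("Cluster-1", "vmwaredvs")` yields a query string selecting the VMware distributed vSwitch for the new cluster.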
UI flow
Add cluster wizard
If hypervisor is VMware and global configuration parameter "vmware.use.dvswitch" is set to true, then display a list box. List box details are below,
- List Box label - "Virtual Switch Type"
- Default option - "VMware vNetwork Standard Virtual Switch"
- List Box options are,
- "VMware vNetwork Standard Virtual Switch"
Action to perform if this option is selected:-
Add a parameter to parameter list of AddClusterCmd API call.
Parameter name: "vswitchtype"
Parameter value: "vmwaresvs"
- "VMware vNetwork Distributed Virtual Switch"
Action to perform if this option is selected:-
Add a parameter to parameter list of AddClusterCmd API call.
Parameter name: "vswitchtype"
Parameter value: "vmwaredvs"
- "Cisco Nexus 1000v Distributed Virtual Switch"
Action to perform if this option is selected:-
Add a parameter to parameter list of AddClusterCmd API call.
Parameter name: "vswitchtype"
Parameter value: "nexusdvs"
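The list-box handling above reduces to a simple mapping from the displayed label to the `vswitchtype` parameter value added to the AddClusterCmd call; a sketch (names are illustrative, the label/value pairs come from the wizard description above):

```python
# Mapping from the "Virtual Switch Type" list-box labels in the Add Cluster
# wizard to the value passed as the "vswitchtype" parameter of AddClusterCmd.
VSWITCH_TYPE_BY_LABEL = {
    "VMware vNetwork Standard Virtual Switch": "vmwaresvs",
    "VMware vNetwork Distributed Virtual Switch": "vmwaredvs",
    "Cisco Nexus 1000v Distributed Virtual Switch": "nexusdvs",
}

def vswitchtype_param(label):
    # Fall back to the standard vSwitch, matching the wizard's default option.
    return VSWITCH_TYPE_BY_LABEL.get(label, "vmwaresvs")
```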
Appendix
Appendix A:
Appendix B: