
Introduction


This document describes the CPU/RAM overcommit feature.

In the current implementation the CPU overcommit ratio is a global configuration value. This needs to be changed to provide more granular control over the overcommit parameters. Currently there is no provision for RAM overcommit.

This feature implements RAM overcommit and allows the RAM and CPU overcommit ratios to be specified on a per-cluster basis.

Use case


Change the VM density on all the hosts in a given cluster. This can be done by specifying the CPU and RAM overcommit ratios.

- Each cluster (depending on the hypervisor platform, storage or h/w configuration) can handle a different number of VMs per host/cluster - trying to normalize them can be inefficient, as the ratio has to be set up for the lowest common denominator - hence, we are providing finer granularity for better utilization of resources, irrespective of what the placement algorithm decides

- When combined with dedicated resources, it gets better - with dedicated resources, we may have the capability to say that account A will use cluster X. If this account is paying for "gold" quality of service, those clusters could have a ratio of 1. If they are paying for "bronze" QoS, their cluster ratio could be 2.

Design description


The admin can specify the CPU and RAM overcommit ratios at the time of creating a cluster, or update the values afterwards.

CloudStack will deploy the VMs based on the overcommit ratios. If the overcommit ratio of a particular cluster is updated, only the VMs deployed thereafter will use the updated overcommit ratios; this is ensured by storing the overcommit ratio with which each VM was deployed in user_vm_details. The overcommit ratios for a cluster will be stored in the cluster_details table and will be inherited from the global settings at the time of creation. Also, whenever we add a host we will check whether the host has the capability to perform CPU and RAM overcommitting; these capabilities will be stored in the DB.
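
A minimal sketch of this deploy-time bookkeeping follows. It is illustrative only - simple maps stand in for the cluster_details and user_vm_details tables, and the method names are assumptions - but it shows the key point: the ratios current at deploy time are copied onto the VM, so a later change to the cluster's ratios does not affect already-deployed VMs.

    // Illustrative sketch only - maps stand in for cluster_details / user_vm_details; names are hypothetical.
    import java.util.HashMap;
    import java.util.Map;

    class OvercommitBookkeepingSketch {
        static Map<String, Double> clusterDetails = new HashMap<>();
        static Map<String, Double> userVmDetails = new HashMap<>();

        static void recordDeployTimeRatios(String vmId, String clusterId) {
            // copy the ratios that are current at deploy time onto the VM
            userVmDetails.put(vmId + ".cpuOvercommitRatio",
                    clusterDetails.getOrDefault(clusterId + ".cpuOvercommitRatio", 1.0));
            userVmDetails.put(vmId + ".memoryOvercommitRatio",
                    clusterDetails.getOrDefault(clusterId + ".memoryOvercommitRatio", 1.0));
        }

        public static void main(String[] args) {
            clusterDetails.put("c1.cpuOvercommitRatio", 2.0);
            clusterDetails.put("c1.memoryOvercommitRatio", 1.5);
            recordDeployTimeRatios("vm1", "c1");
            clusterDetails.put("c1.cpuOvercommitRatio", 3.0);   // cluster updated later
            System.out.println(userVmDetails);                  // vm1 keeps the 2.0 / 1.5 it was deployed with
        }
    }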

Supported Hypervisors


XenServer
KVM
VMware

Capacity calculations on MS


The capacity calculation model will be changed to align with the hypervisors' calculations. When a VM is deployed with overprovisioning factor "x", we want to guarantee (service offering of vm / x) during its lifecycle even if the overprovisioning factor changes later.

When the cluster overprovisioning factor = x and VMs are deployed:

  • Total Capacity = (actualHardwareCapacity * x)
  • Used Capacity = sum (service offering of each running vm) + sum (service offering of each stopped vm in the skipped.counting.hours)  

When the cluster overprovisioning factor is changed to y:

  • Total Capacity = (actualHardwareCapacity * y)
  • Used Capacity = [sum (service offering of each running vm deployed when factor was x) + sum (service offering of each stopped vm deployed when factor was x in the skipped.counting.hours)] * y/x + sum (service offering of each running vm deployed when factor was y ) + sum (service offering of each stopped vm deployed when factor was y in the skipped.counting.hours)

Ideally you shouldn't change the over-provisioning factor in a cluster with VMs running, because some of the VMs were deployed with the previous factor x.
Let's say you still want to change the factor. On changing it, both used and total capacity are scaled by the new factor to keep track of the available capacity.
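
The two formulas above can be expressed as a short sketch (illustrative only - the class and method names are assumptions, not CloudStack code): each VM contributes (its service offering / the factor it was deployed with), rescaled by the current factor.

    // Illustrative sketch of the capacity formulas above.
    class CapacityMath {
        // offerings[i] = service offering of VM i (e.g. in MHz); deployFactor[i] = factor it was deployed with.
        // Stopped VMs inside skipped.counting.hours would be included in the same sum.
        static double usedCapacity(double[] offerings, double[] deployFactor, double currentFactor) {
            double used = 0;
            for (int i = 0; i < offerings.length; i++) {
                used += (offerings[i] / deployFactor[i]) * currentFactor;
            }
            return used;
        }

        static double totalCapacity(double actualHardwareCapacity, double currentFactor) {
            return actualHardwareCapacity * currentFactor;
        }

        public static void main(String[] args) {
            double[] offerings = {512, 512};   // two 512 MHz VMs
            double[] factors = {1, 1};         // deployed when the cluster factor was 1
            System.out.println(usedCapacity(offerings, factors, 2));   // 2048.0 MHz once the factor becomes 2
            System.out.println(totalCapacity(2000, 2));                // 4000.0 MHz
        }
    }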

Let's understand the capacity calculation through an example:

Cluster c,
cpu over provisioning = 1,
Total CPU = 2 GHz

When we deploy 2 VMs of 512 MHz service offering each:
totalCapacity = 2 GHz
availableCapacity = 1 GHz
usedCapacity = 1 GHz

Now change the cpu over provisioning ratio of cluster c to 2:
totalCapacity = 4 GHz
availableCapacity = 2 GHz
usedCapacity = 2 GHz

Notice the difference in multiplication here. Both used and total capacity are multiplied by this factor. Used capacity in the new model after changing the factor = (service offering of vm / overcommit it got deployed with) * new overcommit => (0.5 GHz/1)*2 + (0.5 GHz/1)*2 => 2 GHz.
The reason is that we want to guarantee (service offering of vm / overcommit it got deployed with) in case of contention. So when a VM is deployed with overprovisioning factor "x" we want to guarantee (service offering of vm / x) during its lifecycle even if the cluster's overprovisioning is changed. So these VMs will get 0.5 GHz each during contention, and therefore available capacity is still 1 GHz during contention.
The reason to scale the used cpu is to keep track of the "actual" amount of cpu left for further VM allocation. Keep the focus on available capacity.

Now if we launch 2 VMs with a 1 GHz cpu service offering:
totalCapacity = 4 GHz
availableCapacity = 0 GHz
usedCapacity = 4 GHz
Calculation of used capacity for the 4 VMs ((service offering of vm / overcommit it got deployed with) * new overcommit) =
(512 MHz/1)*2 + (512 MHz/1)*2 + (1 GHz/2)*2 + (1 GHz/2)*2 = 4 GHz

In case of contention the first 2 VMs (512 MHz service offering) get 512 MHz/1 => 0.5 GHz each, and the next 2 VMs (1 GHz service offering and overprovisioning of 2) also get (1 GHz/2) = 0.5 GHz each. Adding up gives 2 GHz, which is the actual capacity of the host, so there is no more capacity left to accommodate more VMs.

Now suppose we change the over provisioning to 3:
totalCapacity = 6 GHz
availableCapacity = 0 GHz
usedCapacity = 6 GHz
Calculation of used capacity for the 4 VMs ((service offering of vm / overcommit it got deployed with) * new overcommit) =
(512 MHz/1)*3 + (512 MHz/1)*3 + (1 GHz/2)*3 + (1 GHz/2)*3 = 6 GHz

Now this is assuming you haven't stopped and started the VMs all this while. Say you now stop and start one VM with 512 MHz and another VM with 1 GHz. The over-provisioning factor for these VMs changes to 3 each. Note the denominators in the calculation.
totalCapacity = 6 GHz
availableCapacity = 1.5 GHz
usedCapacity = 4.5 GHz
Calculation of used capacity for the 4 VMs ((service offering of vm / overcommit it got deployed with) * new overcommit) =
(512 MHz/3)*3 + (512 MHz/1)*3 + (1 GHz/3)*3 + (1 GHz/2)*3 = 4.5 GHz

All this is done to track the available capacity for further VM allocation. The "actual" capacity left on the host is 0.5 GHz (out of 2 GHz). So you can still create a VM with a 1.5 GHz offering: with cluster over-provisioning = 3, the hypervisor will guarantee 1.5/3 = 0.5 GHz during contention.
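
As a quick self-check, the whole walk-through above can be reproduced with the same formula. This is a standalone illustrative sketch (the class and helper names are assumptions); the printed values match the numbers in the example:

    // Reproduces the numbers from the example above.
    public class CapacityWalkthrough {
        // used = sum over VMs of (service offering / factor it was deployed with) * current factor
        static double used(double currentFactor, double[][] vms) {   // vms[i] = {offering in GHz, deploy factor}
            double sum = 0;
            for (double[] vm : vms) {
                sum += vm[0] / vm[1] * currentFactor;
            }
            return sum;
        }

        public static void main(String[] args) {
            double[][] twoSmall = {{0.5, 1}, {0.5, 1}};                       // two 512 MHz VMs deployed at factor 1
            System.out.println(used(1, twoSmall));                            // 1.0 GHz used, total 2 GHz
            System.out.println(used(2, twoSmall));                            // 2.0 GHz used, total 4 GHz (factor -> 2)

            double[][] fourVms = {{0.5, 1}, {0.5, 1}, {1, 2}, {1, 2}};        // plus two 1 GHz VMs deployed at factor 2
            System.out.println(used(2, fourVms));                             // 4.0 GHz used, total 4 GHz
            System.out.println(used(3, fourVms));                             // 6.0 GHz used, total 6 GHz (factor -> 3)

            double[][] afterRestart = {{0.5, 3}, {0.5, 1}, {1, 3}, {1, 2}};   // one of each size stop/started at factor 3
            System.out.println(used(3, afterRestart));                        // 4.5 GHz used, total 6 GHz
        }
    }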

The upside of the new model is that we guarantee QoS of (service offering of vm / x) during the VM's lifecycle, unlike the old model.

Hypervisor Calculation


XenServer

Deploying a VM with service offering 's', memory overcommit factor 'f' and cpu overcommit factor 'c' (see the sketch after the list below):

  • Static min memory = dynamic min memory = service offering / factor == (s/f)
  • Static max memory = dynamic max memory = service offering == s
  • Each VM is ensured its min memory during contention.
  • No overprovisioning means min = max:
  • Min memory = max memory = service offering == s
  • CPU weight assigned to the VM = (service offering / factor) / host hardware speed == (s/c) / actual_host_speed
  • CPU cap = (serviceOfferingSpeed * serviceOfferingCpus) / actual_host_speed
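
A small sketch of these settings follows (illustrative names and units, not the actual CloudStack code; memory in MB, speeds in MHz):

    // Illustrative sketch of the XenServer parameters listed above.
    class XenServerOvercommitSketch {
        static long dynamicMinMemoryMb(long offeringMemoryMb, double memFactor) {
            return Math.round(offeringMemoryMb / memFactor);   // = static min; guaranteed during contention
        }
        static long dynamicMaxMemoryMb(long offeringMemoryMb) {
            return offeringMemoryMb;                           // = static max; with no overprovisioning min == max
        }
        static double cpuWeight(double offeringSpeedMhz, double cpuFactor, double hostSpeedMhz) {
            return (offeringSpeedMhz / cpuFactor) / hostSpeedMhz;
        }
        static double cpuCap(double offeringSpeedMhz, int offeringCpus, double hostSpeedMhz) {
            return (offeringSpeedMhz * offeringCpus) / hostSpeedMhz;
        }

        public static void main(String[] args) {
            // e.g. a 1024 MB / 1000 MHz x 1 vCPU offering, memory factor 2, cpu factor 2, 2000 MHz host cores
            System.out.println(dynamicMinMemoryMb(1024, 2));   // 512
            System.out.println(cpuWeight(1000, 2, 2000));      // 0.25
            System.out.println(cpuCap(1000, 1, 2000));         // 0.5
        }
    }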

VMware

If vmware.mem.reserve = true 

Reserve memory = (service_offering / memory over provisioning factor)

Else 

Reserve memory = don’t reserve

The same model is followed for CPU.
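
A minimal sketch of this rule (the method name and units are assumptions, not the actual code):

    // Illustrative sketch of the VMware reservation rule above.
    class VmwareReservationSketch {
        // When vmware.mem.reserve is true, reserve (offering / memory overprovisioning factor); otherwise reserve nothing.
        static long memoryReservationMb(long offeringMemoryMb, double memFactor, boolean vmwareMemReserve) {
            return vmwareMemReserve ? Math.round(offeringMemoryMb / memFactor) : 0;
        }

        public static void main(String[] args) {
            System.out.println(memoryReservationMb(2048, 2.0, true));    // 1024 MB reserved
            System.out.println(memoryReservationMb(2048, 2.0, false));   // 0: nothing reserved
        }
    }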

KVM

TBD

DB changes


  • Whenever a cluster is created, the MS will add the cpu and ram overcommit ratios to the cluster_details table. They will be inherited from the global settings for cpu overprovisioning and memory overprovisioning.
  • In case of an upgrade, the existing global values will be carried over to cluster_details. Note that memory overprovisioning existed only for VMware and is therefore carried over during upgrade only for VMware.
  • vm_details will be populated with the factors present at the cluster level when the VM is deployed.
  • If the cluster factor changes, a VM's factor in vm_details won't change until you stop/start the VM.
  • On upgrade, vm_details will be populated with the cpu overprovisioning and memory overprovisioning factors (memory only for VMware) from the global settings. For other hypervisors the memory overprovisioning factor will be set to 1. (See the sketch below.)
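
A short sketch of the defaulting rules above (illustrative names, not the actual upgrade code):

    // Illustrative sketch of the defaulting rules above.
    class OvercommitDefaultsSketch {
        // cluster_details inherits the global value when the cluster is created (or during upgrade)
        static double clusterFactor(Double existingClusterValue, double globalFactor) {
            return existingClusterValue != null ? existingClusterValue : globalFactor;
        }

        // on upgrade, memory overprovisioning existed only for VMware; other hypervisors default to 1
        static double upgradedVmMemoryFactor(String hypervisorType, double globalMemFactor) {
            return "VMware".equalsIgnoreCase(hypervisorType) ? globalMemFactor : 1.0;
        }

        public static void main(String[] args) {
            System.out.println(clusterFactor(null, 2.0));                  // 2.0: inherited from the global setting
            System.out.println(upgradedVmMemoryFactor("XenServer", 2.0));  // 1.0
            System.out.println(upgradedVmMemoryFactor("VMware", 2.0));     // 2.0
        }
    }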

Caveats


What should the behavior be if the admin changes the overcommit factor for a cluster in a way that conflicts with the current situation? For example, let's assume cluster X has an overcommit factor of 1.5x for memory and the admin wants to change this to 1x - i.e. no overcommit (or changes from 2x to 1.5x). However, based on the "older" factor, CS might already have assigned more VMs. When the admin reduces the overcommit value:

1. if there is no conflict, there is no issue

2a. if there is a conflict (i.e. current allocation would conflict with the new value) - should we reject this change?

2b. or accept the change but not allow any more VMs to be added (preferred method)

If we decrease the factor - currently we allow doing that (say changing from 2X to 1X) - and the allocation is already beyond the new factor (say 1.5X is allocated), then no future allocation will be allowed, and the dashboard will start showing >100% allocated, which might confuse the admin (in our example it would show 150%). The admin would also start getting alerts for capacity being already exhausted. In other words, we should accept the new value and allocate only if the system has enough capacity to deploy more VMs based on the new overcommit ratios.

But if the allocation done so far is still within the new factor (say 0.8X is allocated currently), then allocation would still be allowed and the dashboard would show 80% allocated; in this case everything is correct and we should allow the admin to change the factor.

Note: The overcommit ratios are dynamically plugged into the capacity calculations. All the capacity calculations are done based on the overcommitted value of capacities. So if an overcommit ratio is decreased, the used capacity may go beyond 100%.
Example:
Overcommit = 2
capacity = 2 GB
capacity after overcommit = 4 GB
Now if we deploy 3 VMs of 1 GB each:
used = 3 GB
free = 1 GB
used % = 3/4 * 100 = 75%
If the overcommit ratio is decreased to 1:
used = 3 GB
free = -1 GB
used % = 3/2 * 100 = 150% (alerts will be generated based on this)
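
The same arithmetic as a runnable check (a trivial sketch reproducing the numbers above):

    // Reproduces the note's example: decreasing the ratio can push used% past 100.
    public class UsedPercentExample {
        public static void main(String[] args) {
            double actualCapacityGb = 2.0;
            double usedGb = 3.0;                                        // three 1 GB VMs deployed
            System.out.println(usedGb / (actualCapacityGb * 2) * 100);  // 75.0  (overcommit = 2 -> 4 GB total)
            System.out.println(usedGb / (actualCapacityGb * 1) * 100);  // 150.0 (overcommit = 1 -> 2 GB total)
        }
    }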

Alert Generation


All alerts are generated based on the global CPU/memory threshold values.

HV prerequisites to use CPU and RAM overcommit


The feature depends on the OS type, hypervisor capabilities, and some scripts.

1.) All VMs should have a balloon driver installed. The hypervisor communicates with the balloon driver to free up memory and make it available to a guest.

XenServer
The balloon driver is part of the Xen PV or PVHVM drivers. The Xen PVHVM drivers are included in upstream Linux kernels 2.6.36+.

The DMC (Dynamic Memory Control) capability of the hypervisor should be enabled. Only XenServer Advanced and above versions have this feature. In the case of XenServer we cannot support an overcommit factor greater than 4; this is a hypervisor constraint.

VMware
In the case of VMware, the balloon driver is part of the VMware Tools. All guests deployed in an overcommitted cluster should have the VMware Tools installed.

Memory ballooning is supported by default.

KVM
All guests are required to support the virtio drivers. All Linux kernels >= 2.6.25 include them; the admin needs to activate them by setting CONFIG_VIRTIO_BALLOON=y in the kernel configuration.

KVM does not support automatic dynamic adjustment of the guest OS memory.

Note -

  • Almost all hosts have the capability to overcommit; it is up to the admin to make sure of it. Even if the host is not configured properly, CloudStack will try to set the parameters assuming it has the capability.
  • As of now CloudStack does not check for any prerequisites. It is the admin's responsibility to provision accordingly.

Future Tasks


  • Keep these factors in the service offering.
  • Trigger the capacity recalculation when the overcommit ratio is changed, with no need to wait until the capacity checker runs.
  • Create an action event on overcommit change.
  • Create an action event when the capacity recalculation is complete.