...

  1. User / System VM lifecycle and operations
    1. Deploy system VMs from the systemvm template and support their lifecycle operations
    2. Deploy user VMs using a selected template (QCOW2 and RAW formats) or an ISO (see the deployment example after this list)
    3. Start, Stop, Restart, Reinstall, Destroy VM(s)
    4. VM snapshots (offline only; VM snapshots with memory are not supported)
    5. Migrate VMs from one KVM host to another (zone-wide: within the same cluster or across clusters)
  2. Volume lifecycle and operations (ScaleIO volumes are in RAW format)
    1. Create ROOT disks from a provided template (in QCOW2 and RAW formats, from NFS secondary storage or direct-download templates)
    2. List, Detach, Resize ROOT volumes
    3. Create, List, Attach, Detach, Resize, Delete DATA volumes
    4. Create, List, Revert, Delete volume snapshots (kept on primary storage, with no backup to secondary storage)
    5. Create templates (on secondary storage, in QCOW2 format) from ScaleIO volumes or snapshots
    6. Support ScaleIO volume QoS using the details parameters iopsLimit and bandwidthLimitInMbps in compute/disk offerings. These are the SDC limits for the volume.
    7. Migrate volumes (i.e., the volume tree, or V-Tree) from one ScaleIO storage pool to another, limited to storage pools within the same ScaleIO cluster
    8. Migrate volumes across ScaleIO storage pools on different ScaleIO clusters (using block copy, after mapping the source and target disks on the same host, which acts as an SDC)
  3. Config drives on scratch / cache space on the KVM host
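
For example, once a ScaleIO pool and a matching compute offering exist, a user VM can be deployed on it with cmk (a minimal sketch; the zone, offering, and template UUIDs below are placeholders):

  deploy virtualmachine name=pflexvm zoneid=<zone-uuid> serviceofferingid=<pflex-offering-uuid> templateid=<template-uuid>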

...

  • Secure authentication with the provided URL and credentials
  • Auto-renewal of the session (after session expiry and on a '401 Unauthorized' response)
  • List all storage pools, find a storage pool by ID/name
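
The same client flow can be reproduced manually against the gateway REST API, e.g. with curl (a sketch only; the gateway address and credentials are placeholders, and endpoint details may vary by PowerFlex version):

  # authenticate; the gateway returns a session token (a quoted JSON string)
  TOKEN=$(curl -sk -u "<API_USER>:<API_PASSWORD>" https://<GATEWAY>/api/login | tr -d '"')

  # list all storage pools, using the token as the basic-auth password;
  # a '401 Unauthorized' response means the session expired and /api/login must be called again
  curl -sk -u "<API_USER>:$TOKEN" https://<GATEWAY>/api/types/StoragePool/instances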

...

PowerFlex/ScaleIO Storage Pool:

| Configuration | Description / Changes | Default Value |
| --- | --- | --- |
| storage.pool.disk.wait | New primary-storage-level configuration to set a custom wait time (in seconds) for ScaleIO disk availability on the host (currently supported for ScaleIO only). | 60 secs |
| storage.pool.client.timeout | New primary-storage-level configuration to set the ScaleIO REST API client connection timeout (currently supported for ScaleIO only). | 60 secs |
| storage.pool.client.max.connections | New primary-storage-level configuration to set the maximum number of PowerFlex REST API client connections (currently supported for ScaleIO only). | 100 |
| custom.cs.identifier | New global configuration, initially a randomly generated 4-character string. It can be updated to any identifier that uniquely distinguishes a CloudStack installation, which helps track the volumes of a specific installation when a ScaleIO storage pool is shared across installations. | random 4-char string |
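
As these are primary-storage-level settings, they can be overridden per pool via the updateConfiguration API, for example with cmk (the pool UUID is a placeholder):

  update configuration name=storage.pool.disk.wait value=120 storageid=<scaleio-pool-uuid>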


Other settings added/updated:

| Configuration | Description / Changes | Default Value |
| --- | --- | --- |
| vm.configdrive.primarypool.enabled | Scope changed from Global to Zone level. | false |
| vm.configdrive.use.host.cache.on.unsupported.pool | New zone-level configuration to use the host cache for config drives when the storage pool doesn't support config drives. | true |
| vm.configdrive.force.host.cache.use | New zone-level configuration to force the use of the host cache for config drives. | false |
| router.health.checks.failures.to.recreate.vr | New test "filesystem.writable.test" added, which checks whether the router filesystem is writable. If this setting includes "filesystem.writable.test", the router is recreated when its disk is read-only. | <empty> |
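
The zone-level settings above can likewise be overridden per zone, for example with cmk (the zone UUID is a placeholder):

  update configuration name=vm.configdrive.primarypool.enabled value=true zoneid=<zone-uuid>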

Agent parameters

The following parameters are introduced in the agent.properties file of the KVM host.

| Parameter | Description | Default Value |
| --- | --- | --- |
| host.cache.location | New parameter to specify the host cache path. Config drives are created in the "/config" directory under the host cache. | /var/cache/cloud |
| powerflex.sdc.home.dir | New parameter to specify the SDC home path if it is installed in a custom directory; required to rescan and query volumes (query_vols) on the SDC. | /opt/emc/scaleio/sdc |
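
For reference, an agent.properties excerpt with both parameters at their default values (the file path shown is the standard KVM agent location):

  # /etc/cloudstack/agent/agent.properties
  host.cache.location=/var/cache/cloud
  powerflex.sdc.home.dir=/opt/emc/scaleio/sdc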

Naming conventions for ScaleIO volumes

...

  1. Dell EMC renamed ScaleIO to VxFlexOS and, with v3.5, to PowerFlex; for the purposes of this document and feature implementation, “ScaleIO” is used to mean VxFlexOS/PowerFlex interchangeably. Names of components, APIs, global settings, etc. in the CloudStack feature branch may change over the course of implementation.
  2. CloudStack will not manage the creation of storage pools, domains, etc. in ScaleIO; these must be created by the admin before adding a storage pool in CloudStack. Similarly, deleting a ScaleIO storage pool in CloudStack will not delete or remove the storage pool on the ScaleIO side.
  3. The ScaleIO SDC must be installed on the KVM host(s), with its service running and connected to the ScaleIO Metadata Manager (MDM).
  4. The seeded ScaleIO template volume(s) [in RAW], converted from the direct templates [QCOW2/RAW] on secondary storage, have the template's virtual size as the Allocated size in ScaleIO, irrespective of the pool's "Zero Padding Policy" setting.
  5. The ScaleIO ROOT volume(s) [RAW], converted from the seeded template volume(s) [RAW], have their total capacity (virtual size) as the Allocated size in ScaleIO, irrespective of the pool's "Zero Padding Policy" setting.
  6. The ScaleIO DATA volume(s) [RAW] created/attached initially have an Allocated size of '0' in ScaleIO, which grows with file system writes / block copies.
  7. The pool's overprovisioning factor, set via the config “storage.overprovisioning.factor”, should be updated accordingly to leverage ScaleIO's 10x overprovisioning for thin volumes (see the example after this list).
  8. Existing config drives reside on secondary storage if the “vm.configdrive.primarypool.enabled” setting is false, otherwise on primary storage.
  9. VM snapshots (using qemu) with memory may corrupt the qcow2, so this strategy makes them an unsupported feature. Volume snapshots (using qemu) that are stored within the qcow2 file may also corrupt the ScaleIO volume. Backend/ScaleIO-driven volume snapshots are still possible, but limited to 127 snapshots per root/data disk.
  10. Any kind of caching may make the (qcow2) volume unsafe for volume migration; as a risk-mitigation strategy, no disk caching (cache="none") may be allowed for ScaleIO volumes. This potentially adds risks for VM migration, as well as any side effects of a qcow2-based ScaleIO root disk using another ScaleIO volume as a RAW (template) backing file.
  11. The API client may hit SSL certificate issues that cannot be bypassed by simply ignoring certificate warnings; the admin may need to accept cluster certificates via the ScaleIO gateway for LIA/MDM, etc. Monitoring and maintenance of ScaleIO cluster certificates and configuration is assumed to be outside the scope of CloudStack.
  12. A delete volume/snapshot call to the ScaleIO API gateway can cause the client instance to stop working for subsequent calls unless a new authenticated session is initiated. This limits the use of client connection pooling.
  13. Volume/VTree migrations are limited to different storage pools in the same ScaleIO cluster.
  14. PowerFlex/ScaleIO Limitations [1] are applicable.
    1. Names of snapshots and volumes cannot exceed 31 characters, while typical CloudStack resource UUIDs are 36 characters long. So, the CloudStack volume/snapshot UUIDs cannot be used to map to related resources in ScaleIO.
    2. V-Tree is limited to 127 snapshots for a ScaleIO volume.
    3. The minimum volume size is 8 GB, and volume sizes must be multiples of 8 GB (for example, creating a 1 GB volume results in an 8 GB volume); the minimum Storage Pool capacity is 300 GB.
    4. The maximum number of volumes/snapshots in a system is 131,072, and the maximum number of volumes/snapshots in a Protection Domain is 32,768.
    5. The maximum number of SDCs per system is 1024, and a single SDC can map at most 8192 volumes. The documentation does not state the maximum number of SDC mappings for a single ScaleIO volume.
    6. The maximum number of concurrently logged-in GUI/REST/CLI clients is 128.
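
As mentioned in note 7 above, the overprovisioning factor can be raised on the ScaleIO pool to take advantage of thin provisioning, for example with cmk (the pool UUID is a placeholder):

  update configuration name=storage.overprovisioning.factor value=10 storageid=<scaleio-pool-uuid>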

API changes

  1. createStoragePool API call needs to support the new URL format for ScaleIO storage pools.
    1. ScaleIO storage pool URL format: powerflex://<API_USER>:<API_PASSWORD>@<GATEWAY>/<STORAGEPOOL>

      where,

      <API_USER> : username for API access

      <API_PASSWORD> : url-encoded password for API access

      <GATEWAY> : gateway host

      <STORAGEPOOL> : storage pool name (case sensitive)

    2. Example (using cmk): create storagepool name=pflexpool scope=zone hypervisor=KVM provider=PowerFlex tags=pflex url=powerflex://admin:P%40ssword123@10.2.3.139/cspool zoneid=404105a3-5597-47d5-aed3-58957e51b8fc
  2. createServiceOffering API call needs new key/value pairs in the details parameter to specify the ScaleIO volume SDC limits for the ROOT disk.
    1. Support for the new details key "bandwidthLimitInMbps" is added.
    2. Support for the new details key "iopsLimit" is added.
    3. Example (using cmk): create serviceoffering name=pflex_instance displaytext=pflex_instance storagetype=shared provisioningtype=thin cpunumber=1 cpuspeed=1000 memory=1024 tags=pflex serviceofferingdetails[0].bandwidthLimitInMbps=90 serviceofferingdetails[0].iopsLimit=9000
    4. These keys are optional and default to 0 (unlimited).
  3. createDiskOffering API call needs new key/value pairs in the details parameter to specify the ScaleIO volume SDC limits for the DATA disk (see the end-to-end example after this list).
    1. Support for the new details key "bandwidthLimitInMbps" is added.
    2. Support for the new details key "iopsLimit" is added.
    3. Example (using cmk): create diskoffering name=pflex_disk displaytext=pflex_disk storagetype=shared provisioningtype=thick disksize=3 tags=pflex details[0].bandwidthLimitInMbps=70 details[0].iopsLimit=7000
    4. These keys are optional and default to 0 (unlimited).
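
A disk offering created as above can then be exercised end to end, for example by creating a QoS-limited DATA volume and attaching it to a VM with cmk (all UUIDs are placeholders):

  create volume name=pflexdata zoneid=<zone-uuid> diskofferingid=<pflex_disk-offering-uuid>
  attach volume id=<volume-uuid> virtualmachineid=<vm-uuid>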

...