Purpose

This is the functional specification for supporting LINSTOR SDS as primary storage.

Bug Reference

https://github.com/apache/cloudstack/issues/4403

https://github.com/apache/cloudstack/pull/4994

Branch

master

Introduction


CloudStack currently supports Ceph/RBD, a distributed block storage system. Linstor also provides distributed block storage, through DRBD.

This proposed feature will add support for Linstor storage as a primary storage (new Storage Plugin) in CloudStack.

Document History

Author | Description | Date
 | Added feature specification and design | 11 May 2021

Glossary

  • VM - Virtual Machine
  • PS - Primary Storage

Use cases

This feature should be able to:

  1. Allow admin to add Linstor as a Primary Storage and perform PS operations
  2. Allow user to deploy VM and perform VM operations
  3. Allow user to create Volume and perform Volume operations

Feature specification

  1. Add support for Linstor pool as a primary storage
  2. Template (QCOW2/RAW) spooling to the Linstor storage pool in RAW format
  3. Creation of the ROOT and DATA volumes on the Linstor storage pool

Functionality support

  1. User / System VM lifecycle and operations
    1. Deploy system VMs from the systemvm template and support their lifecycle operations
    2. Deploy user VM using the selected template in QCOW2 & RAW formats, and ISO
    3. Start, Stop, Restart, Reinstall, Destroy VM(s)
    4. VM snapshot (offline-only; VM snapshots with memory are not supported)
    5. Migrate VM from one KVM host to another KVM host (zone-wide: same/across clusters)
  2. Volume lifecycle and operations (Linstor volumes are in RAW format)
    1. Create ROOT disks using a provided template (in QCOW2 & RAW formats, from NFS secondary storage and direct download templates)
    2. List, Detach, Resize ROOT volumes
    3. Create, List, Attach, Detach, Resize, Delete DATA volumes
    4. Create, List, Revert, Delete snapshots of volumes (with backup in Primary, no backup to secondary storage)
    5. Create template (on secondary storage in QCOW2 format) from Linstor volume
    6. Migrate volume from one Linstor storage pool to another Linstor storage pool
    7. Migrate volume to another CloudStack primary storage

Test Guidelines

  1. Add the Linstor storage pool as a Primary Storage, using a storage tag, e.g. "linstor".
  2. Create a compute offering and a disk offering using the storage tag "linstor".
  3. Deploy VMs and create data disks with the offerings created in step 2.

Error Handling

  1. All errors at various levels for the storage operations will be logged in management-server.log.

Target Users

  1. CloudStack Admins and Users.

Linstor Overview

LINSTOR is a configuration management system for storage on Linux systems. It manages LVM logical volumes and/or ZFS ZVOLs on a cluster of nodes. It leverages DRBD for replication between different nodes and to provide block storage devices to users and applications.


Linstor itself has 3 software components: Linstor-Controller, Linstor-Satellite and Linstor-Client.

  • Controller
    Manages satellites and stores all persistent data

  • Satellite:
    Calls/Queries underlying storage software components: LVM, ZFS, DRBD, ...

  • Client:
    Communicates with the Controller via its HTTP REST API and is the end-user interface to Linstor


More about Linstor objects can be found in the Linstor user guide: https://linbit.com/drbd-user-guide/linstor-guide-1_0-en/#s-concepts_and_terms

CloudStack and Linstor

The Linstor counterparts above can be mapped and used with CloudStack and KVM as follows:

  • Linstor-Controller is the equivalent of the CloudStack management server
  • Linstor-Satellite would be a CloudStack agent
  • Primary StoragePool

A CloudStack PS should be associated with a Linstor resource group. A resource group is a meta-object in Linstor that controls settings such as the replica count and options for all resources/volumes created from it.
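As a rough illustration of this association, the following sketch builds the request body a plugin could send to the Linstor REST API (POST /v1/resource-groups) when a primary storage pool is added. The field names follow the Linstor REST API v1; the group name, replica count, and storage-pool value are example assumptions, not values mandated by this spec.

```python
# Hypothetical sketch: request body for creating a Linstor resource group
# that backs a CloudStack primary storage pool.

def resource_group_payload(name, place_count, storage_pool=None):
    """Build the JSON body for POST /v1/resource-groups."""
    # place_count is the replica count for all resources spawned from this group
    select_filter = {"place_count": place_count}
    if storage_pool:
        select_filter["storage_pool"] = storage_pool
    return {"name": name, "select_filter": select_filter}

payload = resource_group_payload("cloudstack-primary", place_count=2,
                                 storage_pool="lvm-thin-pool")
```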

  • Templates

Templates can be of QCOW2 or RAW type; no changes to secondary storage or the template/ISO lifecycle are necessary.

  • Root disk 

At root-disk/VM provisioning time, the KVM host agent can convert a template from secondary storage or direct download into a RAW disk and write it to a mapped block-storage device; this becomes the spooled template on the primary pool.
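The spooling step above can be sketched as the qemu-img invocation the agent might build. The template path and resource name are illustrative assumptions; the /dev/drbd/by-res path is Linstor's device naming convention for mapped resources.

```python
# Sketch of template spooling: convert a QCOW2 template to RAW and write it
# directly onto the mapped Linstor/DRBD block device.

def spool_template_cmd(template_path, resource_name, src_format="qcow2"):
    """Build the qemu-img command that writes a template onto a DRBD device."""
    # Linstor exposes mapped resources as /dev/drbd/by-res/<name>/<volume-nr>
    target_dev = "/dev/drbd/by-res/%s/0" % resource_name
    return ["qemu-img", "convert",
            "-n",                       # -n: target exists, do not create it
            "-f", src_format, "-O", "raw",
            template_path, target_dev]

cmd = spool_template_cmd("/mnt/sec/template.qcow2", "cs-1234")
```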

Resizing the root disk resizes the related Linstor volume; similarly, deleting the root disk deletes the Linstor volume after unmapping it from all KVM hosts.

  • Data disk

On provisioning, data disks are simply volumes created in Linstor that can be mapped on a KVM host and attached as "raw" disks to a VM. The detach operation detaches the raw block-storage device from the VM and unmaps the volume from the KVM host. Resizing a data disk resizes the Linstor volume; similarly, deleting the data disk deletes the volume in Linstor after unmapping it from all KVM hosts.
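The attach step described above can be sketched as the libvirt disk definition the KVM agent would generate for a mapped Linstor volume. The resource name and target device name are assumptions for illustration; the source path follows Linstor's /dev/drbd/by-res layout.

```python
# Sketch: libvirt <disk> element for attaching a mapped Linstor data disk
# as a raw block device to a VM.

def linstor_disk_xml(resource_name, target_dev="vdb"):
    """Render the <disk> element for a Linstor-backed raw block device."""
    source_dev = "/dev/drbd/by-res/%s/0" % resource_name
    return (
        "<disk type='block' device='disk'>"
        "<driver name='qemu' type='raw' cache='none'/>"
        "<source dev='%s'/>"
        "<target dev='%s' bus='virtio'/>"
        "</disk>" % (source_dev, target_dev)
    )

xml = linstor_disk_xml("cs-0afe1b2c")
```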

Design description

Implement a new CloudStack storage plugin for Linstor storage. This will follow the design principles abstracted by CloudStack API for implementing a pluggable storage plugin.

  1. The storage sub-system would have the following design aspects:

Introduce a new storage pool type "Linstor" that associates with a Linstor resource group and allows shared storage and over-provisioning. This type is used across various operations to handle storage-pool-specific behaviour, especially on the hypervisor (KVM agent) side. Implement a new storage volume/datastore plugin with the following:

i. Linstor Datastore Driver: a primary datastore driver class responsible for lifecycle operations on volume and snapshot resources, such as granting/revoking access, creating/copying/deleting data objects, creating/reverting snapshots, and returning usage data.

ii. Linstor Datastore Lifecycle: a class responsible for managing the lifecycle of a storage pool, for example creating/initialising/updating/deleting a datastore, attaching it to a zone/cluster, and handling maintenance of the storage pool.

iii. Linstor Datastore Provider: a class responsible for exporting the implementation as a datastore provider plugin, so that the CloudStack storage sub-system can pick it up and use it for resource groups of type "Linstor".

iv. Introduce a dependency on java-linstor, an API wrapper library for the Linstor REST API

2. Hypervisor layer (KVM): The hypervisor layer would have the following design aspects:

Linstor StorageAdaptor and StoragePool: for handling Linstor volumes and snapshots. These classes will be responsible for managing storage operations, pool-related tasks, and metadata.

All storage-related operations need to be handled by the various command handlers and hypervisor/storage processors (KVMStorageProcessor), as orchestrated by the KVM server resource class (LibvirtComputingResource), such as CopyCommand, AttachCommand, DetachCommand, CreateObjectCommand, DeleteCommand, SnapshotAndCopyCommand, DirectDownloadCommand, etc.

Configuration settings

N/A

Agent parameters

N/A

Naming conventions for Linstor volumes

The following naming conventions are used for CloudStack resources in Linstor.

  • Volume: cs-[volume-uuid]
  • Template: cs-[volume-uuid] (may become tmpl-[volume-uuid])
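The convention above can be sketched in a few lines; the UUID below is a purely illustrative value, and the tmpl- variant for templates is still an open point in this spec.

```python
import uuid

# Tiny sketch of the naming scheme: every CloudStack volume maps to a
# Linstor resource named cs-<volume-uuid>.

def linstor_resource_name(volume_uuid):
    """Derive the Linstor resource name for a CloudStack volume UUID."""
    return "cs-" + str(volume_uuid)

name = linstor_resource_name(uuid.UUID("f81d4fae-7dec-11d0-a765-00a0c91e6bf6"))
# name == "cs-f81d4fae-7dec-11d0-a765-00a0c91e6bf6"
```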

Linstor capacity in CloudStack

Linstor does not have a single global capacity; rather, every Linstor storage pool has its own individual capacity.

So the Linstor driver either reports back an average over all used nodes, or always uses the value from the node with the least free capacity.

The allocated size (TotalCapacity - FreeCapacity) of a Linstor volume is treated as the used/physical size of the volume in CloudStack.
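The two reporting options above can be sketched as follows, assuming the driver receives per-node (total, free) capacity pairs from Linstor; the unit and the "least-free" strategy name are assumptions for illustration. "Least free" is the conservative choice: it reports the node that would fill up first.

```python
# Sketch of pool capacity aggregation from per-node (total, free) pairs.

def pool_capacity(node_stats, strategy="least-free"):
    """Aggregate per-node capacities into (total, free, allocated)."""
    if strategy == "least-free":
        # take the node with the least free capacity
        total, free = min(node_stats, key=lambda s: s[1])
    else:
        # simple average across all used nodes
        n = len(node_stats)
        total = sum(t for t, _ in node_stats) // n
        free = sum(f for _, f in node_stats) // n
    return total, free, total - free  # allocated = total - free

stats = [(1000, 400), (1000, 250), (800, 300)]
print(pool_capacity(stats))  # -> (1000, 250, 750)
```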

Assumptions and Limitations

  1. Snapshots are currently not possible

API changes

N/A

DB changes

N/A

Hypervisors supported

KVM

UI Flow

  • The addition of the new provider automatically lists "Linstor" as a protocol.
  • Changes in the "Add Primary Storage" UI when the "Linstor" protocol is selected, to specify the resource group.

Upgrade

N/A

Open Items/Questions

Test procedure?

What actions should be tested?

References



