
Each test case below lists its name and environment, the test steps, the expected results (numbered to match the steps), a priority (P1 | P2 | P3), a test case type (Sanity | Functional | Negative), and a status.

Test Case: Host maintenance mode
Hypervisor: XEN
Host: XS 6.0.2
Primary storage: NFS

Steps:
1. Create an Advanced zone, pod, and cluster. Add 2 hosts to the cluster, add secondary & primary storage, and create an isolated network.
2. Create a custom compute service offering with HA. Create HA-enabled VMs and acquire an IP. Create port forwarding & load balancing rules for the VMs.
3. Host 1: put into maintenance mode.
4. After failover to Host 2 succeeds, deploy VMs.
5. Host 1: cancel maintenance mode.
6. Host 2: put into maintenance mode.
7. After failover to Host 1 succeeds, deploy VMs.

Expected Results:
3. All VMs should fail over to Host 2 in the cluster and be in Running state. All port forwarding and load balancing rules should still work.
4. Deploying VMs on Host 2 should succeed.
6. All VMs should fail over to Host 1 in the cluster.
7. Deploying VMs on Host 1 should succeed.

Priority: P1
Type: SANITY, FUNCTIONAL
Status: pass
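
A minimal sketch of steps 3-5 driven through the CloudStack API, assuming the unauthenticated integration port (integration.api.port, conventionally 8096) is enabled on the management server; the endpoint and host UUID are placeholders:

    import time
    import requests

    API = "http://mgmt-server:8096/client/api"  # placeholder endpoint

    def api(command, **params):
        params.update(command=command, response="json")
        return requests.get(API, params=params).json()

    HOST1 = "host1-uuid"  # placeholder: UUID from listHosts

    # Step 3: put Host 1 into maintenance mode.
    api("prepareHostForMaintenance", id=HOST1)

    # Poll until the host reports resource state "Maintenance".
    while True:
        host = api("listHosts", id=HOST1)["listhostsresponse"]["host"][0]
        if host["resourcestate"] == "Maintenance":
            break
        time.sleep(10)

    # Step 4: every running VM should now be on the other host.
    vms = api("listVirtualMachines", state="Running")["listvirtualmachinesresponse"].get("virtualmachine", [])
    assert all(vm["hostid"] != HOST1 for vm in vms)

    # Step 5: take Host 1 out of maintenance again.
    api("cancelHostMaintenance", id=HOST1)

The same calls, with the host IDs swapped, cover steps 6-7.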

Test Case: Host maintenance mode with activities
Hypervisor: XEN
Host: XS 6.0.2
Primary storage: NFS

Steps:
1. Create an Advanced zone, pod, and cluster. Add 2 hosts to the cluster, add secondary & primary storage, and create an isolated network.
2. Create a custom compute service offering with HA. Create HA-enabled VMs. Acquire an IP. Create port forwarding & load balancing rules for the VMs.
3. While activities are ongoing (create snapshots, recurring snapshots, create templates, download volumes), Host 1: put into maintenance mode. See the sketch after this test case.
4. After failover to Host 2 succeeds, deploy VMs.
5. Host 1: cancel maintenance mode.
6. While activities are ongoing (create snapshots, recurring snapshots, create templates, download volumes), Host 2: put into maintenance mode.
7. After failover to Host 1 succeeds, deploy VMs.

Expected Results:
3. All VMs should fail over to Host 2 in the cluster and be in Running state. All port forwarding and load balancing rules should still work.
4. Deploying VMs on Host 2 should succeed. All activities ongoing in step 3 should complete successfully.
6. All VMs should fail over to Host 1 in the cluster.
7. Deploying VMs on Host 1 should succeed. All activities ongoing in step 6 should complete successfully.

Priority: P1
Type: SANITY, FUNCTIONAL
Status: pass
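
One way to drive the "activities ongoing" part of steps 3 and 6, again assuming the unauthenticated integration port; all UUIDs are placeholders, and the template-creation call is analogous. The asynchronous jobs are started first, then maintenance is requested while they are still in flight:

    import requests

    API = "http://mgmt-server:8096/client/api"  # placeholder endpoint

    def api(command, **params):
        params.update(command=command, response="json")
        return requests.get(API, params=params).json()

    # Kick off asynchronous activities first...
    api("createSnapshot", volumeid="root-volume-uuid")        # ad-hoc snapshot
    api("createSnapshotPolicy", volumeid="root-volume-uuid",  # recurring snapshot:
        intervaltype="HOURLY", schedule="30", maxsnaps=2,     # minute 30 of every hour
        timezone="UTC")
    api("extractVolume", id="data-volume-uuid",               # download a volume
        zoneid="zone-uuid", mode="HTTP_DOWNLOAD")

    # ...then request maintenance while they are still running (step 3).
    api("prepareHostForMaintenance", id="host1-uuid")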

Test Case: Host maintenance mode
Hypervisor: KVM
Host: RHEL 6.2
Primary storage: NFS

Steps:
1. Create an Advanced zone, pod, and cluster. Add 2 hosts to the cluster, add secondary & primary storage, and create an isolated network.
2. Create a custom compute service offering with HA. Create HA-enabled VMs and acquire an IP. Create port forwarding & load balancing rules for the VMs.
3. Host 1: put into maintenance mode.
4. After failover to Host 2 succeeds, deploy VMs.
5. Host 1: cancel maintenance mode.
6. Host 2: put into maintenance mode.
7. After failover to Host 1 succeeds, deploy VMs.

Expected Results:
3. All VMs should fail over to Host 2 in the cluster and be in Running state. All port forwarding and load balancing rules should still work (see the probe sketch after this test case).
4. Deploying VMs on Host 2 should succeed.
6. All VMs should fail over to Host 1 in the cluster.
7. Deploying VMs on Host 1 should succeed.

Priority: P1
Type: SANITY, FUNCTIONAL
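
For the expected result that the port forwarding and load balancing rules should still work after failover, a simple TCP reachability probe against each rule's public endpoint is one sufficient check; the IP and ports below are placeholders for the values acquired in step 2:

    import socket

    def port_open(ip, port, timeout=5):
        """Return True if a TCP connection to ip:port succeeds."""
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    PUBLIC_IP = "10.1.1.100"  # placeholder: IP acquired in step 2
    for port in (22, 80):     # placeholder: PF rule on 22, LB rule on 80
        assert port_open(PUBLIC_IP, port), "rule on port %d broken after failover" % port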

 

Test Case: Maintenance mode with 2 hosts in cluster (VMs need not be HA-enabled to be live migrated)

Steps:
1. Have 1 host in 1 cluster.
2. Deploy a few VMs. (These VMs need not be deployed with HA-enabled service offerings.) Create static NAT/PF rules.
3. Add 1 more host to this cluster.
4. Put host1 in maintenance mode.
5. Verify step 1 in "Expected Results".
6. Add 1 more VM. This VM gets deployed on host2.
7. Take host1 out of maintenance mode.
8. Put host2 in maintenance mode.
9. Verify step 2 in "Expected Results".

Expected Results:
1. After step 4: host1 should be put in maintenance mode successfully. All VMs migrate successfully to host2. Make sure the VMs are still accessible via their existing PF and static NAT rules.
2. After step 8: host2 should be put in maintenance mode successfully. All VMs migrate successfully to host1. Make sure the VMs are still accessible via their existing PF and static NAT rules.

 

 

Status: pass
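
A sketch of the point this case makes: a service offering created without HA (the offerha parameter is simply left at its default of false) still gets its VMs live migrated when their host enters maintenance. Integration port assumed enabled; UUIDs are placeholders:

    import requests

    API = "http://mgmt-server:8096/client/api"  # placeholder endpoint

    def api(command, **params):
        params.update(command=command, response="json")
        return requests.get(API, params=params).json()

    # Step 2: a plain offering; offerha defaults to false, so no HA.
    api("createServiceOffering", name="plain-1cpu-512mb",
        displaytext="no HA", cpunumber=1, cpuspeed=500, memory=512)

    # After host1 enters maintenance (step 4), every running VM should report host2.
    vms = api("listVirtualMachines", state="Running")["listvirtualmachinesresponse"].get("virtualmachine", [])
    assert all(vm["hostid"] == "host2-uuid" for vm in vms)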

Test Case: Maintenance mode with 1 host in cluster

Steps:
1. Have a cluster with 1 host.
2. Deploy a few VMs.
3. Put this host in maintenance mode.
4. Verify step 1 from "Expected Results".
5. Cancel maintenance mode.
6. Verify steps 2-6 from "Expected Results".

Expected Results:
Verify the following after step 3:
1. Putting the host in maintenance mode should succeed. All VMs will be in Stopped state.
Verify the following after step 5:
2. The SSVM and CPVM are started automatically.
3. All user VMs remain in Stopped state.
4. You are able to manually start the VMs successfully.
5. Make sure the VMs are still accessible via their existing PF rules.
6. You are able to deploy more VMs.

 

 

Status: pass
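
A sketch of expected results 2-4 for the one-host case, assuming the integration port; UUIDs are placeholders. After cancelling maintenance the system VMs come back on their own, while user VMs stay Stopped and are started by hand:

    import requests

    API = "http://mgmt-server:8096/client/api"  # placeholder endpoint

    def api(command, **params):
        params.update(command=command, response="json")
        return requests.get(API, params=params).json()

    # Step 5: cancel maintenance; SSVM and CPVM should restart automatically.
    api("cancelHostMaintenance", id="host1-uuid")

    # Expected result 3: user VMs remain Stopped...
    stopped = api("listVirtualMachines", state="Stopped")["listvirtualmachinesresponse"].get("virtualmachine", [])

    # ...and expected result 4: starting each one manually should succeed.
    for vm in stopped:
        api("startVirtualMachine", id=vm["id"])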

Test Case: Maintenance mode with 2 hosts in cluster with different host tags

Steps:
1. Cluster has 1 host, say host1, with a host tag.
2. Deploy a few VMs with the host tag in their service offerings.
3. Add another host, host2, to this cluster with no host tag.
4. Put host1 in maintenance mode.

Expected Results:
1. Host1 should be put in maintenance mode successfully. All the VMs should remain in Stopped state, since they cannot live migrate to host2, which has no host tags.
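
A sketch of the tagged setup in steps 1-4, assuming the integration port; the tag "ssd" and all UUIDs are placeholders:

    import requests

    API = "http://mgmt-server:8096/client/api"  # placeholder endpoint

    def api(command, **params):
        params.update(command=command, response="json")
        return requests.get(API, params=params).json()

    # Steps 1-2: tag host1 and deploy VMs from an offering pinned to that tag.
    api("updateHost", id="host1-uuid", hosttags="ssd")
    api("createServiceOffering", name="tagged-offering", displaytext="pinned to ssd",
        cpunumber=1, cpuspeed=500, memory=512, hosttags="ssd")

    # Step 4: with no matching tag on host2, the VMs cannot live migrate and
    # are expected to end up Stopped.
    api("prepareHostForMaintenance", id="host1-uuid")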

 

 

 

Primary Storage

Enable Maintenance Mode (P1, Automate, Functional positive test case) (PASSED)

API: enableStorageMaintenance

Steps:

1. "Enable Maintenance Mode" on the primary storage.
2. You should succeed in putting the primary storage into maintenance mode.

Expected results:

1. All VMs will stop running, including the system VMs. You may see their states transition from Stopping to Stopped.
2. You will not be able to create any VMs; you get "Resource [StoragePool:-1] is unreachable: There are no available pools in the UP state for VM deployment".
3. Cancel maintenance mode; this may take some time.
4. All the VMs, including the system VMs and the VR, should be running again automatically.
5. Make sure you can create an instance.
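
The two maintenance calls in this case, as a minimal sketch assuming the integration port; the pool UUID is a placeholder:

    import requests

    API = "http://mgmt-server:8096/client/api"  # placeholder endpoint

    def api(command, **params):
        params.update(command=command, response="json")
        return requests.get(API, params=params).json()

    POOL = "primary-pool-uuid"  # placeholder

    # Step 1: all VMs, including the system VMs, should go to Stopped.
    api("enableStorageMaintenance", id=POOL)

    # Expected result 2: while the pool is down, deployVirtualMachine fails with
    # "Resource [StoragePool:-1] is unreachable: There are no available pools
    # in the UP state for VM deployment".

    # Expected result 3: cancel maintenance; VMs and the VR restart automatically.
    api("cancelStorageMaintenance", id=POOL)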

Delete Primary Storage

API: deleteStoragePool (PASSED)

Steps:

1. Go to Global Settings and set expunge.delay & expunge.interval to a short time like 180 seconds.
2. Stop/start the Management Server service.
3. "Enable Maintenance Mode" on the primary storage.
4. Once maintenance is successful, delete the storage pool (see the sketch after the expected results).

Expected results:

1. The delete should fail with "Failed to delete storage pool", both in the dialog box and from the deleteStoragePool API call. In the logs you will see "Cannot delete pool XYZ_Primary as there are associated vols for this pool".
2. All VMs take up space as volumes on the primary store. You have to delete/destroy all the VMs (with the expunge interval set to a short time like 3 minutes); after that you should be able to delete the primary storage. Data, "Creating", and "Allocated" volumes remain after the VMs are deleted; the data volumes delete fine, but volumes in the "Creating" state have no delete option, which blocks deleting the primary storage.
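
A sketch of steps 1-4, assuming the integration port; the pool UUID is a placeholder, and the management server still has to be restarted after changing the expunge settings, as step 2 notes:

    import requests

    API = "http://mgmt-server:8096/client/api"  # placeholder endpoint

    def api(command, **params):
        params.update(command=command, response="json")
        return requests.get(API, params=params).json()

    # Step 1: shorten the expunge timers to 180 seconds.
    api("updateConfiguration", name="expunge.delay", value="180")
    api("updateConfiguration", name="expunge.interval", value="180")

    # Steps 3-4: maintenance first, then the delete. With volumes still on the
    # pool, the delete is expected to fail with "Cannot delete pool ... as
    # there are associated vols for this pool".
    api("enableStorageMaintenance", id="primary-pool-uuid")
    api("deleteStoragePool", id="primary-pool-uuid")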

Primary Storage Outage:

  1. The hypervisor should immediately stop all VMs stored on the storage device.
2. For NFS, the hypervisor may allow the virtual machines to continue running, depending on the nature of the issue. An NFS hang will cause the guest VMs to be suspended until storage connectivity is restored.