
Test execution results:

vmwaremapnew2.xls

 

VMware datacenter to CloudStack zone mapping model

 

 

 

 

 

DVS switch

 

 

 

 

 

Hypervisor: ESXi 4.1, ESXi 5.0

 

 

 

 

 

 

 

 

 

 

 

      

 

 

 

 

 

 

 

 

 

 

Test Id

Test case description

Steps

Expected Result

Priority

VMWARE

 

 

 

 

 

 

 

 

 

 

 

 

 

Zone configuration - Advanced

 

 

 

 

1

Vcenter                                  CPP 4.2
VC1 - DC1 -  C1 - H1             zone1 - C1 -  H1
                            H2                                  H2
                                                                  PS1 
                                                                  PS2
                    C2 - H3                          C2 -  H3
                                                                  PS3
          DC2 - C3 - H4             zone2 - C3 -  H4
                                                                  PS4 
                                                                  ZPS5

                                               ZPS6                   

MS - advanced zone, 1 vCenter, 2 DCs with multiple clusters

Zone creation successful

P1

PASS

2

Vcenter                                  CPP 4.2
VC1 - DC1 -  C1 - H1             zone1 - C1 -  H1
                            H2                                  H2
                                                                  PS1
                                                                  PS2
                    C2 - H3                          C2 -  H3
                                                                  PS3
VC2 - DC3 - C4 - H4              zone2 - C4 - H4
                                                                  PS4
                                                                  ZPS5
                                                                  ZPS6                   
Clusters in different subnets

MS - advanced zone, multiple vCenters, 2 DCs, multiple clusters in
different subnets

Zone creation successful

P1

PASS

 

 

 

 

 

 

 

Zone DC

 

 

 

 

 

AddVMwareDC

1. In vCenter, create DC, cluster, and hosts.
2. In MS, add the vCenter DC to the zone.

 

P1

PASS
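
For reference, a representative addVmwareDc request for step 2 (a sketch only; the management server address, zone id and vCenter details are placeholders, parameter names as in the 4.2 API reference):
http://<management-server-ip>:8080/client/api?command=addVmwareDc&zoneid=<zone-id>&name=<vcenter-dc-name>&vcenter=<vcenter-ip>&username=<vcenter-user>&password=<vcenter-password>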

 

RemoveVMwareDC

MS Remove Vcenter DC from zone

http://10.223.195.52:8080/client/api?command=removeVmwareDc&zoneid=37751cd3-7fe5-4164-be39-d26a0807a469&_=1377404578209
{ "removevmwaredcresponse" : { "uuidList": ... } }

P1

PASS

 

ListVMwareDC

MS  List Vmware DCs in zone

http://10.223.195.52:8080/client/api?command=listVmwareDcs&zoneid=37751cd3-7fe5-4164-be39-d26a0807a469&_=1377403694960
{ "listvmwaredcsresponse" : { "count":1, "VMwareDC" : [ {"id": ...} ] } }

P1

PASS

 

AddVMwarecluster

MS  Add Vmware cluster to Vcenter DC in zone

http://10.223.195.52:8080/client/api?command=addCluster&zoneId=37751cd3-7fe5-4164-be39-d26a0807a469&hypervisor=VMware&clustertype=ExternalManaged&podId=299f284a-7bc7-482b-9278-0b92a8f790c9&username=Administrator&password=Password2&publicvswitchtype=vmwaredvs&guestvswitchtype=vmwaredvs&url=http%3A%2F%2F10.223.52.60%2Fd1%2Fc2&clustername=10.223

P1

PASS

 

RemoveVMwarecluster

MS  Remove Vmware cluster from Vcenter DC in zone

http://10.223.195.52:8080/client/api?command=deleteCluster&id=0925469d-deac-47f4-8844-2029a1f001b7
{ "deleteclusterresponse" : { "success" : ... } }

P1

PASS

 

disablecluster

disable vmware cluster

http://10.223.195.52:8080/client/api?command=updateCluster&id=0925469d-deac-47f4-8844-2029a1f001b7&allocationstate=Disabled
{ "updateclusterresponse" : { "cluster" : {"id": ...} } }

P1

PASS

 

enablecluster

enable vmware cluster

http://10.223.195.52:8080/client/api?command=updateCluster&id=0925469d-deac-47f4-8844-2029a1f001b7&allocationstate=Enabled
{ "updateclusterresponse" : { "cluster" : {"id": ...} } }

P1

PASS

 

managecluster

manage vmware cluster

http://10.223.195.52:8080/client/api?command=updateCluster&id=0925469d-deac-47f4-8844-2029a1f001b7&managedstate=Managed
{ "updateclusterresponse" : { "cluster" : {"id": ...} } }

P1

PASS

 

unmanagecluster

unmanage vmware cluster

http://10.223.195.52:8080/client/api?command=updateCluster&id=0925469d-deac-47f4-8844-2029a1f001b7&managedstate=Unmanaged
{ "updateclusterresponse" : { "cluster" : {"id": ...} } }

P1

PASS

 

 

 

 

 

 

 

Network


 

 

 

 

isolated VPC network

1. In advanced zone, create an isolated VPC network

http://10.223.195.52:8080/client/api?command=createVPC&name=vpc1&displaytext=vpc1&zoneid=37751cd3-7fe5-4164-be39-d26a0807a469&cidr=10.1.1.1%2F16&vpcofferingid=ca01a99f-579f-4c13-9701-e2281868bcfb&_=1377404893237
{ "createvpcresponse" : {"id": ...} }

P1

PASS

 

isolated nonVPC network

1. In advanced zone, create a non-VPC isolated network

Non-VPC network creation successful

P1

PASS

 

listvpc

 

 

 

PASS
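
A minimal listVPCs check for this row (sketch; the management server address is a placeholder, and the response should include the VPC created above):
http://<management-server-ip>:8080/client/api?command=listVPCs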

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

network states

 

 

 

 

 

Extend IP range of the network

1. Advanced zone: create shared NW1 with scope zone.

Vm deployment should succeed.

P1

PASS

 

 

2. Deploy few Vms in this network.

Vm should be assigned address from the extended range.

 

 

3. Consume all IPs in the range.

All VMs  in NW1 unable to access each other

 

 

 

 

4. Extend the IP range (see the example call after these steps).

All VMs in NW1 able to reach DHCP server, gateway

 

 

 

5. Deploy Vm in this network.
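
The range extension in step 4 is normally a createVlanIpRange call against the shared network (sketch; IDs and addresses are placeholders, parameters as used for shared-network ranges):
http://<management-server-ip>:8080/client/api?command=createVlanIpRange&networkid=<network-id>&startip=<new-start-ip>&endip=<new-end-ip>&gateway=<gateway>&netmask=<netmask>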

 

 

 

 

Restart network

1. Advanced zone: create shared NW1 with scope zone.

Network restart should succeed.

P1

PASS

 

 

2. Deploy few Vms in this network.

After network restart:

 

 

 

 

3. Restart Network.

All VMs  in NW1 unable to access each other

 

 

 

 

 

All VMs in NW1 able to reach DHCP server, gateway

 

 

 

 

We should be able to deploy new Vms in this network.

 

 

Restart network with cleanup option

1. Advanced zone: create shared NW1 with scope zone.

Network restart should succeed. After network restart:
1. All VMs in NW1 unable to access each other.
2. All VMs in NW1 able to reach DHCP server, gateway.
3. We should be able to deploy new VMs in this network.
As part of the network restart the router is stopped and started. The following 3 entries for the router should get deleted and re-created in the host (ovs-ofctl dump-flows xenbr0):
1. cookie=0x0, duration=3503.373s, table=0, n_packets=8, n_bytes=2748, priority=100,udp,dl_vlan=998,nw_dst=255.255.255.255,tp_dst=67 actions=strip_vlan,output:18
2. cookie=0x0, duration=3503.38s, table=0, n_packets=20, n_bytes=1148, priority=200,arp,dl_vlan=998,nw_dst=10.223.161.110 actions=strip_vlan,output:18
3. cookie=0x0, duration=3503.376s, table=0, n_packets=37, n_bytes=3176, priority=150,dl_vlan=998,dl_dst=06:e0:d8:00:00:1c actions=strip_vlan,output:18

P1

PASS

 

 

2. Deploy few Vms in this network.

 

 

 

 

 

4. Restart Network with cleanup option.
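
Both restart cases map to the restartNetwork API; the cleanup variant only adds cleanup=true (sketch; the network id is a placeholder):
http://<management-server-ip>:8080/client/api?command=restartNetwork&id=<network-id>
http://<management-server-ip>:8080/client/api?command=restartNetwork&id=<network-id>&cleanup=true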

 

 

 

 

Delete network with vms in "Running" state

1. Advanced zone: create shared NW1 with scope zone <pVLAN1, sVLAN1>.

Network Deletion should fail.

P1

PASS

 

 

2. Deploy few Vms in this network

 

 

 

 

 

3. Delete Network.
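
The deletion in step 3 can also be driven through the API (sketch; the network id is a placeholder):
http://<management-server-ip>:8080/client/api?command=deleteNetwork&id=<network-id>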

 

 

 

 

Delete network when there are no Vms associated with it.

1. Advanced zone: create shared NW1 with scope zone.

Network Deletion should succeed.

P1

 

 

 

2. Deploy few Vms in this network.

 

 

 

 

 

3. Destroy all the Vms.

 

 

 

 

 

4. Delete Network after all the Vms are expunged.

 

 

 

 

Stop all Vms in network and wait for network shutdown

1. Advanced zone: create shared NW1 with scope zone.
2. Deploy few Vms in this network.
3. Stop all the Vms.
4. Wait for network scavenger thread to run.

Network should not be picked up for Shutting down.

P1

PASS

 

Delete one of the IP ranges while not in use

In a network with multiple IP ranges and no VMs in the network, delete one of the IP ranges

Deleting one of the IP ranges while not in use should succeed

P1

PASS
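
The range removal in these two rows corresponds to deleteVlanIpRange; the range id can be looked up first with listVlanIpRanges (sketch; IDs are placeholders):
http://<management-server-ip>:8080/client/api?command=listVlanIpRanges&networkid=<network-id>
http://<management-server-ip>:8080/client/api?command=deleteVlanIpRange&id=<ip-range-id>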

 

Delete one of the IP ranges while in use

In a network with multiple IP ranges and VMs in one of the IP ranges, delete the IP range that has VMs.

Deleting an IP range that is in use by VMs should fail

P1

PASS

 

 

 

 

 

 

 

 

 

 

 

 

 

VM Deployment isolated VPC and nonVPC network

 

 

 

 

 

Advanced zone VPC network: deploy CentOS VM with data disk

1. In advanced zone, create a VPC network
2. Deploy VMs with data disk using the CentOS template. Check PF and LB rules

2. VMs in Running state. PF and LB rules should work

P1

PASS

 

Advanced zone VPC network: deploy CentOS VM without data disk

1. In advanced zone, create a VPC network
2. Deploy VMs without data disk using the CentOS template. Check PF and LB rules

2. VMs in Running state. PF and LB rules should work

P1

PASS

 

Advanced zone VPC network: deploy Windows VM with data disk

1. In advanced zone, create a VPC network
2. Deploy VMs using the Windows template.

2. VMs in Running state. PF and LB rules should work

P1

PASS

 

Advanced zone non-VPC network: deploy CentOS VM with data disk

1. In advanced zone, create a non-VPC network
2. Deploy VMs with data disk using the CentOS template. Check PF and LB rules

2. VMs in Running state. PF and LB rules should work

P1

PASS

 

Advanced zone non-VPC network: deploy CentOS VM without data disk

1. In advanced zone, create a non-VPC network
2. Deploy VMs without data disk using the CentOS template. Check PF and LB rules

2. VMs in Running state. PF and LB rules should work

P1

PASS

 

 

 

 

 

 

 

VM Life cycle

 

 

 

 

 

Stop VM

Stop an existing VM that is in "Running" State.
[we should flesh out the steps here starting with 'deploy a vm']

1. Should Not be able to login to the VM.
2. listVM command should return this VM. State of this VM should be "Stopped".
3. DB check : VM_INSTANCE table - state should be "Stopped"

P1

PASS
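
The lifecycle operations in this section map to the standard VM APIs (sketch; the VM id is a placeholder):
http://<management-server-ip>:8080/client/api?command=stopVirtualMachine&id=<vm-id>
http://<management-server-ip>:8080/client/api?command=startVirtualMachine&id=<vm-id>
http://<management-server-ip>:8080/client/api?command=destroyVirtualMachine&id=<vm-id>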

 

Start VM

Start an existing VM that is in "Stopped" State.

1. Should be able to login to the VM.
2. listVM command should return this VM. State of this VM should be "Running".
3. DB check : VM_INSTANCE table - state should be "Running"

P1

PASS

 

Destroy VM

Destroy an existing VM that is in "Running" State.

1. Should not be able to login to the VM.
2. listVM command should return this VM. State of this VM should be "Destroyed".
3. DB check : VM_INSTANCE table - state should be "Destroyed"

P1

PASS

 

Restore VM

Restore a VM instance that is in "Destroyed" state.

1. listVM command should return this VM. State of this VM should be "Stopped".
2. We should be able to Start this VM successfully.
3. DB check : VM_INSTANCE table - state should be "Destroyed"

P1

PASS

 

Destroy VM (Expunged)

Destroy an existing VM that is in "Running" State.
Wait for the Expunge Interval (expunge.delay) .

1. listVM command should NOT  return this VM any more.
2. DB check : 1. VM_INSTANCE table - state should be "Expunging".
2. No entries relating to this VM in the NICS table.
3. No entries relating to this VM in the VOLUMES table.
4. Make sure the volumes get removed from the Primary Storage.

P1

PASS

 

Reboot VM

1.Deploy VM.
2.Reboot VM.

1. Should be able to login to the VM.
2. listVM command should return the deployed VM.
State of this VM should be "Running".

P1

PASS

 

Migrate VM

1.Deploy VM.

1. Should be able to login to the VM.

P1

PASS

 

 

2.Check to make sure you are able to log in to this VM.

2. listVM command should return this VM. State of this VM should be "Running"
 and the host should be the host to which the VM was migrated to.

 

 

 

 

3.Migrate vm to another host in the same cluster.
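
Step 3 corresponds to migrateVirtualMachine; the destination host can be picked from listHosts (sketch; IDs are placeholders):
http://<management-server-ip>:8080/client/api?command=migrateVirtualMachine&virtualmachineid=<vm-id>&hostid=<destination-host-id>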

 

 

 

 

Attach ISO

1.Deploy VM.

1.Log in to the Vm. We should be able to see a device attached.

P1

 

 

 

2.Attach ISO to this VM.

2. You should be able to mount this device and
 use the ISO.

 

 

 

Detach ISO

1.Deploy VM.

1.Log in to the Vm. We should see the device is not attached to the VM anymore.

P1

 

 

 

2.Attach ISO to this VM. Log in to the Vm and make sure you see
a device attached which has the ISO.

 

 

 

 

 

3. Detach ISO.
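
Steps 2 and 3 map to attachIso and detachIso (sketch; IDs are placeholders):
http://<management-server-ip>:8080/client/api?command=attachIso&id=<iso-id>&virtualmachineid=<vm-id>
http://<management-server-ip>:8080/client/api?command=detachIso&virtualmachineid=<vm-id>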

 

 

 

 

Change Service to use less CPU and Memory

1.Deploy VM using default template,  "small" service offering and small data disk offering.

1.Log in to the Vm .We should see that the CPU and memory Info of this Vm
 matches the one specified for "Medium" service offering.

P1

 

 

 

2.Stop VM

2. Using  listVM command verify that this Vm
has "Medium" service offering Id.

 

 

 

 

3.Change Service of this Vm to use "Medium" service offering.

 

 

 

 

 

4.Start VM

 

 

 

 

Change Service to use more CPU and Memory

1.Deploy VM using default template, "Medium" service offering and small data disk offering.

1.Log in to the Vm .We should see that the CPU and memory Info of this Vm
matches the one specified for "Medium" service offering.

P1

 

 

 

2.Stop VM

2. Using  listVM command verify that this Vm
has "small" service offering Id.

 

 

 

 

3.Change Service of this Vm to use "Small" service offering.

 

 

 

 

4.Start VM
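
In both change-service tests the offering is switched on the stopped VM with changeServiceForVirtualMachine (sketch; IDs are placeholders):
http://<management-server-ip>:8080/client/api?command=changeServiceForVirtualMachine&id=<vm-id>&serviceofferingid=<new-service-offering-id>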

 

 

 

 

 

 

 

 

 

 

SSVM  CPVM  VR   Life cycle

 

 

 

 

 

Stop  SSVM   CPVM VR

Stop SSVM / CPVM / VR that is in "Running" State.
[we should flesh out the steps here starting with 'deploy a vm']

1. Should Not be able to login to the VM.
2. listVM command should return this VM. State of this VM should be "Stopped".
3. DB check : VM_INSTANCE table - state should be "Stopped"

P1

PASS

 

start SSVM  CPVM VR

Start an existing VM that is in "Stopped" State.

1. Should be able to login to the VM.
2. listVM command should return this VM. State of this VM should be "Running".
3. DB check : VM_INSTANCE table - state should be "Running"

P1

PASS

 

Destroy SSVM   CPVM VR

Destroy an existing VM that is in "Running" State.

1. Should not be able to login to the VM.
2. listVM command should return this VM. State of this VM should be "Destroyed".
3. DB check : VM_INSTANCE table - state should be "Destroyed"

P1

PASS

 

Restore SSVM CPVM VR

Restore a VM instance that is in "Destroyed" state.

1. listVM command should return this VM. State of this VM should be "Stopped".
2. We should be able to Start this VM successfully.
3. DB check : VM_INSTANCE table - state should be "Destroyed"

P1

 

 

Destroy SSVM  CPVM  VR   (Expunged)

Destroy an existing VM that is in "Running" State.
Wait for the Expunge Interval (expunge.delay) .

1. listVM command should NOT  return this VM any more.
2. DB check : 1. VM_INSTANCE table - state should be "Expunging".
2. No entries relating to this VM in the NICS table.
3. No entries relating to this VM in the VOLUMES table.
4. Make sure  volumes get removed from the Primary Storage.
New SSVM  CPVM  are created

P1

PASS

 

Reboot SSVM  CPVM   VR

1.Deploy VM.
2.Reboot VM.

1. Should be able to login to the VM.
2. listVM command should return the deployed VM.
State of this VM should be "Running".

P1

PASS

 

Migrate SSVM  CPVM   VR

1.Deploy VM.

1. Should be able to login to the VM.

P1

PASS

 

 

2.Check to make sure you are able to log in to this VM.

2. listVM command should return this VM. State of this VM should be "Running"
 and the host should be the host to which the VM was migrated to.

 

 

 

 

3.Migrate vm to another host in the same cluster.

 

 

 

 

 

 

 

 

 

 

VM live migration

 

 

 

 

1

Vcenter                                  CPP 4.2
VC1 - DC1 -  C1 - H1             zone1 - C1 -  H1
                            H2                                  H2
                                                                  PS1 cluster primary storage
                                                                  PS2
                    C2 - H3                          C2 -  H3
                                                                  PS3
          DC2 - C3 - H4             zone2 - C3 -  H4
                                                                  PS4 
                                                                  ZPS5 zone primary storage
                                                                  ZPS6                   

VM Migrate  H1 to H2 vice versa
VM Migrate  H1 to H3 vice versa
VM Migrate  H1 to H4 vice versa

successful

P1

PASS

2

Vcenter                                  CPP 4.2
VC1 - DC1 -  C1 - H1             zone1 - C1 -  H1
                            H2                                  H2
                                                                  PS1 cluster primary storage
                                                                  PS2
                    C2 - H3                          C2 -  H3
                                                                  PS3
VC2 - DC3 - C4 - H4              zone2 - C4 - H4
                                                                  PS4
                                                                  ZPS5 zone primary storage
                                                                  ZPS6                   
 clusters in different subnet

VM Migrate  H1 to H2 vice versa
VM Migrate  H1 to H3 vice versa
VM Migrate  H1 to H4 vice versa

successful

P1

PASS

 

 

 

 

 

 

 

Storage volume

 

 

 

 

 

Attaching Volumes

1.navigate to Storage-volumes

1.shows list of volumes

P1

 

 

 

2.select the detached data disk

3."Attach Disk" pop-up box will display with list of  instances

 

 

3.click on "Actions" and select "Attach Disk" to attach to a particular instance

4. Disk should be attached to the instance successfully and both
UI and database (volumes table) should reflect the changes
(i.e. attached disk details should be updated with the attached
vm_instance ID and device ID)

 

 

 

 

4. select the instance and click "Ok" button  to attach the disk to selected
instance in the "Attach Disk" pop-up window

 

 

 

 

Detaching Volumes

1.navigate to Storage-volumes

Data disk should be detached from the instance and the detached data
disk details should be updated properly (i.e. Instance Name is
detached and Device ID is null)

P1

 

 

 

2. Select the data disk which is attached to an instance

 

 

 

 

 

3.click "Actions" and select the "Detach Disk"

 

 

 

 

Download volumes

1.navigate to Storage-volumes

3. download volume will fail with proper error message "Failed
- Invalid state of the volume with ID: . It should be either
detached or the VM should be in stopped state"

P1

 

 

 

2. Select the data disk which is attached to an instance


 

 

 

 

[some fundamental steps here are missing. in each OS, after attaching the disk,
you need to format and mount it before writing to it. downloading a blank
volume would probably fail]

5. Able to download the volume when it is not attached to an instance

 

 

3.perform download volume

 

 

 

 

 

4.select another data disk which is not attached to an instance

 

 

 

 

 

5.perform "Download Volume"

 

 

 

 

Delete Volumes

case 1

case 1:

P1

 

 

 

1.navigate to Storage-volumes

volume should be deleted successfully and listVolume should not
contain the deleted volume details.

 

 

 

 

2. selected the data disk

 

 

 

 

 

3.click on "Actions" menu and select "Delete Volume"

Case 2:

 

 

 

 

 

"Delete Volume" menu item not shown under "Actions" menu.

 

 

case 2

(UI should not allow deleting the volume when it is attached to an
instance, by hiding the menu item)

 

 

 

 

4.select another data disk which is attached to an instance

 

 

 

 

 

5.click on "Actions" menu and check for "Delete Volume"

 

 

 

 

Create Volume

1. Perform "Add Volume" by providing name, availability zone and disk offering details (i.e. go to Storage-Volumes)

Volume should be created successfully and listVolumes should
contain the created volume. Database should reflect the newly
created volume details (volumes table)

P1

PASS

 

 

[we should execute this against every disk offering size as a sanity check, but also against a few custom-sized disk offerings]
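
The "Add Volume" UI action corresponds to createVolume (sketch; IDs are placeholders):
http://<management-server-ip>:8080/client/api?command=createVolume&name=<volume-name>&zoneid=<zone-id>&diskofferingid=<disk-offering-id>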

 

 

 

 

 

 

 

Upload a data disk

Call the upload volume API with the following parameters:

Upload volume is successful

P1

 

 

 

 

 

 

 

 

 

HTTP URL of the data disk,

 

 

 

 

 

Zone ID,

 

 

 

 

 

Name,

 

 

 

 

 

Description,

 

 

 

 

 

Hypervisor
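
A representative uploadVolume request with the core parameters above (sketch; values are placeholders, and for VMware the disk file would normally be an OVA):
http://<management-server-ip>:8080/client/api?command=uploadVolume&name=<volume-name>&zoneid=<zone-id>&format=OVA&url=<http-url-of-data-disk>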

 

 

 

 

 

 

 

 

 

 

Attach an uploaded data disk

Call the attach volume API with the following parameters:

Verify Attach Volume API will move (and not Copy) the volume from
secondary storage to primary storage and attach to the vm.

P1

 

 

 

 

 

 

 

 

 

Volumeid

 

 

 

 

 

VM
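
A representative attachVolume request for the uploaded disk (sketch; IDs are placeholders):
http://<management-server-ip>:8080/client/api?command=attachVolume&id=<volume-id>&virtualmachineid=<vm-id>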

 

 

 

 

 

 

 

 

 

 

snapshot

 

 

 

 

 

Create VMSnapshot of Vm in "Stopped" state with root volume with "vm_snapshot_type" set to "Disk"

1. Deploy Vm.
2. Log in to the Vm and create few directories and files.
3. Stop the VM.
4. Create VMSnapshot for this VM with "vm_snapshot_type" set to "Disk"

Snapshot should get created successfully.
It should be stored in the Primary Storage.

P1

https://issues.apache.org/jira/browse/CLOUDSTACK-4458

 

Create VMSnapshot of Vm in "Stopped" state with 1 data disk with "vm_snapshot_type" set to "Disk"

1. Deploy Vm.  Attach a data disk to this VM.
2. Log in to the Vm and create few directories and files on the root volume. Create few more directories on the data disk.
3. Stop the VM.
4. Create VMSnapshot for this VM with "vm_snapshot_type" set to "Disk"

Snapshot should get created successfully.
It should be stored in the Primary Storage.

P1

https://issues.apache.org/jira/browse/CLOUDSTACK-4458
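
Step 4 in these cases corresponds to createVMSnapshot; a disk-only snapshot leaves memory out (sketch; the VM id is a placeholder and parameter names are assumed from the 4.2 API):
http://<management-server-ip>:8080/client/api?command=createVMSnapshot&virtualmachineid=<vm-id>&snapshotmemory=false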

 

We should be allowed to initiate Snapshots for multiple Vms in parallel

1. Deploy few Vms.
2. Log in to the each of the Vms and create few directories and files.
3. Create VMSnapshot for all VMs at the same time.

Vm Snapshot process for all the Vms should happen in parallel.

P1

https://issues.apache.org/jira/browse/CLOUDSTACK-4458

 

We should be able to revert multiple Vms to different snapshots in parallel

1. Deploy few Vms.
2. Create Vm snapshots for all the Vms.
3. Revert  all the above Vms to a snapshot of these Vms in parallel

Vm reverts for all the Vms should happen in parallel.

P1

 

 

 

 

 

 

 

 

2. Create volume from snapshot

 

 

 

PASS

 

3. Create template from snapshot

 

 

 

PASS

 

 

 

 

 

 

 

 

 

 

 

 

 

Storage Migration

 

 

 

 

1

Vcenter                                  CPP 4.2
VC1 - DC1 -  C1 - H1             zone1 - C1 -  H1
                            H2                                  H2
                                                                  PS1 cluster primary storage
                                                                  PS2
                    C2 - H3                          C2 -  H3
                                                                  PS3
          DC2 - C3 - H4             zone2 - C3 -  H4
                                                                  PS4 
                                                                  ZPS5 zone primary storage
                                                                  ZPS6                   

storage Migrate  PS1 to PS2 vice versa
storage Migrate  PS1 to PS3 vice versa
storage Migrate  PS1 to PS4 vice versa

successful

P1

PASS

2

Vcenter                                  CPP 4.2
VC1 - DC1 -  C1 - H1             zone1 - C1 -  H1
                            H2                                  H2
                                                                  PS1 cluster primary storage
                                                                  PS2
                    C2 - H3                          C2 -  H3
                                                                  PS3
VC2 - DC3 - C4 - H4              zone2 - C4 - H4
                                                                  PS4
                                                                  ZPS5 zone primary storage
                                                                  ZPS6                   
 clusters in different subnet

storage Migrate  PS1 to PS2 vice versa
storage Migrate  PS1 to PS3 vice versa
storage Migrate  PS1 to PS4 vice versa

successful

P1

PASS

15

Migrate volume when VM is running

Use MigrateVirtualMachine() with VMId, SPId, when VM is in
Running state

Should throw appropriate error. Storage migration can happen only if the Vm is in Stopped state.

P1

PASS
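
For a stopped VM or a detached disk, volume-level storage migration is normally migrateVolume (sketch; IDs are placeholders):
http://<management-server-ip>:8080/client/api?command=migrateVolume&volumeid=<volume-id>&storageid=<destination-storage-pool-id>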

 

Storage Migration Negative Scenarios

 

 

 

 

 

CASE I : start VM migration.
When the Volume migration is still in progress ,restart the management server.

 

Vm migration should fail. There should be a roll back done. Vm state should get back to "Stopped" and volumes should get back to "Ready" state.
Attempt to start the VM should succeed using the existing (older) storage.

P1

 

 

CASE II : start VM migration.
When the Volume migration is still in progress ,   shutdown  host which is doing the storage migration.

 

Vm migration should fail. There should be a roll back done. Vm state should get back to "Stopped" and volumes should get back to "Ready" state.
Attempt to start the VM should succeed using the existing (older) storage.

P1

 

 

CASE III : start VM migration.
When the Volume migration is still in progress ,   reboot  host which is doing the storage migration.

 

Vm migration should fail. There should be a roll back done. Vm state should get back to "Stopped" and volumes should get back to "Ready" state.
Attempt to start the VM should succeed using the existing (older) storage.

P1

 

 

CASE IV : migrate vm to a storage pool which belongs to a cluster of different
hypervisor type

 

The migration should fail with appropriate error message.

P1

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Host

 

 

 

 

 

Force reconnect Host()

Force reconnect Host()

 

P1

PASS

 

UpdateHost() - Edit host tags

UpdateHost() - Edit host tags

 

P1

 

 

deleteHost() - > Put the host in maintenance then delete the host

deleteHost() - > Put the host in maintenance and then delete the host

P1

 

 

Put the host in maintenance mode

Put the host in maintenance mode

 

P1

https://issues.apache.org/jira/browse/CLOUDSTACK-4513

 

Put host in maintenance mode, then bring host out of maintenance mode

Put host in maintenance mode.  Cancel host maintenance mode.

P1

https://issues.apache.org/jira/browse/CLOUDSTACK-4513
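
These two rows map to prepareHostForMaintenance and cancelHostMaintenance (sketch; the host id is a placeholder):
http://<management-server-ip>:8080/client/api?command=prepareHostForMaintenance&id=<host-id>
http://<management-server-ip>:8080/client/api?command=cancelHostMaintenance&id=<host-id>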

 

Power down host

Power down host

 

P1

 

 

Power down host and then power on the host

Power down host and then power on the host

 

P1

 

 

Reboot host

Reboot host

 

P1

 

 

Bring down the network connectivity of the host.

Bring down the network connectivity of the host.

 

P1

 

 

Bring down network connectivity of host, then bring up network connectivity

Bring down network connectivity of host. Bring up network connectivity

P1

 

 

 

 

 

 

 

 

Template

 

 

 

 

 

add template

1. Add public/private template

1.database (vm_template table) should be updated with newly created template

P1

 

 

 

2. check UI/ listTemplates command to see the newly added template.

2.UI should show the newly added template

P1

 

 

 

 

3.listtemplates API should show the newly added template

P1

 

 

 

 

 

P1

 

 

delete template

Delete Template:

1.listTemplates should not show the deleted template

P1

 

 

 

1.delete a template using UI/ API

2.database should be updated.

P1

 

 

 

 

3. Template should not be displayed in admin UI.

P1

 

 

 

 

4. template should be removed from secondary store (if no VMs were deployed
with it) after storage.cleanup.interval

P1

 

 

Edit template

Edit Template:

1.data base should be updated with updated values

P1

 

 

 

1. edit the template attribute

2.UI should show the updated template attributes

P1

 

 

 

2. Check the updated values are reflected

3.listTemplates should show the template with updated values

P1

 

 

Download template

"1. Add public/private template

"1.  download template should generate a valid http link to download

P1

 

 

 

2. Select the template and click "Actions"

2. You should be able to download the template without any issues (no 404, 530,
503 HTTP errors)

 

 

 

 

3. Perform "Download Template"

 

 

 

 

Check admin can extract any template  

Admin should be able to extract and download the templates

 

P1

 

 

Check all public templates should be visible to the users and admins

listTemplates should display all the public templates for all kind of users

P1

 

 

Check all the system VM templates are not visible to users

listTemplates should not display the system templates

 

P1

 

 

 

 

 

P1

 

 

    Copy template to other zone

 

    Copy template to other zone successful

P1
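
Cross-zone copy corresponds to copyTemplate (sketch; IDs are placeholders):
http://<management-server-ip>:8080/client/api?command=copyTemplate&id=<template-id>&sourcezoneid=<source-zone-id>&destzoneid=<destination-zone-id>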

 

 

   Start a VM from template

 

   Start a VM from template successful

P1

 

 

Download template with isolated storage network

1.Bring up CS in advanced zone

Verify that SSVM downloads the guest OS templates from storage server which
is in isolated network.

P1

 

 

 

2.create zone->pod->cluster>host

Use below sql query to verify the download status of the template.

 

 

3.Primary and secondary storage on storage server in isolated network
(Only management network has the reachability to storage server,
public network does not have the reachability)

mysql> select * from template_host_ref;

 

 

 

 

4.Disable zone

 

 

 

 

 

5.Set global settings parameter "secstorage.allowed.internal.sites" to
storage server subnet

 

 

 

 

 

6.Restart management server

 

 

 

 

 

7.Enable zone

 

 

 

 

 

 

 

 

 

 

ISO

 

 

 

 

 

verify add ISO

1. Add public/private ISO

1.database (vm_template table) should be updated with newly created ISO

P1

 

 

 

2. Check UI / listIsos command to see the newly added ISO.

2.UI should show the newly added ISO

 

 

 

 

 

3.listIsos API should show the newly added ISO

 

 

 

 

 

 

 

 

verify delete ISO

1.delete an ISO using UI/API

1.listIsos should not show the deleted ISO

P1

 

 

 

 

2.database should be updated.

 

 

 

 

 

3.ISO should not be displayed in admin UI.

 

 

 

 

 

4. ISO should be removed from secondary store (if no VMs were deployed with it)
after storage.cleanup.interval

 

 

 

Verify Edit ISO

1. edit the ISO attribute

1.data base should be updated with updated values

P1

 

 

 

2. Check the updated values are reflected

2.UI should show the updated ISO attributes

 

 

 

 

 

3.listIsos should show the ISO with updated values

 

 

 

 

 

 

 

 

Download ISO

1. Add public/private ISO

1. Download ISO should generate the link to download and be able to download
the ISO without any issues

P1

 

 

 

2. select  the ISO and click "Actions"

Check the SSVM in /var/www/html/userdata

 

 

 

 

3. perform   "Download Template"

 

 

 

 

Copy ISO to other zone

1. perform copy ISO   from one zone to another zone

Copy ISO should be successful and secondary storage should contain the newly copied ISO

P1

 

 

 

2. Check the newly copied ISO in the other zone

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

primary storage

 

 

 

 

 

Add Primary storage NFS cluster wide

Use createStoragePool with url, name, podid, clusterid

Primary storage should be created and should be in Up state

P1

PASS
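
A representative createStoragePool request for a cluster-scoped NFS pool (sketch; IDs and the NFS path are placeholders):
http://<management-server-ip>:8080/client/api?command=createStoragePool&zoneid=<zone-id>&podid=<pod-id>&clusterid=<cluster-id>&name=<pool-name>&url=nfs://<nfs-server>/<export-path>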

 

Delete Primary storage NFS

Delete Primary storage NFS

Delete successful

P1

PASS

 

 

 

 

P1

 

 

 

 

 

P1

 

 

 

 

 

P1

 

 

Storage maintenance mode with one pool

Initiate maintenance mode for primary Storage pool

VMs and System VMs stopped, Volumes should not be destroyed

P1

 

 

 

(no additional pool present)

 

P1
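
Maintenance here is driven with enableStorageMaintenance and cancelled with cancelStorageMaintenance (sketch; the pool id is a placeholder):
http://<management-server-ip>:8080/client/api?command=enableStorageMaintenance&id=<storage-pool-id>
http://<management-server-ip>:8080/client/api?command=cancelStorageMaintenance&id=<storage-pool-id>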

 

 

Storage maintenance mode with multiple pools

Initiate maintenance mode for Primary storage with additional pools

User VMs should be stopped. System VMs should be restarted in another pool

P1

 

 

Storage maintenance mode cancel

Cancel maintenance mode (one pool)

User VMs that were stopped should restart and also the system VMs

P1

 

 

Storage maintenance mode cancel

Cancel maintenance mode (multiple pools)

User VMs that were stopped should restart

P1

 

 

Enable maintenance with local storage

Enable local storage for System VM and User Vm

User and System VMs should be stopped and restarted when maintenance mode is
disabled (?)

P1

 

 

 

Put Local storage in maintenance mode

 

P1

 

 

Create VMs when Maintenance mode is on

Create User VMs when storage pools are in maintenance mode

Should not be able to create VMs and appropriate message should be thrown

P1

 

 

Storage Tags

Create Storage tag for storage and use the tag for disk offering

Should deploy disk on the storage which matches the tag

P1

 

 

 

 

 

 

 

 

 

 

 

 

 

 

primary storage zone wide

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

secondary storage

 

 

 

 

 

Add NFS Secondary storage

Seed System VM templates

Secondary storage should get added to the zone.

 

 

 

 

Use addSecondaryStorage with url, host

listHosts should show the added secondary storage as type "secondarystorage"

 

 

 

listSystemVm should show up system vms once they are up and running

 

 

 

listTemplates should show default templates downloaded and in Ready state

 

 

 

 

 

 

 

Change Sec Storage server

Stop cloud services

Sec storage should be back up

 

 

 

 

Copy files from old to new secondary storage

Should be able to take snapshots and create templates

 

 

 

Change IP in DB in host and host_details

 

 

 

 

 

Start services

 

 

 

 

 

Stop and start SSVM

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

PVLAN

 

 

 

 

 

 

 

 

 

 

 

Shared Network scope All  1 PVLAN Deploy VM

1. Advanced zone cluster with 2 hosts; Domain D1 with domain admin d1domain and user d1user; Domain D2 with user d2user.
2. Create Shared NW1 scope All <pVLAN1, sVLAN1>.
3. As users from different domains, deploy VMs in this network.

1. Shared NW1 with pVLAN creation should succeed.
2. All accounts able to create VMs on NW1.
3. All VMs in NW1 unable to access each other. All VMs in NW1 able to reach DHCP server, gateway.

P1

 

 

Shared NW scope Domain  1 PVLAN Deploy VM

1. Advanced zone cluster with 2 hosts; Domain D1 with domain admin d1domain and user d1user; Domain D2 with user d2user.
2. Create Shared NW1 scope Domain for D1 <pVLAN1, sVLAN1>.
3. As users from different domains, deploy VMs in this network.

1. Shared NW1 with pVLAN creation should succeed.
2. Only accounts from domain D1 are allowed to create VMs on NW1.
3. All VMs in NW1 unable to access each other. All VMs in NW1 able to reach DHCP server, gateway.

P1

 

 

 

 

 

 

 

 

UPGRADE

 

 

 

 

1

2.2.16 -> campo 4.2.
Deployment- multiple zones, cluster single DC
VC DC  cluster  host          CP
vc1 a1   c1        h1             zone 1
                        h2

after upgrade - VC - add cluster c2 to vc1 a1 :
VC DC  cluster  host          CP
vc1 a1   c1        h1             zone 1 
                                            pod1   c1     h1
                        h2                                   h2
                                                                PS1

            c2        h4                 pod2   c2     h4
                                                                PS2
                                                            ZPS5                                                              ZPS6            

1. 2.2.6 vmware setup
Deployment- multiple zones, cluster single DC
deploy VM, assign public IP to VM
ensure internet access

2. upgrade campo 4.2
add cluster c2 to Vcenter.
add pod2 cluster c2 host h4 to zone1
add cluster PS
    

New mapping Vcenter & CPP should work
1. legacy has single DC

P1

 

2

Legacy Vcenter                    Legacy CPP 4.2
VC1 - DC1 -  C1 - H1             zone1 - cluster1   VC1 - DC1 - C1 -  H1
                            H2                                                                      H2
                                                                  PS1 cluster primary storage
                                                                  PS2
          DC2 - C2 - H3                          cluster2    VC1 - DC2 - C2 - H3
                                                                  PS3
          DC3 - C3 - H4                          cluster3    VC1 - DC3 -  C3 - H4
                                                                  PS4 
                                                                  ZPS5 zone primary storage
                                                                  ZPS6        
2. upgrade campo 4.2
add to current zone:
Vcenter1 - DC1 -  cluster1 - Host1
                            cluster2 - Host2
                 DC2 -  cluster3 - Host3
Vcenter2 - DC3 -  cluster4 - Host4    
 clusters in different subnet                

Legacy CPP 3.0.7 Patch B- advance zone 1 Vcenter 2 DC with multiple clusters
upgrade  campo  4.2

New mapping Vcenter & CPP NOT allowed

P1

 

3

Vcenter                                  CPP 4.2
VC1 - DC1 -  C1 - H1             zone1 - cluster1   VC1 - DC1 - C1 -  H1
                            H2                                                                      H2
                                                                  PS1
                                                                  PS2
          DC2 - C2 - H3                           cluster2    VC1 - DC2 - C2 - H3
                                                                  PS3
VC2 - DC3 - C4 - H4                           cluster3    VC2 - DC3 -  C4 - H4
                                                                  PS4
                                                                  ZPS5 zone primary storage
                                                                  ZPS6                   
 clusters in different subnet

Legacy CPP 3.0.7 Patch B - advance zone  multiple Vcenters 2 DC multiple clusters  different subnet
upgrade campo  4.2

New mapping Vcenter & CPP NOT allowed

P1

 

 

 

 

 

 

 

4

Before upgrade Zone with mixed hypervisors --
 Later add additional clusters VMWARE or any other

Legacy CPP 4.0 advance zone  multiple Vcenters 2 DC multiple clusters  different subnet
upgrade campo  4.2

 

 

 

 

 

 

 

 

 

5

Before Upgrade Nexus vSwitch

 

 

 

 

 

 

 

 

 

 

6

Before Upgrade DVS  2 physical networks
after upgrade add dvs Cluster

 

 

 

 

 

 

 

 

 

 

7

Before Upgrade DVS  2 physical networks

 

 

 

 

 

after upgrade add Nexus  Cluster

 

 

 

 

 

 

 

 

 

 

8

Test upgrade to new product (with this feature) when the deployment
already has multiple zones, where clusters in each zone has one or more of following,

 

 

 

 

 

1. Single DC

 

 

 

 

 

2. Multiple DCs

 

 

 

 

 

3. Multiple vCenters

 

 

 

 

 

 

 

 

 

 

9

After upgrade, test adding new cluster to existing/new zone, where the
cluster belongs to a DC/vCenter that is,

 

 

 

 

 

1. Already part of zone

 

 

 

 

 

2. Not part of zone

 

 

 

 

 

 

 

 

 

 

 

After upgrade: test creation of new zone

 

 

 

 
