SNO | Testcase Name | Procedure | Expected Results | Test Case Type (Sanity/Functional/Negative) | Is Automatable | Priority | Pass/Fail |
---|---|---|---|---|---|---|---|
1 | Verify_Host_transition_states | 1. Add hosts to each cluster and make sure each host is managed by a management server. 2. Stop one Management Server (MS1) and check the host transition states as ownership changes from one MS to another (e.g. MS1 to MS2). | 1. Check that the host state changes from Alert → Disconnected → Connecting → Up. 2. Query the mshost and host tables for the details. | SANITY, FUNCTIONAL | N | P1 | PASS |
2 | Only_One_MS_UP | 1. Make sure each host is managed by a management server. 2. Stop all Management Servers except one (e.g. stop MS1 and MS2; keep MS3 up and running). 3. Deploy new VMs and perform VM lifecycle operations (stop/start/destroy/restore). 4. Take snapshots of the volumes. 5. Register new templates. | a) Able to access the UI from MS3, and all system VMs should be up and running. b) All host states are Up. c) VM deployment should succeed without errors. d) Snapshots should succeed. e) New templates should register successfully. | SANITY, FUNCTIONAL | Y | P2 | PASS |
3 | All_MGMTServices_UP_at_sameTime | 1. Make sure each host is managed by a MGMT server. 2. Stop all the running cloud management services at once (service cloud-management stop). 3. Start all the stopped Management Servers (service cloud-management start). 4. Deploy new VMs and take snapshots of the volumes. | Hosts should get redistributed across all 3 management servers. VM deployment should succeed without errors. Snapshot operations should succeed without errors. | FUNCTIONAL | Y | P2 | PASS |
4 | restart_All_MGMT_services | 1. Make sure each host is managed by a MGMT server. 2. Restart the cloud-management service on all the Management Servers (MS1, MS2, MS3) (service cloud-management restart). | All the MGMT servers should come up without errors and all hosts should get distributed to their corresponding MSs. | FUNCTIONAL/Negative | | P2 | PASS |
5 | Change_SystemDate_ON_1MGMT_Server | 1. Make sure each host is managed by a management server and the system date is in sync across all hosts and MSs. 2. Change the system date on one MS (MS1) in the cluster. 3. Check the MS state in the mshost table. 4. Check MS ownership in the host table. 5. Re-sync the date on MS1 (i.e. change the system date back to match the other management servers). | When one MS's date is out of sync, its state in the mshost table should be Down, and the hosts belonging to it should move to another available management server. After re-syncing, the MS state should be Up, and host ownership changes back only when the agent load is imbalanced. | FUNCTIONAL/Negative | | P3 | PASS |
6 | Perform_task_From_non-managed_MS | 1. Host1 is managed by MS1. 2. Check the forwarding functionality by initiating a task from a non-managing MS (i.e. deploy a VM on Host1 from MS2). | 1. The task should be forwarded from MS2 to the managing MS (MS1), executed on MS1, and the response returned to the task initiator (MS2). | SANITY, FUNCTIONAL | Y | P1 | PASS |
7 | Perform_Task_From_non-Managed_MS_StopMS | 1. Host1 is managed by MS1. 2. Deploy an instance on Host1 from MS2. 3. Stop MS2 before it receives the response from MS1. | The async job should not remain In-Progress; it should reach the Done state. | FUNCTIONAL/Negative | | P2 | PASS |
8 | verify_ForwardAgent_becomes_DA | 1. Stop MS1. 2. Make sure the host owned by this management server (say Host1) gets transferred to one of the other 2 management servers (say MGMT2). 3. From MGMT2, make a request to add a VM to Host1 (use host tags to direct the VM to a specific host). 4. Start/stop VM instances on Host1 and take snapshots of the VMs on Host1. | The Direct Agent request is transferred from MGMT1 to MGMT2, VM deployment should succeed on MS2, and all operations should succeed without errors. | SANITY, FUNCTIONAL | Y (possible to automate, but it is difficult to judge which MS takes ownership when there are more than 2 MSs; manual intervention is always required to get accurate results for these cases) | P1 | PASS |
9 | verify_ForwardAgent_functionality | 1. Stop MS1. 2. Make sure the host owned by this management server (say Host1) gets transferred to one of the other 2 management servers (say MGMT2). 3. From MGMT3, make a request to add a VM to Host1 (use host tags to direct the VM to a specific host). 4. Start/stop VM instances on Host1 and take snapshots of the VMs on Host1. | 3. The request gets forwarded from MGMT3 to MGMT2 and VM deployment should succeed on MS2. 4. All operations should succeed without errors. | SANITY, FUNCTIONAL | Y | P1 | PASS |
10 | Host_Force_reconnect_From_non-managed_MS | Find the management server that owns this host. From any of the other management servers (i.e. NOT the owner of this host), initiate a Force Reconnect of the host. | We should see the request being forwarded from the non-managing MS to the owner management server, and the action should succeed. | FUNCTIONAL | | P1 | PASS |
11 | Host_Maintenance_Mode_Mgmt_status | 1. Select a host from any MGMT server (say MS1). 2. Put the host into maintenance mode. 3. Check the host's mgmt_server_id value. | 2. The host should enter maintenance mode without errors. 3. mgmt_server_id should be NULL. | SANITY, FUNCTIONAL | | P2 | PASS |
12 | Cancel_MaintenanceMode | 1. Select a host from any MGMT server (say MS2). 2. Put the host into maintenance mode. 3. Check the host's mgmt_server_id value. 4. Cancel maintenance mode. 5. Check the mgmt_server_id of the above host. | 3. mgmt_server_id should be NULL. 5. The host should get either the same mgmt_server_id or another one. | SANITY, FUNCTIONAL | | P1 | PASS |
13 | Take_Host_out_of_MaintenanceMode | Find the management server that owns this host. From any of the other management servers (i.e. NOT the owner of this host), put the host in maintenance mode and then take the host out of maintenance mode. | We should see the requests for both of the above actions being forwarded to the owner management server, and the actions should succeed. | SANITY, FUNCTIONAL | | P1 | PASS |
14 | HostMaintenance_Join_MS | 1. Select a host from any MGMT server. 2. Put the host into maintenance mode. 3. Join another Management Server to the cluster. 4. Check whether agent load balancing happens. | While a host is in maintenance mode, agent load balancing won't happen for it. No errors observed in the logs. | FUNCTIONAL | | P2 | PASS |
15 | Cancel_Maintenance_Join_MS | 1. Select a host from any MGMT server (say MS1) and make sure the current cluster load is above the threshold value. 2. Put the host into maintenance mode. 3. Cancel maintenance mode. 4. Join another Management Server to the cluster (say MS4), or restart an existing MS (MS2). 5. Check whether agent load balancing happens. | 2. mgmt_server_id should be NULL. 5. Either the old or a new MS ID will be assigned. | SANITY, FUNCTIONAL | Y | P2 | PASS |
16 | StopMS1_Cancel_MaintenanceMode_fromMS2 | 1. Select a host from any MGMT server (e.g. MS1). 2. Put the host into maintenance mode. 3. Check the host's mgmt_server_id value. 4. Stop the MGMT server service on MS1. 5. Cancel maintenance mode from MS2. 6. Check the mgmt_server_id of the above host. | 3. mgmt_server_id should be NULL. 6. The host should successfully come out of maintenance mode and get a suitable MS ID. | SANITY, FUNCTIONAL | Y | P2 | PASS |
17 | AsyncJobState_StopMS_while_Job(Maintenance)_inProgress | 1. Select a host owned by one MGMT server (e.g. MGMT1), working from MGMT server 2. 2. Put the host into maintenance mode from the non-managing management server (MS2). 3. While the host is in prepare-maintenance mode, stop the management server service on MS1. 4. Check the async job status. | The async job should not remain In-Progress; its state should be Done with FAILED. | SANITY, FUNCTIONAL | Y | P2 | PASS |
18 | DeleteHost_from_other_MS | Find the management server that owns this host. From any of the other management servers (i.e. NOT the owner of this host), put the host in maintenance mode and then delete the host. | We should see the requests for both of the above actions being forwarded to the owner management server, and the actions should succeed. | FUNCTIONAL | Y | P1 | PASS |
19 | AsyncJobState_StopMS_while_Job(Snapshots)_inProgress | From the management server that is NOT the owner of the host on which the VM is running, initiate a snapshot task. While the task is still in progress, stop that management server. | The async job should get marked as Done (FAILED); it should not remain In-Progress. | FUNCTIONAL | Y | P2 | PASS |
20 | AgentLB_HostState_UP_cluster_Disabled | 1. zone > pod1 > cluster1, cluster2, cluster3, each with at least 1 host; all hosts are in Up state; agent.lb.enabled = true; MS1 manages 2 clusters, MS2 manages 1 cluster. 2. From MS1, put one of its clusters into Disabled state. 3. Join another MS (MS3). | MS1 gives away its 2nd cluster; agent load balancing happens because the hosts are in Up state, and the hosts get the new MGMT server ID. | FUNCTIONAL | | P1 | PASS |
21 | AgentLB_HostState_Down_cluster_Disabled | 1. zone > pod1 > cluster1, cluster2, cluster3, each with at least 1 host; all hosts are in Up state; agent.lb.enabled = true; MS1 manages 2 clusters, MS2 manages 1 cluster. 2. From MS1, unmanage 1 cluster, put it into Disabled state, and make sure its hosts are in Disconnected state. 3. Join MS3 to the cluster. | No agent load balancing will be triggered. | FUNCTIONAL | | P2 | PASS |
22 | AgentLB_Threshold_lessThan_default_value | 1. zone > pod1 > cluster1, cluster2, cluster3, each with at least 1 host; all hosts are in Up state; agent.lb.enabled = true; MS1 manages 2 clusters, MS2 manages 1 cluster. 2. Set agent.load.threshold = 0.45, less than the default threshold value (0.7). 3. Join MS3 to the cluster. | No agent load balancing will be triggered. | SANITY, FUNCTIONAL | | P1 | PASS |
23 | AgentLB_Threshold_moreThan_default_value | 1. zone > pod1 > cluster1, cluster2, cluster3, each with at least 1 host; all hosts are in Up state; agent.lb.enabled = true; MS1 manages 2 clusters, MS2 manages 1 cluster. 2. Set agent.load.threshold = 0.8, more than the default threshold value (0.7). 3. Join MS3 to the cluster. | Agent load balancing will happen and the selected hosts (those in the give-away list) will get the new MGMT server ID. | SANITY, FUNCTIONAL | | P1 | PASS |
24 | AgentLB_Threshold_moreThan_non-default_value | 1. zone > pod1 > cluster1, cluster2, cluster3, each with at least 1 host; all hosts are in Up state; agent.lb.enabled = true; MS1 manages 2 clusters, MS2 manages 1 cluster. 2. In Global configuration, change the agent.load.threshold parameter so the load is more than the configured non-default threshold value (0.2). 3. Join MS3 to the cluster. | Agent load balancing will happen and the selected hosts (those in the give-away list) will get the new MGMT server ID. | FUNCTIONAL | | P1 | PASS |
25 | Direct_agent_loads | 1. Configure direct.agent.load.size = 2. 2. Make sure 3 hosts are considered for rebalancing. 3. Check the number of hosts considered for rebalancing at the same time. | As per the configured load size (2), 2 hosts should be considered (processed) for rebalancing at the same time. | FUNCTIONAL | | P2 | PASS |
26 | AgentLB_sourceMS_down_during_Rebalancing | 1. In a multi-cluster environment, make sure all hosts are in Up state (e.g. cluster1 with 4 hosts) and agent.lb.enabled is set to "true". 2. Join another MGMT server. 3. During agent load balancing, stop the source MGMT server (MS1). | All hosts already transferred to MS2 should get the new MGMT server ID. | FUNCTIONAL | | P2 | PASS |
27 | AgentLB_DestinationMS_Down_during_Rebalancing | 1. In a multi-cluster environment, make sure all hosts are in Up state (e.g. cluster1 with 4 hosts) and agent.lb.enabled is set to "true". 2. Join another MGMT server (MS2). 3. During agent load balancing (from MS1 to MS2), stop the destination MGMT server (MS2). | If MS2 dies, all rebalancing should stop, since it runs on MS2. All hosts already transferred to MS2 should be marked as Disconnected. | SANITY, FUNCTIONAL | | P1 | PASS |
28 | dbServer_UsageServer_on_different_machines | 1. Configure the cluster management setup and make sure the database and usage server are on a separate machine. 2. Deploy VMs on Host2 and generate usage traffic on the guest VMs from one management server (MGMT2) (download some files from the internet into the guest VMs). 3. Check the network usage statistics. 4. Stop the current management server (MGMT2). 5. Check the network statistics. 6. From another management server, generate public traffic and check the network usage statistics. | In all cases usage statistics should be collected and the details should be accurate. | SANITY, FUNCTIONAL | | P1 | FAIL https://issues.apache.org/jira/browse/CLOUDSTACK-81 |
29 | dbServer_UsageServer_on_same_machine | 1. Configure the cluster management setup and make sure the database and usage server are on the same machine. 2. Deploy VMs on Host2 and generate usage traffic on the guest VMs from one management server (MGMT2) (download some files from the internet into the guest VMs). 3. Check the network usage statistics. 4. Stop the current management server (MGMT2). 5. Check the network statistics. 6. From another management server, generate public traffic and check the network usage statistics. | In all cases usage statistics should be collected and the details should be accurate. | SANITY, FUNCTIONAL | | P2 | FAIL https://issues.apache.org/jira/browse/CLOUDSTACK-81 |
30 | Ping_Timeout | 1. Make sure all the MSs are in sync. 2. On one management server, remove or comment out the iptables rule for port 9090 (so that MS-to-MS communication fails with a ping timeout while the other MSs can still reach the common MySQL DB). | Once the ping times out, the corresponding MS (MS2) will be in Disconnected state and the hosts on it will get another management server's ID (MS3's ID). | Negative | | P1 | |
31 | Upgrade_test_3.0.2_to_ASF_4.0 | 1. Prepare the cluster management setup before the upgrade. 2. Perform basic operations (VM lifecycle, snapshots). 3. Make sure host ownership changes from MS1 to MS2 before the upgrade. 4. Upgrade all the Management Servers in the cluster management setup. 5. Check that basic sanity tests pass after the upgrade. | Upgrade should be successful; able to perform basic sanity cases (agent load balancing, direct agent and forward agent functionality). | SANITY, FUNCTIONAL | | P1 | PASS (unable to start cloud-usage server) |
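The batching behaviour in test 25 (direct.agent.load.size) and the redistribution in tests 26 and 27 can be sketched with a small simulation. The function name, the even-share redistribution rule, and the batching model below are assumptions for illustration only; they are not CloudStack's actual ClusteredAgentManager algorithm.

```python
from itertools import islice

def rebalance_plan(assignment, servers, batch_size=2):
    """Illustrative sketch (NOT the real CloudStack algorithm) of how hosts
    could be redistributed evenly across management servers, processed in
    batches of `batch_size` hosts (loosely modelling direct.agent.load.size).
    `assignment` maps host name -> owning management server and is updated
    in place; the returned value is the list of processed batches."""
    target = len(assignment) // len(servers)       # even share per server
    batches = []
    for ms in servers:
        owned = sorted(h for h, s in assignment.items() if s == ms)
        surplus = owned[target:]                   # hosts beyond the even share
        it = iter(surplus)
        while batch := list(islice(it, batch_size)):
            for host in batch:                     # move each host in the batch
                dest = min(servers, key=lambda s: sum(
                    1 for v in assignment.values() if v == s))
                assignment[host] = dest            # least-loaded server wins
            batches.append(batch)                  # one batch processed at a time
    return batches

# Test 26's setup: MS1 owns 4 hosts, then MS2 joins the cluster.
hosts = {f"host{i}": "MS1" for i in range(1, 5)}
plan = rebalance_plan(hosts, ["MS1", "MS2"], batch_size=2)
```

With batch_size = 2 and 4 hosts on MS1, the surplus two hosts are handed to MS2 in a single batch, leaving both servers with an even load of two hosts each.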
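The host transition expected in test 1 (Alert → Disconnected → Connecting → Up during an ownership change) can be checked mechanically when polling the host table. The helper below is a hypothetical sketch: the state names are taken from the table's expected results, not from CloudStack's actual host-status enum.

```python
# Expected transition sequence for a host while ownership moves between
# management servers (from test 1 in the table above).
EXPECTED = ["Alert", "Disconnected", "Connecting", "Up"]

def follows_transition(observed, expected=EXPECTED):
    """True if the `observed` host states pass through `expected` in order.
    Repeated samples between transitions (normal when polling) are allowed,
    because the shared iterator skips past non-matching entries."""
    it = iter(observed)
    return all(state in it for state in expected)

# A polled host-state history during an MS1 -> MS2 ownership change:
history = ["Up", "Alert", "Alert", "Disconnected", "Connecting", "Connecting", "Up"]
```

Here `follows_transition(history)` is true, while a history that never reports Disconnected (e.g. `["Alert", "Connecting", "Up"]`) fails the check.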