Test Plan

This plan covers testing of the new logging functionality presented in the Status Update Design.

Test Objectives

This plan will define the various areas that must be tested to validate that the new logging meets its requirements. This information will then be used during the subsequent technical design and development phases to ensure that the testing approaches defined in this plan are possible.

The plan will focus on two areas:

  • Performance Testing
  • Operational Testing

Performance Testing

One of the biggest risks of adding more logging to the broker is the potential performance impact, both in terms of a) creating the messages to log and b) actually logging them. Validating that our changes have a negligible performance impact is therefore key.

Approach

A series of tests must be written that cover all the log messages the broker will generate. To validate any performance changes, the tests should be run multiple times to generate an average performance figure. The impact of the logging additions must be negligible. The performance test suite has shown that there can be up to 5% variance between test runs, so if this testing is to be performed as an automated system test we must be careful to ensure that we do not end up with test failures due to unusually high variance. Testing should first be performed to establish a baseline from which we can determine what amount of variance occurs between two test runs; this variance can then be used to determine the failure criteria. It should be noted, however, that such a comparison technique will only ensure that the impact of the logging does not shift between runs. The test will not address any potential drift in the performance of the broker as a whole, only the difference between logging on and logging off.

The tests should run with interleaved setups, i.e. logging on then logging off. This will help minimise any external factors that could impact the timing. The time spent logging, however, will still represent a very small percentage of the time for a test case, and as such the test will be more susceptible to external factors during the test run.
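To make the baseline and failure criteria concrete, a minimal sketch of the comparison logic is given below in the style of the existing JUnit test suites. The setStatusLogging and runTestPass hooks are hypothetical placeholders for whatever the technical design provides, and the 5% tolerance mirrors the variance already observed in the performance suite.

import junit.framework.TestCase;

public class LoggingOverheadTest extends TestCase
{
    private static final int PASSES = 10;
    private static final double TOLERANCE = 0.05; // 5% variance seen between runs

    public void testLoggingOverheadIsNegligible() throws Exception
    {
        long onTotal = 0;
        long offTotal = 0;
        // Interleave the two setups to minimise external timing factors
        for (int i = 0; i < PASSES; i++)
        {
            onTotal += timePass(true);
            offTotal += timePass(false);
        }
        double ratio = (double) onTotal / (double) offTotal;
        assertTrue("Logging overhead " + ratio + " exceeds tolerance",
                   ratio <= 1.0 + TOLERANCE);
    }

    private long timePass(boolean loggingOn) throws Exception
    {
        setStatusLogging(loggingOn); // hypothetical hook to toggle status logging
        long start = System.nanoTime();
        runTestPass();               // hypothetical workload exercising all log points
        return System.nanoTime() - start;
    }

    private void setStatusLogging(boolean on) { /* hypothetical */ }

    private void runTestPass() { /* hypothetical */ }
}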

Operational Testing

There are two components to testing the functionality of the new logging.

  1. Unit Testing
  2. System Testing

Unit Testing

Unit testing must be completed on each module, and code coverage should be at least 80% (aiming towards 100%) of the code base. Unit testing, however, will only verify that each module performs as expected with the use of a test output logger.
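As an illustration of the approach, the sketch below uses a hypothetical in-memory TestLogger to stand in for the test output logger; in the real tests the module under test would be wired to the logger rather than the message being recorded directly.

import java.util.ArrayList;
import java.util.List;
import junit.framework.TestCase;

public class ModuleLoggingTest extends TestCase
{
    // Hypothetical stand-in for the test output logger
    static class TestLogger
    {
        final List<String> messages = new ArrayList<String>();

        public void message(String message)
        {
            messages.add(message);
        }
    }

    public void testReadyMessageLogged()
    {
        TestLogger logger = new TestLogger();

        // In the real test the module under test logs this via the TestLogger
        logger.message("BRK-1004 : Ready");

        assertEquals("Expected exactly one message", 1, logger.messages.size());
        assertTrue(logger.messages.get(0).startsWith("BRK-1004"));
    }
}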

System Testing

System testing will need to be performed to validate that the correct log messages appear in the right order when run as a full system. By reusing the test output logger from the unit testing it will be possible to validate that an InVM broker correctly logs at the appropriate time. To complete system testing, the log4j output from an external broker test run must be examined to ensure it contains the expected output. The alerting tests already perform this sort of external log validation, so this should be easy to replicate.
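A sketch of the external validation follows, assuming the broker's log4j output has been captured to a file and that each expected message id occurs exactly once. For example, assertLoggedInOrder(logFile, "BRK-1006", "BRK-1007", "BRK-1001", "BRK-1002", "BRK-1004") would validate the startup sequence.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class LogFileChecker
{
    // Assert that the given message ids appear in the log file in the given order
    public static void assertLoggedInOrder(String logFile, String... expectedIds) throws Exception
    {
        List<String> found = new ArrayList<String>();
        BufferedReader reader = new BufferedReader(new FileReader(logFile));
        try
        {
            String line;
            while ((line = reader.readLine()) != null)
            {
                for (String id : expectedIds)
                {
                    if (line.contains(id))
                    {
                        found.add(id);
                    }
                }
            }
        }
        finally
        {
            reader.close();
        }
        if (!found.equals(Arrays.asList(expectedIds)))
        {
            throw new AssertionError("Expected " + Arrays.asList(expectedIds)
                                     + " but found " + found);
        }
    }
}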

Additionally, we will want to add tests to understand how the system behaves under certain failure conditions. This will not be required as part of this initial work. However, when we remove log4j we need to understand what the differences will be with any new logging framework in failure situations, i.e. disk full and disk loss (crash, NFS delay). The difference with a disk loss (crash, NFS delay) is that the write IO will not necessarily respond immediately, while a disk full notification will usually respond immediately.

Performance Test Plan

The performance of the broker can be measured by the JMS Client by ensuring that all the updated modules are executed in the test run. Timing how long it takes to start up, create a new JMS Session, subscribe, and then cleanly shut everything down will ensure that all of the new log messages have the opportunity to be logged. The overhead of broker startup and shutdown will have a large effect on the test run; however, this should make runs easier to compare, as startup times should be very similar.
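The timed sequence might look like the sketch below; the ConnectionFactory and Topic are assumed to come from the harness's JNDI configuration, and broker startup and shutdown are assumed to be handled around this call.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

public class StartupTimer
{
    // Time a full connect/subscribe/close cycle so every new log point can fire
    public static long timeRun(ConnectionFactory factory, Topic topic) throws Exception
    {
        long start = System.nanoTime();

        Connection connection = factory.createConnection();  // CON-1001 : Open
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); // CHN-1001
        MessageConsumer consumer = session.createConsumer(topic); // queue, binding and subscription messages
        connection.start();                                  // CHN-1002 : Flow Started

        consumer.close();                                    // SUB-1002 : Close
        connection.close();                                  // CHN-1003, CON-1002

        return System.nanoTime() - start;
    }
}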

Operational Test Plan

The Functional Specification lists the various status log messages that the broker will be configured to produce. This section details how testing will be carried out for each of these log statements.

Broker

The Broker messages can be split into two categories: startup and shutdown messages. The testing here focuses on the Broker messages and ignores the fact that during system testing of the broker other log messages will also be printed.

Startup

As all the startup messages are printed on startup, they cannot be triggered individually. By monitoring the logging output on startup we can verify that messages BRK-1001, BRK-1002, BRK-1004, BRK-1006 and BRK-1007 are logged. It is expected that the output would be very similar to this:

2009-07-09 15:50:20 +0100 MESSAGE BRK-1006 : Using configuration : build/etc/config.xml
2009-07-09 15:50:20 +0100 MESSAGE BRK-1007 : Using logging configuration : build/etc/log4j.xml
2009-07-09 15:50:20 +0100 MESSAGE BRK-1001 : Startup : Version: 0.6 Build: <svn revision>
2009-07-09 15:50:20 +0100 MESSAGE BRK-1002 : Starting : Listening on TCP port 5672
2009-07-09 15:50:20 +0100 MESSAGE BRK-1004 : Ready
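This sequence can be asserted with patterns that tolerate the timestamp and the variable values (paths, version, port), as in this sketch:

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class BrokerStartupPatterns
{
    // Matched in order against successive lines of the startup output
    public static final List<Pattern> EXPECTED = Arrays.asList(
            Pattern.compile("BRK-1006 : Using configuration : .*"),
            Pattern.compile("BRK-1007 : Using logging configuration : .*"),
            Pattern.compile("BRK-1001 : Startup : Version: .*"),
            Pattern.compile("BRK-1002 : Starting : Listening on TCP port \\d+"),
            Pattern.compile("BRK-1004 : Ready"));
}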

Additionally, testing should be performed with SSL enabled to verify that the additional BRK-1002 message is logged.

2009-07-09 15:50:20 +0100 MESSAGE BRK-1002 : Starting : Listening on TCP/SSL port 8672.

Shutdown

On shutdown the broker will log a BRK-1003 for each interface that has been started, so if we have started both the TCP and the TCP/SSL listeners then we would expect both to be printed out before the Stopped message.

2009-07-09 15:50:20 +0100 MESSAGE BRK-1003 : Shutting down : TCP port 5672
2009-07-09 15:50:20 +0100 MESSAGE BRK-1003 : Shutting down : TCP/SSL port 8672
2009-07-09 15:50:20 +0100 MESSAGE BRK-1005 : Stopped

ManagementConsole

The management console also has a startup and shutdown phase like the broker.

Startup

The Management console uses two ports so both of these will be logged at startup. Two test runs must be performed to validate that the 'SSL Keystore' output is only present when correctly enabled.

2009-07-09 15:50:20 +0100 MESSAGE MNG-1001 : Startup
2009-07-09 15:50:20 +0100 MESSAGE MNG-1002 : Starting : RMI Registry : Listening on port 8999
2009-07-09 15:50:20 +0100 MESSAGE MNG-1002 : Starting : RMI ConnectorServer : Listening on port 9099
2009-07-09 15:50:20 +0100 MESSAGE MNG-1006 : Using SSL Keystore : test_resources/ssl/keystore.jks
2009-07-09 15:50:20 +0100 MESSAGE MNG-1004 : Ready

Shutdown

During shutdown the two listeners created during startup will be shut down. The use of SSL will not provide any additional shutdown message to verify.

2009-07-09 15:50:20 +0100 MESSAGE MNG-1003 : Shutting down : RMI Registry : Listening on port 8999
2009-07-09 15:50:20 +0100 MESSAGE MNG-1003 : Shutting down : RMI ConnectorServer : Listening on port 9099
2009-07-09 15:50:20 +0100 MESSAGE MNG-1005 : Stopped

VirtualHost

Virtualhosts cannot be programmatically created, so the log messages will have to be found during the startup/shutdown of the broker.

Startup

During startup the following VHT message will be logged as the Virtualhost is created.

2009-07-09 15:50:20 +0100 MESSAGE [ vh:(test) ] VHT-1001 : Created : test

Shutdown

On shutdown the Virtualhost will only log that it has been closed.

2009-07-09 15:50:20 +0100 MESSAGE [ vh:(test) ] VHT-1002 : Closed

MessageStore

The MessageStore details will be logged as part of the Virtualhost startup and shutdown.

Startup

Aside from the easy-to-test create (MST-1001) and store location (MST-1002) messages, a persistent store such as DerbyDB will also need to be used for testing so that the recovery messages MST-1004 to MST-1006 can be tested.
It is expected that on recovery start MST-1004 will be logged without a queue name. Then, as each queue is recovered, a start (MST-1004), count (MST-1005) and complete (MST-1006) will be logged. Only when all queues have been recovered will a final complete (MST-1006) be logged. This would give a sequence such as:

2009-07-09 15:50:20 +0100 MESSAGE [ vh:(test) ] MST-1001 : Created
2009-07-09 15:50:20 +0100 MESSAGE [ vh:(test) ] MST-1001 : Created : DerbyMessageStore
2009-07-09 15:50:20 +0100 MESSAGE [ vh:(test) ] MST-1002 : Store location : ./derbyDB
2009-07-09 15:50:20 +0100 MESSAGE [ vh:(test) ] MST-1004 : Recovery Start
2009-07-09 15:50:20 +0100 MESSAGE [ vh:(test) ] MST-1004 : Recovery Start [: qu(myQueue)]
2009-07-09 15:50:20 +0100 MESSAGE [ vh:(test) ] MST-1005 : Recovered 100 messages for qu(myQueue)
2009-07-09 15:50:20 +0100 MESSAGE [ vh:(test) ] MST-1006 : Recovery Complete [: qu(myQueue)]
2009-07-09 15:50:20 +0100 MESSAGE [ vh:(test) ] MST-1006 : Recovery Complete

Shutdown

On shutdown the MessageStore will only log that it has been closed.

2009-07-09 15:50:20 +0100 MESSAGE [ vh:(test) ] MST-1003 : Closed

Connection

New connections can easily be tested in isolation by connecting to the running broker. The connection open and close log messages should be presented in response to the new connection and the closure of the connection.

2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/test) ] CON-1001 : Open : Client ID client1 : Protocol Version : 0-9
2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/test) ] CON-1002 : Close
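The client-side trigger is straightforward, as in this sketch (the ConnectionFactory is assumed to come from the harness's JNDI configuration):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;

public class ConnectionLoggingTest
{
    public static void openAndClose(ConnectionFactory factory) throws Exception
    {
        Connection connection = factory.createConnection("guest", "guest"); // CON-1001 : Open
        connection.close();                                                 // CON-1002 : Close
    }
}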

Channel

As with Connection creation, Channel creation can be tested in isolation, although a channel will be created along with the Connection. A new channel can be created from the Java client by creating a new JMS Session. The current Java client will start all Channels on the JMS Connection flowed, which means that an initial 'CHN-1002 : Flow Stopped' message will be logged. Only when the JMS Connection is started will the Channel be unflowed and a 'CHN-1002 : Flow Started' logged.

Additional testing should be done to ensure that CHN-1002 messages are logged when the client exceeds its prefetch count and the broker flows the client.

2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/test)/ch:2 ] CHN-1001 : Create : Prefetch 500
2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/test)/ch:2 ] CHN-1002 : Flow Stopped
2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/test)/ch:2 ] CHN-1002 : Flow Started
2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/test)/ch:2 ] CHN-1003 : Close
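A sketch of the client-side steps that should produce this sequence (again assuming a JNDI-configured ConnectionFactory):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Session;

public class ChannelFlowTest
{
    public static void createAndStart(ConnectionFactory factory) throws Exception
    {
        // The connection is deliberately left un-started, so the new channel
        // is created flowed: expect CHN-1001 then CHN-1002 : Flow Stopped
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        connection.start(); // expect CHN-1002 : Flow Started

        session.close();    // expect CHN-1003 : Close
        connection.close();
    }
}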

Queue

There are a number of properties that can be set on a Queue, and as a result each needs to be tested in isolation to validate that the correct log message is generated based on this template:

2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/test)/ch:2/qu(myqueue) ] \\ 
                          QUE-1001 : Create : [AutoDelete] [Durable|Transient] [Priority:<levels>] Owner:<name>

Property combinations to test:

  • Durable | Transient
  • AutoDelete
  • Priority

This results in 8 tests, as each combination of the AutoDelete and Priority options (neither, either or both) is tested against both Durable and Transient.
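The eight combinations can be enumerated mechanically, as in the sketch below; declareQueue is a hypothetical helper that declares the queue and captures the resulting QUE-1001 message for validation.

public class QueueCreateLoggingTest
{
    public static void testAllCombinations() throws Exception
    {
        boolean[] flags = { false, true };
        for (boolean durable : flags)
        {
            for (boolean autoDelete : flags)
            {
                for (boolean priority : flags)
                {
                    // Expect [AutoDelete], [Durable|Transient] and
                    // [Priority:<levels>] to appear only when set
                    declareQueue("myqueue", durable, autoDelete, priority ? 10 : 0);
                }
            }
        }
    }

    private static void declareQueue(String name, boolean durable,
                                     boolean autoDelete, int priorities)
    {
        /* hypothetical helper */
    }
}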

A simple Deleted message is logged when the queue is finally deleted.

2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/test)/ch:2/qu(myqueue) ] QUE-1002 : Deleted

Exchange

As with Queue logging, the Exchange has the Durable option that must be tested independently. During broker startup the default exchanges will be created and so will be logged; testing should ensure that these log messages are indeed correctly logged.
However, as Exchanges can be programmatically declared, this should also be tested.

2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/test)/ch:2/ex(amq.direct) ] EXH-1001 : Create : Durable Type:amq.direct Name:amq.direct
2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/test)/ch:2/ex(amq.direct) ] EXH-1001 : Create : Type:amq.direct Name:amq.direct
2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/test)/ch:2/ex(amq.direct) ] EXH-1002 : Deleted

Binding

Bindings will be created via JMS in the Java client alongside Queue and Subscriber creation. The Qpid Java Client uses the Arguments field to ensure exclusive queues do not get filled with messages when a selector is in use. This adds another dimension to the testing: part of the system testing needs to include validation of the message prefix (the content between '[ ]'), and testing needs to cover the binding of Queues to different exchanges with a variety of routing keys.

This gives us the following dimensions of testing that needs to be performed:

  • Exchange
  • Arguments
  • RoutingKey

If we take bindings to amq.direct and amq.topic and vary the routing key (for Queues the default is the queue name when used with the Java Client), we have four test cases that need to be run on both Exclusive and Non-Exclusive queues.

2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/test)/ch:2/ex(amq.direct)/qu(myQueue)/rk(myQueue) ] \\ 
                          BND-1001 : Create [: Arguments : <key=value>]
2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/test)/ch:2/ex(amq.direct)/qu(myQueue)/rk(myQueue) ] \\
                          BND-1002 : Delete
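The matrix can be enumerated as in the sketch below; the routing key values are illustrative only, and bindAndAssert is a hypothetical helper that creates the binding and validates the BND-1001 prefix and arguments.

public class BindingLoggingTest
{
    public static void testBindingMatrix() throws Exception
    {
        String[] exchanges = { "amq.direct", "amq.topic" };
        String[] routingKeys = { "myQueue", "my.topic.key" };
        boolean[] exclusivity = { false, true };

        for (String exchange : exchanges)
        {
            for (String routingKey : routingKeys)
            {
                for (boolean exclusive : exclusivity)
                {
                    bindAndAssert(exchange, "myQueue", routingKey, exclusive);
                }
            }
        }
    }

    private static void bindAndAssert(String exchange, String queue,
                                      String routingKey, boolean exclusive)
    {
        /* hypothetical helper */
    }
}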

Subscription

There are two types of subscription, durable and non-durable. The durable subscription can be tested from the Java Client by creating a JMS Durable Topic Subscription. Additionally both types of subscription can have an additional argument for the JMS selector. The non-durable subscription type can also operate as a JMS Queue Browser which has an additional 'autoclose' argument.

This makes 5 different possible Create (SUB-1001) log messages.

2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/)/ch:2/sub:1:qu(myqueue) ] SUB-1001 : Create : Durable [Arguments : <key=value>]
2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/)/ch:2/sub:1:qu(myqueue) ] SUB-1001 : Create [: Arguments : <key=value>]
2009-07-09 15:50:20 +0100 MESSAGE [ con:1(guest@127.0.0.1/)/ch:2/sub:1:qu(myqueue) ] SUB-1002 : Close
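The client-side actions expected to drive these variants are sketched below; the session, topic and queue are assumed to be created by the harness, and the subscription name and selector value are illustrative only.

import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;

public class SubscriptionLoggingTest
{
    public static void createSubscriptions(Session session, Topic topic, Queue queue) throws Exception
    {
        TopicSubscriber durableSub =
                session.createDurableSubscriber(topic, "mySubscription"); // SUB-1001 : Create : Durable

        MessageConsumer selectingSub =
                session.createConsumer(queue, "colour = 'red'");          // SUB-1001 with selector Arguments

        QueueBrowser browser = session.createBrowser(queue);              // SUB-1001 as a Queue Browser (autoclose)
    }
}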

Test Specification

The next step is to provide an enumerated list of tests for completion.
Each test in the list will include:

  • Functional description of what is being tested.
  • Input(actions and/or data)
  • Expected outputs:
    • ... that will cause failure
    • ... that can safely be ignored.
