Test Plan

This plan describes how the new logging functionality presented in the Status Update Design will be tested.

Test Plan Objectives

This plan will define the various areas that must be tested to validate that the new logging meets its requirements. This information will then be used during the subsequent technical design and development phases to ensure that the testing approaches defined in this plan are possible.

The plan will focus on two areas:

  • Performance Testing
  • Operation Testing

Performance Testing

One of the biggest risks of adding more logging to the broker is the potential performance impact, both in a) creating the messages to log and b) actually logging the messages. Therefore, validating that our changes have a negligible performance impact is key.

Approach

A series of tests must be written that cover all the log messages the broker will generate. To validate any performance changes, the tests should then be run multiple times to generate an average performance figure. The difference between the logging-on and logging-off performance should be within 3%. Based on previous performance testing results this should give us an impact range of 1-5%. It should be noted, however, that such a comparison technique will only ensure that the impact of the logging does not shift between runs. The tests will not address any potential drift in the performance of the broker as a whole, only the difference between logging on and logging off.

The tests should run with interleaved setups, i.e. logging on then logging off. This will help minimise any external factors that could affect the timing. The time spent logging, however, will still represent a very small percentage of the time for a test case, so the tests will remain susceptible to other factors during the test run. A rough sketch of this comparison approach follows.
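The sketch below illustrates one way the interleaved runs, the averaging, and the 3% threshold check could be structured in Java. The runIteration method is a hypothetical placeholder for a full test pass against the broker, not an existing broker API.

    // Sketch of the interleaved performance comparison described above.
    public class LoggingPerformanceComparison
    {
        private static final int ITERATIONS = 10;
        private static final double MAX_IMPACT = 0.03; // the 3% threshold

        public static void main(String[] args) throws Exception
        {
            long totalOn = 0;
            long totalOff = 0;

            // Interleave the logging-on and logging-off runs so that any
            // external drift during the run affects both configurations.
            for (int i = 0; i < ITERATIONS; i++)
            {
                totalOn += runIteration(true);
                totalOff += runIteration(false);
            }

            double averageOn = totalOn / (double) ITERATIONS;
            double averageOff = totalOff / (double) ITERATIONS;
            double impact = (averageOn - averageOff) / averageOff;

            if (impact > MAX_IMPACT)
            {
                throw new AssertionError("Logging impact of " + (impact * 100)
                        + "% exceeds the " + (MAX_IMPACT * 100) + "% threshold");
            }
        }

        // Hypothetical placeholder: execute one full test pass against the
        // broker with logging enabled or disabled and return the elapsed time.
        private static long runIteration(boolean loggingEnabled) throws Exception
        {
            long start = System.currentTimeMillis();
            // ... exercise the broker here ...
            return System.currentTimeMillis() - start;
        }
    }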

Operation Testing

There are two components to testing the functionality of the new logging.

  1. Unit Testing
  2. System Testing

Unit Testing

Unit testing must be completed on each module, and code coverage should be at least 80% (aiming towards 100%) of the codebase. The unit testing, however, will only verify, via a test output logger, that each module performs as expected.
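As an illustration of the test output logger approach, the sketch below shows a unit test asserting against messages captured in memory. The TestLogger class, the module wiring shown in comments, and the CON-1001 message text are illustrative assumptions rather than the actual broker API.

    import java.util.ArrayList;
    import java.util.List;

    import junit.framework.TestCase;

    // Sketch of unit testing a module against an in-memory test output logger.
    public class ModuleLoggingTest extends TestCase
    {
        // A capturing logger: records each message so the test can assert on it.
        static class TestLogger
        {
            final List<String> messages = new ArrayList<String>();

            void log(String message)
            {
                messages.add(message);
            }
        }

        public void testConnectionOpenIsLogged()
        {
            TestLogger logger = new TestLogger();

            // Hypothetical: the module under test would be wired to the
            // capturing logger and exercised here, e.g.
            //   module.setLogger(logger);
            //   module.connectionOpened("client-1");
            logger.log("CON-1001 : Open"); // stands in for the module's output

            assertEquals(1, logger.messages.size());
            assertTrue(logger.messages.get(0).startsWith("CON-1001"));
        }
    }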

System Testing

System testing will need to be performed to validate that the correct log messages appear in the right order when the system is run as a whole. By reusing the test output logger from the unit testing, it will be possible to validate that an InVM broker logs correctly at the appropriate times. To complete system testing, the log4j output from an external broker test run must be examined to ensure it contains the expected output. The alerting tests already perform this sort of external log validation, so this should be easy to replicate.
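For the external broker case, the sketch below shows one way the log4j output file could be scanned to confirm the expected messages appear in the expected order. The file name and the message texts are illustrative assumptions.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Sketch of validating an external broker's log4j output: scan the log
    // file and check that the expected messages appear in the given order.
    public class LogFileValidator
    {
        public static boolean containsInOrder(String logFile, String[] expected)
                throws IOException
        {
            int next = 0; // index of the next expected message
            BufferedReader reader = new BufferedReader(new FileReader(logFile));
            try
            {
                String line;
                while ((line = reader.readLine()) != null && next < expected.length)
                {
                    if (line.contains(expected[next]))
                    {
                        next++;
                    }
                }
            }
            finally
            {
                reader.close();
            }
            return next == expected.length;
        }

        public static void main(String[] args) throws IOException
        {
            String[] expected = { "BRK-1001 : Startup", "BRK-1004 : Ready" };
            System.out.println(containsInOrder("broker.log", expected));
        }
    }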

Additionally, we will want to add tests to understand how the system behaves under certain failure conditions. These will not be required as part of this initial work. However, when we remove log4j we will need to understand how any new logging framework differs in failure situations, e.g. disk full or disk loss (crash, NFS delay).
