...

  • Goals:
    • Consistent use of logging levels
    • TODO more...

TODO

Testing strategy

Goals:

  • Running all the tests should be fast (it should be exceptional for a test to take more than five seconds).
  • Tests operate at the correct level for their test type (see next section). Specifically:
    • Simple logic bugs cause unit tests to fail.
    • Errors in collaboration logic cause module (integration) tests to fail.
  • Most (if not all) of our integration tests should run in-process.
  • The following third-party tests will also be run against the client:
    • The official JMS 2 TCK.
    • Joram.

Integration tests

For each test, our main choice is which layer to designate as the boundary for our tests. Everything "below" this boundary will be replaced with a mock implementation ("mock" is used here in a broad sense; it does not imply the use of a mocking framework).

Our options are listed below in order of increasing system-boundary extent. As the extent increases, the usual trade-offs apply:

  • More layers of the real JMS client will be exercised, thereby increasing the confidence that our tests give us.
  • We get less control over the tests, both in terms of their inputs and the assertions we can make.
  • Tests become harder to debug.

...

Layer to mock

...

Comments

...

Proton

...

The broker (out-of-process).
We would either write a simple broker or use the existing one.
The client test would more closely resemble a conventional application than a JUnit test.

...

Useful for testing multi-threaded behaviour

Test types

We will write a number of tests of each of the following types.

  • Unit test: a class that attempts to test an individual class. Some unit tests will incidentally test several other closely related classes too.
  • Module test: a class that attempts to test the full JMS client, but using a stub/mock etc. instead of a broker. In most cases, we want to test that:
    • Calling specific JMS methods causes the correct AMQP to be sent, and
    • The client correctly handles AMQP sent by its peer.
  • System test: a class that tests the behaviour of the JMS client when communicating with a real peer such as the Qpid Broker. Assertions will be written mostly in terms of the JMS API. We may re-use some of the existing Qpid system tests.

Module tests

The following diagram shows how module tests will work:

No Format

+================================+
|
| JUnit test
|
| 1. Create an in-process TestAmqpPeer
| 2. Set up TestAmqpPeer behaviour and expectations
| 3. Call some JMS methods
| 
+================================+
   |       |             |
   |       |             |
   |       |            \|/
   |       |    +================================+
   |       |    |
   |       |    | JMS client
   |       |    |
   |       |    +================================+
   |       |                |                 |
   |       |                |                 |
   |       |               \|/                |
   |       |    +======================+      |
   |       |    |       Proton         |      |
   |       |    |                      |      |
   |       |    | Message |  Engine    |      |
   |       |    |         |            |      |
   |       |    +=========+============+      |
   |       |                /|\               |
   |       |                 |                |
   |      \|/               \|/              \|/
   |    +=======================================
   |    |
   |    | In-process Driver
   |    |
   |    +=======================================
   |                      /|\
   |                       |
   |                       | Bytes
   |                       |
  \|/                     \|/
+===================================+
|
| TestAmqpPeer
|
+===================================+
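
To make the flow above concrete, a module test might look roughly like the sketch below. All of the names here (TestAmqpPeer's expectation methods and the way the ConnectionFactory is pointed at the In-process Driver) are illustrative assumptions; the real API will be settled as the work progresses.

No Format

// Illustrative sketch only - the TestAmqpPeer expectation methods and
// createInProcessConnectionFactory() are hypothetical.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;

import org.junit.Test;

public class ConnectionModuleTest
{
    @Test
    public void testConnectionEstablishment() throws Exception
    {
        // 1. Create an in-process TestAmqpPeer.
        TestAmqpPeer peer = new TestAmqpPeer();

        // 2. Set up TestAmqpPeer behaviour and expectations.
        peer.expectSaslAnonymous();
        peer.expectOpen();
        peer.expectClose();

        // 3. Call some JMS methods; the bytes travel through the In-process
        //    Driver to the peer instead of over a real TCP connection.
        ConnectionFactory factory = createInProcessConnectionFactory(peer);
        Connection connection = factory.createConnection();
        connection.start();
        connection.close();

        // Verify the peer saw exactly the frames it expected.
        peer.assertAllExpectationsMet();
    }
}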

...

In-process Driver implementation

...

  • Should use a threading model that is as similar as possible to that of the TCP/IP driver, to make the tests as realistic as possible.
  • Should expose methods that give the test a way to control, for a given connection, the relative order of (1) events in the "application" thread (typically the main test thread) and (2) events in the driver thread. This should reduce the number of race condition bugs in our code. We will probably implement this control using techniques such as the following (see the sketch after this list):
    • Latches, CyclicBarriers etc.
    • Controlling the chunking of the byte production/consumption by both the In-process Driver and the TestAmqpPeer.
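
As a sketch of the kind of thread coordination we have in mind, the example below uses a plain CountDownLatch to hold back a "driver" event until the application (test) thread reaches a known point. It is self-contained and only illustrates the technique; the real In-process Driver would expose hooks of this kind rather than the test creating the thread itself.

No Format

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class DriverOrderingSketch
{
    public static void main(String[] args) throws Exception
    {
        final CountDownLatch applicationReady = new CountDownLatch(1);

        Thread driverThread = new Thread(new Runnable()
        {
            public void run()
            {
                try
                {
                    // The driver-side event (e.g. delivering the peer's response
                    // bytes) is held back until the application thread is ready.
                    if (!applicationReady.await(5, TimeUnit.SECONDS))
                    {
                        throw new IllegalStateException("Application thread never became ready");
                    }
                    System.out.println("driver: delivering response bytes now");
                }
                catch (InterruptedException e)
                {
                    Thread.currentThread().interrupt();
                }
            }
        });
        driverThread.start();

        // The application-side event (e.g. the JMS call that writes a frame).
        System.out.println("application: frame written, now allow the response");
        applicationReady.countDown();

        driverThread.join();
    }
}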

...

TestAmqpPeer implementation

...

Each test will give behaviour to a TestAmqpPeer. This behaviour will mostly be expressed in terms of AMQP frames. Here are some prose examples (we don't yet know how these will be expressed in code; one possible shape is sketched after the examples):

Example 1

  • Expected frame: an Open frame with container-id "xyz"
  • Frame to respond with: the following canned Open frame: ...

Example 2

  • Expected frame: an Open frame with container-id "xyz"
  • Frames to respond with: the following sequence of frames representing a refused connection

Example 3

  • Expected frame: any Open
    Frame to respond with: canned Open frame: ...
  • Expected frame: any Begin
    Frame to respond with: canned Begin frame: ...
  • Expected frame: an Attach matching the following criteria: ...
    Frame to respond with: the following Attach: ...
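
One conceivable way of expressing such pairs in code is sketched below. This is purely illustrative; as noted above, we have not yet decided how expectations will be written, and none of the matcher or response-builder names are real.

No Format

// Fragment only; peer is a TestAmqpPeer and every method name is hypothetical.
peer.expect(openFrame().withContainerId("xyz"))
    .respondWith(cannedOpenFrame());

// A refused connection could be modelled as a sequence of response frames,
// e.g. an Open followed by a Close carrying an error.
peer.expect(openFrame().withContainerId("xyz"))
    .respondWith(cannedOpenFrame(), cannedCloseFrameWithError());

// Matching "any" frame of a given type, as in the third example above.
peer.expect(anyOpenFrame()).respondWith(cannedOpenFrame());
peer.expect(anyBeginFrame()).respondWith(cannedBeginFrame());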

In order to increase our confidence in the AMQP interoperability of the JMS client, we want to avoid using the same Proton stack in the test peer as in the client. Therefore, decoding and encoding will be done using proton-api's Data class. The TestAmqpPeer will minimise its use of other Proton classes.
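
For example, round-tripping a single value through the Data class looks roughly like the sketch below. This is an assumption about how we will drive the proton-api codec (real frame bodies are described lists rather than single strings), and the details may change once the TestAmqpPeer is written.

No Format

import java.nio.ByteBuffer;

import org.apache.qpid.proton.codec.Data;

public class DataCodecSketch
{
    public static void main(String[] args)
    {
        // Encode a single value.
        Data encoder = Data.Factory.create();
        encoder.putString("container-xyz");

        ByteBuffer buffer = ByteBuffer.allocate((int) encoder.encodedSize());
        encoder.encode(buffer);
        buffer.flip();

        // Decode it again with a fresh Data instance.
        Data decoder = Data.Factory.create();
        decoder.decode(buffer);
        decoder.rewind();
        decoder.next();
        System.out.println(decoder.getString()); // prints "container-xyz"
    }
}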

...

SASL

...

Most tests will perform minimal SASL negotiation, simulating simple, successful SASL authentication. We will write specific SASL tests to exercise more complex scenarios.
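
For instance, most module tests might just ask the peer for a canned successful exchange, while the dedicated SASL tests script the individual frames. The helper methods below are hypothetical and only indicate the intent:

No Format

// Fragment only; all method names on the TestAmqpPeer are hypothetical.

// Typical module test: minimal, successful anonymous authentication.
peer.expectSaslAnonymous();

// Dedicated SASL test: script the exchange frame by frame, e.g. a failure.
peer.sendSaslMechanisms("PLAIN", "ANONYMOUS");
peer.expectSaslInit("PLAIN");
peer.sendSaslOutcomeAuthFailed();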

We expect all of the above mocking options to be used at some point. Initially we will favour mocking the broker in-process where possible. This is because:

...

.