...

A tree of manageable categories, all of which extend the interface ConfiguredObject, underpins the Broker.  A ConfiguredObject has zero or more attributes, zero or more children and zero or more context variable name/value pairs.  A ConfiguredObject may be persisted to a configuration store so that its state can be restored when the Broker is restarted.
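
A minimal sketch of the shape of this interface follows; the method names are simplified and illustrative, not the exact Broker-J API.

import java.util.Collection;
import java.util.Map;

// Illustrative sketch only - the real ConfiguredObject interface in
// Broker-J is considerably richer than this.
public interface ConfiguredObject
{
    String getName();

    // Zero or more attributes, keyed by attribute name.
    Map<String, Object> getAttributes();

    // Zero or more children, e.g. a Broker's Ports or VirtualHostNodes.
    <C extends ConfiguredObject> Collection<C> getChildren(Class<C> childCategory);

    // Zero or more context variable name/value pairs.
    Map<String, String> getContext();

    // Whether the object is written to the configuration store and so
    // survives a Broker restart.
    boolean isDurable();
}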

The manageable categories are arranged into a tree structure.  SystemConfig is at the root and has a single descendant, Broker.  The Broker itself has children: Port, AuthenticationProvider and VirtualHostNode, amongst others.  VirtualHostNode has a child, VirtualHost.  The children of the VirtualHost are the categories directly involved in messaging, such as Queue.  The diagram below illustrates the category hierarchy; many categories are elided for brevity.

...

ConfiguredObject categories such as SystemConfig and VirtualHostNode take responsibility for managing the storage of their children.  This is marked up in the model with the @ManagedObject annotation (#managesChildren).  These objects use a DurableConfigurationStore to persist their durable children to storage.  ConfigurationChangeListener instances trigger an update of the store each time a ConfiguredObject is changed.
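
The wiring can be sketched as follows; all of the types here are simplified stand-ins for the real Broker-J classes of the same names and are only intended to show the pattern.

// Simplified stand-ins for the real Broker-J types.
interface ConfiguredObjectSketch
{
    boolean isDurable();
}

interface DurableConfigurationStoreSketch
{
    // Persist the object's current state so it survives a restart.
    void update(ConfiguredObjectSketch object);
}

interface ConfigurationChangeListenerSketch
{
    void attributeSet(ConfiguredObjectSketch object, String attributeName,
                      Object oldValue, Object newValue);
}

// A listener that writes every attribute change on a durable object
// through to the configuration store.
class StoreUpdatingListener implements ConfigurationChangeListenerSketch
{
    private final DurableConfigurationStoreSketch _store;

    StoreUpdatingListener(DurableConfigurationStoreSketch store)
    {
        _store = store;
    }

    @Override
    public void attributeSet(ConfiguredObjectSketch object, String attributeName,
                             Object oldValue, Object newValue)
    {
        if (object.isDurable())
        {
            _store.update(object);
        }
    }
}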

AMQP Transport Layer

At a high level, the transport layer

  1. accepts bytes from the wire and passes them to the protocol engines.
  2. pulls bytes from the protocol engines and pushes them down the wire.    

There are two AMQP Transport Layers in Broker-J.

...

We'll consider the two layers separately below.

The transport is responsible for TLS.  The TLS configuration is owned by the Port, Keystore and Truststore model objects.  If so configured, it is the transport's responsibility to manage the TLS connection.

TCP/IP

This layer is implemented from first principles using Java NIO.

...

It uses a Selector to monitor all connected sockets (and the accepting socket) for work.  Once work is detected (i.e. the selector returns), the connection work is serviced by threads drawn from an IO thread pool.  An eat-what-you-kill pattern is used to reduce dispatch latency.  It works in the following way: the worker thread that performed the select adds all the ready connections to the work queue, adds the selector task back to the work queue, and then starts to process the work queue itself (this is the eat-what-you-kill part).  This avoids the dispatch latency that would otherwise arise in handing work from the thread that performed the select to another thread from the IO thread pool.  The Selector is the responsibility of the SelectorThread class.
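
A minimal sketch of the pattern follows, assuming each SelectionKey's attachment is the Runnable that services that connection; the real SelectorThread is considerably more involved.

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch of eat-what-you-kill scheduling; not the actual
// SelectorThread implementation.
class EatWhatYouKillLoop
{
    private final Selector _selector;
    private final BlockingQueue<Runnable> _workQueue = new LinkedBlockingQueue<>();

    EatWhatYouKillLoop(Selector selector)
    {
        _selector = selector;
    }

    // Each IO thread repeatedly takes a task from the work queue.  One of
    // those tasks is the select task below.
    void runIoThread() throws InterruptedException
    {
        while (true)
        {
            _workQueue.take().run();
        }
    }

    private void selectTask()
    {
        try
        {
            _selector.select();
            for (SelectionKey key : _selector.selectedKeys())
            {
                // Queue the work for each ready connection (assumes the
                // attachment is that connection's work task)...
                _workQueue.add((Runnable) key.attachment());
            }
            _selector.selectedKeys().clear();
            // ...then re-queue the select task so another thread can take
            // over selecting, while this thread returns to the queue and
            // services the connections it has just found itself.
            _workQueue.add(this::selectTask);
        }
        catch (IOException e)
        {
            throw new RuntimeException(e);
        }
    }
}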

Connections to peers are represented by NonBlockingConnection instances.  The SelectorThread causes each NonBlockingConnection that requires IO work to be executed (NonBlockingConnection#doWork) on a thread from an IO thread pool (owned by NetworkConnectionScheduler).  On each work cycle, the NonBlockingConnection first goes through a write phase, where pending work is pulled from the protocol engine, producing bytes for the wire in the process.  If all the pending work is sent completely (i.e. the outbound network buffer is not exhausted), the next phase is a read phase, where bytes are consumed from the channel and fed into the protocol engine.  Finally there is a further write phase to send any new bytes resulting from the input just read.  The write/read/write sequence is organised so that the Broker first evacuates as much state from memory as possible (thus freeing memory) before reading new bytes from the wire.
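
The cycle can be sketched like this; the helper methods are hypothetical, and the real NonBlockingConnection#doWork additionally manages buffers, delegates and partial writes.

// Illustrative sketch of the write/read/write work cycle; the helper
// methods are hypothetical.
abstract class WorkCycleSketch
{
    // Returns true when all pending output was flushed (i.e. the outbound
    // network buffer was not exhausted).
    abstract boolean writePendingBytes();

    // Reads available bytes from the channel and feeds them to the
    // protocol engine.
    abstract void readFromChannelIntoEngine();

    void doWork()
    {
        // Write first: evacuate as much state from memory as possible
        // before taking on new input.
        if (writePendingBytes())
        {
            // Read only if the previous write completed fully.
            readFromChannelIntoEngine();

            // Write again to send anything the engine produced in response
            // to the bytes just read.
            writePendingBytes();
        }
    }
}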

In addition to a NonBlockingConnection being scheduled when bytes arrive for it from the wire (signalled by the Selector), the Broker may need to awaken it at other times.  For instance, if a message arrives on a queue that is suitable for a consumer, the NonBlockingConnection associated with that consumer must be awoken.  The NetworkConnectionScheduler#schedule method does this by adding the connection to the work queue.  It is wired to the protocol engine via a listener.
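
Illustratively, the wiring might look like the following; apart from the correspondence to NetworkConnectionScheduler#schedule, all names are hypothetical.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical wiring: when the protocol engine signals that work is
// available for a connection (e.g. a message became deliverable to one of
// its consumers), the scheduler adds the connection to the work queue.
class NonBlockingConnectionSketch { /* elided */ }

interface WorkListener
{
    void workAvailable(NonBlockingConnectionSketch connection);
}

class NetworkConnectionSchedulerSketch implements WorkListener
{
    private final Queue<NonBlockingConnectionSketch> _workQueue =
            new ConcurrentLinkedQueue<>();

    // Stands in for NetworkConnectionScheduler#schedule.
    void schedule(NonBlockingConnectionSketch connection)
    {
        _workQueue.add(connection);
    }

    @Override
    public void workAvailable(NonBlockingConnectionSketch connection)
    {
        schedule(connection);
    }
}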

...

There is a NetworkConnectionScheduler associated with each AMQP Port and each VirtualHost.  When a connection is made to the Broker, the initial exchanges between peer and Broker (exchange of protocol headers, authentication etc) take place on the thread pool of the Port's NetworkConnectionScheduler.  Once the connection has indicated which VirtualHost it wishes to connect to, responsibility for the NonBlockingConnection shifts to the NetworkConnectionScheduler of the VirtualHost.

TLS

The TCP/IP transport layer responds to the TLS configuration provided by the Port, Keystore and Truststore model objects.  It does this using NonBlockingConnectionDelegates.

  • The NonBlockingConnectionUndecidedDelegate is used to implement the plain/TLS port unification feature.  It sniffs the initial incoming bytes to determine whether the peer is trying to negotiate a TLS connection (see the sketch after this list).
  • NonBlockingConnectionTLSDelegate is responsible for TLS connections.  It feeds the bytes through an SSLEngine.
  • NonBlockingConnectionPlainDelegate is responsible for plain (non-TLS) connections, passing bytes between the channel and the protocol engine without transformation.

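The sniff itself can be illustrated simply: a TLS connection opens with a handshake record whose first byte is the content type 22 (0x16).  A minimal sketch follows; it is not the actual delegate logic.

import java.nio.ByteBuffer;

// Minimal sketch of plain/TLS sniffing for port unification; the real
// NonBlockingConnectionUndecidedDelegate is more careful.
final class TlsSniffer
{
    private static final int TLS_HANDSHAKE_CONTENT_TYPE = 0x16;

    // Peeks at the first byte of the initial network read.  A TLS
    // connection begins with a handshake record (content type 22);
    // anything else is treated as a plain connection.
    static boolean looksLikeTls(ByteBuffer initialBytes)
    {
        if (!initialBytes.hasRemaining())
        {
            throw new IllegalArgumentException("need at least one byte to sniff");
        }
        return (initialBytes.get(initialBytes.position()) & 0xFF)
                == TLS_HANDSHAKE_CONTENT_TYPE;
    }
}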

Websocket

The websocket transport layer is implemented using Jetty's websocket module.

AMQP Protocol Engines

A ProtocolEngine accepts bytes from the transport (ProtocolEngine#received).
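
Matching the transport layer's two responsibilities described above, the contract between transport and engine can be sketched as below.  Only ProtocolEngine#received is named on this page; the outbound method is a hypothetical stand-in for however the transport pulls bytes from the engine.

import java.nio.ByteBuffer;

// Illustrative contract between transport and protocol engine; not the
// actual Broker-J interface.
interface ProtocolEngineSketch
{
    // The transport pushes bytes read from the wire into the engine.
    void received(ByteBuffer msg);

    // The transport pulls bytes destined for the wire from the engine
    // (hypothetical name).
    ByteBuffer pendingOutput();
}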

Queues

HTTP, REST and Web Management

...