h2. SEDA Component

The {{seda:}} component provides asynchronous SEDA behavior, so that messages are exchanged on a [BlockingQueue|http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/BlockingQueue.html] and consumers are invoked in a separate thread from the producer.

Note that queues are only visible within a _single_ [CamelContext]. If you want to communicate across {{CamelContext}} instances (for example, communicating between Web applications), see the [VM] component.

This component does not implement any kind of persistence or recovery if the VM terminates while messages are yet to be processed. If you need persistence, reliability or distributed SEDA, try using either [JMS] or [ActiveMQ].

{tip:title=Synchronous}
The [Direct] component provides synchronous invocation of any consumers when a producer sends a message exchange.
{tip}

h3. URI format

{code}
seda:someName[?options]
{code}

Where *someName* can be any string that uniquely identifies the endpoint within the current [CamelContext].
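For example, two routes can share the same queue name within one CamelContext, one acting as producer and one as consumer. The endpoint and bean names below are purely illustrative:

{code}
// the producer route hands the message over to the queue and continues
from("direct:start").to("seda:orders");

// the consumer route picks the message up in a separate thread
from("seda:orders").to("bean:orderService");
{code}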

You can append query options to the URI in the following format: {{?option=value&option=value&...}}

h3. Options
{div:class=confluenceTableSmall}
|| Name || Since || Default || Description ||
| {{size}} | |  | The maximum capacity of the SEDA queue (i.e., the number of messages it can hold). The default value in Camel 2.2 or older is {{1000}}. From Camel 2.3 onwards, the size is unbounded by default. *Notice:* if you use this option, the first endpoint created with a given queue name determines the size. To make sure all endpoints use the same size, configure the {{size}} option on all of them, or at least on the first endpoint being created. From *Camel 2.11* onwards, a validation takes place: if mixed queue sizes are used for the same queue name, Camel detects this and fails to create the endpoint. |
| {{concurrentConsumers}} | | {{1}} | Number of concurrent threads processing exchanges. |
| {{waitForTaskToComplete}} | | {{IfReplyExpected}} | Option to specify whether the caller should wait for the async task to complete or not before continuing. The following three options are supported: {{Always}}, {{Never}} or {{IfReplyExpected}}. The first two values are self-explanatory. The last value, {{IfReplyExpected}}, will only wait if the message is [Request Reply] based. The default option is {{IfReplyExpected}}. See more information about [Async] messaging. |
| {{timeout}} | | {{30000}} | Timeout (in milliseconds) before a SEDA producer will stop waiting for an asynchronous task to complete. See {{waitForTaskToComplete}} and [Async] for more details. In *Camel 2.2* you can now disable timeout by using 0 or a negative value. | 
| {{multipleConsumers}} | *2.2* | {{false}} | Specifies whether multiple consumers are allowed. If enabled, you can use [SEDA] for [Publish-Subscribe|http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern] messaging. That is, you can send a message to the SEDA queue and have each consumer receive a copy of the message. When enabled, this option should be specified on every consumer endpoint. |
| {{limitConcurrentConsumers}} | *2.3* | {{true}} | Whether to limit the number of {{concurrentConsumers}} to the maximum of {{500}}. By default, an exception will be thrown if a SEDA endpoint is configured with a greater number. You can disable that check by turning this option off. |
| {{blockWhenFull}} | *2.9* | {{false}} | Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted.  By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. |
| {{queueSize}} | *2.9* |  | *Component only:* The maximum default size (i.e., the number of messages it can hold) of the SEDA queue. This option is used if {{size}} is not in use. |
| {{pollTimeout}} | *2.9.3* | {{1000}} | _Consumer only_ -- The timeout used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. |
| {{purgeWhenStopping}} | *2.11.1* | {{false}} | Whether to purge the task queue when stopping the consumer/route. This allows the route to stop faster, as any pending messages on the queue are discarded. |
| {{queue}} | *2.12.0* | {{null}} | Defines the {{BlockingQueue}} instance to be used by the SEDA endpoint. |
| {{queueFactory}} | *2.12.0* | {{null}} | Defines the {{BlockingQueueFactory}} used to create the queue for the SEDA endpoint. |
| {{failIfNoConsumers}} | *2.12.0* | {{false}} | Whether the producer should fail by throwing an exception when sending to a SEDA queue with no active consumers. |
{div}
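As a sketch of how a few of these options can be combined (the queue name, sizes and bean are illustrative only), note that {{size}} is configured consistently on both endpoints for the same queue name:

{code}
// producer side: bounded queue of 200, block instead of throwing when the queue is full
from("direct:in").to("seda:orders?size=200&blockWhenFull=true");

// consumer side: same size, processed by 5 concurrent consumer threads
from("seda:orders?size=200&concurrentConsumers=5").to("bean:orderService");
{code}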

h3. Choosing BlockingQueue implementation

*Available as of Camel 2.12*

By default, the SEDA component always instantiates a {{LinkedBlockingQueue}}, but you can use a different implementation. You can reference your own {{BlockingQueue}} implementation; in this case, the {{size}} option is not used:
{code}
<bean id="arrayQueue" class="java.util.ArrayBlockingQueue">
<constructor-arg index="0" value="10" ><!-- size -->
<constructor-arg index="1" value="true" ><!-- fairness -->
</bean>
<!-- ... and later -->
<from>seda:array?queue=#arrayQueue</from>
{code}

Or you can reference a {{BlockingQueueFactory}} implementation; three implementations are provided: {{LinkedBlockingQueueFactory}}, {{ArrayBlockingQueueFactory}} and {{PriorityBlockingQueueFactory}}:
{code}
<bean id="priorityQueueFactory" class="org.apache.camel.component.seda.PriorityBlockingQueueFactory">
<property name="comparator">
<bean class="org.apache.camel.demo.MyExchangeComparator" />
</property>
</bean>
<!-- ... and later -->
<from>seda:priority?queueFactory=#priorityQueueFactory&size=100</from>

Use of Request Reply

The Seda component supports using Request Reply, where the caller will wait for the Async route to complete. For instance:

Code Block
{code}
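The comparator referenced above is your own class. A minimal sketch of what {{org.apache.camel.demo.MyExchangeComparator}} could look like, assuming the producer sets a numeric {{priority}} header (both the header name and the ordering are illustrative):

{code}
import java.util.Comparator;
import org.apache.camel.Exchange;

public class MyExchangeComparator implements Comparator<Exchange> {

    @Override
    public int compare(Exchange o1, Exchange o2) {
        // a lower "priority" header value is taken from the queue first
        return Integer.compare(getPriority(o1), getPriority(o2));
    }

    private int getPriority(Exchange exchange) {
        Integer priority = exchange.getIn().getHeader("priority", Integer.class);
        return priority != null ? priority : Integer.MAX_VALUE;
    }
}
{code}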

h3. Use of Request Reply
The [Seda] component supports using [Request Reply], where the caller will wait for the [Async] route to complete. For instance:
{code}
from("mina:tcp://0.0.0.0:9876?textline=true&sync=true").to("seda:input");

from("seda:input").to("bean:processInput").to("bean:createResponse");
{code}

In the route above, we have a TCP listener on port 9876 that accepts incoming requests. The request is routed to the {{seda:input}} queue. As it is a [Request Reply] message, we wait for the response. When the consumer on the {{seda:input}} queue is complete, it copies the response to the original message response.
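For instance, a client could exercise this route with a {{ProducerTemplate}} and wait for the reply (host and payload are illustrative, and camel-mina must be on the classpath):

{code}
// send a text line over TCP and block until the reply from the seda route comes back
String reply = template.requestBody(
        "mina:tcp://localhost:9876?textline=true&sync=true", "Hello", String.class);
{code}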

{note:title=Until 2.2: Works only with 2 endpoints}

Using [Request Reply] over [SEDA] or [VM] only works with 2 endpoints. You *cannot* chain endpoints by sending to A -> B -> C etc., only between A -> B. The reason is that the implementation logic is fairly simple; supporting 3+ endpoints would make the logic much more complex in order to handle ordering and notification between the waiting threads properly. This has been improved in *Camel 2.3* onwards, which allows you to chain as many endpoints as you like.

{note}

h3. Concurrent consumers
By default, the SEDA endpoint uses a single consumer thread, but you can configure it to use concurrent consumer threads. So instead of thread pools you can use:
{code}
from("seda:stageName?concurrentConsumers=5").process(...)
{code}

As for the difference between the two, note that a thread pool can increase/shrink dynamically at runtime depending on load, whereas the number of concurrent consumers is always fixed.

h3. Thread pools

Be aware that adding a thread pool to a SEDA endpoint by doing something like:

{code}
from("seda:stageName").thread(5).process(...)
{code}

can wind up with two {{BlockingQueues}}: one from the SEDA endpoint, and one from the workqueue of the thread pool, which may not be what you want. Instead, you might wish to configure a [Direct] endpoint with a thread pool, which can process messages both synchronously and asynchronously. For example:
{code}
from("direct:stageName").thread(5).process(...)
{code}

You can also directly configure the number of threads that process messages on a SEDA endpoint using the {{concurrentConsumers}} option.

h3. Sample

In the route below we use the SEDA queue to send the request to this asynchronous queue, to be able to send a fire-and-forget message for further processing in another thread, and return a constant reply in this thread to the original caller.

{snippet:id=e1|lang=java|url=camel/trunk/camel-core/src/test/java/org/apache/camel/component/seda/SedaAsyncRouteTest.java}
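The snippet above is included from the Camel source code; a minimal sketch of such a route (endpoint names are illustrative) could look like:

{code}
// hand the message over to the seda queue for fire-and-forget processing
// and return a constant "OK" reply from this thread
from("direct:start")
    .to("seda:next")
    .transform(constant("OK"));

// the actual processing happens on another thread
from("seda:next").to("mock:result");
{code}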

Here we send a Hello World message and expect the reply to be OK.


{snippet:id=e2|lang=java|url=camel/trunk/camel-core/src/test/java/org/apache/camel/component/seda/SedaAsyncRouteTest.java}
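A sketch of the kind of test code the snippet refers to, using a {{ProducerTemplate}} (names are illustrative):

{code}
// request/reply on the route: the constant "OK" is returned to the calling thread
Object out = template.requestBody("direct:start", "Hello World");
assertEquals("OK", out);
{code}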

The "Hello World" message will be consumed from the SEDA queue from another thread for further processing. Since this is from a unit test, it will be sent to a mock endpoint where we can do assertions in the unit test.

h3. Using multipleConsumers

*Available as of Camel 2.2*

In this example we have defined two consumers and registered them as Spring beans.


{snippet:id=e1|lang=xml|url=camel/trunk/components/camel-spring/src/test/resources/org/apache/camel/spring/example/fooEventRoute.xml}
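The same idea in the Java DSL would be a sketch along these lines (note that both consumer endpoints specify the option):

{code}
// each consumer endpoint enables multipleConsumers, so each gets its own copy of the message
from("seda:foo?multipleConsumers=true").to("mock:a");
from("seda:foo?multipleConsumers=true").to("mock:b");
{code}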

Since we have specified {{multipleConsumers=true}} on the seda {{foo}} endpoint, we can have those two consumers receive their own copy of the message, as a kind of pub-sub style messaging. As the beans are part of a unit test they simply send the message to a mock endpoint, but notice how we can use {{@Consume}} to consume from the seda queue.


{snippet:id=e1|lang=java|url=camel/trunk/components/camel-spring/src/test/java/org/apache/camel/spring/example/FooEventConsumer.java}
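A sketch of such a consumer bean using {{@Consume}} (the mock endpoint and method name are illustrative):

{code}
import org.apache.camel.Consume;
import org.apache.camel.EndpointInject;
import org.apache.camel.ProducerTemplate;

public class FooEventConsumer {

    @EndpointInject(uri = "mock:result")
    private ProducerTemplate destination;

    @Consume(uri = "seda:foo?multipleConsumers=true")
    public void doSomething(String body) {
        // forward the received event to the mock endpoint for assertions
        destination.sendBody(body);
    }
}
{code}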

h3. Extracting queue information

If needed, information such as queue size, etc. can be obtained without using JMX in this fashion:

{code}
SedaEndpoint seda = context.getEndpoint("seda:xxxx", SedaEndpoint.class);
int size = seda.getExchanges().size();
{code}
{include:Endpoint See Also}
- [VM]
- [Disruptor]
- [Direct]
- [Async]