
...

  1. As a consumer, if the path is a file, it simply reads the file; otherwise, if it represents a directory, it scans all the files under that path satisfying the configured pattern. All the files under that directory must be of the same type.
  2. As a producer, if at least one split strategy is defined, the path is considered a directory, and under that directory the producer creates a different file per split, named using the configured UuidGenerator.
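
A minimal sketch of both modes (host, port, and paths below are hypothetical):

Code Block
java
// Consumer: /data/input is a directory, so all *.txt files under it are read
from("hdfs2://localhost:9000/data/input?pattern=*.txt")
    .to("log:hdfs-read");

// Producer: a split strategy is defined, so /data/output is treated as a
// directory and a new file is created per split
from("direct:write")
    .to("hdfs2://localhost:9000/data/output?splitStrategy=MESSAGES:100");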


Note

When consuming from hdfs2 in normal mode, a file is split into chunks, producing a message per chunk. You can configure the size of the chunks using the chunkSize option. If you want to read from HDFS and write to a regular file using the file component, you can use fileExist=Append to append each of the chunks together.
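
For instance, a route that reads an HDFS file in chunks and glues them back together into a single local file might look like this (a sketch; host and file names are hypothetical):

Code Block
java
// Each 4096-byte chunk of the HDFS file arrives as a separate message;
// fileExist=Append on the file producer appends the chunks back together
from("hdfs2://localhost:9000/data/big-file.txt?chunkSize=4096")
    .to("file:target/out?fileName=big-file.txt&fileExist=Append");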

Options

  • overwrite (default: true): Whether the file can be overwritten.
  • append (default: false): Append to an existing file. Notice that not all HDFS file systems support the append option.
  • bufferSize (default: 4096): The buffer size used by HDFS.
  • replication (default: 3): The HDFS replication factor.
  • blockSize (default: 67108864): The size of the HDFS blocks.
  • fileType (default: NORMAL_FILE): Can also be SEQUENCE_FILE, MAP_FILE, ARRAY_FILE, or BLOOMMAP_FILE. See Hadoop.
  • fileSystemType (default: HDFS): Can be LOCAL for the local filesystem.
  • keyType (default: NULL): The type of the key in case of sequence or map files. See below.
  • valueType (default: BYTES): The type of the value in case of sequence or map files. See below.
  • splitStrategy (default: none): A string describing the strategy for splitting the file based on different criteria. See below.
  • openedSuffix (default: opened): When a file is opened for reading/writing, it is renamed with this suffix to avoid reading it during the writing phase.
  • readSuffix (default: read): Once the file has been read, it is renamed with this suffix to avoid reading it again.
  • initialDelay (default: 1000): For the consumer, how long to wait (in milliseconds) before starting to scan the directory.
  • delay (default: 0): The interval (in milliseconds) between directory scans.
  • pattern (default: *): The pattern used for scanning the directory.
  • chunkSize (default: 4096): When reading a normal file, it is split into chunks, producing a message per chunk.
  • connectOnStartup (default: true): Camel 2.9.3/2.10.1: Whether to connect to the HDFS file system on starting the producer/consumer. If false, the connection is created on demand. Notice that HDFS may take up to 15 minutes to establish a connection, as it has a hardcoded 45 x 20 second redelivery. Setting this option to false allows your application to start up without blocking for up to 15 minutes.
  • owner (default: none): The file owner must match this owner for the consumer to pick up the file; otherwise, the file is skipped.
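
For example, a producer endpoint that defers the HDFS connection until the first exchange (a sketch; host and path are hypothetical):

Code Block
java
// connectOnStartup=false lets the Camel context start without blocking
// while HDFS connectivity is established on demand
from("direct:toHdfs")
    .to("hdfs2://namenode:9000/data/out?connectOnStartup=false");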

KeyType and ValueType

  • NULL means that the key or the value is absent.
  • BYTE for writing a byte; the java Byte class is mapped to a BYTE.
  • BYTES for writing a sequence of bytes; it maps the java ByteBuffer class.
  • INT for writing java integers.
  • FLOAT for writing java floats.
  • LONG for writing java longs.
  • DOUBLE for writing java doubles.
  • TEXT for writing java strings.
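
For example, to write TEXT key/value pairs into an HDFS sequence file (a sketch; host and path are hypothetical):

Code Block
java
// keyType and valueType select the Hadoop Writable types used in the file
from("direct:pairs")
    .to("hdfs2://localhost:9000/data/pairs?fileType=SEQUENCE_FILE&keyType=TEXT&valueType=TEXT");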

...

Note

Note that this strategy currently requires either setting an IDLE value or setting the HdfsConstants.HDFS_CLOSE header to false in order to use the BYTES/MESSAGES configuration; otherwise, the file will be closed with each message.

For example:

Code Block
hdfs2://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5
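
The header-based alternative mentioned in the note above could look like this (a sketch; the route and byte threshold are hypothetical):

Code Block
java
// Setting HdfsConstants.HDFS_CLOSE to false keeps the stream open,
// so the BYTES split strategy decides when a new file is started
from("direct:stream")
    .setHeader(HdfsConstants.HDFS_CLOSE, constant(false))
    .to("hdfs2://localhost/tmp/simple-file?splitStrategy=BYTES:5242880");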

...

The following headers are supported by this component:

Producer only

  • CamelFileName: Camel 2.13: Specifies the name of the file to write (relative to the endpoint path). The name can be a String or an Expression object. Only relevant when not using a split strategy.
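
For example (a sketch; the file name is hypothetical):

Code Block
java
// Exchange.FILE_NAME resolves to the "CamelFileName" header
from("direct:named")
    .setHeader(Exchange.FILE_NAME, constant("report.txt"))
    .to("hdfs2://localhost:9000/data/out");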

Controlling when to close the file stream

...

Using this component in OSGi

This component is fully functional in an OSGi environment, but there are some quirks related to the mechanism Hadoop 2.x uses to discover different org.apache.hadoop.fs.FileSystem implementations. Hadoop 2.x uses java.util.ServiceLoader, which looks for /META-INF/services/org.apache.hadoop.fs.FileSystem files defining the available filesystem types and implementations. These resources are not available when running inside OSGi.

As with the camel-hdfs component, it requires some actions from the user. Hadoop uses the thread context class loader in order to load resources. Usually, the thread context class loader will be the bundle class loader of the bundle that contains the routes, so the default configuration files need to be visible from that bundle class loader. A typical way to deal with this is to keep a copy of core-default.xml (and, e.g., hdfs-default.xml) in your bundle root. That file can be found in hadoop-common.jar.

Using this component with manually defined routes

There are two options:

  1. Package the /META-INF/services/org.apache.hadoop.fs.FileSystem resource with the bundle that defines the routes. This resource should list all the required Hadoop 2.x filesystem implementations (see the sketch after the code block below).
  2. Provide boilerplate initialization code that populates the internal static cache inside the org.apache.hadoop.fs.FileSystem class:
Code Block
java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
conf.setClass("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class, FileSystem.class);
conf.setClass("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class, FileSystem.class);
...
// FileSystem.get(URI, Configuration) populates the static FileSystem cache
FileSystem.get(URI.create("file:///"), conf);
FileSystem.get(URI.create("hdfs://localhost:9000/"), conf);
...
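
For option 1, the resource is a plain text file listing the implementation classes, one per line. A sketch covering only the two filesystems used above:

Code Block
org.apache.hadoop.fs.LocalFileSystem
org.apache.hadoop.hdfs.DistributedFileSystem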

Using this component with Blueprint container

Two options:

  1. Package the /META-INF/services/org.apache.hadoop.fs.FileSystem resource with the bundle that contains the blueprint definition.
  2. Add the following to the blueprint definition file:
Code Block
xml
<bean id="hdfsOsgiHelper" class="org.apache.camel.component.hdfs2.HdfsOsgiHelper">
   <argument>
      <map>
         <entry key="file:///" value="org.apache.hadoop.fs.LocalFileSystem"  />
         <entry key="hdfs://localhost:9000/" value="org.apache.hadoop.hdfs.DistributedFileSystem" />
         ...
      </map>
   </argument>
</bean>

<bean id="hdfs2" class="org.apache.camel.component.hdfs2.HdfsComponent" depends-on="hdfsOsgiHelper" />

This way, Hadoop 2.x will have the correct mapping of URI schemes to filesystem implementations.