...

The order in which nodes join is not relevant. It is possible that the oldest node has an older partition state while the joining node has a higher partition counter. In this case, rebalancing will be triggered by the coordinator and performed from the newly joined node to the existing one (note that this behaviour may change under IEP-4, Baseline topology for caches).

...


Advanced Configuration

 WAL History Size

In the corner case, the WAL needs to be stored only for 1 checkpoint in the past for successful recovery (PersistentStoreConfiguration#walHistSize).

...

By default, the WAL history size is 20 checkpoints, to increase the probability that rebalancing can be done using logical deltas from the WAL.
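A minimal configuration sketch for this setting, assuming the Ignite 2.3+ API where the option lives on DataStorageConfiguration#setWalHistorySize (in earlier versions it is PersistentStoreConfiguration#setWalHistorySize):

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class WalHistoryConfig {
    public static IgniteConfiguration configure() {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Keep WAL history for 20 checkpoints (the default), so that
        // rebalancing can be done using logical deltas from the WAL.
        storageCfg.setWalHistorySize(20);

        return new IgniteConfiguration().setDataStorageConfiguration(storageCfg);
    }
}
```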

Estimating disk space

 

Maximum size used by the WAL work directory: walSegmentSize * walSegments = 640 MB (default).

...

The 1st way is applicable if checkpoints are triggered mostly by the timer.
WAL size = 2 * average load (bytes/sec) * trigger interval (sec) * walHistSize (number of checkpoints)
where the multiplier 2 comes from physical & logical WAL records.

The 2nd way applies when checkpoints are triggered by a segment's max dirty pages percentage. Use the configured maximum sizes of the persisted data regions:
sum(max configured DataRegionConfiguration.maxSize) * 75% = estimated maximum data volume written in 1 checkpoint.
Overall WAL size (before archiving) = 2 * est. data volume * walHistSize = 1.5 * sum(DataRegionConfiguration.maxSize) * walHistSize
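The two estimates above can be checked with a bit of arithmetic. The workload numbers below (10 MB/s average load, 180 s checkpoint interval, a single 4 GiB data region, walHistSize = 20) are illustrative assumptions, not Ignite defaults:

```java
// Back-of-envelope WAL size estimates using the two formulas above.
public class WalSizeEstimate {
    // 1st way: size = 2 * avgLoad(bytes/sec) * checkpointInterval(sec) * walHistSize
    static long byTimer(long avgLoadBytesPerSec, long intervalSec, int walHistSize) {
        return 2L * avgLoadBytesPerSec * intervalSec * walHistSize;
    }

    // 2nd way: size = 2 * (0.75 * sum(maxSize)) * walHistSize
    //               = 1.5 * sum(maxSize) * walHistSize
    static long byDirtyPages(long sumRegionMaxSizeBytes, int walHistSize) {
        return (long) (1.5 * sumRegionMaxSizeBytes * walHistSize);
    }

    public static void main(String[] args) {
        long gib = 1L << 30;
        // 10 MB/s average load, 180 s checkpoint interval, 20 checkpoints kept:
        System.out.println(byTimer(10_000_000L, 180, 20));    // prints 72000000000
        // a single 4 GiB data region, 20 checkpoints kept:
        System.out.println(byDirtyPages(4 * gib, 20) / gib);  // prints 120
    }
}
```

Note how quickly the second bound grows with the data region size: keeping 20 checkpoints of history for even a modest 4 GiB region can require on the order of 120 GiB of WAL before archiving.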

Note that applying the WAL compressor may significantly reduce the archive size.

Configuring input/output

The I/O abstraction determines how native persistence accesses the disk.

Random Access File I/O

This type operates on files through the standard Java interface.

This option was the default before 2.4.

This type of I/O is always used for WAL files.

Async I/O

This option is the default since 2.4. It was introduced to protect the I/O module and the underlying files from the close-by-interrupt problem.
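If the pre-2.4 behaviour is needed, the implementation can be switched back via a JVM system property. A sketch, assuming the IGNITE_USE_ASYNC_FILE_IO_FACTORY property from Ignite's system properties controls this choice:

```java
public class SelectFileIoFactory {
    public static void main(String[] args) {
        // Fall back to the RandomAccessFile-based I/O (the pre-2.4 default);
        // equivalent to passing -DIGNITE_USE_ASYNC_FILE_IO_FACTORY=false
        // on the JVM command line. Must be set before the node starts.
        System.setProperty("IGNITE_USE_ASYNC_FILE_IO_FACTORY", "false");

        System.out.println(System.getProperty("IGNITE_USE_ASYNC_FILE_IO_FACTORY")); // prints false
    }
}
```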

Direct I/O

 

Direct I/O has been implemented as an Ignite plugin since 2.4.

Direct I/O is a feature of the file system whereby file reads and writes go directly from the application to the storage device, bypassing the operating system read and write caches (page cache). Direct I/O is useful for applications (such as databases) that manage their own caches, as Ignite does.

 

Since Ignite 2.4 there is a plugin for enabling Direct I/O mode. The plugin works on Linux with kernel version 2.4.2 or above. It switches the I/O for durable (page) memory to Direct I/O mode. If an incompatible OS or FS is used, the plugin has no effect and falls back to the regular I/O implementation.

 

There is no need for additional configuration of the plugin. It is sufficient to add ignite-direct-io.jar to the classpath. The plugin jar is available under the optional libs folder:
apache-ignite-fabric-x.x.x-bin\libs\optional\ignite-direct-io\ignite-direct-io-x.x.x.jar
However, it is possible to disable the plugin's function through a system property.
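A sketch of disabling the plugin while keeping its jar on the classpath, assuming the IGNITE_DIRECT_IO_ENABLED system property is the switch in question; the property must be set before the node starts:

```java
public class DisableDirectIo {
    public static void main(String[] args) {
        // Leave ignite-direct-io.jar on the classpath but turn the plugin off;
        // equivalent to -DIGNITE_DIRECT_IO_ENABLED=false on the JVM command line.
        System.setProperty("IGNITE_DIRECT_IO_ENABLED", "false");

        System.out.println(System.getProperty("IGNITE_DIRECT_IO_ENABLED")); // prints false
    }
}
```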

 

Enabling Direct I/O mode allows Ignite to bypass the Linux page cache; Linux fully hands page management over to Ignite.