IDIEP-28

Author:
Sponsor:
Created: 31-Oct-2018
Status: DRAFT


Motivation

The Apache Ignite cluster rebalancing procedure with persistence enabled currently doesn't utilize network and storage device throughput to its full extent. The rebalancing procedure processes cache data entries one by one, which is not efficient enough for a cluster with persistence enabled.

Description

Apache Ignite needs to support cache rebalancing by transferring partition files using the zero copy algorithm [1], based on an extension of the communication SPI and the Java NIO API.

Process overview

There are two participants in the process of rebalancing data:

  1. demander (receiver of partition files)
  2. supplier (sender of partition files)

The process of ordering cache groups for rebalance remains the same.
The whole process is described below in terms of rebalancing a single cache group:

  1. The GridDhtPreloaderAssignments is created for the cache group (of type Map<ClusterNode, GridDhtPartitionDemandMessage>)

CommunicationSpi

To benefit from zero copy file transfer, we must delegate the file transfer to FileChannel#transferTo(long, long, java.nio.channels.WritableByteChannel) [2], because the fast path of the transferTo method is only executed if the destination channel inherits from an internal JDK class.

  • The CommunicationSpi needs to support pipe connections between two nodes (see the sketch after this list);
    • The WritableByteChannel needs to be accessible on the supplier side;
    • The ReadableByteChannel needs to be read on the demander side;
  • The CommunicationListener must be extended to respond to new incoming pipe connections;
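
A minimal sketch of how both ends of such a pipe could use the NIO channels, assuming a hypothetical SocketChannel-based pipe obtained from the extended CommunicationSpi (the class and method names below are illustrative, not the actual SPI):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PartitionFilePipe {
    /**
     * Supplier side: pushes the whole partition file into the pipe with zero copy.
     * The pipe is expected to be a plain SocketChannel, so FileChannel#transferTo
     * can take the sendfile fast path.
     */
    public static void send(Path partFile, WritableByteChannel pipe) throws IOException {
        try (FileChannel src = FileChannel.open(partFile, StandardOpenOption.READ)) {
            long pos = 0;
            long size = src.size();

            // transferTo may transfer fewer bytes than requested; loop until done.
            while (pos < size)
                pos += src.transferTo(pos, size - pos, pipe);
        }
    }

    /** Demander side: drains the pipe directly into the local partition file. */
    public static void receive(ReadableByteChannel pipe, Path partFile, long size) throws IOException {
        try (FileChannel dest = FileChannel.open(partFile,
            StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            long pos = 0;

            while (pos < size)
                pos += dest.transferFrom(pipe, pos, size - pos);
        }
    }
}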

Partition transmission

The cache partition file transfer over the network must be done in chunks, with validation of each received piece of data on the demander side.

  • The new layer over the cache partition file must support direct use of the FileChannel#transferTo method over the CommunicationSpi pipe connection;
  • The process manager must support transferring the cache partition file in chunks of a predefined size (a multiple of the page size), one by one (see the sketch after this list);
  • It must be possible to limit the connection bandwidth of the cache partition file transfer at runtime;
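
A minimal sketch of such a chunked sender with a crude, runtime-adjustable bandwidth cap, assuming the same hypothetical pipe channel as above; the chunk size, page size and rate-limiting strategy are illustrative, and per-chunk validation on the demander side (e.g. a CRC sent with each chunk) is omitted:

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class ChunkedPartitionSender {
    /** Chunk size must be a multiple of the page size (4 KB pages assumed here). */
    private static final int PAGE_SIZE = 4096;
    private static final long CHUNK_SIZE = 256L * PAGE_SIZE;

    /** Max bytes per second; volatile so it can be adjusted at runtime. */
    private volatile long bytesPerSec = 50L * 1024 * 1024;

    public void setBandwidth(long bytesPerSec) {
        this.bytesPerSec = bytesPerSec;
    }

    public void send(FileChannel src, WritableByteChannel pipe) throws IOException, InterruptedException {
        long size = src.size();

        for (long pos = 0; pos < size; ) {
            long started = System.nanoTime();
            long chunk = Math.min(CHUNK_SIZE, size - pos);

            // Transfer exactly one chunk before re-checking the bandwidth limit.
            long sent = 0;
            while (sent < chunk)
                sent += src.transferTo(pos + sent, chunk - sent, pipe);

            pos += sent;

            // Sleep if the chunk went out faster than the configured bandwidth allows.
            long expectedNanos = sent * 1_000_000_000L / bytesPerSec;
            long elapsedNanos = System.nanoTime() - started;

            if (elapsedNanos < expectedNanos)
                Thread.sleep((expectedNanos - elapsedNanos) / 1_000_000);
        }
    }
}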

Checkpointing on supplier

When the supplier node receives the cache partition file demand request, it must prepare the cache partition file and provide it for transfer over the network. The copy-on-write [3] technique is assumed to be used to guarantee data consistency during chunk transfer.

The checkpointing process description on the supplier node:

  1. The node starts and waits for the checkpoint process to be finished;
  2. The node creates an empty temporary cache partition file with the .tmp postfix in the same cache persistence directory;
  3. The whole cache partition file is divided into virtual rebalance chunks of a predefined size (a multiple of the PageMemory page size);
  4. The checkpoint thread determines the appropriate rebalance file chunk and tries to flush the dirty page to the cache partition file:
    1. If the rebalance chunk has already been transferred, just flush the dirty page to the file;
    2. If the rebalance chunk has not been transferred yet (see the sketch after this list):
      1. Write this chunk to the temporary cache partition file;
      2. Flush the dirty page;
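
A minimal sketch of that copy-on-write decision in the checkpoint thread, under the assumption that transferred chunks are tracked in a bitmap and that the original chunk content is preserved in the .tmp file before the dirty page overwrites it; all class, field and method names here are illustrative:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.BitSet;

public class CopyOnWriteCheckpointer {
    private final FileChannel partFile;       // Original cache partition file being rebalanced.
    private final FileChannel tmpPartFile;    // Temporary .tmp file in the same directory.
    private final BitSet transferredChunks;   // Chunks already sent to the demander.
    private final BitSet copiedChunks = new BitSet(); // Chunks already preserved in the .tmp file.
    private final long chunkSize;             // Multiple of the PageMemory page size.

    public CopyOnWriteCheckpointer(FileChannel partFile, FileChannel tmpPartFile,
        BitSet transferredChunks, long chunkSize) {
        this.partFile = partFile;
        this.tmpPartFile = tmpPartFile;
        this.transferredChunks = transferredChunks;
        this.chunkSize = chunkSize;
    }

    /** Called by the checkpoint thread for each dirty page of the partition. */
    public void writeDirtyPage(ByteBuffer page, long pageOff) throws IOException {
        int chunkIdx = (int)(pageOff / chunkSize);

        synchronized (transferredChunks) {
            // If the chunk has not been transferred yet, keep its original content
            // in the temporary file before the dirty page overwrites it.
            if (!transferredChunks.get(chunkIdx) && !copiedChunks.get(chunkIdx)) {
                copyChunkToTemp(chunkIdx);
                copiedChunks.set(chunkIdx);
            }
        }

        // In both cases the dirty page is flushed to the original partition file.
        partFile.write(page, pageOff);
    }

    private void copyChunkToTemp(long chunkIdx) throws IOException {
        long off = chunkIdx * chunkSize;
        long len = Math.min(chunkSize, partFile.size() - off);

        // Copy the original (not yet transferred) chunk into the .tmp file.
        partFile.transferTo(off, len, tmpPartFile.position(off));
    }
}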

Record temp-WAL and recovery


Risks and Assumptions

A few notes can be mentioned:

  • If the operating system does not support zero copy, sending a file with FileChannel#transferTo might fail or yield worse performance.
    For example, sending a large file doesn't work well on Windows;
  • SSL must be disabled to take advantage of zero copy file transmission;

Discussion Links

// Links to discussions on the devlist, if applicable.

Reference Links

  1. Zero Copy I: User-Mode Perspective – https://www.linuxjournal.com/article/6345
  2. Example: Efficient data transfer through zero copy – https://www.ibm.com/developerworks/library/j-zerocopy/index.html
  3. Copy-on-write – https://en.wikipedia.org/wiki/Copy-on-write

Tickets

// Links or report with relevant JIRA tickets.
