...

  • A complex index rebuild procedure that requires the development of additional crash recovery guarantees. It starts immediately when the partition file is fully received from the supplier node. If the node crashes in the middle of the index rebuild process, the index will be in an inconsistent state at the next node startup. To avoid this, a new index-undo WAL record must be logged during the rebuild and used on node start to remove previously added index records.

...

Historical rebalance

...

After the partition is received, the historical rebalance must be initiated to load the remaining cache updates.

...

Catch-up temporary WAL

The swapped temporary storage will log all cache updates to a temporary WAL storage (one per partition), so that they can later be applied to the corresponding partition file.

Preload entries from loaded partition file

The demander node will use a preloaded partition file as a new source of cache data entries to load.

Disadvantages:

  • The approach requires a new temporary FilePageStore to be initialized. It must be created as part of a temporary cache group or in a separate temporary data region so that the existing machinery for iterating over the full partition file can be reused.

Proposed Changes (Hot swap with historical rebalance)

Process Overview

The following node roles take part in the rebalancing process:

  • Demander (receiver of partition files)
  • Supplier (sender of partition files)

The whole process is described in terms of rebalancing a single partition file of a cache group. All the other partitions are rebalanced one by one.

  1. NODE_JOIN event occurs and the blocking PME starts;
    1. The Demander decides which partitions must be loaded. All the desired partitions have MOVING state;
    2. The Demander initiates a new checkpoint process;
      1. Under the checkpoint write-lock it swaps cache data storage with the temporary one for each partition of the given set;
      2. The temporary cache data storage tracks the partition counter as usual (on each cache operation);
      3. Waits until the checkpoint begin future completes;
  2. The Demander sends a request to the Supplier with the previously prepared set of cache groups and partition files;
  3. The Supplier receives a request and starts a new local checkpoint process;
    1. Creates a temporary file with .delta postfix (for each partition file e.g. part-0.bin.delta);
    2. Under the checkpoint write lock, fixes the expected partition file size (at the moment of the checkpoint end);
    3. Waits until the checkpoint begin future completes;
    4. Starts the copy process of the partition file to the Demander;
      1. Opens the partition file in read-only mode;
      2. Starts sending partition file (with any concurrent writes) by chunks of predefined size;
    5. Asynchronously writes each page to the partition file and the same page to the corresponding file with the .delta postfix;
    6. When the partition file has been sent, it starts sending the corresponding .delta file (see the supplier-side copy sketch after this list);
  4. The Demander listens for new file sending attempts from the Supplier;
  5. The Demander receives the partition files (one by one);
  6. The Demander reads the corresponding partition .delta file by chunks and applies them to the received partition file;
  7. When the Demander has received the whole cache partition file:
    1. Swaps the temporary cache data storage with the original one on the next checkpoint (under write lock);
    2. When the partition has been swapped, it starts the index rebuild procedure over the given partition file;
    3. Starts historical rebalance for the given partition file;
  8. The Supplier deletes all temporary files;
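
Steps 3.4-3.6 on the Supplier side can be illustrated with the following minimal sketch. The chunk size, the class name, and the way the output channel to the Demander is obtained are assumptions of the example, not the actual Ignite implementation.

Code Block
languagejava
titleSupplierCopySketch.java (illustrative)
collapsetrue
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

/** Illustrative supplier-side copy loop for a single partition file (steps 3.4-3.6). */
public class SupplierCopySketch {
    /** Size of a single transfer chunk. */
    private static final int CHUNK_SIZE = 1024 * 1024;

    /**
     * @param partFile Partition file, e.g. part-0.bin (may receive concurrent checkpoint writes).
     * @param fixedSize Partition file size fixed under the checkpoint write lock.
     * @param out Channel opened to the Demander node.
     */
    public void copyPartition(Path partFile, long fixedSize, WritableByteChannel out) throws IOException {
        // Steps 3.4.1-3.4.2: open the partition file in read-only mode and send it by chunks.
        transfer(partFile, fixedSize, out);

        // Step 3.6: when the partition file has been sent, send the accumulated .delta file
        // that collected the pages updated concurrently with the transfer (step 3.5).
        Path deltaFile = Paths.get(partFile.toString() + ".delta");

        transfer(deltaFile, Files.size(deltaFile), out);
    }

    /** Sends {@code size} bytes of {@code file} to {@code out} in chunks of a predefined size. */
    private void transfer(Path file, long size, WritableByteChannel out) throws IOException {
        try (FileChannel src = FileChannel.open(file, StandardOpenOption.READ)) {
            long pos = 0;

            while (pos < size)
                pos += src.transferTo(pos, Math.min(CHUNK_SIZE, size - pos), out);
        }
    }
}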

Components

In terms of a high-level abstraction, Apache Ignite must support the features described below.

File transfer between nodes

The node partition preloader machinery downloads cache partition files from the cluster nodes which own the desired partitions (the zero copy algorithm [1] is assumed to be used by default). To achieve this, the file transmission process must be implemented in Apache Ignite over the Communication SPI.
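
As an illustration of the receive side of such a transfer, the following sketch stores the incoming stream directly into the destination partition file. The framing (the file size is known from the initial meta information) and the class name are assumptions of the example, not the proposed API.

Code Block
languagejava
titlePartitionFileReceiver.java (illustrative)
collapsetrue
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Illustrative receive side: stores the incoming stream directly into the destination partition file. */
public class PartitionFileReceiver {
    /**
     * @param in Channel opened by the supplier node.
     * @param dst Destination file, e.g. part-0.bin in a temporary work directory.
     * @param size Expected file size, received with the initial meta information.
     */
    public void receive(ReadableByteChannel in, Path dst, long size) throws IOException {
        try (FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            long pos = 0;

            while (pos < size) {
                long n = out.transferFrom(in, pos, size - pos);

                if (n <= 0)
                    throw new IOException("Connection was closed prematurely at position: " + pos);

                pos += n;
            }
        }
    }
}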

CommunicationSpi

The Communication SPI must support:

  • opening channel connections to a remote node to an arbitrary topic (GridTopic is used) with initial meta information;
  • listening incoming channel connections and handling them by registered handlers;
  • an arbitrary set of channel parameters on connection handshake (some initial Message assumed to be used);
API
Code Block
languagejava
themeConfluence
titleCommunicationListenerEx.java
collapsetrue
public interface CommunicationListenerEx<T extends Serializable> extends EventListener {
    /**
     * @param nodeId Remote node id.
     * @param initMsg Init channel message.
     * @param channel Locally created channel endpoint.
     */
    public void onChannelOpened(UUID nodeId, Message initMsg, Channel channel);
}
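
A hypothetical listener implementation might look as follows; only the CommunicationListenerEx interface above comes from the proposal, the handler body is purely illustrative.

Code Block
languagejava
titlePartitionUploadListener.java (illustrative)
collapsetrue
import java.io.IOException;
import java.io.Serializable;
import java.nio.ByteBuffer;
import java.nio.channels.Channel;
import java.nio.channels.ReadableByteChannel;
import java.util.UUID;

/** Illustrative handler registered for a rebalance topic; simply drains the opened channel. */
public class PartitionUploadListener implements CommunicationListenerEx<Serializable> {
    /** {@inheritDoc} */
    @Override public void onChannelOpened(UUID nodeId, Message initMsg, Channel channel) {
        ByteBuffer buf = ByteBuffer.allocateDirect(8192);

        try {
            ReadableByteChannel in = (ReadableByteChannel)channel;

            // A real handler would route the received bytes according to the initial message
            // (e.g. cache group id and partition id); here the channel is simply drained.
            while (in.read(buf) != -1)
                buf.clear();
        }
        catch (IOException e) {
            throw new IllegalStateException("Failed to process a channel opened by node: " + nodeId, e);
        }
    }
}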

GridIoManager

The IO manager must support:

  • different approaches of incoming data handling: CHUNK (read channel into ByteBuffer), FILE (zero-copy approach);
  • sending and receiving data by chunks of a predefined size, storing intermediate results;
  • reestablishing the connection between nodes if an error occurs and continuing file sending/receiving;
  • limiting connection bandwidth at runtime;
API
Code Block
languagejava
themeConfluence
titleTransmissionHandler.java
collapsetrue
public interface TransmissionHandler {
    /**
     * @param nodeId Remote node id from which request has been received.
     * @param err The error of the fail handling process.
     */
    public void onException(UUID nodeId, Throwable err);

    /**
     * @param nodeId Remote node id from which request has been received.
     * @param fileMeta File meta info.
     * @return Absolute pathname denoting a file.
     */
    public String filePath(UUID nodeId, TransmissionMeta fileMeta);

    /**
     * <em>Chunk handler</em> represents by itself the way of input data stream processing.
     * It accepts within each chunk a {@link ByteBuffer} with data from input for further processing.
     *
     * @param nodeId Remote node id from which request has been received.
     * @param initMeta Initial handler meta info.
     * @return Instance of chunk handler to process incoming data by chunks.
     */
    public Consumer<ByteBuffer> chunkHandler(UUID nodeId, TransmissionMeta initMeta);

    /**
     * <em>File handler</em> represents by itself the way of input data stream processing. All the data will
     * be processed under the hood using zero-copy transferring algorithm and only start file processing and
     * the end of processing will be provided.
     *
     * @param nodeId Remote node id from which request has been received.
     * @param initMeta Initial handler meta info.
     * @return Instance of read handler to process incoming data in the {@link FileChannel} manner.
     */
    public Consumer<File> fileHandler(UUID nodeId, TransmissionMeta initMeta);
}
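
A hypothetical handler implementation for the Demander side could look like the sketch below. The directory layout and the assumption that TransmissionMeta exposes the original file name are illustrative only.

Code Block
languagejava
titlePartitionFileHandler.java (illustrative)
collapsetrue
import java.io.File;
import java.nio.ByteBuffer;
import java.nio.file.Paths;
import java.util.UUID;
import java.util.function.Consumer;

/** Illustrative handler that stores received partition files into a local work directory. */
public class PartitionFileHandler implements TransmissionHandler {
    /** Directory where received partition files are placed. */
    private final File workDir;

    public PartitionFileHandler(File workDir) {
        this.workDir = workDir;
    }

    /** {@inheritDoc} */
    @Override public void onException(UUID nodeId, Throwable err) {
        // A real handler would fail the corresponding rebalance future here.
        err.printStackTrace();
    }

    /** {@inheritDoc} */
    @Override public String filePath(UUID nodeId, TransmissionMeta fileMeta) {
        // Assumes TransmissionMeta exposes the original file name (e.g. part-0.bin).
        return Paths.get(workDir.getAbsolutePath(), fileMeta.name()).toString();
    }

    /** {@inheritDoc} */
    @Override public Consumer<ByteBuffer> chunkHandler(UUID nodeId, TransmissionMeta initMeta) {
        // The CHUNK policy is not used for partition preloading in this sketch.
        throw new UnsupportedOperationException("CHUNK policy is not supported by this handler.");
    }

    /** {@inheritDoc} */
    @Override public Consumer<File> fileHandler(UUID nodeId, TransmissionMeta initMeta) {
        // Invoked when the file has been fully received, e.g. to start applying the .delta file.
        return file -> System.out.println("Partition file received: " + file.getName());
    }
}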
Code Block
languagejava
titleGridIoManager.TransmissionSender.java
collapsetrue
public class TransmissionSender implements Closeable {
    /**
     * @param file Source file to send to remote.
     * @param params Additional transfer file description keys.
     * @param plc The policy of handling data on remote.
     * @throws IgniteCheckedException If fails.
     */
    public void send(
        File file,
        Map<String, Serializable> params,
        TransmissionPolicy plc
    ) throws IgniteCheckedException, InterruptedException, IOException {
        send(file, 0, file.length(), params, plc);
    }

    /**
     * @param file Source file to send to remote.
     * @param plc The policy of handling data on remote.
     * @throws IgniteCheckedException If fails.
     */
    public void send(
        File file,
        TransmissionPolicy plc
    ) throws IgniteCheckedException, InterruptedException, IOException {
        send(file, 0, file.length(), new HashMap<>(), plc);
    }

    /**
     * @param file Source file to send to remote.
     * @param offset Position to start transfer at.
     * @param cnt Number of bytes to transfer.
     * @param params Additional transfer file description keys.
     * @param plc The policy of handling data on remote.
     * @throws IgniteCheckedException If fails.
     */
    public void send(
        File file,
        long offset,
        long cnt,
        Map<String, Serializable> params,
        TransmissionPolicy plc
    ) throws IgniteCheckedException, InterruptedException, IOException {
        // Impl.
    }
}
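
An illustrative usage of the sender API above. The way the TransmissionSender is obtained and the parameter keys are assumptions for the example; the FILE policy stands for the zero-copy (file) handling mode described earlier.

Code Block
languagejava
titlePartitionSendExample.java (illustrative)
collapsetrue
import java.io.File;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

/** Illustrative usage of TransmissionSender for a single partition file. */
public class PartitionSendExample {
    /**
     * @param snd Sender opened for the Demander node and a rebalance topic.
     * @param partFile Partition file to transfer, e.g. part-0.bin.
     */
    public void sendPartition(TransmissionSender snd, File partFile) throws Exception {
        Map<String, Serializable> params = new HashMap<>();

        // Hypothetical keys describing the transferred file to the receiving handler.
        params.put("grpId", 1234);
        params.put("partId", 0);

        // FILE policy: the receiver processes the data with its zero-copy file handler.
        snd.send(partFile, params, TransmissionPolicy.FILE);
    }
}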

Copy partition on the fly

Checkpointer

When the supplier node receives the cache partition file demand request, it will send the file over the CommunicationSpi. The cache partition file can be concurrently updated by the checkpoint thread during its transmission. To guarantee file consistency, the Checkpointer must use the Copy-on-Write [3] technique and save a copy of the updated chunk into a temporary file.
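
A minimal sketch of this copy-on-write behaviour is shown below, assuming a write-through page store wrapper; a real implementation would also record the page index in the delta file so the page can later be applied at the right offset.

Code Block
languagejava
titleCopyOnWritePageStore.java (illustrative)
collapsetrue
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

/** Illustrative page store wrapper: pages written during a transfer are also appended to the .delta file. */
public class CopyOnWritePageStore {
    /** Partition file, e.g. part-0.bin. */
    private final FileChannel partFile;

    /** Delta file, e.g. part-0.bin.delta. */
    private final FileChannel deltaFile;

    /** Set under the checkpoint write lock for the duration of the partition file transfer. */
    private volatile boolean transferring;

    public CopyOnWritePageStore(FileChannel partFile, FileChannel deltaFile) {
        this.partFile = partFile;
        this.deltaFile = deltaFile;
    }

    /** Called by the checkpointer thread to persist a dirty page at the given file offset. */
    public synchronized void writePage(long pageOff, ByteBuffer page) throws IOException {
        if (transferring) {
            // Keep a copy of the updated page, so the concurrently transferred snapshot of the
            // partition file can be made consistent on the Demander by replaying the delta.
            deltaFile.write(page.duplicate(), deltaFile.size());
        }

        partFile.write(page, pageOff);
    }

    /** Toggles the copy-on-write mode; expected to be switched under the checkpoint write lock. */
    public void transferring(boolean flag) {
        transferring = flag;
    }
}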

Apply partition on the fly

Catch-up temporary WAL

While the demander node is in the partition file transmission state, it must sequentially save all cache entries corresponding to the MOVING partition into a new temporary storage. These entries will later be applied one by one to the newly received cache partition file. All asynchronous operations will be enrolled to the end of the temporary storage during storage reads until it becomes fully read. A file-based FIFO approach is assumed to be used by this process (a minimal sketch follows the list below).

The temporary storage is chosen to be WAL-based. The storage must support:

  • An unlimited number of WAL files to store temporary data records;
  • Iterating over stored data records while an asynchronous writer thread inserts new records;
  • A WAL-per-partition approach;
  • Write operations to the storage have higher priority than read operations;
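
A minimal sketch of such a per-partition file-based FIFO is shown below; the length-prefixed record layout and the class name are assumptions of the example.

Code Block
languagejava
titlePartitionCatchUpQueue.java (illustrative)
collapsetrue
import java.io.Closeable;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

/** Illustrative file-based FIFO storing serialized cache updates for a single MOVING partition. */
public class PartitionCatchUpQueue implements Closeable {
    /** Appender used by the cache update (writer) thread. */
    private final DataOutputStream out;

    /** Reader used to drain records once the partition file has been received. */
    private final DataInputStream in;

    public PartitionCatchUpQueue(File walFile) throws IOException {
        out = new DataOutputStream(new FileOutputStream(walFile));
        in = new DataInputStream(new FileInputStream(walFile));
    }

    /** Appends a serialized cache update record to the tail of the queue. */
    public synchronized void append(byte[] record) throws IOException {
        out.writeInt(record.length);
        out.write(record);
    }

    /** @return Next record, or {@code null} if the queue is fully drained at the moment. */
    public synchronized byte[] poll() throws IOException {
        try {
            byte[] rec = new byte[in.readInt()];

            in.readFully(rec);

            return rec;
        }
        catch (EOFException ignored) {
            return null; // The writer has not appended new records yet.
        }
    }

    /** {@inheritDoc} */
    @Override public void close() throws IOException {
        out.close();
        in.close();
    }
}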

Expected problems to be solved

  • We must stop updating indexes on the demander when the data is ready to be transferred from the supplier node. All async cache updates on the demander must not cause index updates;
  • The previous partition metadata page and all stored meta information must be destroyed in PageMemory and restored from the new partition file;


...

Rebuild indexes

The node is ready to become a partition owner when the partition data is rebalanced and the cache indexes are ready. For the message-based cluster rebalancing approach, indexes are rebuilt simultaneously with cache data loading. For the file-based rebalancing approach, the index rebuild procedure must be finished before the partition state is set to the OWNING state.

Failover and Recovery

Ignite doesn't provide any recovery guarantees for the partitions with the MOVING state. The cache partitions will be fully loaded when the next rebalance procedure occurs.

FAIL\LEFT during rebalancing

A node which is being rebalanced leaves the cluster. For such nodes WAL is always disabled (all partitions have the MOVING state because the node is new to the cluster and has no cache data).
Since WAL is disabled, we can guarantee that all operations with the loaded partition files (renaming partition files, applying async updates) are safe to perform, because the cache directory will be fully dropped on recovery.

Topology change

Each topology change event (JOIN/LEFT/FAILED) may or may not change the cache affinity assignments of the caches currently being rebalanced. If the assignments are not changed and the node still needs the partitions being rebalanced, we can continue the current rebalance process (see IGNITE-7165 for details).

...

To provide basic recovery guarantees we must do the following (a sketch follows the list):

  • Start the checkpoint process when the temporary WAL becomes empty;
  • Wait until the first checkpoint ends and set the OWNING status to the partition;
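
The ordering can be sketched as follows; the collaborator interfaces are placeholders for the corresponding Ignite components and are not part of the proposal.

Code Block
languagejava
titleOwningSwitchSketch.java (illustrative)
collapsetrue
/** Illustrative ordering of the basic recovery guarantee for switching a partition to OWNING. */
public class OwningSwitchSketch {
    /** Placeholder for the per-partition catch-up storage. */
    interface CatchUpStorage {
        byte[] poll() throws Exception;
    }

    /** Placeholder for the checkpointer; forceCheckpoint() returns after the checkpoint ends. */
    interface Checkpointer {
        void forceCheckpoint() throws Exception;
    }

    /** Placeholder for the rebalanced partition. */
    interface Partition {
        void applyUpdate(byte[] rec) throws Exception;
        void own();
    }

    public void finishRebalance(CatchUpStorage wal, Checkpointer cp, Partition part) throws Exception {
        // 1. Apply all queued cache updates to the preloaded partition file.
        for (byte[] rec; (rec = wal.poll()) != null; )
            part.applyUpdate(rec);

        // 2. Start a checkpoint once the temporary WAL is empty and wait until it ends.
        cp.forceCheckpoint();

        // 3. Only now is it safe to switch the partition to the OWNING state.
        part.own();
    }
}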

Recovery from different stages:

  • The Supplier crashes when sending partition;
  • The Demander crashes when receiving partition;
  • The Demander crashes when applying temp WAL;

Phase-2

SSL must be disabled to take advantage of Java NIO zero-copy file transmission using the FileChannel#transferTo method. If SSL is required, the file must be split into chunks in the same way and sent over the socket channel with a ByteBuffer. As the SSL engine generally needs a direct ByteBuffer to do the encryption, we can't avoid copying the buffer payload from the kernel level to the application level.
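
When SSL is enabled, the transfer therefore falls back to the chunked mode; a minimal sketch of such a loop is shown below (the chunk size and class name are placeholders).

Code Block
languagejava
titleChunkedFileSender.java (illustrative)
collapsetrue
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Illustrative chunk-based sender used when zero-copy transfer is not possible (e.g. with SSL). */
public class ChunkedFileSender {
    /** Size of a single transfer chunk. */
    private static final int CHUNK_SIZE = 256 * 1024;

    public void send(Path file, WritableByteChannel out) throws IOException {
        // The SSL engine needs a direct buffer to encrypt from, so the payload is copied
        // from the kernel into this application-level buffer on every chunk.
        ByteBuffer buf = ByteBuffer.allocateDirect(CHUNK_SIZE);

        try (FileChannel src = FileChannel.open(file, StandardOpenOption.READ)) {
            while (src.read(buf) != -1) {
                buf.flip();

                while (buf.hasRemaining())
                    out.write(buf);

                buf.clear();
            }
        }
    }
}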

...