...

  1. The demander node prepares the set of cache partitions to fetch (IgniteDhtDemandedPartitionsMap#full);
  2. The demander node checks the compatibility version (for example, 2.8) and starts recording all incoming cache updates into a new special storage – the temporary WAL;
  3. The demander node sends the GridDhtPartitionDemandMessage to the supplier node;
  4. The supplier node receives the GridDhtPartitionDemandMessage and starts a new checkpoint process;
  5. The supplier node creates an empty temporary cache partition file with the .tmp postfix in the same cache persistence directory;
  6. The supplier node splits the whole cache partition file into virtual chunks of a predefined size (a multiple of the PageMemory page size);
    1. If a concurrent checkpoint thread determines the corresponding cache partition file chunk and tries to flush a dirty page to the cache partition file
      1. If the rebalance chunk has already been transferred
        1. Flush the dirty page to the file;
      2. If the rebalance chunk has not been transferred yet
        1. Write this chunk to the temporary cache partition file;
        2. Flush the dirty page to the file;
    2. The supplier node starts sending the cache partition file chunks to the demander node one by one using FileChannel#transferTo (see the sender sketch after this list)
      1. If the current chunk was modified by the checkpoint thread – read it from the temporary cache partition file;
      2. If the current chunk has not been touched – read it from the original cache partition file;
  7. The demander node starts listening for new incoming connections from the supplier node on TcpCommunicationSpi;
  8. The demander node creates a temporary cache partition file with the .tmp postfix in the same cache persistence directory;
  9. The demander node receives the cache partition file chunks one by one (see the receiver sketch after this list)
    1. The node checks the CRC of each PageMemory page in the downloaded chunk;
    2. The node flushes the downloaded chunk at the appropriate cache partition file position;
  10. When the demander node has received the whole cache partition file
    1. The node swaps the original partition file with the .tmp partition file;
    2. The node begins to apply data entries from the temporary WAL storage;
    3. All concurrent async operations corresponding to the cache partition file are still written to the end of the temporary WAL;
    4. When the temporary WAL storage is about to become empty
      1. Suspend applying async operations to the temporary WAL;
      2. Wait until the recent operations from the temporary WAL storage have been applied to the partition file;
      3. The node owns the new cache partition;
      4. Resume applying async operations to the newly owned partition file;
      5. Schedule the temporary WAL storage deletion;
  11. The supplier node deletes the temporary cache partition file;
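
The supplier-side loop from step 6 can be illustrated with a short sketch. This is not the actual Ignite implementation: the class name ChunkedPartitionSender, the BitSet of checkpoint-modified chunks and the way the two files are opened are assumptions made only to show how each chunk is served either from the original partition file or from its .tmp copy-on-write file and pushed through the zero-copy FileChannel#transferTo.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.BitSet;

/** Illustrative sketch of the supplier-side chunked transfer (step 6), not the real implementation. */
public class ChunkedPartitionSender {
    private final long chunkSize;        // Chunk size, a multiple of the PageMemory page size.
    private final BitSet modifiedChunks; // Chunks rewritten by the concurrent checkpoint thread.

    public ChunkedPartitionSender(long chunkSize, BitSet modifiedChunks) {
        this.chunkSize = chunkSize;
        this.modifiedChunks = modifiedChunks;
    }

    /** Sends the whole partition file chunk by chunk to the demander's (blocking) channel. */
    public void send(Path partFile, Path tmpPartFile, WritableByteChannel out) throws IOException {
        try (FileChannel orig = FileChannel.open(partFile, StandardOpenOption.READ);
             FileChannel tmp = FileChannel.open(tmpPartFile, StandardOpenOption.READ)) {
            long fileSize = orig.size();

            for (long pos = 0; pos < fileSize; pos += chunkSize) {
                int chunkIdx = (int)(pos / chunkSize);
                long len = Math.min(chunkSize, fileSize - pos);

                // Chunks touched by the checkpoint are served from the temporary file,
                // untouched chunks are served from the original partition file.
                FileChannel src = modifiedChunks.get(chunkIdx) ? tmp : orig;

                long written = 0;
                // transferTo may transfer fewer bytes than requested, so loop until the chunk is done.
                while (written < len)
                    written += src.transferTo(pos + written, len - written, out);
            }
        }
    }
}
```

A matching demander-side sketch for step 9 is shown below. Again, the class name, the use of java.util.zip.CRC32 and the assumption that each page stores its CRC in its last 4 bytes are illustrative only; Ignite uses its own page layout and CRC routine. The sketch only shows the shape of the per-page validation and the positional write into the .tmp partition file.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.zip.CRC32;

/** Illustrative sketch of the demander-side chunk handling (step 9), not the real implementation. */
public class ChunkedPartitionReceiver {
    private final int pageSize;

    public ChunkedPartitionReceiver(int pageSize) {
        this.pageSize = pageSize;
    }

    /** Reads one chunk from the connection, validates its pages and writes it at the given file position. */
    public void receiveChunk(ReadableByteChannel in, Path tmpPartFile, long pos, int chunkLen)
        throws IOException {
        ByteBuffer chunk = ByteBuffer.allocateDirect(chunkLen);

        // Read the whole chunk from the socket.
        while (chunk.hasRemaining()) {
            if (in.read(chunk) < 0)
                throw new IOException("Connection closed before the chunk was fully received");
        }
        chunk.flip();

        // Validate every page in the chunk before it touches the partition file.
        for (int off = 0; off + pageSize <= chunkLen; off += pageSize)
            checkPageCrc(chunk, off);

        try (FileChannel tmp = FileChannel.open(tmpPartFile, StandardOpenOption.WRITE)) {
            while (chunk.hasRemaining())
                pos += tmp.write(chunk, pos);
        }
    }

    /** Hypothetical per-page check: assumes the last 4 bytes of a page hold its CRC. */
    private void checkPageCrc(ByteBuffer chunk, int off) throws IOException {
        CRC32 crc = new CRC32();
        ByteBuffer page = chunk.duplicate();
        page.position(off).limit(off + pageSize - 4);
        crc.update(page);

        int stored = chunk.getInt(off + pageSize - 4);
        if ((int)crc.getValue() != stored)
            throw new IOException("CRC mismatch for page at offset " + off);
    }
}
```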

...

  • Zero-copy limitations – If the operating system does not support zero copy, sending a file with FileChannel#transferTo might fail or yield worse performance. For example, sending a large file does not work well enough on Windows (a possible fallback is sketched after this list);
  • Disabled SSL connection – SSL must be disabled to take advantage of Java NIO zero-copy file transmission with FileChannel#transferTo. We can consider using OpenSSL's non-copying interface to avoid allocating new buffers for each read and write operation at Phase-2;
  • WAL write I/O wait time – Under the heavy load of partition file transmission, writes to the temporary WAL storage may slow down. Since losing the temporary WAL storage data poses no risk, we can consider keeping the whole storage in memory.
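
As a possible mitigation for the zero-copy limitation above, the sender could fall back to a plain buffered copy whenever FileChannel#transferTo fails or stops making progress. The sketch below is an assumption, not part of the proposal: the class name, the 64 KB buffer size and the error handling are arbitrary, and a blocking target channel is assumed.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

/** Illustrative zero-copy transfer with a buffered fallback; not part of the proposal itself. */
public final class ZeroCopyOrFallback {
    private static final int BUF_SIZE = 64 * 1024; // Arbitrary fallback buffer size.

    private ZeroCopyOrFallback() {}

    /** Transfers [pos, pos + count) from the file channel, preferring zero copy. */
    public static void transfer(FileChannel src, long pos, long count, WritableByteChannel out)
        throws IOException {
        long transferred = 0;

        try {
            while (transferred < count) {
                long n = src.transferTo(pos + transferred, count - transferred, out);

                if (n <= 0)
                    break; // No progress: switch to the buffered path below.

                transferred += n;
            }
        }
        catch (IOException ignored) {
            // Some platforms may reject zero copy for large files; fall through to the fallback.
        }

        if (transferred < count)
            copyBuffered(src, pos + transferred, count - transferred, out);
    }

    /** Plain read/write loop used when zero-copy transfer is not available. */
    private static void copyBuffered(FileChannel src, long pos, long count, WritableByteChannel out)
        throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(BUF_SIZE);

        while (count > 0) {
            buf.clear().limit((int)Math.min(BUF_SIZE, count));

            int read = src.read(buf, pos);
            if (read < 0)
                throw new IOException("Unexpected end of partition file");

            buf.flip();
            while (buf.hasRemaining())
                out.write(buf);

            pos += read;
            count -= read;
        }
    }
}
```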

...