...

  1. The demander node prepares the set of IgniteDhtDemandedPartitionsMap#full cache partitions to fetch;
  2. The demander node checks the compatibility version (for example, 2.8) and starts recording all incoming cache updates to a new special storage – the temporary WAL;
  3. The demander node sends the GridDhtPartitionDemandMessage to the supplier node;
  4. The supplier node receives the GridDhtPartitionDemandMessage and starts a new checkpoint process;
  5. The supplier node creates an empty temporary cache partition file with the .tmp postfix in the same cache persistence directory;
  6. The supplier node splits the whole cache partition file into virtual chunks of a predefined size (a multiple of the PageMemory page size);
    1. If the concurrent checkpoint thread determines which cache partition file chunk a dirty page belongs to and tries to flush that page to the cache partition file
      1. If the rebalance chunk has already been transferred
        1. Flush the dirty page to the file;
      2. If the rebalance chunk has not been transferred yet
        1. Write this chunk to the temporary cache partition file;
        2. Flush the dirty page to the file;
    2. The node starts sending each cache partition file chunk to the demander node one by one using FileChannel#transferTo
      1. If the current chunk was modified by checkpoint thread – read it from the temporary cache partition file;
      2. If the current chunk is not touched – read it from the original cache partition file;
  7. The demander node starts listening for new incoming pipe connections from the supplier node on TcpCommunicationSpi;
  8. The demander node creates the temporary cache partition file with .tmp postfix in the same cache persistence directory;
  9. The demander node receives each cache partition file chunk one by one
    1. The node checks the CRC of each page in the downloaded chunk;
    2. The node flushes the downloaded chunk at the appropriate cache partition file position;
  10. When the demander node receives the whole cache partition file
    1. The node starts applying cache data entries from the temporary WAL storage to the .tmp partition file;
    2. All concurrent operations corresponding to the cache partition file are still written to the end of the temporary WAL;
    3. At the moment the temporary WAL store is ready to become empty
      1. Suspend applying async operations on both the original partition file and the file with the .tmp postfix;
      2. Wait until the last operations are applied from the temporary WAL store to the partition file;
      3. Cut the .tmp postfix on the partition file;
      4. Move the original partition file to .tmp;
      5. Resume applying async operations;
      6. Schedule the original partition file and the temporary WAL storage for deletion;
      6. Schedule the original partition file and temporary WAL storage deletion;
  11. The supplier node deletes the temporary cache partition file;
  12. The demander node owns the new cache partition file;
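The supplier-side interplay between the checkpoint thread and the chunk sender (steps 6.1 and 6.2 above) can be sketched as a small chunk tracker. This is a minimal illustration, not Ignite code: the names `ChunkTracker`, `markTransferred`, and `mustCopyToTemp` are hypothetical.

```java
import java.util.BitSet;

// Hypothetical sketch: tracks which partition-file chunks have already been
// sent to the demander, so the checkpoint thread knows whether it must first
// copy a chunk to the temporary partition file before flushing a dirty page.
class ChunkTracker {
    private final long chunkSize;     // chunk size, a multiple of the page size
    private final BitSet transferred; // chunks already sent via FileChannel#transferTo

    ChunkTracker(long partitionSize, long chunkSize) {
        this.chunkSize = chunkSize;
        this.transferred = new BitSet((int) ((partitionSize + chunkSize - 1) / chunkSize));
    }

    /** Called by the sender loop after a chunk transfer completes. */
    synchronized void markTransferred(int chunkIdx) {
        transferred.set(chunkIdx);
    }

    /**
     * Called by the checkpoint thread before flushing a dirty page at the
     * given file offset. Returns true if the chunk containing that page has
     * not been sent yet, i.e. its original content must first be written to
     * the temporary cache partition file (step 6.1.2.1).
     */
    synchronized boolean mustCopyToTemp(long pageOffset) {
        return !transferred.get((int) (pageOffset / chunkSize));
    }
}
```

Once a chunk is marked transferred, concurrent checkpoint flushes may touch it freely: the demander already holds its consistent copy, and later modifications are covered by the temporary WAL replay on the demander side.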

...

  • The new write-ahead-log manager for writing temporary records must support
    • An unlimited number of WAL files to store temporary cache data records;
    • Iterating over stored data records while an asynchronous writer thread inserts new records;
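The second requirement (iterating while a writer thread appends) can be illustrated with a weakly consistent collection; a minimal in-memory sketch, not the actual WAL manager, with the hypothetical name `TempRecordLog`:

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical in-memory stand-in for the temporary record store: an
// asynchronous writer thread appends records while an applier thread
// iterates. ConcurrentLinkedQueue's iterator is weakly consistent, so
// iteration never fails with ConcurrentModificationException and is
// guaranteed to see all records present when the iterator was created.
class TempRecordLog<T> {
    private final ConcurrentLinkedQueue<T> records = new ConcurrentLinkedQueue<>();

    /** Called by the async writer thread for each incoming cache update. */
    void append(T rec) {
        records.add(rec);
    }

    /** Called by the applier thread to replay records onto the .tmp partition file. */
    Iterator<T> iterator() {
        return records.iterator();
    }

    boolean isEmpty() {
        return records.isEmpty();
    }
}
```

A real implementation would back this with an unbounded sequence of on-disk WAL files rather than heap memory, but the concurrency contract is the same: appends and replay proceed without blocking each other.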

...