...
- The demander node prepares the set of cache partitions to fetch (IgniteDhtDemandedPartitionsMap#full);
- The demander node checks the compatibility version (for example, 2.8) and starts recording all incoming cache updates to a new special storage, the temporary WAL (sketched below);
- The demander node sends the GridDhtPartitionDemandMessage to the supplier node;
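A minimal sketch of the demander-side interception described above: while a partition file is in flight, every incoming cache update for that partition is appended to a per-partition temporary WAL instead of being applied directly. A concurrent queue stands in for the real temporary WAL storage; all names here (UpdateRouter, startRecording, etc.) are illustrative, not Ignite's actual API.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;

final class UpdateRouter {
    /** Partitions currently being rebalanced -> their temporary WAL tail. */
    private final Map<Integer, Queue<byte[]>> tempWals = new ConcurrentHashMap<>();

    /** Called when file-based rebalancing of a partition starts. */
    void startRecording(int partId) {
        tempWals.put(partId, new ConcurrentLinkedQueue<>());
    }

    /** Routes one incoming cache update record. */
    void onUpdate(int partId, byte[] record, Consumer<byte[]> applyToPartition) {
        Queue<byte[]> wal = tempWals.get(partId);

        if (wal != null)
            wal.add(record);                 // partition file in flight: log it
        else
            applyToPartition.accept(record); // normal path: apply directly
    }
}
```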
- The supplier node receives the GridDhtPartitionDemandMessage and starts a new checkpoint process;
- The supplier node creates an empty temporary cache partition file with the .tmp postfix in the same cache persistence directory;
- The supplier node splits the whole cache partition file into virtual chunks of a predefined size (a multiple of the PageMemory page size);
- When a concurrent checkpoint thread is about to flush a dirty page, it first determines the cache partition file chunk to which the page belongs (copy-on-write, sketched below):
    - If that chunk has already been transferred
        - Flush the dirty page to the file;
    - If that chunk has not been transferred yet
        - Write this chunk to the temporary cache partition file;
        - Flush the dirty page to the file;
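A minimal sketch of this supplier-side copy-on-write step: before the checkpoint thread overwrites a page, it checks whether the page's chunk has already been sent; if not, the original chunk bytes are preserved in the .tmp file first, so the demander still receives the partition as it was when the rebalance checkpoint started. All names here (ChunkGuard, beforePageFlush, etc.) are illustrative, not Ignite internals.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.BitSet;

final class ChunkGuard {
    private final RandomAccessFile partFile; // original partition file
    private final RandomAccessFile tmpFile;  // copy-on-write .tmp file
    private final int chunkSize;             // multiple of the page size
    private final BitSet transferred;        // chunks already sent
    private final BitSet copied;             // chunks saved to the .tmp file

    ChunkGuard(RandomAccessFile partFile, RandomAccessFile tmpFile,
               int chunkSize, int chunks) {
        this.partFile = partFile;
        this.tmpFile = tmpFile;
        this.chunkSize = chunkSize;
        transferred = new BitSet(chunks);
        copied = new BitSet(chunks);
    }

    /** Called by the checkpoint thread right before a dirty page flush. */
    synchronized void beforePageFlush(long pageOff) throws IOException {
        int chunk = (int)(pageOff / chunkSize);

        // Already streamed or already preserved: safe to overwrite in place.
        if (transferred.get(chunk) || copied.get(chunk))
            return;

        // Preserve the not-yet-sent chunk before the page is overwritten.
        long off = (long)chunk * chunkSize;
        int len = (int)Math.min(chunkSize, partFile.length() - off);
        byte[] buf = new byte[len];
        partFile.seek(off);
        partFile.readFully(buf);
        tmpFile.seek(off);
        tmpFile.write(buf);
        copied.set(chunk);
    }

    synchronized void markTransferred(int chunk) { transferred.set(chunk); }

    synchronized boolean isCopied(int chunk) { return copied.get(chunk); }
}
```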
- The supplier node sends each cache partition file chunk to the demander node, one by one, using FileChannel#transferTo (sketched below):
    - If the current chunk was modified by the checkpoint thread, it is read from the temporary cache partition file;
    - If the current chunk is untouched, it is read from the original cache partition file;
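A minimal sketch of this send loop: each chunk is streamed with the zero-copy FileChannel#transferTo call (named in the step above), and the source is the .tmp copy whenever a checkpoint has already overwritten that chunk in the original file. The isCopied predicate stands in for the copy-on-write bookkeeping from the previous sketch; a blocking destination channel is assumed.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.util.function.IntPredicate;

final class ChunkSender {
    static void sendPartition(FileChannel partCh, FileChannel tmpCh,
                              IntPredicate isCopied, WritableByteChannel dest,
                              int chunkSize) throws IOException {
        long size = partCh.size();
        int chunks = (int)((size + chunkSize - 1) / chunkSize);

        for (int chunk = 0; chunk < chunks; chunk++) {
            long off = (long)chunk * chunkSize;
            long len = Math.min(chunkSize, size - off);

            // Chunk modified by a checkpoint: read the preserved copy from
            // the temporary file; otherwise read the original partition file.
            FileChannel src = isCopied.test(chunk) ? tmpCh : partCh;

            // transferTo may send fewer bytes than requested, so loop.
            long sent = 0;
            while (sent < len)
                sent += src.transferTo(off + sent, len - sent, dest);
        }
    }
}
```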
- The demander node starts listening for new incoming pipe connections from the supplier node on TcpCommunicationSpi;
- The demander node creates a temporary cache partition file with the .tmp postfix in the same cache persistence directory;
- The demander node receives each cache partition file chunk one by one (sketched below):
    - The node checks the CRC of each PageMemory page in the downloaded chunk;
    - The node flushes the downloaded chunk at the appropriate cache partition file position;
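A minimal sketch of this receive step: a chunk arrives with its partition offset, every page inside it is CRC-checked, and the verified chunk is written into the .tmp partition file at the same offset via a positioned FileChannel write. The frame header layout and the placement of the CRC in the last 4 bytes of each page are assumptions for this illustration, not Ignite's actual wire or page format.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;
import java.util.zip.CRC32;

final class ChunkReceiver {
    static void receiveChunk(ReadableByteChannel src, FileChannel tmpPart,
                             int pageSize) throws IOException {
        // Hypothetical frame header: 8-byte file offset, 4-byte chunk length.
        ByteBuffer hdr = ByteBuffer.allocate(12);
        readFully(src, hdr);
        hdr.flip();
        long off = hdr.getLong();
        int len = hdr.getInt();

        ByteBuffer chunk = ByteBuffer.allocate(len);
        readFully(src, chunk);
        chunk.flip();

        // Verify every page in the chunk; the CRC is assumed to occupy the
        // last 4 bytes of each page in this sketch.
        for (int p = 0; p + pageSize <= len; p += pageSize) {
            CRC32 crc = new CRC32();
            crc.update(chunk.array(), p, pageSize - 4);
            int stored = chunk.getInt(p + pageSize - 4);
            if ((int)crc.getValue() != stored)
                throw new IOException("CRC mismatch at offset " + (off + p));
        }

        // Flush the verified chunk at its cache partition file position.
        chunk.rewind();
        while (chunk.hasRemaining())
            tmpPart.write(chunk, off + chunk.position());
    }

    private static void readFully(ReadableByteChannel ch, ByteBuffer buf)
        throws IOException {
        while (buf.hasRemaining())
            if (ch.read(buf) < 0)
                throw new IOException("Unexpected end of stream");
    }
}
```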
- When the demander node has received the whole cache partition file:
    - The node initializes the received .tmp cache partition file as the file holder;
    - A thread-per-partition begins to apply data entries from the beginning of the temporary WAL storage;
    - All concurrent async operations corresponding to this partition file still write to the end of the temporary WAL;
- Once the temporary WAL storage has been fully drained (the catch-up is sketched below):
    - The node owns the new cache partition;
    - The node switches writes directly to the partition file (the step of writing to the temporary WAL is excluded);
    - The node schedules the temporary WAL storage for deletion;
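A minimal sketch of this per-partition catch-up: a thread replays the temporary WAL from its head while concurrent operations may still append to its tail; once the tail is reached, the node owns the partition and switches writes directly to the partition file. A queue stands in for the temporary WAL storage, and appenders are assumed to take the same monitor when deciding where to write, so the switch cannot race with a late append.

```java
import java.util.Queue;
import java.util.function.Consumer;

final class PartitionCatchUp {
    static void catchUp(Queue<byte[]> tempWal,
                        Consumer<byte[]> applyToPartition,
                        Runnable ownPartition,
                        Runnable scheduleWalDeletion) {
        while (true) {
            byte[] rec = tempWal.poll();

            if (rec != null) {
                applyToPartition.accept(rec); // replay one logged update
                continue;
            }

            // Tail reached: re-check emptiness under the monitor so that a
            // concurrent append cannot slip in between poll() and the switch.
            synchronized (tempWal) {
                if (tempWal.isEmpty()) {
                    ownPartition.run();        // the node owns the partition
                    scheduleWalDeletion.run(); // temp WAL is no longer needed
                    return;
                }
            }
        }
    }
}
```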
- The supplier node deletes the temporary cache partition file;
...
{"serverDuration": 97, "requestCorrelationId": "9d1beb7513189194"}