...
- The demander node prepares the set of cache partitions to fetch (IgniteDhtDemandedPartitionsMap#full);
- The demander node checks the compatibility version (for example, 2.8) and starts recording all incoming cache updates to a new dedicated storage: the temporary WAL storage;
- The demander node sends the GridDhtPartitionDemandMessage to the supplier node as usual;
- The supplier node receives the GridDhtPartitionDemandMessage, starts a new checkpoint process, and fixes (records) the cache partition file sizes at that point;
- The supplier node creates an empty temporary file with a .delta suffix (e.g. part-0.bin.delta) for each cache partition file (in the same cache working directory or in another configured one);
- The supplier node starts tracking every pageId write attempt to these partition files;
- When a write attempt happens, the thread that caused it first reads the original copy of that page from the partition file and flushes it to the corresponding .delta file;
- Only after that does the thread write the changed page data to the partition file;
- The supplier waits until the checkpoint process ends;
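The supplier-side copy-on-write tracking above can be sketched as follows. All names here (DeltaTracker, beforePageWrite) and the 4 KB page size are hypothetical illustrations under the assumptions of this design, not Ignite's actual implementation:

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch: before the first write to a page during the transfer, the
 *  original page content is appended to a part-N.bin.delta file. */
class DeltaTracker {
    static final int PAGE_SIZE = 4096; // assumed page size for illustration

    private final Path partFile;
    private final Path deltaFile;
    private final Set<Long> copied = ConcurrentHashMap.newKeySet();

    DeltaTracker(Path partFile) {
        this.partFile = partFile;
        this.deltaFile = partFile.resolveSibling(partFile.getFileName() + ".delta");
    }

    /** Called on a pageId write attempt; copies the original page once. */
    void beforePageWrite(long pageId) throws Exception {
        if (!copied.add(pageId))
            return; // this page was already preserved for the current transfer

        ByteBuffer page = ByteBuffer.allocate(PAGE_SIZE);
        try (FileChannel src = FileChannel.open(partFile, StandardOpenOption.READ)) {
            src.read(page, pageId * PAGE_SIZE); // read the still-unmodified page
        }
        page.flip();
        try (FileChannel dst = FileChannel.open(deltaFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND)) {
            dst.write(page); // append the original page to the .delta file
        }
    }
}
```

After this hook returns, the caller is free to overwrite the page in the partition file, matching the two-step order in the list above.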
- On the supplier node, for each cache partition file:
- The process opens the partition file in read-only mode;
- Starts sending the partition file (as is, with any concurrent writes still happening) in chunks of a predefined constant size (a multiple of the PageMemory page size);
- After the partition file has been sent, it starts sending the corresponding .delta file;
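The chunked sending step might look like the sketch below, built on FileChannel.transferTo for zero-copy transfer; ChunkedSender and the chunk size are assumed names and values, not Ignite code:

```java
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Sketch: send a partition file in fixed-size chunks (a multiple of the
 *  page size), tolerating concurrent writes to the file being sent. */
class ChunkedSender {
    static final int CHUNK_SIZE = 16 * 4096; // assumed chunk size

    static long send(Path file, WritableByteChannel out) throws Exception {
        try (FileChannel src = FileChannel.open(file, StandardOpenOption.READ)) {
            // On the real supplier the size is fixed at checkpoint start.
            long size = src.size();
            long pos = 0;
            while (pos < size)
                pos += src.transferTo(pos, Math.min(CHUNK_SIZE, size - pos), out);
            return pos;
        }
    }
}
```

Concurrent writes may leave the transferred pages internally inconsistent; that is acceptable here because the .delta file sent afterwards restores every modified page.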
- The demander node starts listening for a new type of incoming connection (a socket channel created event) from the supplier node;
- When the appropriate connection is established, the demander node, for each cache partition file:
- Receives the file metadata (the cache group identifier, the cache partition file name, and the file size);
- Writes data from the socket to the particular cache partition file, starting from the beginning of the file;
- After the original cache partition file has been received, the node starts receiving the corresponding .delta file;
- The node reads data from the socket in chunks of the PageMemory page size and applies each received page to the partition file at the offset given by its pageId;
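Applying the received .delta file could be sketched as below. The record layout (a full page whose first 8 bytes carry the pageId) is an assumption made purely for illustration; the actual page header format is defined by PageMemory:

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Sketch: replay each preserved page from the .delta file into the
 *  received partition file at the offset derived from its pageId. */
class DeltaApplier {
    static final int PAGE_SIZE = 4096; // assumed page size for illustration

    static int apply(Path deltaFile, Path partFile) throws Exception {
        int applied = 0;
        ByteBuffer page = ByteBuffer.allocate(PAGE_SIZE);
        try (FileChannel delta = FileChannel.open(deltaFile, StandardOpenOption.READ);
             FileChannel part = FileChannel.open(partFile, StandardOpenOption.WRITE)) {
            while (delta.read(page) == PAGE_SIZE) {
                page.flip();
                long pageId = page.getLong(0);        // assumed: pageId in first 8 bytes
                part.write(page, pageId * PAGE_SIZE); // overwrite the page in place
                page.clear();
                applied++;
            }
        }
        return applied;
    }
}
```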
- When the demander node has received the whole cache partition file:
- The node rebuilds the secondary indexes over the received partition file;
- After that, the thread begins applying data entries from the beginning of the temporary WAL storage;
- All concurrent async operations corresponding to this partition file still write to the end of the temporary WAL;
- At the moment the temporary WAL storage is ready to become empty:
- Start the first checkpoint;
- Wait until the first checkpoint ends and own the cache partition;
- All operations are now switched to the partition file instead of writing to the temporary WAL;
- Schedule the temporary WAL storage deletion;
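The temporary-WAL catch-up and switch-over described above can be illustrated with a simplified, single-threaded sketch. TempWalSwitch is a hypothetical name, and the real switch is coordinated with the checkpoint process rather than a plain flag flip:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

/** Sketch: updates go to a temporary log until it is drained, after
 *  which writers switch to the partition store directly. */
class TempWalSwitch {
    final ConcurrentLinkedQueue<Map.Entry<Integer, String>> tempWal = new ConcurrentLinkedQueue<>();
    final Map<Integer, String> partition = new ConcurrentHashMap<>();
    final AtomicBoolean switched = new AtomicBoolean();

    void update(int key, String val) {
        if (switched.get())
            partition.put(key, val);          // after the switch: write directly
        else
            tempWal.add(Map.entry(key, val)); // before: append to the temp WAL
    }

    /** Drain the temp WAL; once empty, flip the switch (the first
     *  checkpoint and partition ownership are elided in this sketch). */
    void catchUpAndSwitch() {
        Map.Entry<Integer, String> e;
        while ((e = tempWal.poll()) != null)
            partition.put(e.getKey(), e.getValue());
        switched.set(true);
    }
}
```

A production version must close the race between the final poll and the flag flip (e.g. under the checkpoint write lock), which is exactly why the list above ties the switch to the first checkpoint.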
- The supplier node deletes all temporary files;
...