...
Is there protection against multiple concurrent dump creation? Do we need it?
There are several important points to consider when writing data to SSD storage.
A typical chart of SSD throughput versus write block size looks like this.
One aim of the configuration is to find the minimum block size at which random and sequential writes show the same throughput.
Data write time was measured with different settings on a MacBook.
1 GB of direct memory was allocated and filled with random data. It was logically split into 8 partitions, each saved to a separate file. Writes used different block sizes (16 KB, 32 KB, ..., 16 MB) and different numbers of threads (1, 2, 4, 8), and the overall time required to fully write the data to disk was measured. Each data point is the average of 20 runs (time in milliseconds); after every run the files were deleted, System.gc() was invoked, and a 10-second pause was taken. This produced the following diagram.
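The measurement loop described above can be sketched roughly as follows. This is a hypothetical, scaled-down reconstruction (the class name, 16 MB total size, and the single block-size/thread-count pair are illustrative; the real run used 1 GB of data and swept block sizes from 16 KB to 16 MB across 1 to 8 threads), not the actual benchmark code.

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;

public class WriteBench {
    // Scaled-down parameters (hypothetical): one block size and one thread
    // count from the grid that the benchmark swept over.
    static final int PARTITIONS = 8;
    static final int PARTITION_SIZE = 2 * 1024 * 1024; // 2 MB per partition here
    static final int BLOCK_SIZE = 64 * 1024;           // one tested block size
    static final int THREADS = 4;                      // one tested thread count

    static long runOnce(Path dir) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        CompletionService<Void> done = new ExecutorCompletionService<>(pool);
        long start = System.nanoTime();
        for (int p = 0; p < PARTITIONS; p++) {
            Path file = dir.resolve("part-" + p);
            done.submit(() -> {
                // Direct (off-heap) buffer filled with random data, as described.
                byte[] random = new byte[PARTITION_SIZE];
                ThreadLocalRandom.current().nextBytes(random);
                ByteBuffer data = ByteBuffer.allocateDirect(PARTITION_SIZE);
                data.put(random);
                data.flip();
                try (FileChannel ch = FileChannel.open(file,
                        StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                    int end = data.limit();
                    while (data.hasRemaining()) {
                        // Restrict the buffer to one block and write it fully.
                        data.limit(Math.min(data.position() + BLOCK_SIZE, end));
                        while (data.hasRemaining()) ch.write(data);
                        data.limit(end);
                    }
                    ch.force(true); // flush to disk, not just the page cache
                }
                return null;
            });
        }
        for (int p = 0; p < PARTITIONS; p++) done.take().get();
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000; // elapsed wall time, ms
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("writebench");
        long ms = runOnce(dir);
        System.out.println("wrote " + (PARTITIONS * (long) PARTITION_SIZE / (1024 * 1024))
                + " MB in " + ms + " ms");
        // Delete the files between runs, as the benchmark did.
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir)) {
            for (Path f : files) Files.delete(f);
        }
        Files.delete(dir);
    }
}
```

A real run would repeat `runOnce` 20 times per (block size, thread count) pair, average the results, and insert the `System.gc()` call and 10-second pause between runs.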
Some conclusions from this run: