...
The main idea is to write a large number of bytes per disk write operation, minimizing switching between files and minimizing other overhead work between writes.
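A minimal Java sketch of this idea, assuming NIO `FileChannel` and a direct buffer; the class name and block size are illustrative, not from the original design. Data is accumulated into a large buffer and flushed in as few `write()` calls as possible, so each syscall carries many bytes:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class BlockWriter {

    // Writes `data` to `file` in chunks of `blockSize` bytes, reusing one
    // direct buffer so each channel write carries a full block.
    static long writeInBlocks(Path file, byte[] data, int blockSize) throws IOException {
        long written = 0;
        ByteBuffer buf = ByteBuffer.allocateDirect(blockSize);
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            int offset = 0;
            while (offset < data.length) {
                int chunk = Math.min(blockSize, data.length - offset);
                buf.clear();
                buf.put(data, offset, chunk);
                buf.flip();
                // A single channel write may be partial; loop until drained.
                while (buf.hasRemaining()) {
                    written += ch.write(buf);
                }
                offset += chunk;
            }
        }
        return written;
    }
}
```

The direct buffer avoids an extra heap-to-native copy on each write; the outer loop keeps the per-write payload at the configured block size.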
...
For desktop PCs it doesn't make sense to use more than one writer thread, but for servers with RAID storage, writing to several files in parallel could be faster overall. The solution should be built on the assumption that there may be several writers. We need to make sure that writers don't pick up data for the same file.
There will be a separate blocking queue for each partition, providing fast information about how full that partition is.
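The per-partition queues and the writer-ownership rule could be sketched as follows; the class name, capacities, and the modulo-based partition assignment are assumptions for illustration, not from the original design:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PartitionQueues {
    // One bounded queue per partition; size() / remainingCapacity() give
    // cheap fullness information without locking the whole structure.
    final BlockingQueue<byte[]>[] queues;

    @SuppressWarnings("unchecked")
    PartitionQueues(int partitions, int capacityPerPartition) {
        queues = new BlockingQueue[partitions];
        for (int i = 0; i < partitions; i++) {
            queues[i] = new ArrayBlockingQueue<>(capacityPerPartition);
        }
    }

    // Producers enqueue into the partition that owns the data;
    // returns false when that partition's queue is full.
    boolean offer(int partition, byte[] chunk) {
        return queues[partition].offer(chunk);
    }

    // Fast fullness check for a partition.
    int fullness(int partition) {
        return queues[partition].size();
    }

    // Each writer drains only partitions it owns (here: a simple
    // partition % writerCount == writerId rule), so two writers can
    // never pick up data for the same file.
    boolean ownedBy(int partition, int writerId, int writerCount) {
        return partition % writerCount == writerId;
    }
}
```

Because each partition is statically owned by exactly one writer, no cross-writer coordination is needed when draining a queue.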
...
...
Is there protection against multiple dumps being created concurrently? Do we need it?
There are several important points to consider when writing data to SSD storage.
A typical chart of SSD throughput versus write block size looks like this.
One of the configuration goals is to find the minimum block size at which random and sequential writes show the same throughput.
Data write time was measured with different settings on a MacBook.
1 GB of direct memory was allocated and filled with random data. It was logically split into 8 partitions and saved into separate files. Different block sizes (16 KB, 32 KB, ... 16 MB) were used for writing, with different numbers of threads (1, 2, 4, 8). The overall time required to fully write the data to disk was measured. For each point, 20 runs were executed and the average time in milliseconds was taken. After every run the files were deleted, System.gc() was executed, and a 10-second pause was taken. This produced the following diagram.
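The measurement loop could be sketched roughly as below; this is an illustrative reconstruction, not the original harness, and it omits the multi-threaded variants, the `System.gc()` call, and the 10-second pause between runs for brevity:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.LinkedHashMap;
import java.util.Map;

public class WriteBenchmark {

    // Writes `data` to `file` in chunks of `blockSize` and returns elapsed millis.
    static long timeWrite(Path file, byte[] data, int blockSize) throws IOException {
        long start = System.nanoTime();
        ByteBuffer buf = ByteBuffer.allocateDirect(blockSize);
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            for (int off = 0; off < data.length; off += blockSize) {
                int chunk = Math.min(blockSize, data.length - off);
                buf.clear();
                buf.put(data, off, chunk);
                buf.flip();
                while (buf.hasRemaining()) ch.write(buf);
            }
            ch.force(true); // include flush-to-disk in the measurement
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Averages `runs` runs per block size, deleting the file between runs.
    static Map<Integer, Long> benchmark(Path file, byte[] data,
                                        int[] blockSizes, int runs) throws IOException {
        Map<Integer, Long> avg = new LinkedHashMap<>();
        for (int bs : blockSizes) {
            long total = 0;
            for (int r = 0; r < runs; r++) {
                total += timeWrite(file, data, bs);
                Files.deleteIfExists(file);
            }
            avg.put(bs, total / runs);
        }
        return avg;
    }
}
```

`force(true)` is included so the timing reflects data actually reaching the device rather than just the page cache.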
Some conclusions for this run: