...

  1. Accumulate bytes from serialized dump entries into a large ByteBuffer and send it to disk once it becomes full. This requires copying data in memory from the small buffers to the large one.
  2. Accumulate ByteBuffers, each containing one serialized dump entry, and send all of them to disk with a single write(ByteBuffer[]) operation.
  3. Serialize dump entries directly into a large ByteBuffer. This is good for dump executor threads, but doesn't seem well suited to transaction threads.

Need to investigate: assuming we write the same total number of bytes, over what range of byte buffer array sizes does a write(ByteBuffer[]) call perform the same as a single write(ByteBuffer) call?
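One way to answer this is a small FileChannel micro-benchmark. The sketch below (class name, buffer counts, and sizes are illustrative, not part of the design) writes the same total number of bytes once as a single large buffer and once as an array of small buffers via a gathering write:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class WriteComparison {
    /** Writes one large buffer with write(ByteBuffer) calls; returns elapsed nanos. */
    static long writeSingle(Path file, int totalBytes) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            ByteBuffer big = ByteBuffer.allocateDirect(totalBytes);
            long t0 = System.nanoTime();
            while (big.hasRemaining())
                ch.write(big);
            return System.nanoTime() - t0;
        }
    }

    /** Writes the same total as n small buffers via gathering write(ByteBuffer[]). */
    static long writeGathering(Path file, int totalBytes, int n) throws IOException {
        ByteBuffer[] parts = new ByteBuffer[n];
        for (int i = 0; i < n; i++)
            parts[i] = ByteBuffer.allocateDirect(totalBytes / n);
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            long t0 = System.nanoTime();
            long left = totalBytes;
            while (left > 0)
                left -= ch.write(parts);
            return System.nanoTime() - t0;
        }
    }

    public static void main(String[] args) throws IOException {
        int total = 1 << 20; // 1 MiB payload in both cases
        Path f1 = Files.createTempFile("single", ".bin");
        Path f2 = Files.createTempFile("gather", ".bin");
        System.out.printf("single:    %d us%n", writeSingle(f1, total) / 1000);
        System.out.printf("gathering: %d us%n", writeGathering(f2, total, 256) / 1000);
    }
}
```

Sweeping the array size parameter over several values would show where the two calls converge on a given storage device.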

The ByteBuffer trip cycle will look like this:

ByteBufferPool → partition/transaction thread → Partition Queue → Partition Queue Dispatcher Thread → Disk Writer Queue → Disk Writer Thread → Free Queue → ByteBuffer Releaser Thread → ByteBufferPool

[Diagram: ByteBuffer life cycle across ByteBufferPool, partition/transaction threads, Partition Queue, Partition Queue Dispatcher Thread, Disk Writer Queue, Disk Writer Thread, Free Queue, and ByteBuffer Releaser Thread]

ByteBuffer pool

The current solution uses thread-local ByteBuffers that grow on demand. This is fine for single-threaded usage, but not suitable for passing buffers to other threads, and it is also not a good fit for transaction threads.

...

  • ByteBufferPool(size) - constructor taking the maximum number of bytes that all buffers together may allocate.
  • ByteBuffer acquire(minSize) - returns a ByteBuffer whose size is a power of 2 and not smaller than minSize. If there is no suitable buffer in the pool, a new one is allocated. If the overall size occupied by all buffers has reached the limit, the method blocks until the required buffer(s) are returned to the pool.
  • release(ByteBuffer) - returns the buffer to the pool and signals waiters in acquire.

Using only power-of-2 buffer sizes simplifies maintenance of the buffers and lookup. Internal structure to use:

List<ByteBuffer>[]

At position i there will be a list of available buffers of size 2^i.

On average the buffers will be filled to 75%: a request always lands between half a buffer's capacity and its full capacity, giving an expected fill ratio of 75% for uniformly distributed entry sizes.
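A minimal sketch of this pool, assuming the API names from the list above. The blocking strategy shown is a simplification: a waiting thread is woken by any release and rechecks whether a buffer of its bucket size is now available.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

/** Sketch of the proposed pool; names follow the API list in the text. */
public class ByteBufferPool {
    private final long maxBytes;
    private long allocated;
    // Bucket i holds free buffers of size 2^i (the List<ByteBuffer>[] structure).
    private final ArrayDeque<ByteBuffer>[] buckets = new ArrayDeque[31];

    public ByteBufferPool(long maxBytes) {
        this.maxBytes = maxBytes;
        for (int i = 0; i < buckets.length; i++)
            buckets[i] = new ArrayDeque<>();
    }

    /** Returns a buffer whose capacity is the smallest power of 2 >= minSize. */
    public synchronized ByteBuffer acquire(int minSize) throws InterruptedException {
        int idx = 32 - Integer.numberOfLeadingZeros(Math.max(1, minSize) - 1);
        int size = 1 << idx;
        while (true) {
            if (!buckets[idx].isEmpty())
                return buckets[idx].poll();
            if (allocated + size <= maxBytes) {
                allocated += size;
                return ByteBuffer.allocateDirect(size);
            }
            wait(); // block until release() puts a suitable buffer back
        }
    }

    /** Returns the buffer to the pool and signals waiters in acquire. */
    public synchronized void release(ByteBuffer buf) {
        buf.clear();
        int idx = Integer.numberOfTrailingZeros(buf.capacity());
        buckets[idx].push(buf);
        notifyAll();
    }
}
```

A single monitor keeps the sketch short; a production version would likely shard the lock or use per-bucket semaphores to reduce contention.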

Writing to disk

We often see that disk I/O is the bottleneck in Ignite, so writing to disk should be made as efficient as possible.

...

For desktop PCs it doesn't make sense to use more than one writer thread, but for servers with RAID storage, writing to several files in parallel could be faster overall. The solution should be built on the assumption that there may be several writers. We need to make sure that writers don't pick up data for the same file.

Partition queue

There will be a separate blocking queue for each partition that provides fast information about its fullness:

  • put(ByteBuffer) - blocking method
  • ByteBuffer[] takeAll() - blocking method, takes all elements
  • long bytes() - fast non-blocking method 
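The three methods above can be sketched as follows. The AtomicLong counter is an implementation assumption, chosen so that bytes() never takes the queue's lock:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.concurrent.atomic.AtomicLong;

/** Sketch of the per-partition queue API listed above. */
public class PartitionQueue {
    private final ArrayDeque<ByteBuffer> q = new ArrayDeque<>();
    // Maintained separately so bytes() stays non-blocking for the dispatcher.
    private final AtomicLong bytes = new AtomicLong();

    /** Blocking method: adds a buffer and wakes a waiting takeAll(). */
    public synchronized void put(ByteBuffer buf) {
        q.add(buf);
        bytes.addAndGet(buf.remaining());
        notifyAll();
    }

    /** Blocking method: waits for at least one buffer, then drains the queue. */
    public synchronized ByteBuffer[] takeAll() throws InterruptedException {
        while (q.isEmpty())
            wait();
        ByteBuffer[] all = q.toArray(new ByteBuffer[0]);
        q.clear();
        bytes.set(0);
        return all;
    }

    /** Fast non-blocking method: current size in bytes. */
    public long bytes() {
        return bytes.get();
    }
}
```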

Partition Queue Dispatcher Thread

The thread will periodically check the disk writer queue size, making sure the disk writer always has enough data to process; the minimum number of queued elements will be 2 for a single disk writer. Once the size drops below that, the thread scans the sizes of the partition queues and picks the winner with the maximum number of bytes. It then takes all buffers from that queue and puts them, together with the file info (channel), into the disk writer queue.

In the case of multiple disk writers, an additional flag should be passed; it is taken into account when choosing the largest partition queue (so that two writers never work on the same file) and is reset on buffer release.
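One dispatch step might look like the sketch below. The Batch record, the file-name format, and the pickFullest/dispatchOnce helpers are hypothetical, and partition queues are modeled as plain deques for brevity:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.BlockingQueue;

public class Dispatcher {
    static final int MIN_PENDING = 2; // keep >= 2 batches queued per disk writer

    /** One unit of work for the disk writer: buffers plus the target file. */
    record Batch(ByteBuffer[] buffers, String file) {}

    /** Picks the partition queue with the most bytes; -1 if all are empty. */
    static int pickFullest(Deque<ByteBuffer>[] queues) {
        int winner = -1;
        long max = 0;
        for (int i = 0; i < queues.length; i++) {
            long bytes = queues[i].stream().mapToLong(ByteBuffer::remaining).sum();
            if (bytes > max) { max = bytes; winner = i; }
        }
        return winner;
    }

    /** One dispatch step: drain the winner into the disk writer queue. */
    static boolean dispatchOnce(Deque<ByteBuffer>[] queues, BlockingQueue<Batch> writerQueue)
        throws InterruptedException {
        if (writerQueue.size() >= MIN_PENDING)
            return false; // disk writer already has enough work queued
        int i = pickFullest(queues);
        if (i < 0)
            return false; // nothing to dispatch
        ByteBuffer[] all = queues[i].toArray(new ByteBuffer[0]);
        queues[i].clear();
        writerQueue.put(new Batch(all, "part-" + i + ".dump")); // hypothetical file name
        return true;
    }
}
```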

Disk Writer and Free Queue 

They could be simple ArrayBlockingQueues.

Open question: Does it make sense to use non-blocking queue here for faster read/write operations on the queue?

Disk Writer Thread

It performs three operations:

  • Take next ByteBuffer[] from queue
  • Save to disk
  • Send ByteBuffer[] to Free Queue
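These three operations can be sketched as the loop below. The Batch record, the STOP poison pill, and the queue wiring are assumptions for illustration, not part of the design:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.concurrent.BlockingQueue;

public class DiskWriterThread extends Thread {
    /** Buffers paired with the channel of their partition file (hypothetical shape). */
    record Batch(ByteBuffer[] buffers, FileChannel channel) {}

    /** Poison pill telling the writer to stop. */
    static final Batch STOP = new Batch(new ByteBuffer[0], null);

    private final BlockingQueue<Batch> in;          // disk writer queue
    private final BlockingQueue<ByteBuffer[]> free; // free queue, drained by the releaser

    DiskWriterThread(BlockingQueue<Batch> in, BlockingQueue<ByteBuffer[]> free) {
        this.in = in;
        this.free = free;
    }

    @Override public void run() {
        try {
            for (Batch b = in.take(); b != STOP; b = in.take()) {
                // 2. Save to disk with a gathering write of the whole batch.
                long left = 0;
                for (ByteBuffer buf : b.buffers())
                    left += buf.remaining();
                while (left > 0)
                    left -= b.channel().write(b.buffers());
                // 3. Hand the buffers to the Free Queue for release.
                free.put(b.buffers());
            }
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```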

ByteBuffer Releaser Thread

Takes buffers from the Free Queue and returns them to the pool. This operation will probably be fast enough that a dedicated thread turns out to be redundant.

Compression

Encryption

Shutdown / Execution Completion / Error handling


Questions

What happens if a dump entry doesn't fit into the pool size?

Other ideas

One of the proposed ideas was to switch from writing several partition-specific files to a single dump file. This idea wasn't pursued because of the complexity of the change and because it would limit multi-threaded I/O, which can be beneficial on some server storage. Sequential writes are also still achievable with multiple partition files.