...
transaction thread ──→ partition thread ──┐
        ↑                                 ↓
ByteBuffer Releaser Thread      Partition Queue Dispatcher Thread
        ↑                                 ↓
Free Queue ← Disk Writer Thread ← Disk Writer Queue
The current solution uses thread-local ByteBuffers that grow as needed. This works for single-threaded usage, but it is not suitable when buffers are passed to other threads, and it is also a poor fit for transaction threads.
Instead, we can use a pool of ByteBuffers that hands out newly allocated or reused ByteBuffers and never exceeds a predefined limit. For example:
class ByteBufferPool
ByteBufferPool(size) - constructor; size is the maximum number of bytes that all buffers together are allowed to allocate.
...
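To make the idea concrete, here is a minimal sketch of such a pool. It is a hypothetical illustration, not the actual implementation: method names other than the constructor (acquire, release) and the blocking strategy (monitor wait/notify) are assumptions.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// Hypothetical sketch of a bounded ByteBuffer pool: reuses returned buffers
// and never allocates more than `capacity` bytes in total.
class ByteBufferPool {
    private final int capacity;   // max bytes allocated by all buffers together
    private int allocated;        // bytes currently allocated (pooled + in use)
    private final ArrayDeque<ByteBuffer> free = new ArrayDeque<>();

    ByteBufferPool(int capacity) {
        this.capacity = capacity;
    }

    // Blocks until a buffer of at least `size` bytes can be handed out.
    synchronized ByteBuffer acquire(int size) throws InterruptedException {
        while (true) {
            // First try to reuse a pooled buffer that is large enough.
            for (ByteBuffer b : free) {
                if (b.capacity() >= size) {
                    free.remove(b);
                    b.clear();
                    return b;
                }
            }
            // Otherwise allocate a new buffer if the pool limit allows it.
            if (allocated + size <= capacity) {
                allocated += size;
                return ByteBuffer.allocate(size);
            }
            wait(); // block until some buffer is released
        }
    }

    synchronized void release(ByteBuffer buf) {
        free.add(buf);
        notifyAll();
    }
}
```

A released buffer keeps its capacity, so a later acquire for a smaller size can reuse it without a new allocation.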
On average the buffers will be filled to about 75% of their capacity.
Suppose there is a request for a 200k buffer while the pool holds many 128k buffers, none larger, and no allocation capacity remains. In this case we take two 128k ByteBuffers and combine them in a ByteBuffersWrapper.
Suppose there is a request for an 11Mb buffer while the pool limit is 10Mb. In this case we wait until all buffers return to the pool, take all of them, allocate an additional 1Mb HeapByteBuffer, and wrap everything in a ByteBuffersWrapper. When the buffers are released, only the 10Mb of pooled buffers is returned to the pool; the extra 1Mb buffer is left for the GC.
ByteBuffersWrapper wraps several ByteBuffers and extends ByteBuffer. It is created in ByteBufferPool#acquire and destroyed in ByteBufferPool#release once all internal buffers have been returned to the pool.
...
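A sketch of the wrapper idea follows. Note one assumption-breaking detail: java.nio.ByteBuffer cannot be subclassed outside the java.nio package (its constructors are package-private), so this illustration composes the chunks behind a small write API instead of extending ByteBuffer; the class and method names are hypothetical.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: presents several ByteBuffers as one logical buffer,
// spilling writes into the next chunk when the current one fills up.
class ByteBuffersWrapper {
    private final ByteBuffer[] chunks;
    private int idx; // index of the chunk currently being written

    ByteBuffersWrapper(ByteBuffer... chunks) {
        this.chunks = chunks;
    }

    // Copies `src` into the wrapper, crossing chunk boundaries as needed.
    void put(byte[] src) {
        int off = 0;
        while (off < src.length) {
            ByteBuffer cur = chunks[idx];
            int n = Math.min(cur.remaining(), src.length - off);
            cur.put(src, off, n);
            off += n;
            if (!cur.hasRemaining())
                idx++; // current chunk is full, move to the next one
        }
    }

    // Logical position = sum of positions of all underlying chunks.
    long position() {
        long p = 0;
        for (ByteBuffer b : chunks)
            p += b.position();
        return p;
    }
}
```

In the 200k example above, the wrapper would hold two 128k chunks; a write of 200k fills the first chunk completely and the second to 72k.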
Disk I/O is often the bottleneck in Ignite, so writing to disk should be made as efficient as possible.
...
Open question: does it make sense to use a non-blocking queue here for faster put/take operations?
It performs three operations:
...
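Based on the pipeline diagram, the disk writer's loop can be sketched as follows. This is an assumed shape, not the actual implementation: the queue types, the FileChannel destination, and the error handling (stopping on the first failure, matching the behavior described below) are illustrative choices.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of the Disk Writer Thread: takes filled buffers from
// the disk writer queue, writes them to disk, and hands them to the free
// queue so the ByteBuffer Releaser Thread can return them to the pool.
class DiskWriter implements Runnable {
    private final BlockingQueue<ByteBuffer> diskWriterQueue;
    private final BlockingQueue<ByteBuffer> freeQueue;
    private final FileChannel out;

    DiskWriter(BlockingQueue<ByteBuffer> diskWriterQueue,
               BlockingQueue<ByteBuffer> freeQueue,
               FileChannel out) {
        this.diskWriterQueue = diskWriterQueue;
        this.freeQueue = freeQueue;
        this.out = out;
    }

    @Override public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                ByteBuffer buf = diskWriterQueue.take(); // 1. take a filled buffer
                buf.flip();
                while (buf.hasRemaining())
                    out.write(buf);                      // 2. write it to disk
                freeQueue.put(buf);                      // 3. hand it off for reuse
            }
        } catch (InterruptedException | IOException e) {
            // On error the whole dump stops; resuming is not supported.
            Thread.currentThread().interrupt();
        }
    }
}
```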
In case of error the whole execution stops. Resuming dump creation is not considered; the only option is to rerun the dump from scratch.
Open question: what happens if a dump entry doesn't fit into the pool size?
...