...

This is not "fair" scheduling. Tasks that have been submit()ed and rejected and put in the blockingWorkQueue are not, in general, all processed ahead of new tasks arriving via other threads calling ExecutorService.submit(). The latter can cut in line in front of the thread servicing the blockingWorkQueue. This is not a bug per se, but it is surprising. The framework simply apparently doesn't have any fairness contract for these thread pools.

...

Why Do We Have This Unusual Arrangement?

It's not clear why PooledExecutorWithDMStats insists on fabricating a synchronous queue for use as the ThreadPoolExecutor's workQueue, even when the user supplies a non-synchronous queue to the constructor.

...

Another issue, not mentioned in ticket 37091 but which seems, in discussions, to have played a part here, was a desire to prefer immediate execution (of a task) over queueing, all the way up to maximumPoolSize. But by setting allowCoreThreadTimeOut(true) and setting corePoolSize = maximumPoolSize we could, with a modern JDK, have our cake and eat it too: eliminate the extra queue and thread, and eliminate the unfairness as well! With those settings and any caller-supplied BlockingQueue as the workQueue (including a non-synchronous one), we should be able to avoid queueing and immediately dispatch tasks to threads, right up to the maximumPoolSize (since that's also the corePoolSize). Then the thread keepAliveTime would kick in and start aging out unused threads, eventually bringing the pool all the way back down to zero if demand stayed away long enough.
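
A minimal sketch of that configuration, assuming illustrative values for the pool size and keep-alive (this is not Geode code, just the ThreadPoolExecutor settings described above):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// With corePoolSize == maximumPoolSize, each new task starts a new thread until
// the pool reaches maximumPoolSize; only then does the workQueue begin to fill.
// allowCoreThreadTimeOut(true) lets idle threads age out, so the pool can shrink
// all the way back to zero when demand goes away.
public class ImmediateDispatchPoolSketch {
  static ThreadPoolExecutor newPool(int maximumPoolSize,
                                    BlockingQueue<Runnable> callerSuppliedQueue) {
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        maximumPoolSize,        // corePoolSize == maximumPoolSize
        maximumPoolSize,
        30, TimeUnit.SECONDS,   // keepAliveTime (illustrative)
        callerSuppliedQueue);   // any BlockingQueue, synchronous or not
    pool.allowCoreThreadTimeOut(true);
    return pool;
  }

  public static void main(String[] args) {
    ThreadPoolExecutor pool = newPool(10, new LinkedBlockingQueue<>());
    pool.execute(() -> System.out.println("ran on " + Thread.currentThread().getName()));
    pool.shutdown();
  }
}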

I ran an experiment with code derived from that Stack Overflow question. It seems to confirm that Geode could now use corePoolSize = maximumPoolSize and allowCoreThreadTimeOut(true) to get immediate dispatch of tasks (without queueing) right up to the maximumPoolSize, and also get pool draining when demand drops.

Here's that experiment:

https://gist.github.com/Bill/3e0587f43171d2d72718a945deb79fb3
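
The gist is the authoritative version; as a rough sketch, the kind of check it performs looks something like this (values and printed expectations are illustrative, not copied from the gist):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the experiment: submit maximumPoolSize long-running
// tasks, confirm they all start immediately without queueing, then let demand
// drop and confirm the pool drains back down to zero threads.
public class PoolDrainExperimentSketch {
  public static void main(String[] args) throws InterruptedException {
    int maximumPoolSize = 10;
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        maximumPoolSize, maximumPoolSize,
        1, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    pool.allowCoreThreadTimeOut(true);

    CountDownLatch release = new CountDownLatch(1);
    for (int i = 0; i < maximumPoolSize; i++) {
      pool.execute(() -> {
        try {
          release.await(); // hold the worker so the pool stays busy
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      });
    }

    // Expect poolSize == maximumPoolSize and an empty queue: immediate dispatch.
    System.out.println("threads = " + pool.getPoolSize()
        + ", queued = " + pool.getQueue().size());

    release.countDown();   // demand drops
    Thread.sleep(3_000);   // wait past keepAliveTime
    System.out.println("threads after idle = " + pool.getPoolSize()); // expect 0
    pool.shutdown();
  }
}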

It seems that this "unusual arrangement" (forcing all task submission through a synchronous queue, shunting rejected tasks to another queue, and running an extra thread that competes, unfairly, with new task submissions to drain that queue) isn't needed as of this writing.

Fixing this to eliminate the extra queue and the extra thread will entail running performance tests. Along with this work we should investigate whether the similar unusual arrangement in FunctionExecutionPooledExecutor can also be addressed. The latter has slightly different behavior in the backlog-processing thread: instead of put()ting items into the ThreadPoolExecutor's workQueue, it offer()s them, and if the offer() fails, the thread runs the task directly.
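
As an illustrative sketch (again, not the actual Geode code), that backlog-processing behavior looks roughly like this:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;

// Illustrative sketch of the FunctionExecutionPooledExecutor variant described
// above: instead of put()ting into the executor's workQueue, the backlog thread
// offer()s; if no idle worker accepts the hand-off, it runs the task itself.
class OfferingDrainerSketch implements Runnable {
  private final BlockingQueue<Runnable> blockingWorkQueue;
  private final ThreadPoolExecutor pool;

  OfferingDrainerSketch(BlockingQueue<Runnable> blockingWorkQueue, ThreadPoolExecutor pool) {
    this.blockingWorkQueue = blockingWorkQueue;
    this.pool = pool;
  }

  @Override
  public void run() {
    try {
      while (true) {
        Runnable task = blockingWorkQueue.take();
        if (!pool.getQueue().offer(task)) {
          task.run(); // no idle worker took the hand-off; run it in this thread
        }
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}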