...

With the MemoryPool, there is a risk that, once memory is filled up with fetch data, we starve the coordinator's connection. For example, if we send a batch of pre-fetches right before returning to the user, these fetches might return before the next call to poll(), in which case we might not have enough memory left to receive heartbeat responses, which would block us from sending additional heartbeats until the next call to poll().

To alleviate this issue, only messages larger than 1Kb will be allocated in the MemoryPool; smaller messages will be allocated directly on the heap like before, which allows group/heartbeat messages to avoid being delayed. In addition, we won't use the MemoryPool for messages received from the coordinator. To do that, we will mark the Node/Channel used by the coordinator with a priority flag. When reading messages off the network, messages coming from a Node/Channel with the flag set will be allocated directly (as if no MemoryPool were in use) and have no chance of being delayed if the MemoryPool fills up.
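
As a rough, hypothetical sketch (not the actual consumer networking code), the following Java snippet shows the intended behavior: reads from a priority-flagged channel (the coordinator's) allocate on the heap as usual, while reads from other channels draw from the MemoryPool and may have to wait when it is exhausted. The MemoryPool interface and ResponseBufferAllocator class below are simplified stand-ins invented for this illustration, not existing Kafka classes.

    import java.nio.ByteBuffer;

    // Simplified stand-in for a bounded memory pool; the real pool would
    // track outstanding allocations against a configured memory bound.
    interface MemoryPool {
        /** Returns a buffer of the requested size, or null if the pool is exhausted. */
        ByteBuffer tryAllocate(int sizeBytes);

        void release(ByteBuffer buffer);
    }

    // Hypothetical read path: channels flagged as priority (the coordinator's
    // connection) bypass the pool entirely, so their responses can never be
    // delayed by fetch data filling the pool.
    final class ResponseBufferAllocator {
        private final MemoryPool pool;

        ResponseBufferAllocator(MemoryPool pool) {
            this.pool = pool;
        }

        /**
         * @param priorityChannel true if the message comes from the coordinator's Node/Channel
         * @return a buffer for the incoming message, or null if the read must be
         *         deferred until pool memory is released
         */
        ByteBuffer allocate(boolean priorityChannel, int messageSizeBytes) {
            if (priorityChannel) {
                // Plain heap allocation, exactly as if no MemoryPool were configured.
                return ByteBuffer.allocate(messageSizeBytes);
            }
            // Fetch data is bounded by the pool and may be delayed when it is full.
            return pool.tryAllocate(messageSizeBytes);
        }
    }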

...

  • Limit sending FetchRequests once a specific number of in-flight requests is reached:

    While this was initially considered, this method would result in a loss of performance. Throttling FetchRequests means that when memory is freed, the consumer first has to send a new FetchRequest and wait for the broker's response before it can consume new messages.

  • Explicit disposal of memory by the user:

    It was suggested to have an explicit call to a dispose() method to free up memory in the MemoryPool. In addition to breaking the API, this seems confusing for Java developers, since memory is normally reclaimed by the garbage collector rather than released explicitly; a hypothetical sketch of such an API follows below.
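
To illustrate why explicit disposal was rejected, here is a hypothetical sketch (not a proposed interface) of what such an API could look like. The DisposableRecords and ExplicitDisposalExample names are invented for this example; the point is that every batch handed to the application would need a matching dispose() call, which changes the existing API and is easy to get wrong.

    // Hypothetical, rejected-style API: the application must explicitly return
    // pooled memory once it is done with a batch of records.
    interface DisposableRecords extends AutoCloseable {
        Iterable<byte[]> records();

        /** Returns the underlying buffer to the MemoryPool. */
        void dispose();

        @Override
        default void close() {
            dispose();
        }
    }

    final class ExplicitDisposalExample {
        static void consume(DisposableRecords batch) {
            // try-with-resources (Java 9+) helps, but nothing forces callers to
            // use it; a forgotten dispose() silently leaks pool capacity and can
            // eventually stall fetching altogether.
            try (batch) {
                for (byte[] record : batch.records()) {
                    process(record);
                }
            }
        }

        private static void process(byte[] record) {
            // application-specific handling of the record payload
        }
    }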