
Status

Current state: Under Discussion

Discussion thread: HERE

JIRA: KAFKA-4133

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

With KIP-74, we now have a good way to limit the size of Fetch responses, but it may still be difficult for users to control overall memory usage, since the consumer sends fetches in parallel to all the brokers that own partitions it is subscribed to. Currently we have:

- fetch.max.bytes: This enables users to control the maximum amount of data returned by the broker for one fetch response.

- max.partition.fetch.bytes: This enables users to control the maximum amount of data per partition returned by the broker.

Neither of these settings takes into account that the consumer sends requests to multiple brokers in parallel, so in practice the memory usage is, as stated in KIP-74: min(num_brokers * fetch.max.bytes, max.partition.fetch.bytes * num_partitions).
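
As an illustration, with the KIP-74 defaults of 50 MB for fetch.max.bytes and 1 MB for max.partition.fetch.bytes, a consumer whose 100 assigned partitions are spread across 10 brokers could buffer up to min(10 * 50 MB, 100 * 1 MB) = 100 MB of fetched data.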

To give users simpler control, it makes sense to add a new setting that properly limits the memory used by Fetch responses in the consumer, in a similar fashion to what we already have on the producer.

Public Interfaces

The following option will be added for consumers to configure (in ConsumerConfig.java):

  1. buffer.memory: Long, Priority High:

    The total bytes of memory the consumer can use to buffer records received from the server and waiting to be processed (decompressed and deserialized).

    This setting differs from the total memory the consumer will use, because some additional memory is needed for decompression (if compression is enabled), for deserialization, and for maintaining in-flight requests.

Alongside this change, we will lower the priority of max.partition.fetch.bytes to Low.
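
For illustration, here is how a consumer could opt into the new bound. This is a minimal sketch: buffer.memory is the setting proposed above, while the bootstrap servers, group id, deserializers and the 32 MB value are arbitrary placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "test-group");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    // New setting: cap the memory used to buffer fetched records at 32 MB
    props.put("buffer.memory", 32 * 1024 * 1024L);

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);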

Proposed Changes

This KIP reuses the MemoryPool interface from KIP-72.

1) At startup, the consumer will initialize a MemoryPool with the size the user specified via buffer.memory. This pool enables the consumer to track how much memory it is using for received messages.

2) In Selector.pollSelectionKeys(), before reading from a socket, the consumer will check whether there is space available in the MemoryPool.

3) If there is space, pollSelectionKeys() will read from the socket and store the messages in the MemoryPool; otherwise, the read is skipped until memory is freed.

4) Once messages are returned to the user, they are released from the MemoryPool so that new messages can be stored (see the sketch below).
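
Below is a minimal sketch of steps 1-4. The MemoryPool interface (tryAllocate/release) is the one introduced by KIP-72, trimmed to the two methods this KIP relies on; the surrounding class and method names are hypothetical and only show where the gating and the release would happen.

    import java.nio.ByteBuffer;

    // KIP-72 MemoryPool, reduced to the two methods used in this sketch.
    interface MemoryPool {
        ByteBuffer tryAllocate(int sizeBytes); // returns null when the pool is exhausted
        void release(ByteBuffer previouslyAllocated);
    }

    // Hypothetical helper illustrating the proposed flow.
    class FetchMemorySketch {
        private final MemoryPool pool; // 1) sized from buffer.memory at startup

        FetchMemorySketch(MemoryPool pool) {
            this.pool = pool;
        }

        // 2) + 3) called from Selector.pollSelectionKeys() before reading a response:
        // if the pool cannot supply a buffer, the read is skipped and the data simply
        // stays in the socket buffer until memory has been freed.
        ByteBuffer maybeAllocateFor(int responseSizeBytes) {
            return pool.tryAllocate(responseSizeBytes); // null means "skip this read"
        }

        // 4) invoked once the corresponding records have been returned to the user.
        void recordsReturned(ByteBuffer buffer) {
            pool.release(buffer); // frees space so new responses can be read
        }
    }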

Caveats:

There is a risk, when using the MemoryPool, that once fetch data has filled up the available memory, we could starve the coordinator's connection.

For example, if we send a bunch of pre-fetches right before returning to the user, these fetches might return before the next call to poll(), in which case we might not have enough memory to receive heartbeats, which would block us from sending additional heartbeats until the next call to poll(). Because heartbeats are tiny, this is unlikely to be a significant issue. In any case, KAFKA-4137 (separate network client) suggests a possible way to alleviate this issue.

Compatibility, Deprecation, and Migration Plan

This KIP should be transparent to users not interested in setting this new configuration. Users wanting to take advantage of this new feature will just need to add this new setting to their consumer's properties.

Rejected Alternatives

  • Limit sending FetchRequests once a specific number of in-flight requests is reached:

    While this was initially considered, this method would result in a loss of performance. Throttling FetchRequests means that when memory is freed, the consumer first has to send a new FetchRequest and wait for the broker's response before it can consume new messages.

