...

PR: https://github.com/apache/kafka/pull/12545

Motivation

Reduce memory copying in Fetcher#parseRecord(): currently, Fetcher#parseRecord(TopicPartition, RecordBatch, Record) deserializes the key and value with Deserializer#deserialize(String topic, Headers headers, byte[] data). It first calls Utils.toArray(ByteBuffer) to convert the record's ByteBuffer into a byte[], which incurs an extra memory allocation and copy, and only then deserializes. We could instead deserialize directly from the ByteBuffer, avoiding that allocation and copy in some cases.

If we add a default method Deserializer#deserialize(String, Headers, ByteBuffer) and use it in Fetcher#parseRecord(TopicPartition, RecordBatch, Record), we can eliminate the extra memory allocation and copy for StringDeserializer and ByteBufferDeserializer; user-customized Deserializers that implement this method gain the same benefit.
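To illustrate the ByteBufferDeserializer case, here is a small sketch (class and method names are assumed for illustration, not taken from the PR) of why a ByteBuffer overload removes work. Today the fetcher converts the record's ByteBuffer to a byte[] (one allocation plus a copy) and the deserializer wraps it back into a ByteBuffer; with the overload, the deserializer can hand the buffer straight through.

```java
import java.nio.ByteBuffer;

public class ZeroCopyDemo {
    // Current path: ByteBuffer -> byte[] -> ByteBuffer (copies the payload).
    static ByteBuffer viaByteArray(ByteBuffer data) {
        byte[] bytes = new byte[data.remaining()];
        data.duplicate().get(bytes);          // the copy we want to avoid
        return ByteBuffer.wrap(bytes);
    }

    // Proposed path: pass the buffer through, no allocation or copy.
    static ByteBuffer viaByteBuffer(ByteBuffer data) {
        return data;
    }

    public static void main(String[] args) {
        ByteBuffer payload = ByteBuffer.wrap(new byte[] {1, 2, 3});
        System.out.println(viaByteArray(payload) == payload);   // false: new buffer
        System.out.println(viaByteBuffer(payload) == payload);  // true: same buffer
    }
}
```

The same reasoning applies to StringDeserializer: an array-backed ByteBuffer can be decoded in place instead of first being copied into a fresh byte[].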

Public Interfaces

We propose adding the default method Deserializer#deserialize(String, Headers, ByteBuffer).
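A hedged sketch of the shape of this default method (the authoritative signature is in the PR; a minimal stand-in interface is used here, with the Headers parameter omitted for brevity). The default implementation falls back to the existing byte[] overload via a toArray-style copy, so existing deserializers keep working unchanged; implementations that can read the ByteBuffer directly override it to skip the copy.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Minimal stand-in for Kafka's Deserializer interface, for illustration only.
interface Deserializer<T> {
    T deserialize(String topic, byte[] data);

    // Proposed addition: defaults to the existing byte[] method, copying the
    // buffer's remaining bytes (as Utils.toArray(ByteBuffer) does).
    default T deserialize(String topic, ByteBuffer data) {
        if (data == null) return deserialize(topic, (byte[]) null);
        byte[] bytes = new byte[data.remaining()];
        data.duplicate().get(bytes); // duplicate() leaves the caller's position intact
        return deserialize(topic, bytes);
    }
}

// Example: a string deserializer overriding the ByteBuffer overload to decode
// array-backed buffers in place, without the intermediate byte[].
class Utf8Deserializer implements Deserializer<String> {
    @Override
    public String deserialize(String topic, byte[] data) {
        return data == null ? null : new String(data, StandardCharsets.UTF_8);
    }

    @Override
    public String deserialize(String topic, ByteBuffer data) {
        if (data == null) return null;
        if (data.hasArray()) {
            return new String(data.array(), data.arrayOffset() + data.position(),
                              data.remaining(), StandardCharsets.UTF_8);
        }
        return Deserializer.super.deserialize(topic, data); // direct buffer: fall back
    }
}
```

Because the new method is a default, the change is source- and binary-compatible: deserializers that never override it behave exactly as before.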

...

Another solution I considered is a PoolArea, like Netty's, but that approach is more complicated.