
Status

Current state"Under Discussion"

Discussion thread: here

JIRA: KAFKA-7632

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

This proposal suggests adding compression level and compression buffer size options to the producer, topic, and broker configurations.

Compression is fundamentally a trade-off between CPU (running time) and I/O (compressed size). Since the best balance depends on the use case, most compression algorithms provide a way to control the compression level, along with a reasonable default that performs well in general. In addition, the compression ratio is also affected by the buffer size, so some users may want to trade compression ratio for more or less memory.

However, Kafka does not provide a way to configure these options. Although the default settings perform well, they do not fit every case. For example:

  1. zstd supports a wide range of compression levels along the Pareto frontier: at a given compression ratio, it decompresses faster than comparable algorithms. Not allowing users to adjust the compression level therefore abandons much of zstd's potential. In fact, Kafka's Go client (sarama) already supports the compression level feature for gzip and zstd.
  2. The default buffer size for lz4 is rather small (64kb). Increasing this value can improve the compression ratio.

Public Interfaces

This feature introduces the following new options to the producer, topic, and broker configurations.

Producer

Name                       Description
compression.level          The compression level to be used by a producer.
compression.buffer.size    The size of the compression buffer to be used by a producer.
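
For illustration, a producer using the proposed options might be configured as below. This is a minimal sketch: 'compression.level' is a key proposed in this KIP, not part of the current ProducerConfig, so it is passed as a plain string.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class CompressionLevelExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "zstd");
            // Proposed in this KIP (not yet in ProducerConfig): raise the zstd level above its default of 3.
            props.put("compression.level", "10");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // send records as usual; batches would be compressed at the configured level
            }
        }
    }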

Topic and Broker

Name                                       Description
compression.level                          The compression level to be used by a broker, if 'compression.type' is neither 'none' nor 'producer'.
compression.buffer.size                    The size of the compression buffer to be used by a broker, if 'compression.type' is neither 'none' nor 'producer'.
compression.[gzip,lz4,zstd].level          The compression level to be used by a broker, if 'compression.type' is 'producer' and the compression type of the produced record is 'gzip', 'lz4', or 'zstd'.
compression.[gzip,snappy,lz4].buffer.size  The size of the compression buffer to be used by a broker, if 'compression.type' is 'producer' and the compression type of the produced record is 'gzip', 'snappy', or 'lz4'.
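
Once implemented, the topic-level options could be set through the Admin API like any other topic config. A sketch, assuming a broker on localhost and a placeholder topic name; 'compression.zstd.level' is one of the keys proposed above:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class SetTopicCompressionLevel {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            try (Admin admin = Admin.create(props)) {
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
                // Proposed key: used when 'compression.type' is 'producer' and records arrive zstd-compressed.
                AlterConfigOp op = new AlterConfigOp(
                        new ConfigEntry("compression.zstd.level", "10"), AlterConfigOp.OpType.SET);
                admin.incrementalAlterConfigs(Map.of(topic, List.of(op))).all().get();
            }
        }
    }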

The valid range and default value of the compression level and buffer size are entirely up to the compression library, so they may change in the future. Their current values are as follows:

Compression level

Compression Codec    Availability    Valid Range                                                 Default
gzip                 Yes             1 (Deflater.BEST_SPEED) ~ 9 (Deflater.BEST_COMPRESSION)     6
snappy               No              -                                                           -
lz4                  Yes             1 ~ 17                                                      9
zstd                 Yes             -131072 ~ 22                                                3
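
Since these bounds are owned by the libraries, they can be queried at run time instead of being hard-coded. A sketch using the JDK's Deflater and the zstd-jni binding Kafka depends on (assuming a recent zstd-jni release that exposes these static methods):

    import java.util.zip.Deflater;
    import com.github.luben.zstd.Zstd;

    public class CompressionLevelRanges {
        public static void main(String[] args) {
            // gzip (java.util.zip): fixed range 1 ~ 9
            System.out.println("gzip: " + Deflater.BEST_SPEED + " ~ " + Deflater.BEST_COMPRESSION);
            // zstd (zstd-jni): the library reports its own bounds and default, which may change across releases
            System.out.println("zstd: " + Zstd.minCompressionLevel() + " ~ " + Zstd.maxCompressionLevel()
                    + ", default " + Zstd.defaultCompressionLevel());
        }
    }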

Compression buffer size

Compression Codec    Availability    Valid Range                              Default         Note
gzip                 Yes             Positive integer                         8192 (8kb)      Kafka's own default.
snappy               Yes             Positive integer                         32768 (32kb)    Library default.
lz4                  Yes             4 ~ 7 (4=64kb, 5=256kb, 6=1mb, 7=4mb)    4 (64kb)        Kafka's own default.
zstd                 No              -                                        -               -
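
The lz4 values 4 ~ 7 are the block-maximum-size codes defined by the LZ4 frame format; each code c corresponds to 1 << (2*c + 8) bytes, as the following sketch shows:

    public class Lz4BlockSizes {
        public static void main(String[] args) {
            // LZ4 frame format: block maximum size code c maps to 1 << (2*c + 8) bytes
            for (int code = 4; code <= 7; code++) {
                System.out.println(code + " -> " + (1 << (2 * code + 8)) + " bytes");
            }
            // prints: 4 -> 65536, 5 -> 262144, 6 -> 1048576, 7 -> 4194304
        }
    }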

Proposed Changes

Compression will be performed according to the new options.

Producer

The record batches will be compressed with the specified level, using the specified compression buffer size (see the sketch after the list below).

  • If the specified option is not supported by the codec ('compression.type'), it is ignored (e.g., compression level for snappy, or buffer size for zstd).
  • If the specified value is out of the valid range (or otherwise invalid), an error is raised.
  • If no value is specified, it falls back to the default one.
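
As a sketch of where the options plug in, consider gzip: the JDK's GZIPOutputStream already accepts a buffer size, and its underlying Deflater accepts a level. The class below is hypothetical; only GZIPOutputStream, its protected 'def' field, and Deflater.setLevel are real JDK APIs.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.zip.GZIPOutputStream;

    // Hypothetical: a gzip stream wired to the two new producer options.
    public class ConfigurableGzipOutputStream extends GZIPOutputStream {
        public ConfigurableGzipOutputStream(OutputStream out, int bufferSize, int level) throws IOException {
            super(out, bufferSize);  // 'compression.buffer.size' (Kafka currently hard-codes 8192)
            def.setLevel(level);     // 'compression.level' (valid range 1 ~ 9 for gzip, default 6)
        }
    }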

Broker

After the offsets are assigned to the record batches sent by the producer, they will be recompressed as follows:

  • For all options:
    • Topic configuration comes first; if it is not set, the broker configuration is used. (In other words, topic configuration overrides broker configuration; see the sketch after this list.)
    • If the specified option is not supported by the codec ('compression.type'), it is ignored (e.g., compression level for snappy, or buffer size for zstd).
    • If no value is specified, it falls back to the default one.
  • If 'compression.type' is neither 'none' nor 'producer', the record batches will be recompressed using the 'compression.type' codec with 'compression.level' and 'compression.buffer.size'. If either of those values is out of the valid range (or otherwise invalid), an error is raised.
  • If 'compression.type' is 'producer', the record batches will be recompressed using the same codec as the produced records, with the corresponding 'compression.[gzip,snappy,lz4,zstd].level' and 'compression.[gzip,snappy,lz4,zstd].buffer.size'. If either of those values is out of the valid range (or otherwise invalid), an error is raised.
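
A minimal sketch of the proposed resolution order; the method and variable names are illustrative, not actual Kafka code:

    public class CompressionConfigResolution {
        // Resolution order for each option: topic config > broker config > codec default.
        static int resolve(Integer topicValue, Integer brokerValue, int codecDefault) {
            if (topicValue != null) return topicValue;    // topic configuration overrides everything
            if (brokerValue != null) return brokerValue;  // then the broker configuration applies
            return codecDefault;                          // otherwise, fall back to the library default
        }

        public static void main(String[] args) {
            // e.g., no topic override, broker sets level 10, zstd's library default is 3
            System.out.println(resolve(null, 10, 3)); // prints 10
        }
    }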

Compatibility, Deprecation, and Migration Plan

Since this update falls back to the default compression level and the current buffer size when the new options are not set, there is no backward compatibility problem.

Rejected Alternatives

Can we support the compression level feature only?

We can, but after some discussion, we decided that supporting both options at once is better, since both of them impact compression. So we expanded the initial proposal, which handled the compression level only.

Can we support a universal 'default compression level' value for the producer config?

Impossible. Currently, most compression codecs allow adjusting the compression level with an int value, and that seems unlikely to change. However, not all of these codecs support a value that denotes 'default compression level', and where such a value exists, it differs between codecs: gzip uses -1 for the default level, but zstd used 0. Moreover, since the latest releases of zstd allow negative compression levels, the meaning of level 0 is also changing.

For these reasons, we can't provide a universal int value to denote the default compression level.
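
The mismatch is visible directly in the libraries (a sketch using the JDK's Deflater and zstd-jni):

    import java.util.zip.Deflater;
    import com.github.luben.zstd.Zstd;

    public class DefaultLevelMismatch {
        public static void main(String[] args) {
            // gzip/zlib reserves -1 to mean "use the default level" ...
            System.out.println("gzip default marker: " + Deflater.DEFAULT_COMPRESSION);
            // ... while zstd reports its default (currently 3) as an ordinary level,
            // and 0 is no longer a safe sentinel now that negative levels exist.
            System.out.println("zstd default level: " + Zstd.defaultCompressionLevel());
        }
    }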

Can we use the external dictionary feature?

This feature would require an option to specify the dictionary for the supported codecs, e.g., snappy, lz4, and zstd. It is obviously beyond the scope of this KIP.
