
Status

Current state: Under Discussion

Discussion thread: here

JIRA: KAFKA-4514

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

In September 2016, Facebook announced a new compression algorithm named ZStandard, which is designed to scale with modern data processing environments. Thanks to its excellent performance in both speed and compression ratio, Hadoop and HBase will support ZStandard in the near future.

I propose that Kafka add support for ZStandard compression, along with new configuration options and a binary log format update.

Before we go further, it is worth looking at benchmark results for ZStandard. I compared the compressed size and compression time of three 1kb messages (3,102 bytes in total), using a draft implementation of a ZStandard CompressionCodec and all currently available CompressionCodecs. You can see the benchmark code in this commit. All elapsed times are the average of 1,000 trials.

 

| Codec     | Level | Size  | Time    | Description                             |
|-----------|-------|-------|---------|-----------------------------------------|
| Gzip      | -     | 396   | 153     |                                         |
| Snappy    | -     | 1,063 | 37      |                                         |
| LZ4       | -     | 387   | 57      |                                         |
| ZStandard | 1     | 374   | 56      | Speed-first setting.                    |
| ZStandard | 2     | 374   | 58      |                                         |
| ZStandard | 3     | 379   | 83      | Facebook's recommended default setting. |
| ZStandard | 4     | 379   | 226     |                                         |
| ZStandard | 5     | 373   | 102     |                                         |
| ZStandard | 6     | 373   | 252     |                                         |
| ZStandard | 7     | 373   | 667     |                                         |
| ZStandard | 8     | 373   | 707     |                                         |
| ZStandard | 9     | 373   | 830     |                                         |
| ZStandard | 10    | 373   | 1,029   |                                         |
| ZStandard | 11    | 373   | 1,973   |                                         |
| ZStandard | 12    | 373   | 1,985   |                                         |
| ZStandard | 13    | 373   | 2,352   |                                         |
| ZStandard | 14    | 373   | 2,324   |                                         |
| ZStandard | 15    | 374   | 1,668   |                                         |
| ZStandard | 16    | 374   | 4,996   |                                         |
| ZStandard | 17    | 371   | 2,418   |                                         |
| ZStandard | 18    | 371   | 7,434   |                                         |
| ZStandard | 19    | 368   | 9,997   |                                         |
| ZStandard | 20    | 368   | 24,701  |                                         |
| ZStandard | 21    | 368   | 90,044  |                                         |
| ZStandard | 22    | 368   | 282,768 | Size-first setting.                     |

 

As you can see above, ZStandard outperforms the existing algorithms in both compression ratio and speed, especially with the speed-first setting (level 1).
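To illustrate the benchmark methodology (this is not the original benchmark code, which is linked above; it only shows the shape of the measurement, using the JDK's built-in gzip codec), a minimal timing harness might look like:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GzipBench {
    // Compress the payload once and return the compressed size in bytes.
    static int gzipSize(byte[] payload) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
                gz.write(payload);
            }
            return out.size();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Three ~1kb messages (3,102 bytes total), filled with a repeating
        // pattern so the codec has something to compress.
        byte[] payload = new byte[3102];
        for (int i = 0; i < payload.length; i++)
            payload[i] = (byte) "kafka-message-".charAt(i % 14);

        int trials = 1000;
        long start = System.nanoTime();
        int size = 0;
        for (int i = 0; i < trials; i++)
            size = gzipSize(payload);
        long avgMicros = (System.nanoTime() - start) / trials / 1_000;
        System.out.println("gzip: size=" + size + " avg=" + avgMicros + "us");
    }
}
```

The same loop, swapped over each codec's output stream, yields the averaged figures in the table.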

Public Interfaces

This feature requires modifications to both the configuration options and the binary log format.

Configuration

A new option, 'zstd', will be added to the compression.type property, which is used when configuring producers, topics, and brokers.
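As a sketch, a producer configured with the proposed value would look like the following (note this value is only valid once this KIP is implemented; current Kafka versions reject it):

```java
import java.util.Properties;

public class ZstdProducerConfig {
    // Builds producer properties using the proposed 'zstd' value for
    // compression.type; the other codecs ('gzip', 'snappy', 'lz4', 'none')
    // already work today.
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("compression.type", "zstd"); // new option proposed by this KIP
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("compression.type"));
    }
}
```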

Binary Log Format

Bit 2 of the 1-byte "attributes" field in Message will be used to denote ZStandard compression. Currently, the lowest 3 bits (bit 0 ~ bit 2) of the attributes field are reserved for the compression codec. Since only 4 compression codecs (NoCompression, GZipCompression, SnappyCompression, and LZ4Compression) are currently supported, bit 2 has not been used until now. In other words, adopting ZStandard will introduce a new bit flag in the binary log format.
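To make the bit layout concrete, here is a minimal sketch (not Kafka's actual source; method names are illustrative) of how a codec id could be read from and written to the attributes byte. The id 4 (binary 100, i.e. bit 2 set) would denote ZStandard:

```java
public class MessageAttributes {
    // The lowest 3 bits (bit 0 ~ bit 2) of the attributes byte hold the
    // compression codec id.
    static final int COMPRESSION_CODEC_MASK = 0x07;

    // Codec ids in use today: 0 = none, 1 = gzip, 2 = snappy, 3 = lz4.
    // The value 4 (first value with bit 2 set) would denote zstd.
    static int codecId(byte attributes) {
        return attributes & COMPRESSION_CODEC_MASK;
    }

    static byte withCodec(byte attributes, int codecId) {
        return (byte) ((attributes & ~COMPRESSION_CODEC_MASK) | codecId);
    }

    public static void main(String[] args) {
        byte attrs = withCodec((byte) 0, 4); // mark as zstd-compressed
        System.out.println(codecId(attrs));  // prints 4
    }
}
```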

Proposed Changes

  1. Add a new dependency on a Java binding of the ZStandard compression library.
  2. Add a new value to the CompressionType enum and define ZStdCompressionCodec in the kafka.message package.
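The enum change in step 2 could look roughly like the following sketch. The names mirror Kafka's CompressionType enum and the id assignment (4) follows the attributes-byte layout described above, but this is illustrative, not the actual patch:

```java
// Sketch of a CompressionType-style enum extended with ZSTD.
public enum CompressionType {
    NONE(0, "none"),
    GZIP(1, "gzip"),
    SNAPPY(2, "snappy"),
    LZ4(3, "lz4"),
    ZSTD(4, "zstd"); // new codec proposed by this KIP

    public final int id;      // value stored in the attributes bits
    public final String name; // value used in compression.type configs

    CompressionType(int id, String name) {
        this.id = id;
        this.name = name;
    }

    // Look up a codec by the id read from a message's attributes byte.
    public static CompressionType forId(int id) {
        for (CompressionType type : values())
            if (type.id == id)
                return type;
        throw new IllegalArgumentException("Unknown compression type id: " + id);
    }
}
```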

You can check the proof-of-concept implementation of this feature in this pull request.

Compatibility, Deprecation, and Migration Plan

None.

Rejected Alternatives

None yet.

Related issues

This update raises some related issues for Kafka.

Whether to use an existing library or not

There are two ways of adding ZStandard to Kafka, each with its pros and cons.

  1. Use an existing binding library.
    • Pros
      • Fast to implement.
      • Building Kafka does not require ZStandard to be pre-installed on the environment.
    • Cons
      • Somebody has to keep an eye on updates to both the binding library and ZStandard itself, and update the binding library for Kafka when needed.
  2. Write JNI bindings directly.
    • Pros
      • We can concentrate on updates to ZStandard only.
    • Cons
      • ZStandard has to be pre-installed before building Kafka.
      • A little cumbersome to implement.

The draft implementation takes the first approach, following Kafka's existing Snappy support. (In contrast, Hadoop follows the latter approach.) You can see the JNI binding library used here. However, I thought it would be much better to discuss the alternatives, since I am a newbie to Kafka.

Whether to support the dictionary feature or not

ZStandard supports a dictionary feature, which boosts compression efficiency by sharing a pre-trained dictionary across messages. Since Kafka log messages tend to have repeated patterns, supporting this feature could improve efficiency one step further. However, it requires a new configuration option pointing to the location of the dictionary.
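Such an option is not part of this proposal, but a hypothetical configuration might look something like this (the dictionary property name and path are invented for illustration only):

```properties
# Hypothetical sketch, not proposed by this KIP:
compression.type=zstd
# Invented property name; a real proposal would need to define naming,
# scope (producer/topic/broker), and how the dictionary is distributed.
compression.zstd.dictionary.path=/etc/kafka/zstd.dict
```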
