
Status

Current state: Under Discussion

Discussion thread: here

JIRA: KAFKA-4514

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

In September 2016, Facebook announced a new compression implementation named ZStandard, which is designed to scale with modern data processing environments. Given its great performance in both speed and compression ratio, Hadoop and HBase are expected to support ZStandard in the near future.

I propose adding ZStandard compression support to Kafka, along with new configuration options and a binary log format update.

Before going further, it is worth looking at a benchmark of ZStandard. I compared the compressed size and compression time of three 1 kB messages (3,102 bytes in total), using a draft implementation of a ZStandard CompressionCodec and all currently available CompressionCodecs. All elapsed times are the average of 20 trials.

 

| Codec | Level | Size (bytes) | Time | Description |
|-----------|-------|--------------|---------|------------------------------------------|
| Gzip | - | 396 | 11,543,001 | |
| Snappy | - | 1,063 | 5,132,056 | |
| LZ4 | - | 387 | 2,066,373 | |
| Zstandard | 1 | 374 | 1,152 | Speed-first setting. |
| Zstandard | 2 | 374 | 12,549 | |
| Zstandard | 3 | 379 | 14,899 | Facebook's recommended default setting. |
| Zstandard | 4 | 379 | 11,673 | |
| Zstandard | 5 | 373 | 13,197 | |
| Zstandard | 6 | 373 | 12,640 | |
| Zstandard | 7 | 373 | 14,367 | |
| Zstandard | 8 | 373 | 21,143 | |
| Zstandard | 9 | 373 | 17,023 | |
| Zstandard | 10 | 373 | 23,525 | |
| Zstandard | 11 | 373 | 35,467 | |
| Zstandard | 12 | 373 | 14,358 | |
| Zstandard | 13 | 373 | 16,316 | |
| Zstandard | 14 | 373 | 19,332 | |
| Zstandard | 15 | 374 | 35,253 | |
| Zstandard | 16 | 374 | 35,208 | |
| Zstandard | 17 | 371 | 18,179 | |
| Zstandard | 18 | 371 | 28,485 | |
| Zstandard | 19 | 368 | 26,518 | |
| Zstandard | 20 | 368 | 58,522 | |
| Zstandard | 21 | 368 | 148,507 | |
| Zstandard | 22 | 368 | 405,486 | Size-first setting. |

 

As the table shows, ZStandard outperforms all the existing codecs in both compression ratio and speed, especially with the speed-first setting (level 1).

Public Interfaces

This feature requires modifications to both the configuration options and the binary log format.

Configuration

A new value, 'zstd', will be added to the compression.type property, which is used to configure the Producer, Topic, and Broker.
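Assuming the new value is adopted, a producer would enable it exactly like the existing compression.type values (a minimal sketch; the broker address is illustrative, and the KafkaProducer construction itself is omitted since it needs a running cluster):

```java
import java.util.Properties;

public class ZstdProducerConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative address
        // "zstd" is the new compression.type value proposed by this KIP
        props.put("compression.type", "zstd");
        // new KafkaProducer<>(props) would then compress record batches
        // with ZStandard before sending them to the broker
        System.out.println(props.getProperty("compression.type"));
    }
}
```

The same value would apply to the topic-level and broker-level compression.type settings.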

Binary Log Format

Bit 2 of the 1-byte "attributes" field in Message will be used to denote ZStandard compression. Currently, the lowest 3 bits (bit 0 ~ bit 2) of the attributes field are reserved for the compression codec, but since only 4 codecs (NoCompression, GZipCompression, SnappyCompression, and LZ4Compression) are supported today, their ids (0 ~ 3) fit in the lowest 2 bits and bit 2 has never been set. In other words, adopting ZStandard (codec id 4) introduces a new bit flag into the binary log format.
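The codec-bit handling above can be sketched as follows (constant and method names are illustrative, not Kafka's actual identifiers; the 0x07 mask corresponds to the 3 reserved bits described above):

```java
public class Attributes {
    // The lowest 3 bits of the attributes byte hold the compression codec id.
    static final int COMPRESSION_CODEC_MASK = 0x07;

    static final int NONE = 0, GZIP = 1, SNAPPY = 2, LZ4 = 3;
    static final int ZSTD = 4; // proposed: the first codec id with bit 2 set

    // Clear the codec bits, then store the given codec id.
    static byte withCodec(byte attributes, int codec) {
        return (byte) ((attributes & ~COMPRESSION_CODEC_MASK) | codec);
    }

    // Extract the codec id from the attributes byte.
    static int codec(byte attributes) {
        return attributes & COMPRESSION_CODEC_MASK;
    }

    public static void main(String[] args) {
        byte attrs = withCodec((byte) 0, ZSTD);
        System.out.println(codec(attrs)); // prints 4
    }
}
```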

Proposed Changes

  1. Add a new dependency on a Java binding for ZStandard compression.
  2. Add a new value to the CompressionType enum and define ZstdCompressionCodec in the kafka.message package.

You can check the proof-of-concept implementation of this feature in this Pull Request.
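The enum change in step 2 could look roughly like the following (a sketch only; Kafka's actual CompressionType carries more behavior, and the field and method names here are simplified assumptions):

```java
// Sketch of extending a CompressionType-style enum with the proposed codec.
public enum CompressionType {
    NONE(0, "none"),
    GZIP(1, "gzip"),
    SNAPPY(2, "snappy"),
    LZ4(3, "lz4"),
    ZSTD(4, "zstd"); // proposed addition: codec id 4 in the attributes byte

    public final int id;      // value stored in the message attributes bits
    public final String name; // value used for the compression.type property

    CompressionType(int id, String name) {
        this.id = id;
        this.name = name;
    }

    // Resolve the enum constant from a codec id read off the wire.
    public static CompressionType forId(int id) {
        for (CompressionType t : values())
            if (t.id == id) return t;
        throw new IllegalArgumentException("Unknown compression type id: " + id);
    }
}
```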

Compatibility, Deprecation, and Migration Plan

None.

Rejected Alternatives

None yet.

Related issues

This update raises some related issues for Kafka.

Whether to use existing library or not

There are two ways of adapting ZStandard to Kafka, each of which has its pros and cons.

  1. Use an existing binding library.
    • Pros
      • Quick to implement.
      • The build does not require ZStandard to be pre-installed in the environment.
    • Cons
      • Somebody has to track updates to both the binding library and ZStandard itself and, when needed, update the binding library so Kafka can adopt them.
  2. Add JNI bindings directly.
    • Pros
      • Only ZStandard's own updates need to be tracked.
    • Cons
      • ZStandard has to be pre-installed before building Kafka.
      • Somewhat more cumbersome to implement.

The draft implementation adopts the first approach, following Kafka's existing Snappy support. (In contrast, Hadoop follows the latter approach.) You can see the JNI binding library used here. However, since I am new to Kafka, I think it would be better to discuss the alternatives.
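If the first approach is taken, the dependency would be declared in Kafka's Gradle build along these lines (the coordinates are those of the zstd-jni binding linked above; the version string is a placeholder, not a pinned recommendation):

```groovy
dependencies {
    // Java binding for ZStandard; the JNI wrapper bundles the native libraries,
    // so ZStandard does not need to be pre-installed on the build machine.
    compile "com.github.luben:zstd-jni:<version>"
}
```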

Whether to support dictionary feature or not

ZStandard supports a dictionary feature, which boosts efficiency by sharing a pre-trained dictionary across compression calls. Since Kafka log messages often contain repeated patterns, supporting this feature could improve efficiency one step further. However, it would require a new configuration option pointing to the location of the dictionary.
