Status
Current state: Under Discussion
Discussion thread: here
JIRA: KAFKA-4514
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
In September 2016, Facebook announced a new compression implementation named ZStandard, which is designed to scale with modern data processing environments. Thanks to its excellent performance in both speed and compression ratio, Hadoop and HBase will support ZStandard in the near future.
I propose adding support for ZStandard compression to Kafka, along with new configuration options and an update to the binary log format.
Before we go further, it is worth looking at a benchmark of ZStandard. I compared the compressed size and compression time of three 1 KB messages (3,102 bytes in total), using a draft implementation of a ZStandard CompressionCodec and all currently available CompressionCodecs. All elapsed times are the average of 20 trials.
| Codec | Level | Size (bytes) | Time | Description |
|---|---|---|---|---|
| Gzip | - | 396 | 11,543,001 | |
| Snappy | - | 1,063 | 5,132,056 | |
| LZ4 | - | 387 | 2,066,373 | |
| Zstandard | 1 | 374 | 1,152 | Speed-first setting. |
| | 2 | 374 | 12,549 | |
| | 3 | 379 | 14,899 | Facebook's recommended default setting. |
| | 4 | 379 | 11,673 | |
| | 5 | 373 | 13,197 | |
| | 6 | 373 | 12,640 | |
| | 7 | 373 | 14,367 | |
| | 8 | 373 | 21,143 | |
| | 9 | 373 | 17,023 | |
| | 10 | 373 | 23,525 | |
| | 11 | 373 | 35,467 | |
| | 12 | 373 | 14,358 | |
| | 13 | 373 | 16,316 | |
| | 14 | 373 | 19,332 | |
| | 15 | 374 | 35,253 | |
| | 16 | 374 | 35,208 | |
| | 17 | 371 | 18,179 | |
| | 18 | 371 | 28,485 | |
| | 19 | 368 | 26,518 | |
| | 20 | 368 | 58,522 | |
| | 21 | 368 | 148,507 | |
| | 22 | 368 | 405,486 | Size-first setting. |
As the table shows, ZStandard outperforms all existing algorithms in both compression ratio and speed, especially with the speed-first setting (level 1).
Public Interfaces
This feature requires modifications to both the configuration options and the binary log format.
Configuration
A new value, 'zstd', will be added to the compression.type property, which is used to configure Producer, Topic and Broker.
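For illustration, here is a minimal sketch of a producer configuration using the proposed value; the 'zstd' value is what this KIP proposes, while the rest is the standard producer configuration:

```java
import java.util.Properties;

public class ZstdProducerConfig {
    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Existing values are none, gzip, snappy and lz4; this KIP adds "zstd".
        props.put("compression.type", "zstd");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildConfig().getProperty("compression.type"));
    }
}
```

The same value would be accepted for the topic-level and broker-level compression.type settings.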
Binary Log Format
Bit 2 of the 1-byte "attributes" field in Message will be used to denote ZStandard compression. Currently, the lowest 3 bits (bit 0 through bit 2) of the attributes field are reserved for the compression codec. Since only four compression codecs (NoCompression, GZipCompression, SnappyCompression and LZ4Compression) are currently supported, bit 2 has not been used until now. In other words, adopting ZStandard introduces a new bit flag in the binary log format.
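To make the bit layout concrete, here is a small sketch of how a codec id is packed into and read out of the attributes byte. The 3-bit mask follows the description above; assigning id 4 to ZStandard is an assumption of this sketch (it is simply the first id that sets bit 2, since ids 0–3 are taken by the existing codecs):

```java
public class MessageAttributes {
    // The lowest 3 bits of the attributes byte hold the compression codec id.
    static final int COMPRESSION_CODEC_MASK = 0x07;

    // Existing codec ids 0-3; ZSTD = 4 (assumed) is the first id using bit 2.
    static final int NONE = 0, GZIP = 1, SNAPPY = 2, LZ4 = 3, ZSTD = 4;

    // Pack a codec id into an attributes byte, leaving the other bits untouched.
    static byte withCodec(byte attributes, int codec) {
        return (byte) ((attributes & ~COMPRESSION_CODEC_MASK) | codec);
    }

    // Extract the codec id from an attributes byte.
    static int codecOf(byte attributes) {
        return attributes & COMPRESSION_CODEC_MASK;
    }
}
```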
Proposed Changes
- Add a new dependency on a Java binding for ZStandard compression.
- Add a new value to the CompressionType enum and define ZStdCompressionCodec in the kafka.message package.
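A minimal sketch of the proposed enum change, assuming the id 4 and the string name "zstd" (both are assumptions of this sketch, not finalized values):

```java
// Sketch of the CompressionType enum with the proposed ZSTD value added.
public enum CompressionType {
    NONE(0, "none"),
    GZIP(1, "gzip"),
    SNAPPY(2, "snappy"),
    LZ4(3, "lz4"),
    // Proposed: id 4 is the first codec id that uses bit 2 of the attributes byte.
    ZSTD(4, "zstd");

    public final int id;
    public final String name;

    CompressionType(int id, String name) {
        this.id = id;
        this.name = name;
    }

    // Resolve a codec id (as read from the attributes byte) back to a type.
    public static CompressionType forId(int id) {
        for (CompressionType type : values()) {
            if (type.id == id) return type;
        }
        throw new IllegalArgumentException("Unknown compression type id: " + id);
    }
}
```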
You can check the proof-of-concept implementation of this feature in this Pull Request.
Compatibility, Deprecation, and Migration Plan
None.
Rejected Alternatives
None yet.
Related issues
This update raises some related issues for Kafka.
Whether to use an existing library or not
There are two ways to adopt ZStandard in Kafka, each of which has its pros and cons.
- Use an existing binding library.
  - Pros
    - Fast to get working.
    - The build does not need ZStandard pre-installed in the environment.
  - Cons
    - Somebody has to keep an eye on updates of both the binding library and ZStandard itself and, if needed, update the binding library to adapt it to Kafka.
- Add JNI bindings directly.
  - Pros
    - We can concentrate on updates of ZStandard only.
  - Cons
    - ZStandard has to be pre-installed before building Kafka.
    - A little more cumbersome to work with.
The draft implementation takes the first approach, following Kafka's existing Snappy support. (In contrast, Hadoop follows the latter approach.) You can see the JNI binding library used here. However, since I am a newcomer to Kafka, I thought it would be better to discuss the alternatives.
Whether to support the dictionary feature or not
ZStandard supports a dictionary feature, which boosts efficiency by sharing a learned dictionary across compressed payloads. Since Kafka log messages tend to contain repeated patterns, supporting this feature could improve efficiency one step further. However, it requires a new configuration option pointing to the location of the dictionary.
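If the dictionary feature were adopted, the new option could take a shape like the following fragment. Note that the property name `compression.zstd.dictionary.path` and the file path are purely hypothetical, chosen only to illustrate what such an option might look like; no name has been proposed or decided:

```properties
# Hypothetical configuration (not part of this KIP's current proposal):
# point the client/broker at a pre-trained ZStandard dictionary file.
compression.type=zstd
compression.zstd.dictionary.path=/etc/kafka/zstd/messages.dict
```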