

Status

Current state: Under Discussion

Discussion thread: here

JIRA: here

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Compression is often used in Kafka to trade extra CPU usage in Kafka clients for reduced storage and network usage on Kafka brokers. Compression is most commonly performed by producers, though brokers can also be configured to compress for situations where producers do not have spare CPU cycles. Regardless of the configuration used, the best compression algorithm varies with the needs of each use case.

To determine which compression algorithm to use, it is often helpful to quantify the savings in storage, ingress bandwidth (if any), replication bandwidth, and egress bandwidth, all of which are a function of how much the compression algorithm reduces the overall size of the messages. Because the performance characteristics of each compression algorithm are highly dependent on the data being compressed, measuring the reduction in data size typically requires the user to produce data into Kafka with each compression algorithm and measure the resulting bandwidth utilization and log size for each use case. This process is time-consuming and, if the user is not careful, can easily produce vague or misleading results.

Public Interfaces


A public interface is any change to the following:

  • Binary log format

  • The network protocol and api behavior

  • Any class in the public packages under clients

    • Configuration, especially client configuration

    • org/apache/kafka/common/serialization

    • org/apache/kafka/common

    • org/apache/kafka/common/errors

    • org/apache/kafka/clients/producer

    • org/apache/kafka/clients/consumer (eventually, once stable)

  • Monitoring

  • Command line tools and arguments

  • Anything else that will likely break existing users in some way when they upgrade


A new command line tool called kafka-compression-analyzer.sh that measures what the size of a log segment would be after compressing it with each of the compression types supported by Kafka. The tool is read-only and does not modify the log segment being analyzed. It will accept several command line parameters:


| Parameter | Required | Description |
| --------- | -------- | ----------- |
| --log | Yes | Specifies the log file to analyze for the savings of compression. |
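A hypothetical invocation might look like the following (the segment path is illustrative, and the tool itself is the one proposed by this KIP, not an existing script):

```shell
# Read-only analysis of a single log segment; the segment is not modified.
bin/kafka-compression-analyzer.sh \
  --log /var/kafka-logs/my-topic-0/00000000000000000000.log
```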



Proposed Changes


kafka-compression-analyzer.sh aims to compress messages in the same manner a producer would and record the difference in size for each batch. The tool sequentially iterates over each RecordBatch in a log file (much like kafka-dump-log.sh), compresses it into a new MemoryRecords object, and records the size of the batch both before and after compression. Since the tool only compresses existing batches as they were written to the log file and does not merge or split them, it effectively measures the resulting log size as if compression were enabled without any other producer configuration being changed (e.g. linger.ms).
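The per-batch measurement can be sketched as follows. This is an illustrative sketch, not the proposed implementation: it uses Python's standard-library codecs (zlib, lzma) as stand-ins for Kafka's supported compression types, and a list of byte strings in place of iterating over RecordBatches in a log segment; all names are hypothetical.

```python
import lzma
import zlib

# Stand-ins for Kafka's compression types; the real tool would use the
# same codec implementations a producer uses (gzip, snappy, lz4, zstd).
CODECS = {
    "zlib": zlib.compress,
    "lzma": lzma.compress,
}

def analyze_batches(batches):
    """Record the total size of all batches before and after compressing
    each batch individually with every codec, mirroring how the tool
    rebuilds each RecordBatch into a compressed MemoryRecords object
    without merging or splitting batches."""
    totals = {name: 0 for name in CODECS}
    uncompressed_total = 0
    for batch in batches:
        uncompressed_total += len(batch)
        for name, compress in CODECS.items():
            totals[name] += len(compress(batch))
    return uncompressed_total, totals

# Example: highly repetitive batches compress well.
batches = [b"value-" * 1000 for _ in range(10)]
original, compressed = analyze_batches(batches)
for name, size in compressed.items():
    print(f"{name}: {original} -> {size} bytes "
          f"({100 * (1 - size / original):.1f}% savings)")
```

Because each batch is compressed independently, the reported totals reflect the log as it would be with compression enabled but batching behavior unchanged.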

If a RecordBatch is already compressed, by default the tool will decompress the batch and then recompress it using each of the other compression types. This allows the tool to report the resulting size of the log if all RecordBatches were compressed using each compression type. This behavior can be disabled via the --no-recompression flag, in which case compression will only be applied to uncompressed batches. Results with the --no-recompression flag therefore show the impact of compression if all producers currently using compression.type=none were reconfigured to use a given compression type.
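The recompression decision described above might look like the following sketch (the function name and string codes are hypothetical; the real tool would read the compression type from each RecordBatch header):

```python
def should_analyze(batch_codec: str, no_recompression: bool) -> bool:
    """Return True if the tool should (re)compress this batch.

    batch_codec is the compression type recorded in the batch header:
    "none", "gzip", "snappy", "lz4", or "zstd".
    """
    if batch_codec == "none":
        # Uncompressed batches are always measured.
        return True
    # Already-compressed batches are decompressed and recompressed by
    # default; --no-recompression skips them entirely.
    return not no_recompression
```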

Notes:

  • The tool does not spawn multiple threads
  • The tool will likely consume an entire core while running
    • Consider running the tool on a non-broker machine to avoid starving the broker of CPU

Compatibility, Deprecation, and Migration Plan



This proposal adds a new tool and changes no existing functionality.

Rejected Alternatives

