
This is a follow-up task for the Consumer threading refactor design.

Objective

Ensure that our new Kafka consumer is robust, performant, and scalable by rigorously testing it across a range of scenarios, including different message sizes, numbers of messages, consumer counts, and CPU throttling conditions.

Scope

  1. Consumption Rate

    • Varying message sizes: 100B, 1KB, 10KB, 100KB

    • Varying message numbers: 100, 1,000, 10,000, 100,000

  2. CPU Throttling

    • No throttling

    • 99.99%

  3. Special Scenarios

    • Schema registry usage

    • Slow schema resolution

Testing Strategy

Setup

  • Kafka Cluster: n-broker setup (do we need more than 1?)

  • Producer: Pre-configured to produce messages of varying sizes and volumes (see the producer sketch after this list)

  • Consumer: The new Kafka consumer under test
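
To make the setup concrete, the producer harness could look roughly like the sketch below. This is a minimal illustration using the confluent-kafka Python client; the broker address, topic name, and the produce_batch helper are placeholders introduced here, not part of the actual setup.

```python
import os
from confluent_kafka import Producer

# Placeholder broker address; point at the test cluster.
producer = Producer({"bootstrap.servers": "localhost:9092"})

def produce_batch(topic: str, message_size: int, message_count: int) -> None:
    """Publish message_count random payloads of message_size bytes each."""
    for _ in range(message_count):
        producer.produce(topic, value=os.urandom(message_size))
        producer.poll(0)  # serve delivery callbacks so the local queue keeps draining
    producer.flush()      # block until every message in this batch is acknowledged

# One run per (size, count) cell of the matrix from the Scope section.
for size in (100, 1_000, 10_000, 100_000):        # 100 B, 1 KB, 10 KB, 100 KB
    for count in (100, 1_000, 10_000, 100_000):
        produce_batch("perf-test-topic", size, count)
```

Flushing after each batch keeps runs independent, so each (size, count) combination can be timed and drained separately by the consumer under test.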

Consumption Rate

  1. Varying Message Sizes: Measure the rate of message consumption across different message sizes (a measurement harness is sketched after this list).

    • Metrics: Throughput (messages/sec), Latency

    • Tools: Kafka built-in monitoring, custom logging

  2. Varying Message Numbers: Measure how well the consumer handles varying numbers of messages.

    • Metrics: Throughput, Backlog drain time

    • Tools: Kafka monitoring, custom logging
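
A consumption-rate run could be measured with a small wrapper around a consumer, along these lines (a sketch only; the broker, topic, group id, and expected message count are placeholders):

```python
import time
from confluent_kafka import Consumer

# Placeholder connection details for the test environment.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "perf-test-consumption-rate",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["perf-test-topic"])

EXPECTED_MESSAGES = 100_000   # matches the largest cell in the test matrix
consumed = 0
started = time.perf_counter()
try:
    while consumed < EXPECTED_MESSAGES:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        consumed += 1
finally:
    consumer.close()

elapsed = time.perf_counter() - started
print(f"{consumed} messages in {elapsed:.1f} s -> {consumed / elapsed:.0f} msg/s")
```

The same loop doubles as a backlog-drain measurement when the producer finishes before the consumer starts; per-message latency can be approximated by comparing the timestamp returned by msg.timestamp() with the local consume time.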

CPU Throttling

  1. No Throttling: Baseline performance metrics.

    • Metrics: Throughput, CPU, and Memory Usage

  2. 50% and 75% Throttling: Simulate CPU-constrained scenarios (a throttling sketch follows this list).

    • Metrics: Throughput, Latency, CPU and Memory Usage
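
If the consumer under test runs in a container, CPU throttling can be applied from the harness with Docker's --cpus limit. The sketch below assumes a containerized consumer and uses a hypothetical image name.

```python
import subprocess
from typing import Optional

# Hypothetical image name for the consumer under test.
CONSUMER_IMAGE = "example.registry/new-kafka-consumer:latest"

def run_consumer(cpu_fraction: Optional[float]) -> None:
    """Run the consumer container, optionally capped to a fraction of one core.

    --cpus 0.5 corresponds to the 50%-throttled scenario and --cpus 0.25 to the
    75%-throttled one; omitting the flag gives the unthrottled baseline.
    """
    cmd = ["docker", "run", "--rm"]
    if cpu_fraction is not None:
        cmd += ["--cpus", str(cpu_fraction)]
    cmd.append(CONSUMER_IMAGE)
    subprocess.run(cmd, check=True)

for fraction in (None, 0.5, 0.25):   # baseline, 50% throttled, 75% throttled
    run_consumer(fraction)
```

If the consumer runs directly on a host instead, cgroup CPU quotas (or a tool such as cpulimit) can impose the same limits.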

Special Scenarios

  1. High Deserialization CPU Cost: Simulate a deserialization algorithm with a high CPU cost (sketches for both special scenarios follow this list).

    • Metrics: Throughput, Latency, CPU Utilization

    • Tools: Kafka monitoring, Profiling tools

  2. Schema Registry: Measure the impact of using a schema registry for deserialization.

    • Metrics: Throughput, Latency, Schema registry lookup time

    • Tools: Kafka monitoring, Schema Registry logs
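
For the high-deserialization-cost scenario, one way to simulate an expensive deserializer is to busy-wait for a fixed amount of CPU time per message. The sketch below is illustrative only; the broker, topic, and expensive_deserialize helper are placeholders.

```python
import time
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",        # placeholder broker
    "group.id": "perf-test-slow-deserialization",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["perf-test-topic"])

def expensive_deserialize(payload: bytes, burn_ms: float = 5.0) -> bytes:
    """Stand-in for a costly deserializer: burn roughly burn_ms of CPU per message."""
    deadline = time.perf_counter() + burn_ms / 1000.0
    while time.perf_counter() < deadline:
        pass   # busy-wait so the cost shows up as CPU usage, not idle sleep
    return payload

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        expensive_deserialize(msg.value())
finally:
    consumer.close()
```

For the schema-registry scenario, a run could time each poll-plus-deserialize cycle using the confluent-kafka schema-registry integration. This sketch assumes a recent confluent-kafka Python client and Avro-encoded messages; the registry URL, broker, and topic are placeholders.

```python
import time
from confluent_kafka import DeserializingConsumer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer

schema_registry = SchemaRegistryClient({"url": "http://localhost:8081"})  # placeholder
avro_deserializer = AvroDeserializer(schema_registry)

consumer = DeserializingConsumer({
    "bootstrap.servers": "localhost:9092",        # placeholder broker
    "group.id": "perf-test-schema-registry",
    "auto.offset.reset": "earliest",
    "value.deserializer": avro_deserializer,
})
consumer.subscribe(["perf-test-avro-topic"])

try:
    while True:
        start = time.perf_counter()
        msg = consumer.poll(1.0)   # Avro decoding and any registry lookup happen inside poll()
        if msg is None or msg.error():
            continue
        # Rough upper bound: includes any time spent waiting for the message to arrive.
        print(f"poll + deserialize: {(time.perf_counter() - start) * 1000:.2f} ms")
finally:
    consumer.close()
```

The first message for a given schema id triggers a registry lookup; subsequent messages typically hit the client-side schema cache, so slow-resolution effects should be most visible at the start of a run or when schemas change.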
