It would be worthwhile to automate our performance testing to act as a generic integration test suite.
The goal of this would be to do basic performance analysis and correctness testing in a distributed environment.
Required Metrics
Client Side Measurements
- Throughput
- Response Time/Latency
- Timeouts
- Consumer lag
Common Stats
- vmstat - Context switches, user CPU utilization %, system CPU utilization %, total CPU utilization %
- iostat - Reads/sec, writes/sec, kilobytes read/sec, kilobytes written/sec, average number of transactions waiting, average number of active transactions, average response time of transactions, percent of time waiting for service, percent of time the disk is busy
- prstat - Virtual memory size of each Java process, RSS of each process, total CPU utilization of each process (a collection sketch follows this list)
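A minimal collection sketch for these, assuming the standard vmstat/iostat/prstat command-line tools are available on the test hosts and a 30-second sample interval; file names are illustrative:

    #!/bin/bash
    # collect-common-stats.sh (hypothetical name): sample OS-level stats for a test run.
    # Assumes vmstat/iostat/prstat are installed; prstat is Solaris-specific, so
    # substitute top or pidstat on Linux. Kill the script to stop sampling.
    INTERVAL=30
    OUTDIR=${1:-/tmp/perf-stats}
    mkdir -p "$OUTDIR"

    vmstat $INTERVAL     > "$OUTDIR/vmstat.log" &   # context switches, CPU %
    iostat -x $INTERVAL  > "$OUTDIR/iostat.log" &   # per-disk reads/writes, KB/sec, waits, %busy
    prstat -c $INTERVAL  > "$OUTDIR/prstat.log" &   # per-process VM size, RSS, CPU
    wait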
GC Log Analysis
- Footprint (Maximal amount of memory allocated)
- Freed Memory (Total amount of memory that has been freed)
- Freed Memory/min (Amount of memory that has been freed per minute)
- Total Time (Time data was collected for)
- Acc Pauses (Sum of all pauses due to GC)
- Throughput (Time percentage the application was NOT busy with GC)
- Full GC Performance (Performance of full GC collections; full collections are marked as such in the GC logs.)
- GC Performance (Performance of minor collections, i.e. collections that are not marked as full.)
- CMS counts and frequency (Number of CMS collections and their frequency)
- CMS failure count and frequency (CMS failure metrics)
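Computing these numbers requires GC logs to begin with. A minimal sketch of the JVM options that would produce them, assuming a HotSpot JVM with the CMS collector; the KAFKA_OPTS hook is an assumption about how the broker JVM is launched:

    # Illustrative only: enable GC logging on the broker JVM (HotSpot, CMS assumed).
    export KAFKA_OPTS="$KAFKA_OPTS \
      -verbose:gc \
      -Xloggc:/var/log/kafka/gc.log \
      -XX:+PrintGCDetails \
      -XX:+PrintGCTimeStamps \
      -XX:+PrintGCApplicationStoppedTime \
      -XX:+UseConcMarkSweepGC"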
Server Side Metrics
- Throughput and response time breakdown for each request at the LogManager and RequestPurgatory levels
- ISR membership churn aggregate and per partition
- Number of expirations in the request purgatory
- Leader election rate aggregate and per partition
- Leader election latency aggregate and per partition
- High watermark change aggregate and per partition
- Replica lag time and bytes aggregate and per partition
- Replica fetch throughput and response time aggregate and breakdown at the LogManager and RequestPurgatory levels
Log Analysis
- Exceptions in logs, their frequency, and the types of exceptions
- Warnings in logs, their frequency, and the types of warnings (a tallying sketch follows this list)
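A rough sketch of how the tallies could be produced, assuming GNU grep and log4j-style log lines; paths are illustrative:

    #!/bin/bash
    # Sketch: tally exceptions and warnings by type in the server logs.
    LOG_DIR=${1:-/var/log/kafka}

    echo "== exception counts by type =="
    grep -oh '[A-Za-z0-9.]*Exception' "$LOG_DIR"/*.log | sort | uniq -c | sort -rn

    echo "== warning counts (grouped by first 80 chars) =="
    grep -h ' WARN ' "$LOG_DIR"/*.log | cut -c1-80 | sort | uniq -c | sort -rn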
Miscellaneous
- Capture all the server machine profiles before tests are executed (such as disk space, number of CPUs, etc.)
- Capture all configurations for each run (a snapshot sketch follows this list)
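A possible snapshot helper, assuming Linux hosts; commands, paths, and file names are illustrative:

    #!/bin/bash
    # Sketch: snapshot the machine profile and configs before a run.
    OUT=${1:-machine-profile.txt}
    {
      date
      uname -a                              # kernel / OS
      grep -c ^processor /proc/cpuinfo      # number of CPUs
      free -m                               # memory
      df -h                                 # disk space
    } > "$OUT"

    # archive the exact broker/producer/consumer configs used for this run
    tar czf "configs-$(date +%Y%m%d-%H%M).tar.gz" config/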
Phase I: Perf Tools
The goal of this phase is to create tools that help run perf tests. We already have some of these, so this phase is primarily about expanding and augmenting them.
- kafka-producer-perf-test.sh - We will add a csv option to dump incremental statistics in csv format for consumption by automated tools.
- kafka-consumer-perf-test.sh - Likewise, we will add a csv option here.
- jmx-dump.sh - This will poll the Kafka and ZooKeeper JMX stats every 30 seconds or so and output them as csv (a polling sketch follows this list).
- dstat - This is the existing perf tool to get read/write/fs stats
- draw-performance-graphs.r - This will be an R script to draw relevant graphs given the csv files from the above tools. The output of this script will be a set of png files that can be combined with some html to act as a perf report.
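A minimal sketch of the jmx-dump.sh polling loop; jmx_query below is a placeholder for whatever command-line JMX reader we settle on, and the host/port and file names are illustrative:

    #!/bin/bash
    # jmx-dump.sh (sketch): poll JMX stats every 30 seconds and append csv rows.
    HOSTPORT=${1:-localhost:9999}     # JMX port of the broker or zookeeper
    OUT=${2:-jmx-stats.csv}
    INTERVAL=30

    while true; do
      # one csv row per sample: timestamp plus the metric values
      echo "$(date +%s),$(jmx_query "$HOSTPORT")" >> "$OUT"
      sleep $INTERVAL
    done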
Here are the graphs that would be good to see:
- Latency histogram for producer
- MB/sec and messages/sec produced
- MB/sec and messages/sec consumed
- Flush time
- Errors (there should not be any)
- Consumer cache hit ratio (both the bytes and count, specifically 1 - #physical_reads / #requests and 1 - physical_bytes_read / bytes_read)
- Write merge ratio (num_physical_writes/num_produce_requests and avg_request_size/avg_physical_write_size)
- CPU, network, io, etc
Phase II: Automation
This phase is about automating the deployment and running of the performance tests. At the end of this phase we want to have a script that pulls from svn every night, runs a set of performance scenarios, and produces reports on them.
We need the following helper scripts for this:
- kafka-deploy-kafka.sh - This script will take a set of hosts and deploy Kafka to each of them.
- kafka-start-cluster.sh - This will start the Kafka broker on each of the hosts (a minimal sketch follows this list).
- kafka-stop-cluster.sh - This will stop the brokers on those hosts.
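A minimal sketch of what kafka-start-cluster.sh could look like, assuming passwordless ssh and that Kafka has already been deployed to each host; paths and file names are illustrative:

    #!/bin/bash
    # kafka-start-cluster.sh (sketch): start a broker on every host in a hosts file.
    HOSTS_FILE=${1:-hosts.txt}
    KAFKA_DIR=${2:-/opt/kafka}        # illustrative install location

    while read -r host; do
      echo "starting broker on $host"
      ssh "$host" "cd $KAFKA_DIR && nohup bin/kafka-server-start.sh config/server.properties > broker.log 2>&1 &"
    done < "$HOSTS_FILE"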
The tests will be divided up into scenarios. Each scenario is a directory which contains the following:
- broker config
- producer config
- consumer config
- producer perf test command
- consumer perf test command
- env file that contains the number of brokers, producers, and consumers (an example follows this list)
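For example, the env file could be a small shell fragment that the driver script sources; the variable names are illustrative:

    # env (sketch): sourced by the test driver to size the scenario
    NUM_BROKERS=3
    NUM_PRODUCERS=2
    NUM_CONSUMERS=2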
The output of the scenario will be a directory which contains the following:
- Producer perf csvs
- Consumer perf csvs
- dstat csvs
- jmx csvs
- env file
Scenarios to test:
- Producer throughput with no consumers. We should cover the following cases:
- Vary the number of topics
- Vary the async batch size
- Vary the flush size
- Vary the message size
- Consumer throughput with no producer
- Vary the message size
- Vary the number of topics
- Single producer/consumer pair
- Cold consumption (i.e. not in cache)
- Active consumption (i.e. consumer caught up to producer)
- Vary the number of topics
- Multiple consumers for one topic
We should add a script to take two scenarios and produce a summary/diff of them, i.e. what got worse and what got better. We can use this to track things over time. We can also rsync these reports up to a public location as a service to open source developers.
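A rough sketch of that diff script, assuming each scenario output directory holds producer and consumer csvs with a header row and that column 3 is MB/sec; file names and column positions are illustrative:

    #!/bin/bash
    # compare-scenarios.sh (sketch): crude summary diff of two scenario result directories.
    OLD=$1
    NEW=$2

    # average a numeric csv column: avg <file> <column-number>
    avg() { awk -F, -v c="$2" 'NR > 1 { s += $c; n++ } END { if (n) printf "%.2f", s / n }' "$1"; }

    echo "producer MB/sec: $(avg "$OLD/producer.csv" 3) -> $(avg "$NEW/producer.csv" 3)"
    echo "consumer MB/sec: $(avg "$OLD/consumer.csv" 3) -> $(avg "$NEW/consumer.csv" 3)"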
Phase III: Correctness
The correctness testing can be very straightforward: all we want to validate is that every message produced gets consumed. This could be as simple as logging a simple message id in the consumer and comparing it to the produced value.
The simplest idea is to have each producer produce a set of known messages (say, sequential integers in some unique range). Then have the consumers validate that all integers were consumed (no gaps) and record the number of duplicates (if any).
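A minimal sketch of the validation step, assuming the consumer logs one integer id per line and the producer covered a known range; file names and the range are illustrative:

    #!/bin/bash
    # validate-consumed.sh (sketch): check that every produced id was consumed
    # (no gaps) and count duplicate deliveries.
    CONSUMED=${1:-consumed-ids.txt}   # one integer per line, as logged by the consumer
    FIRST=${2:-0}
    LAST=${3:-999999}

    # comm(1) needs both inputs sorted the same (lexicographic) way
    seq "$FIRST" "$LAST" | sort > expected-ids.txt
    sort -u "$CONSUMED"          > consumed-unique.txt

    # ids that were produced but never consumed (gaps)
    comm -23 expected-ids.txt consumed-unique.txt > missing-ids.txt

    echo "missing ids:   $(wc -l < missing-ids.txt)"
    echo "duplicate ids: $(sort "$CONSUMED" | uniq -d | wc -l)"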
Ideally we would repeat this scenario while scripting in broker failures (kills), server pauses (simulated), etc.