...

  1. Add a test suite for basic operations, plus a visible WebUI to check the throughput and latency data, much like our existing Flink speed center.
  2. Add more software and hardware metrics for the benchmark.
  3. Add a test suite for state backends.
  4. Add a test suite for the shuffle service.

...

Test Scenarios

The following dimensions of a Flink job are taken into account when defining the test scenarios:

  • Topology: OneInput, TwoInput
  • Logical Attributes of Edges: Broadcast, Rescale, Rebalance, KeyBy
  • Schedule Mode: Lazy from Source, Eager
  • Checkpoint Mode: ExactlyOnce, AtLeastOnce
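
As a concrete illustration of how one combination of these dimensions maps onto a job, the sketch below assembles a OneInput topology with a KeyBy edge and ExactlyOnce checkpointing using the DataStream API of a recent Flink release. The class name, checkpoint interval, key fan-out, and the use of a built-in sequence source and DiscardingSink as placeholders for the benchmark's source and sink are illustrative assumptions, not part of this proposal.

    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.sink.DiscardingSink;

    public class OneInputKeyByExactlyOnceJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Checkpoint Mode dimension: ExactlyOnce (the 10s interval is picked arbitrarily for this sketch).
            env.enableCheckpointing(10_000L, CheckpointingMode.EXACTLY_ONCE);

            env.fromSequence(0, Long.MAX_VALUE)    // placeholder for the random-record source
                .keyBy(value -> value % 128)       // "KeyBy" logical edge attribute (fan-out of 128 is arbitrary)
                .map(value -> value)               // no-op OneInput operator, so framework overhead dominates
                .addSink(new DiscardingSink<>());  // discard output so the sink does not become the bottleneck

            env.execute("OneInput-KeyBy-ExactlyOnce");
        }
    }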



There are also other dimensions beyond the Flink characteristics above, including:

  • Record size: to measure both processing throughput (records/s) and data throughput (bytes/s), we will test 10B, 100B and 1KB record sizes for each Flink job.
  • Resources for each task: we will use the Flink default settings to cover the most common cases.
  • Job parallelism: we will increase the parallelism until back-pressure occurs, to saturate the system.
  • Source and sink: to focus on Flink's own performance, we generate the source data randomly and use a blackhole consumer as the sink (see the sketch after this list).
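
As one possible realization of the random source and blackhole sink, the hypothetical RandomRecordSource below emits fixed-size random String records through Flink's RichParallelSourceFunction API, while Flink's built-in DiscardingSink serves as the blackhole consumer; the class name and record-size handling are assumptions for illustration, not part of this proposal.

    import java.util.concurrent.ThreadLocalRandom;
    import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;
    import org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext;

    /** Emits random fixed-size String records until cancelled (record size in bytes is approximate). */
    public class RandomRecordSource extends RichParallelSourceFunction<String> {

        private final int recordSize;          // e.g. 10, 100 or 1024 characters
        private volatile boolean running = true;

        public RandomRecordSource(int recordSize) {
            this.recordSize = recordSize;
        }

        @Override
        public void run(SourceContext<String> ctx) {
            char[] buffer = new char[recordSize];
            while (running) {
                for (int i = 0; i < recordSize; i++) {
                    buffer[i] = (char) ('a' + ThreadLocalRandom.current().nextInt(26));
                }
                // Hold the checkpoint lock while emitting, as required by the SourceFunction contract.
                synchronized (ctx.getCheckpointLock()) {
                    ctx.collect(new String(buffer));
                }
            }
        }

        @Override
        public void cancel() {
            running = false;
        }
    }

Such a source would be attached with env.addSource(new RandomRecordSource(100)) and, combined with a DiscardingSink at the end of the pipeline, keeps both ends of the job from dominating the measured throughput.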

Test Job List

Combining the above dimensions (2 topologies × 4 edge attributes × 2 schedule modes × 2 checkpoint modes) yields the 32 test jobs shown below:

...