...

Flink Table Store can support richer merge strategies. One such strategy, PartialUpdateMergeFunction, already exists; it completes non-NULL fields when merging. We can introduce more powerful merge strategies, such as support for pre-aggregated merge. Pre-aggregation is already used by many big data systems, e.g. Apache Doris, Apache Kylin, and Druid, to reduce storage cost and accelerate aggregation queries. By introducing pre-aggregated merge to Flink Table Store, it can acquire the same benefits. The aggregate functions we plan to implement include sum, max/min, count, replace_if_not_null, replace, concatenate, and or/and.

Public Interfaces

Basic usage of pre-aggregated merge

To use pre-aggregated merge in Flink Table Store, two kinds of configuration need to be added to the WITH clause when creating a table:

  1. set 'merge-engine' to 'aggregation'
  2. designate an aggregate function for each column of the table that should be aggregated.

For example,

-- DDL
CREATE TABLE T (
    pk STRING PRIMARY KEY NOT ENFORCED,
    sum_field1 BIGINT,
    max_field1 BIGINT
) WITH (
    'merge-engine' = 'aggregation',
    'sum_field1.aggregate-function' = 'sum', -- sum up all sum_field1 values with the same pk
    'max_field1.aggregate-function' = 'max'  -- get the max value of all max_field1 values with the same pk
);
-- DML
INSERT INTO T VALUES ('pk1', 1, 1);
INSERT INTO T VALUES ('pk1', 1, 1);
-- verify
SELECT * FROM T;
=> output 'pk1', 2, 2
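
The other planned aggregate functions would presumably be declared with the same per-column option pattern. The following sketch is illustrative only: the option values shown for count, replace_if_not_null, and concatenate are assumed names and may differ in the final implementation.

-- Illustrative sketch only; 'count', 'replace_if_not_null' and 'concatenate' are assumed option values.
CREATE TABLE T2 (
    pk STRING PRIMARY KEY NOT ENFORCED,
    cnt_field BIGINT,
    last_non_null_field STRING,
    concat_field STRING
) WITH (
    'merge-engine' = 'aggregation',
    'cnt_field.aggregate-function' = 'count',                         -- count rows with the same pk
    'last_non_null_field.aggregate-function' = 'replace_if_not_null', -- keep the latest non-NULL value
    'concat_field.aggregate-function' = 'concatenate'                 -- concatenate values with the same pk
);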

Advanced usage of pre-aggregated merge

Apart from creating a table with the pre-aggregated merge engine, another option is to use a materialized view to get pre-aggregated merge results from a source table.


CREATE MATERIALIZED VIEW T
WITH (
    'merge-engine' = 'aggregation'
) AS SELECT
    pk,
    SUM(field1) AS sum_field1,
    MAX(field2) AS max_field2
FROM source_t
GROUP BY pk;

This will start a streaming job that synchronizes data: it consumes the source data and writes incrementally to T. The data synchronization job is stateless.
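
For illustration, assuming source_t has the schema (pk STRING, field1 BIGINT, field2 BIGINT), inserts into the source table would be continuously reflected in the aggregated result of T, roughly as follows (values chosen for illustration):

INSERT INTO source_t VALUES ('pk1', 1, 5);
INSERT INTO source_t VALUES ('pk1', 2, 3);
-- verify
SELECT * FROM T;
=> output 'pk1', 3, 5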


Proposed Changes


Compatibility, Deprecation, and Migration Plan

...

  • Users can decide whether to use pre-aggregated merge. If they do not, there is no impact on them. Otherwise, they have to add some configurations to the WITH clause when creating a table; these configurations are described in the 'Public Interfaces' section.

...

  • No need to phase out older behavior.

...

  • None.

...

This is a new feature, so there is no compatibility, deprecation, or migration plan.

Test Plan

Each pre-aggregated merge function will be covered by IT tests.
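
As a rough sketch, an IT test for a single aggregate function (here sum; table and column names are chosen only for illustration) could follow this pattern:

-- Hypothetical IT test scenario for the 'sum' aggregate function.
CREATE TABLE agg_sum_test (
    pk STRING PRIMARY KEY NOT ENFORCED,
    v BIGINT
) WITH (
    'merge-engine' = 'aggregation',
    'v.aggregate-function' = 'sum'
);
INSERT INTO agg_sum_test VALUES ('pk1', 1), ('pk1', 2), ('pk2', 10);
SELECT * FROM agg_sum_test;
=> expected output: ('pk1', 3), ('pk2', 10)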

...