Discussion thread: ...

JIRA: FLINK-27626

Released: <Flink Version>

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

We can introduce richer merge strategies to Table Store. One such strategy, PartialUpdateMergeFunction, which completes non-NULL fields when merging, is already available. We can introduce more powerful merge strategies, such as support for pre-aggregated merges. Pre-aggregation is already used by many big data systems, e.g. Apache Doris, Apache Kylin and Druid, to reduce storage cost and accelerate aggregation queries. By introducing pre-aggregated merge to Table Store, it can acquire the same benefits. The aggregate functions we plan to implement include sum, max/min, last_non_null_value, last_value, listagg, and bool_or/bool_and.

Public Interfaces

Briefly list any new interfaces that will be introduced as part of this proposal or any existing interfaces that will be removed or changed. The purpose of this section is to concisely call out the public contract that will come along with this feature.

A public interface is any change to the following:

  • Binary log format

  • The network protocol and api behavior

  • Configuration, especially client configuration

  • Monitoring

  • Command line tools and arguments

  • Anything else that will likely break existing users in some way when they upgrade

Proposed Changes

Basic usage of pre-aggregated merge

To use pre-aggregated merge in Flink Table Store, two kinds of configuration should be added to the WITH clause when creating the table:

  1. set 'merge-engine' to 'aggregation';
  2. designate an aggregate function for each column of the table.

For example,

--DDL
CREATE TABLE T (
    pk STRING PRIMARY KEY NOT ENFORCED,
    sum_field1 BIGINT,
    max_field1 BIGINT
    )
WITH (
'merge-engine' = 'aggregation',
'fields.sum_field1.function'='sum', -- sum up all sum_field1 with same pk;
'fields.max_field1.function'='max' -- get max value of all max_field1 with same pk
);

-- DML
INSERT INTO T VALUES ('pk1', 1, 2);
INSERT INTO T VALUES ('pk1', 1, 1);
-- verify
SELECT * FROM T;
=> output 'pk1', 2, 2

Tip: each column should be assigned an aggregate function.


Supported aggregate functions

The aggregate functions we propose to implement include sum, max/min, last_non_null_value, last_value, listagg, and bool_or/bool_and. These functions support different sets of data types.

The sum aggregate function supports DECIMAL, TINYINT, SMALLINT, INTEGER, BIGINT, FLOAT, DOUBLE data types.

The max/min aggregate function supports DECIMAL, TINYINT, SMALLINT, INTEGER, BIGINT, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP, TIMESTAMP_LTZ data types.

The last_non_null_value/last_value aggregate functions support all data types.

The listagg aggregate function supports the STRING data type.

The bool_and/bool_or aggregate functions support the BOOLEAN data type.
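To illustrate the intended per-column merge semantics, the following sketch shows how a sum aggregator could combine an accumulated value with a newly arrived value for the same primary key. The class and method names are purely illustrative and are not part of the proposal.

// Illustrative only: per-column sum aggregation for a BIGINT column.
// Class and method names are hypothetical, not part of this FLIP.
public class FieldSumAggregator {

    // Combines the value accumulated so far with the newly written value for
    // the same primary key. A null on either side leaves the other value intact.
    public Long merge(Long accumulator, Long input) {
        if (input == null) {
            return accumulator;
        }
        if (accumulator == null) {
            return input;
        }
        return accumulator + input;
    }
}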


Changelog support

In most cases, modifications to Table Store are INSERT changes. However, Table Store can also be converted into a retract stream, which may include retract messages (UPDATE/DELETE changes).

All of the aforementioned aggregate functions support INSERT changes. Supporting UPDATE/DELETE changes requires further design work.
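As a rough illustration of this limitation (the check below and where it would live are assumptions, not part of the current design), a merge function could simply reject non-INSERT rows:

import org.apache.flink.types.RowKind;

// Illustrative guard only: the proposed aggregate functions handle INSERT changes;
// retract messages (UPDATE/DELETE) would be rejected until a dedicated design exists.
public class InsertOnlyCheck {

    public static void check(RowKind rowKind) {
        if (rowKind != RowKind.INSERT) {
            throw new UnsupportedOperationException(
                    "Aggregate merge functions only support INSERT changes, but received " + rowKind);
        }
    }
}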


Future work
An advanced way of introducing pre-aggregated merge into Flink Table Store is to use a materialized view to obtain the pre-aggregated merge result from a source table. A streaming job is then started to synchronize data: it consumes the source data and writes it incrementally. This data synchronization job has no state. More information is described in the JIRA issue.

Implementation

A ConfigOption<String> variable named ‘AGGREGATE_FUNCTION’ is defined in CoreOptions.java to retrieve the 'aggregate-function' configuration from the WITH clause.
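A minimal sketch of how such an option could be declared with Flink's ConfigOptions builder is shown below; the exact key name and description are assumptions and may change during implementation.

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

// Sketch of the new option in CoreOptions.java; key name and description are assumptions.
public static final ConfigOption<String> AGGREGATE_FUNCTION =
        ConfigOptions.key("aggregate-function")
                .stringType()
                .noDefaultValue()
                .withDescription(
                        "The aggregate function used to merge the values of a column, "
                                + "e.g. sum, max, min, last_non_null_value, last_value, "
                                + "listagg, bool_or or bool_and.");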

One more value named 'PRE_AGGREGATE' is added to the MergeEngine enum in CoreOptions.java. It acts as one of the merge engines supported by Flink Table Store.
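A sketch of the extended enum follows; the existing values are shown only for context and are assumptions about the current CoreOptions.MergeEngine.

// Sketch of CoreOptions.MergeEngine with the new value; existing values are assumed.
public enum MergeEngine {
    DEDUPLICATE,
    PARTIAL_UPDATE,
    PRE_AGGREGATE // newly added merge engine for pre-aggregated merge
}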

In the constructor of ChangelogWithKeyFileStoreTable, 'PRE_AGGREGATE' is handled as one more case in the switch statement that initializes the merge function.
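An illustrative fragment of that switch is shown below; the surrounding field names and constructor arguments are placeholders rather than the final signatures.

// Illustrative fragment only (not the actual Table Store source): selecting the
// merge function from the configured merge engine in ChangelogWithKeyFileStoreTable.
// Field names and constructor arguments are placeholders.
private MergeFunction createMergeFunction(CoreOptions.MergeEngine mergeEngine) {
    switch (mergeEngine) {
        case DEDUPLICATE:
            return new DeduplicateMergeFunction();              // existing behavior (assumed)
        case PARTIAL_UPDATE:
            return new PartialUpdateMergeFunction(fieldTypes);  // existing behavior (assumed)
        case PRE_AGGREGATE:
            return new AggregateMergeFunction(fieldTypes, aggregateFunctions); // new case
        default:
            throw new UnsupportedOperationException("Unknown merge engine: " + mergeEngine);
    }
}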

A subclass of MergeFunction named AggregateMergeFunction is created in AggregateMergeFunction.java to conduct the pre-aggregated merge.
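A heavily simplified skeleton of the proposed class follows. The reset/add/getValue contract and the plain Object[] row representation are assumptions made for illustration; the actual MergeFunction interface in Table Store may differ.

// Simplified sketch of the proposed AggregateMergeFunction. The reset/add/getValue
// contract and the Object[] row representation are assumptions for illustration only.
public class AggregateMergeFunction {

    // One per-column aggregator (sum, max, last_value, ...) for each non-key column.
    private final FieldAggregator[] fieldAggregators;
    private Object[] accumulated;

    public AggregateMergeFunction(FieldAggregator[] fieldAggregators) {
        this.fieldAggregators = fieldAggregators;
    }

    public void reset() {
        accumulated = null;
    }

    public void add(Object[] row) {
        if (accumulated == null) {
            accumulated = row.clone();
            return;
        }
        for (int i = 0; i < fieldAggregators.length; i++) {
            accumulated[i] = fieldAggregators[i].merge(accumulated[i], row[i]);
        }
    }

    public Object[] getValue() {
        return accumulated;
    }

    // Hypothetical per-column aggregation contract.
    public interface FieldAggregator {
        Object merge(Object accumulator, Object input);
    }
}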

Compatibility, Deprecation, and Migration Plan

This is a new feature, so there is no compatibility, deprecation, or migration plan.

Test Plan

Each pre-aggregated merge function will be covered with IT tests.

Rejected Alternatives

None.