Status
Current state: [One of "Under Discussion", "Accepted", "Rejected"]
Discussion thread: here (<- link to https://mail-archives.apache.org/mod_mbox/flink-dev/)
JIRA: here (<- link to https://issues.apache.org/jira/browse/FLINK-XXXX)
Released: <Flink Version>
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
To help connectors avoid overwriting non-target columns with null values when processing partial column updates, we propose adding target column list information to DynamicTableSink#Context.
FLINK-18726 added support for INSERT statements with a specified column list; the planner fills null values (or, potentially, declared default values in the future) for target-table columns that do not appear in the column list of the INSERT statement. In the big data field, however, this behavior does not satisfy the partial column update requirements of storage systems that allow storing null values.
Let's explain the scenario further. The wide table model is a common data modeling method that joins multiple tables into a unified wide table to improve query and analysis efficiency.
The data of a wide table comes from different source tables, and writes or updates to it usually specify the column list to be written.
For example, there is a wide table t1 with primary key `a`:
create table t1 (
  `a` BIGINT,
  `b` STRING,
  `c` STRING,
  `d` STRING,
  `e` DOUBLE,
  `f` BIGINT,
  `g` INT,
  primary key (`a`) not enforced
) with (
  ...
)
where columns b, c, d come from source table s1 and columns e, f, g come from source table s2.
The data from s1 and s2 is processed separately, and the two insertions specify different column lists when writing to the wide table; ideally, the two insertions do not affect each other's columns:
insert into t1 (a, b, c, d) select a, b, c, d from s1 where ...;
insert into t1 (a, e, f, g) select a, e, f, g from s2 where ...;
The current problem is that connectors cannot distinguish whether the null value of a column really comes from the user's data or was populated by the planner as part of the partial insert behavior.
Let's see the execution plan fragment corresponding to the first insert statement in the above example:
Sink(table=[default_catalog.default_database.t1], fields=[a, b, c, d, EXPR$3, EXPR$4, EXPR$5])
+- Calc(select=[a, b, c, d, null:VARCHAR(2147483647) AS EXPR$3, null:BIGINT AS EXPR$4, null:INTEGER AS EXPR$5])
A connector implementer currently has no way of knowing that the last three fields were padded by the planner and do not come from real user data. To work around this, users have to declare several different schemas (each containing only part of the columns, with no overlap except for the primary key).
By adding targetColumnList information to DynamicTableSink#Context, this problem can be solved.
Public Interfaces
Add a new method getTargetColumnList() to DynamicTableSink#Context:
/**
 * Returns the user specified target column name list or an empty list when not specified.
 *
 * <p>This information comes from the column list of the DML clause, e.g.,
 *
 * <ul>
 *   <li>insert: 'insert into target(a, b, c) ...', the column list will be 'a, b, c'. The
 *       column list will be empty for 'insert into target select ...'.
 *   <li>update: 'update target set a=1, b=2 where ...', the column list will be 'a, b'.
 * </ul>
 *
 * <p>Note: the column list will always be empty in the delete clause.
 */
List<String> getTargetColumnList();
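For illustration only, the sketch below shows how a connector could consume the new information, assuming the proposed method above. The class name PartialUpdateSink and the abstract provider factory methods are hypothetical and not part of this proposal; a real connector would build writers for its specific storage system.

import java.util.List;

import org.apache.flink.table.connector.sink.DynamicTableSink;

/** Minimal sketch of a sink that switches to a partial-update write mode (illustrative only). */
public abstract class PartialUpdateSink implements DynamicTableSink {

    @Override
    public SinkRuntimeProvider getSinkRuntimeProvider(Context context) {
        // Proposed API: the user-specified column list of the DML statement,
        // or an empty list when the statement did not specify one.
        List<String> targetColumns = context.getTargetColumnList();
        if (!targetColumns.isEmpty()) {
            // Only the listed columns carry real user data; nulls in the remaining
            // columns were padded by the planner and must not overwrite existing
            // values in the external storage.
            return createPartialUpdateProvider(targetColumns);
        }
        return createFullRowProvider();
    }

    // Hypothetical hooks; the concrete providers depend on the external system.
    protected abstract SinkRuntimeProvider createPartialUpdateProvider(List<String> targetColumns);

    protected abstract SinkRuntimeProvider createFullRowProvider();
}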
Proposed Changes
The internal SinkRuntimeProviderContext will support a new constructor with a targetColumnList parameter, so that connectors can recognize the user-specified column list.
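As a rough sketch only (the exact signature, field names, and null/empty handling are implementation details of the internal planner class, not part of the public API), the addition could look like the following:

// org.apache.flink.table.planner.connectors.SinkRuntimeProviderContext (internal)
// Sketch: an additional constructor carrying the user-specified column list.
public SinkRuntimeProviderContext(boolean isBounded, List<String> targetColumnList) {
    this.isBounded = isBounded;
    this.targetColumnList = targetColumnList;
}

@Override
public List<String> getTargetColumnList() {
    return targetColumnList;
}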
Compatibility, Deprecation, and Migration Plan
This is a compatible change. The newly added information has no effect on the behavior of existing connectors; it simply provides additional information for the connector developers who need it.
Test Plan
Related planner plan tests will be added, and the testing values sink will be updated to support this information in IT cases.