Discussion thread: https://lists.apache.org/thread/bk8x0nqg4oc62jqryj9ntzzlpj062wd9
Vote thread: https://lists.apache.org/thread/kfqkmtlpk2k3q3cc3l8p0j7lg6b3o0sj
JIRA:

Release: <Flink Version>

Motivation

To help connectors avoid overwriting non-target columns with null values when processing partial column updates, we propose exposing the target column list via DynamicTableSink#Context.

FLINK-18726 added support for INSERT statements with a specified column list; the planner fills null values (or, potentially, declared default values in the future) for the target-table columns that do not appear in the column list of the insert statement. However, this behavior does not satisfy the partial column update requirements of storage systems that allow storing null values.

Let's explain the scenario further. The denormalized table (commonly known as a 'wide table') model is a common data modeling method: multiple tables are joined into a unified wide table to improve query and analysis efficiency.
The data of a denormalized table comes from different source tables, and writes or updates usually specify the column list to write.

For example, consider a denormalized table t1 whose primary key is `a`:

 create table t1 (
    `a` BIGINT,
    `b` STRING,
    `c` STRING,
    `d` STRING,
    `e` DOUBLE,
    `f` BIGINT,
    `g` INT,
    primary key(`a`) not enforced
  ) with (
    ...
  )

Columns b, c, d come from table s1, and columns e, f, g come from table s2.
Because the data from the source tables s1 and s2 is processed separately, the two insertions specify different column lists when writing to the denormalized table; ideally the two insertions should not affect each other's columns:

insert into t1 (a, b, c, d)
select a, b, c, d from s1 where ...;

insert into t1 (a, e, f, g)
select a, e, f, g from s2 where ...;

The current problem is that connectors cannot distinguish whether a column's null value really comes from the user's data or was padded in by the planner because of the partial-insert behavior.

Let's look at the execution plan fragment corresponding to the first insert statement in the example above:

Sink(table=[default_catalog.default_database.t1], fields=[a, b, c, d, EXPR$3, EXPR$4, EXPR$5])
+- Calc(select=[a, b, c, d, null:VARCHAR(2147483647) AS EXPR$3, null:BIGINT AS EXPR$4, null:INTEGER AS EXPR$5])

A connector implementor currently has no way of knowing that the last three fields were added by the planner rather than coming from real user data. Today, the user has to declare several different schemas (each containing only partial column information, with no overlap except for the primary key) to work around the problem.

Adding the target column list to DynamicTableSink#Context solves this problem.

Public Interfaces

Add a new getTargetColumns method to DynamicTableSink#Context.

/**
 * Returns an {@link Optional} array of column index paths related to the user-specified target
 * column list, or {@link Optional#empty()} when not specified. The array indices are 0-based
 * and support composite columns within (possibly nested) structures.
 *
 * <p>This information comes from the column list of the DML clause, e.g., for a sink table
 * t1 whose schema is {@code a STRING, b ROW < b1 INT, b2 STRING>, c BIGINT}:
 *
 * <ul>
 *   <li>insert: 'insert into t1(a, b.b2) ...', the column list will be 'a, b.b2' and will
 *       return {@code [[0], [1, 1]]}. The statement 'insert into target select ...' without
 *       a specified column list will return {@link Optional#empty()}.
 *   <li>update: 'update target set a=1, b.b1=2 where ...', the column list will be 'a, b.b1'
 *       and will return {@code [[0], [1, 0]]}.
 * </ul>
 *
 * <p>Note: this will always return empty for a delete statement because it has no column list.
 */
Optional<int[][]> getTargetColumns();

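For illustration, below is a minimal sketch of how a sink could consume the new method inside getSinkRuntimeProvider. The class name PartialUpdateSink and the print-based runtime provider are placeholders made up for this sketch, not part of the proposal; only DynamicTableSink, its Context, and getTargetColumns are the actual API:

import java.util.Optional;

import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.sink.DynamicTableSink;
import org.apache.flink.table.connector.sink.SinkFunctionProvider;

/** Sketch of a sink that reads the user-specified target columns from the context. */
public class PartialUpdateSink implements DynamicTableSink {

    // Index paths of the columns listed in the INSERT/UPDATE statement,
    // or null when no column list was specified.
    private int[][] targetColumns;

    @Override
    public ChangelogMode getChangelogMode(ChangelogMode requestedMode) {
        return requestedMode;
    }

    @Override
    public SinkRuntimeProvider getSinkRuntimeProvider(Context context) {
        // New in this FLIP: the planner exposes the user-specified column list.
        Optional<int[][]> columns = context.getTargetColumns();
        this.targetColumns = columns.orElse(null);
        // A real connector would hand targetColumns to its writer so that
        // non-target columns are left untouched instead of being overwritten
        // with the nulls padded by the planner; this sketch only prints rows.
        return SinkFunctionProvider.of(new PrintSinkFunction<>());
    }

    @Override
    public DynamicTableSink copy() {
        PartialUpdateSink copy = new PartialUpdateSink();
        copy.targetColumns = targetColumns;
        return copy;
    }

    @Override
    public String asSummaryString() {
        return "PartialUpdateSink";
    }
}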

Proposed Changes

The internal SinkRuntimeProviderContext will get a new constructor with a targetColumns parameter, so that connectors can recognize the user-specified column list through Context#getTargetColumns.

Note: nested columns in the column list of an insert/update statement are currently unsupported (as described in FLINK-31301 & FLINK-31344), so this FLIP can support simple columns first and add nested-column support after FLINK-31301 & FLINK-31344 are fixed.
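Since only simple (top-level) columns are supported initially, every returned index path has length one, and a connector can flatten the paths into a mask over the sink's top-level fields. A minimal sketch follows; the helper class and method names are made up for illustration:

import java.util.Optional;

/** Hypothetical helper for the simple-column case. */
public final class TargetColumnUtils {

    private TargetColumnUtils() {}

    /**
     * Converts the index paths from {@code DynamicTableSink.Context#getTargetColumns()}
     * into a boolean mask over the sink's top-level fields. While nested columns are
     * unsupported, every path has length 1, so only the first element is consulted.
     */
    public static Optional<boolean[]> toTopLevelMask(
            Optional<int[][]> targetColumns, int fieldCount) {
        return targetColumns.map(
                paths -> {
                    boolean[] mask = new boolean[fieldCount];
                    for (int[] path : paths) {
                        mask[path[0]] = true;
                    }
                    return mask;
                });
    }
}

For the first insert in the motivation example, insert into t1 (a, b, c, d) ..., getTargetColumns() should return [[0], [1], [2], [3]], and the mask over the seven columns would be [true, true, true, true, false, false, false], telling the connector that e, f, and g were padded by the planner.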

Compatibility, Deprecation, and Migration Plan

This is a compatible change: the newly added information has no effect on the behavior of existing connectors; it simply provides additional information to the connector developers who need it.

Test Plan

Related planner tests will be added, and the test values sink will be updated for IT cases.

Rejected Alternatives

