
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

The Flink HybridSource has been released, but currently it can only be used with the DataStream API. We need to add SQL support for the many Table & SQL end users.

Therefore, we propose this FLIP.

Basic Idea

Add a new built-in hybrid connector. First, in the HybridTableSourceFactory, the 'sources' option is used to concatenate an ordered list of child sources.

Next, we extract the indexed options of each concrete child source and pass them to the corresponding child table source factory to create the child table source instances.

When the child table source instances are ready, we use each child table source's ScanRuntimeProvider to get the actual child Source (the FLIP-27 new Source API).

Finally, we bind the sources into a HybridSource, as sketched below.
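
For orientation, the snippet below sketches the DataStream-level composition that the connector would perform internally once the child Sources exist. It is adapted from the existing HybridSource API and shown with String records and illustrative paths/topics rather than the RowData sources the table connector would actually receive (it assumes a Flink version where TextLineInputFormat and the Kafka source builder are available).

HybridSource composition (DataStream-level sketch)
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.source.hybrid.HybridSource;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.core.fs.Path;

// Bounded file source for the historical data (path is illustrative).
FileSource<String> fileSource =
        FileSource.forRecordStreamFormat(new TextLineInputFormat(), new Path("/tmp/history"))
                .build();

// Unbounded Kafka source for the fresh data (servers/topic are illustrative).
KafkaSource<String> kafkaSource =
        KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("orders")
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .setStartingOffsets(OffsetsInitializer.earliest())
                .build();

// Read the file source first; when it is exhausted, switch to Kafka.
HybridSource<String> hybridSource =
        HybridSource.builder(fileSource)
                .addSource(kafkaSource)
                .build();

Note that all child sources except the last one must be bounded; the boundedness of the last child source determines the boundedness of the whole hybrid source.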


DDL (normal)

DDL example
create table hybrid_source(
 f0 varchar,
 f1 varchar,
 f2 bigint
) with(
 'connector'='hybrid',
 'sources'='csv,kafka'
);


DDL (with different field names; this is an optional feature that may not be implemented in the end and needs to be discussed)

DDL example
create table hybrid_source(
 f0 varchar,
 f1 varchar,
 f2 bigint
) with(
 'connector'='hybrid',
 'sources'='csv,kafka',
 'schema-field-mappings'='[{"f0":"A","f1":"B"},{}]'
);

csv actual field names: A, B, f2

kafka actual field names: f0, f1, f2

This means the csv columns A and B are mapped to the corresponding DDL fields, while the kafka columns f0, f1, f2 already match the DDL and need no mapping.

Users may use the kafka field names as the DDL field names (as above), the csv field names, or any other combination.
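
As a rough illustration of the mapping semantics only (not part of the proposal's public surface), the option value could be parsed like this; parseFieldMappings is a hypothetical helper and plain Jackson is assumed (the connector could equally use Flink's shaded Jackson).

schema-field-mappings parsing (sketch)
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;
import java.util.Map;

// Hypothetical helper: returns one map per child source, keyed by DDL field
// name, valued by the child source's actual field name.
static List<Map<String, String>> parseFieldMappings(String json) throws JsonProcessingException {
    return new ObjectMapper()
            .readValue(json, new TypeReference<List<Map<String, String>>>() {});
}

// parseFieldMappings("[{\"f0\":\"A\",\"f1\":\"B\"},{}]") yields:
//   index 0 -> {f0=A, f1=B} : csv columns A, B are exposed as DDL fields f0, f1
//   index 1 -> {}           : kafka columns already match the DDL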


Options:

sources: A comma-delimited, ordered list of the child sources to concatenate. The boundedness of the hybrid source is the boundedness of the last child source.

schema-field-mappings: A JSON list of key-value objects that maps differing child source field names to the DDL fields (this is an optional extra feature; the draft PR below shows how it can be implemented and how it works).


Start position conversion:

Currently, the FileSource does not expose its end position, so we cannot pass it on to the next streaming source. This limitation is tracked in a related Jira issue.

Instead, with SQL we can define the next streaming source explicitly; for example, we can define the Kafka start position.

When the first bounded (batch) read finishes, the hybrid source switches to reading Kafka at the given start position or with another start strategy, as sketched below.
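
As a sketch, these are the kinds of start strategies (from the Kafka connector's OffsetsInitializer) that a DDL-configured start position could translate to; the timestamp value below is purely illustrative.

Kafka start strategies (sketch)
// Possible start strategies for the Kafka child source. In practice the chosen
// strategy and its value would come from the Kafka child source's options in the DDL.
OffsetsInitializer byTimestamp   = OffsetsInitializer.timestamp(1672531200000L);
OffsetsInitializer fromEarliest  = OffsetsInitializer.earliest();
OffsetsInitializer fromLatest    = OffsetsInitializer.latest();
OffsetsInitializer fromCommitted =
        OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST);

// The chosen initializer is applied when the Kafka source is built, e.g.
// KafkaSource.<RowData>builder()...setStartingOffsets(byTimestamp).build(),
// and that source becomes the last (unbounded) member of the HybridSource.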

Prototype Implementation


HybridTableSource

HybridTableSource
public class HybridTableSource implements ScanTableSource {

    private final String tableName;
    private final ResolvedSchema tableSchema;
    private final List<Source<RowData, ?, ?>> childSources;
    private final Configuration configuration;

    public HybridTableSource(
            String tableName,
            @Nonnull List<Source<RowData, ?, ?>> childSources,
            Configuration configuration,
            ResolvedSchema tableSchema) {
        this.tableName = tableName;
        this.tableSchema = tableSchema;
        this.childSources = childSources;
        this.configuration = configuration;
    }

    // ScanTableSource methods (getChangelogMode, getScanRuntimeProvider, copy,
    // asSummaryString) are omitted here; getScanRuntimeProvider is sketched below.
}

The HybridTableSource binds the accepted child sources, in the given order, into the final HybridSource.
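
A minimal sketch of how this binding could look, assuming it lives in HybridTableSource#getScanRuntimeProvider and uses HybridSource.HybridSourceBuilder plus SourceProvider; this is not the final implementation.

HybridTableSource#getScanRuntimeProvider (sketch)
@Override
public ScanRuntimeProvider getScanRuntimeProvider(ScanContext runtimeProviderContext) {
    // Chain the child sources in the declared 'sources' order. addSource
    // registers each source on the same builder instance, so the re-typed
    // return value can be ignored in this sketch.
    HybridSource.HybridSourceBuilder<RowData, ?> builder =
            HybridSource.builder(childSources.get(0));
    for (int i = 1; i < childSources.size(); i++) {
        builder.addSource(childSources.get(i));
    }
    // Hand the composed source to the planner.
    return SourceProvider.of(builder.build());
}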


HybridTableSourceFactory

HybridTableSourceFactory
public class HybridTableSourceFactory implements DynamicTableSourceFactory {
   public DynamicTableSource createDynamicTableSource(Context context) {
      // core logic for creating HybridTableSource
   }
}

The HybridTableSourceFactory is discovered via Java SPI, so that the 'hybrid' connector identifier resolves to this table source implementation.
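
The "core logic" placeholder above could roughly look as follows. This is only a sketch: createChildSource is a hypothetical helper standing in for the per-child option extraction, child factory discovery, and ScanRuntimeProvider unwrapping described in the Basic Idea section.

HybridTableSourceFactory core logic (sketch)
@Override
public DynamicTableSource createDynamicTableSource(Context context) {
    ResolvedSchema schema = context.getCatalogTable().getResolvedSchema();
    Configuration options = Configuration.fromMap(context.getCatalogTable().getOptions());

    // Ordered list of child connector identifiers, e.g. ["csv", "kafka"].
    List<String> childConnectors =
            Arrays.asList(options.get(HybridConnectorOptions.SOURCES).split(","));

    List<Source<RowData, ?, ?>> childSources = new ArrayList<>();
    for (int i = 0; i < childConnectors.size(); i++) {
        // createChildSource is a hypothetical helper: it extracts the i-th child's
        // prefixed options, discovers that connector's DynamicTableSourceFactory
        // via SPI, creates the child ScanTableSource, and returns the FLIP-27
        // Source obtained from its ScanRuntimeProvider.
        childSources.add(createChildSource(childConnectors.get(i), i, context));
    }

    return new HybridTableSource(
            context.getObjectIdentifier().asSummaryString(), childSources, options, schema);
}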

HybridConnectorOptions
HybridConnectorOptions
public class HybridConnectorOptions {

    public static final String SOURCE_DELIMITER = ".";

    public static final ConfigOption<String> SOURCES =
            ConfigOptions.key("sources")
                    .stringType()
                    .noDefaultValue()
                    .withDescription(
                            "A comma-delimited, ordered list of the child sources to concatenate. e.g. sources='csv,kafka'");

    public static final ConfigOption<String> OPTIONAL_SCHEMA_FIELD_MAPPINGS =
            ConfigOptions.key("schema-field-mappings")
                    .stringType()
                    .noDefaultValue()
                    .withDescription(
                            "A JSON list of key-value objects that maps differing child source field names to the DDL fields. "
                                    + "e.g. '[{\"f0\":\"A\"},{}]' means the first child source's column A maps to DDL column f0 "
                                    + "and the second child source needs no mapping.");

    private HybridConnectorOptions() {}
}

Options for creating HybridTableSource.

Draft PR:

Compatibility, Deprecation, and Migration Plan

This is new functionality, so no migration is currently needed.

Test Plan

Add unit test cases for HybridTableSourceFactory and HybridTableSource.

Add integration test cases for the hybrid source SQL DDL.

Rejected Alternatives

To be added.





