
Status

Current state: Under Discussion

Discussion thread: here (<- link to https://mail-archives.apache.org/mod_mbox/flink-dev/)

JIRA: here (<- link to https://issues.apache.org/jira/browse/FLINK-XXXX)

Released: <Flink Version>

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Flink introduced the Descriptor API, i.e. the TableEnvironment#connect API, in 1.5.0 to configure and instantiate TableSources/TableSinks.

There are several problems with the current Descriptor API which we want to resolve in this FLIP:

  • The community has focused on the new SQL DDL feature in recent releases. SQL DDL is well-designed and offers many rich features. However, the Descriptor API lacks many key features, e.g. computed columns, primary keys, and partition keys.
  • Currently, a connector must provide a corresponding Descriptor (e.g. new Kafka()). We hope connectors can be registered without a corresponding Descriptor. This would ease the development of connectors and serve as a replacement for registerTableSource/Sink.
  • The underlying implementations of the Descriptor API and SQL DDL are different. It is expensive to maintain two different code paths.

There are several known issues with the Descriptor API: FLINK-17548, FLINK-17186, FLINK-15801, FLINK-15943.

Public Interfaces


We propose to drop the existing method TableEnvironment#connect (deprecated in 1.11) and introduce a new method in TableEnvironment:


/** Creates a temporary table from a descriptor. */
void createTemporaryTable(String tablePath, TableDescriptor tableDescriptor);


The TableDescriptor is a unified interface/class to represent a SQL DDL structure (or a CatalogTable internally). It can be a specific connector descriptor, e.g. Kafka, or a general purpose descriptor, i.e. Connector. All methods, including .schema(), .partitionedBy(), and .like(), can be chained on the descriptor instance. We will discuss TableDescriptor in detail in the Proposed Changes section.

A full example will look like this:

tEnv.createTemporaryTable(
    "MyTable",
    new Kafka()
        .version("0.11")
        .topic("user_logs")
        .property("bootstrap.servers", "localhost:9092")
        .property("group.id", "test-group")
        .startFromEarliest()
        .sinkPartitionerRoundRobin()
        .format(new Json().ignoreParseErrors(false))
        .schema(
            new Schema()
                .column("user_id", DataTypes.BIGINT())
                .column("user_name", DataTypes.STRING())
                .column("score", DataTypes.DECIMAL(10, 2))
                .column("log_ts", DataTypes.STRING())
                .column("part_field_0", DataTypes.STRING())
                .column("part_field_1", DataTypes.INT())
                .proctime("proc")
                .computedColumn("my_ts", "TO_TIMESTAMP(log_ts)") // computed column
                .watermarkFor("my_ts").boundedOutOfOrderTimestamps(Duration.ofSeconds(5))
                .primaryKey("user_id"))
        .partitionedBy("part_field_0", "part_field_1")
);

tEnv.createTemporaryTable(
    "MyTable",
    new Connector("kafka-0.11")
        .option("topic", "user_logs")
        .option("properties.bootstrap.servers", "localhost:9092")
        .option("properties.group.id", "test-group")
        .option("scan.startup.mode", "earliest")
        .option("format", "json")
        .option("json.ignore-parse-errors", "true")
        .option("sink.partitioner", "round-robin")
        .schema(
            new Schema()
                .column("user_id", DataTypes.BIGINT())
                .column("user_name", DataTypes.STRING())
                .column("score", DataTypes.DECIMAL(10, 2))
                .column("log_ts", DataTypes.STRING())
                .column("part_field_0", DataTypes.STRING())
                .column("part_field_1", DataTypes.INT())
                .proctime("proc")
                .computedColumn("my_ts", "TO_TIMESTAMP(log_ts)") // computed column
                .watermarkFor("my_ts").boundedOutOfOrderTimestamps(Duration.ofSeconds(5))
                .primaryKey("user_id"))
        .partitionedBy("part_field_0", "part_field_1")
);
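
For comparison, below is a rough sketch (not part of the proposed API) of the SQL DDL that the descriptors above are intended to correspond to. The option keys mirror the general-purpose Connector example above; details such as the NOT ENFORCED constraint wording follow the current DDL grammar and may differ in the final design.

// Sketch only: the equivalent DDL registered via the existing executeSql API.
tEnv.executeSql(
    "CREATE TEMPORARY TABLE MyTable (\n" +
    "  user_id BIGINT,\n" +
    "  user_name STRING,\n" +
    "  score DECIMAL(10, 2),\n" +
    "  log_ts STRING,\n" +
    "  part_field_0 STRING,\n" +
    "  part_field_1 INT,\n" +
    "  proc AS PROCTIME(),\n" +
    "  my_ts AS TO_TIMESTAMP(log_ts),\n" +
    "  WATERMARK FOR my_ts AS my_ts - INTERVAL '5' SECOND,\n" +
    "  PRIMARY KEY (user_id) NOT ENFORCED\n" +
    ") PARTITIONED BY (part_field_0, part_field_1)\n" +
    "WITH (\n" +
    "  'connector' = 'kafka-0.11',\n" +
    "  'topic' = 'user_logs',\n" +
    "  'properties.bootstrap.servers' = 'localhost:9092',\n" +
    "  'properties.group.id' = 'test-group',\n" +
    "  'scan.startup.mode' = 'earliest',\n" +
    "  'format' = 'json',\n" +
    "  'json.ignore-parse-errors' = 'true',\n" +
    "  'sink.partitioner' = 'round-robin'\n" +
    ")");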


Additionally, we would like to propose two new methods to improve usability for Table API users.


interface TableEnvironment {
  /** Reads a table from the given descriptor. */
  Table from(TableDescriptor tableDescriptor); 
  // we already have a "from(String)" method to get registered table from catalog
}

interface Table {
  /** Writes the Table to a sink that is specified by the given descriptor. */
  TableResult executeInsert(TableDescriptor tableDescriptor); 
  // we already have a "executeInsert(String)" method to write into a registered table in catalog
}


With the above two methods, Table API users can skip the table registration step and use sources/sinks out of the box. For example:


Schema schema = new Schema()
    .column("user_id", DataTypes.BIGINT())
    .column("score", DataTypes.DECIMAL(10, 2))
    .column("ts", DataTypes.TIMESTAMP(3));

Table myKafka = tEnv.from(
    new Kafka()
        .version("0.11")
        .topic("user_logs")
        .property("bootstrap.servers", "localhost:9092")
        .property("group.id", "test-group")
        .startFromEarliest()
        .sinkPartitionerRoundRobin()
        .format(new Json().ignoreParseErrors(false))
        .schema(schema)
);

myKafka.executeInsert(
    new Connector("filesystem")
        .option("path", "/path/to/whatever")
        .option("format", "json")
        .schema(schema)
);
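
Since from(...) returns a regular Table, the result can also be transformed with existing Table API operations before being written to a sink descriptor. The following is a hypothetical sketch; the filter step is only for illustration and relies on the existing Expressions API:

// requires: import static org.apache.flink.table.api.Expressions.$;
tEnv.from(
        new Kafka()
            .version("0.11")
            .topic("user_logs")
            .property("bootstrap.servers", "localhost:9092")
            .property("group.id", "test-group")
            .startFromEarliest()
            .format(new Json())
            .schema(schema))
    // illustrative transformation before writing to the sink
    .filter($("score").isGreaterOrEqual(85))
    .executeInsert(
        new Connector("filesystem")
            .option("path", "/path/to/whatever")
            .option("format", "json")
            .schema(schema));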




Proposed Changes

Describe the new thing you want to do in appropriate detail. This may be fairly extensive and have large subsections of its own. Or it may be a few sentences. Use judgement based on the scope of the change.

Compatibility, Deprecation, and Migration Plan

  • What impact (if any) will there be on existing users?
  • If we are changing behavior how will we phase out the older behavior?
  • If we need special migration tools, describe them here.
  • When will we remove the existing behavior?

Test Plan

Describe in few sentences how the FLIP will be tested. We are mostly interested in system tests (since unit-tests are specific to implementation details). How will we know that the implementation works as expected? How will we know nothing broke?

Rejected Alternatives

If there are alternative ways of accomplishing the same thing, what were they? The purpose of this section is to motivate why the design is the way it is and not some other way.


