Status
Current state: Under Discussion
Discussion thread: here (<- link to https://mail-archives.apache.org/mod_mbox/flink-dev/)
JIRA: here (<- link to https://issues.apache.org/jira/browse/FLINK-XXXX)
Released: <Flink Version>
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
Flink introduced the Descriptor API (i.e. the TableEnvironment#connect API) in 1.5.0 to configure and instantiate TableSources/TableSinks.
There are several problems with the current Descriptor API that we want to resolve in this FLIP:
- The community has focused on the new SQL DDL feature in recent releases. SQL DDL is well-designed and has many rich features, but the Descriptor API lacks many key features, e.g. computed columns, primary keys, and partition keys.
- Currently, a connector must provide a corresponding Descriptor (e.g. new Kafka()). We hope connectors can be registered without a corresponding Descriptor. This would ease the development of connectors and could serve as a replacement for registerTableSource/Sink.
- The underlying implementations of the Descriptor API and SQL DDL are different. It's expensive to maintain two different code paths.
There are also many known issues with the Descriptor API: FLINK-17548, FLINK-17186, FLINK-15801, FLINK-15943.
Public Interfaces
We propose to drop the existing method TableEnvironment#connect (deprecated in 1.11) and some related interfaces/classes, including:
(drop) TableEnvironment#connect
(drop) ConnectTableDescriptor
(drop) BatchTableDescriptor
(drop) StreamTableDescriptor
(drop) ConnectorDescriptor
(drop) Rowtime
(refactor) TableDescriptor
(refactor) Schema
We propose to introduce a new set of descriptor APIs for Table API.
TableEnvironment#createTemporaryTable
/** Creates a temporary table from a descriptor. */
void createTemporaryTable(String tablePath, TableDescriptor tableDescriptor);
The TableDescriptor is a unified interface/class to represent a SQL DDL structure (or a CatalogTable internally). It can be a specific connector descriptor, e.g. Kafka, or a general purpose descriptor, i.e. Connector. All the methods can be chained on the instance, including .schema(), .partitionedBy(), and .like(). We will discuss TableDescriptor in detail in the Proposed Changes section.
A full example will look like this:
tEnv.createTemporaryTable(
"MyTable",
new Kafka()
.version("0.11")
.topic("user_logs")
.property("bootstrap.servers", "localhost:9092")
.property("group.id", "test-group")
.startFromEarliest()
.sinkPartitionerRoundRobin()
.format(new Json().ignoreParseErrors(false))
.schema(
new Schema()
.column("user_id", DataTypes.BIGINT())
.column("user_name", DataTypes.STRING())
.column("score", DataTypes.DECIMAL(10, 2))
.column("log_ts", DataTypes.STRING())
.column("part_field_0", DataTypes.STRING())
.column("part_field_1", DataTypes.INT())
.proctime("proc")
.computedColumn("my_ts", "TO_TIMESTAMP(log_ts)") // computed column
.watermarkFor("my_ts").boundedOutOfOrderTimestamps(Duration.ofSeconds(5))
.primaryKey("user_id"))
.partitionedBy("part_field_0", "part_field_1")
);
Alternatively, the same table can be registered with the general purpose Connector descriptor:
tEnv.createTemporaryTable(
"MyTable",
new Connector("kafka-0.11")
.option("topic", "user_logs")
.option("properties.bootstrap.servers", "localhost:9092")
.option("properties.group.id", "test-group")
.option("scan.startup.mode", "earliest")
.option("format", "json")
.option("json.ignore-parse-errors", "true")
.option("sink.partitioner", "round-robin")
.schema(
new Schema()
.column("user_id", DataTypes.BIGINT())
.column("user_name", DataTypes.STRING())
.column("score", DataTypes.DECIMAL(10, 2))
.column("log_ts", DataTypes.STRING())
.column("part_field_0", DataTypes.STRING())
.column("part_field_1", DataTypes.INT())
.proctime("proc")
.computedColumn("my_ts", "TO_TIMESTAMP(log_ts)") // computed column
.watermarkFor("my_ts").boundedOutOfOrderTimestamps(Duration.ofSeconds(5))
.primaryKey("user_id"))
.partitionedBy("part_field_0", "part_field_1")
);
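The like() method mentioned above lets a descriptor extend parts of an existing table definition, mirroring the LIKE clause of SQL DDL. A minimal sketch of how it could be combined with the descriptors above; the LikeOption.INCLUDING_ALL constant, the "MyBackupTable" name, and the "user_logs_backup" topic are hypothetical:
tEnv.createTemporaryTable(
    "MyBackupTable",
    new Connector("kafka-0.11")
        .option("topic", "user_logs_backup")       // hypothetical topic name
        .like("MyTable", LikeOption.INCLUDING_ALL) // hypothetical constant; inherits schema and remaining options from MyTable
);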
TableEnvironment#from and Table#executeInsert
Additionally, we would like to propose two new methods to improve usability for Table API users.
interface TableEnvironment {
    /** Reads a table from the given descriptor. */
    Table from(TableDescriptor tableDescriptor);

    // we already have a "from(String)" method to get a registered table from the catalog
}

interface Table {
    /** Writes the Table to a sink that is specified by the given descriptor. */
    TableResult executeInsert(TableDescriptor tableDescriptor);

    // we already have an "executeInsert(String)" method to write into a registered table in the catalog
}
With the above two methods, we can leverage the same TableDescriptor definition. Table API users can then skip the table registration step and use the source/sink out of the box. For example:
Schema schema = new Schema()
.column("user_id", DataTypes.BIGINT())
.column("score", DataTypes.DECIMAL(10, 2))
.column("ts", DataTypes.TIMESTAMP(3));
Table myKafka = tEnv.from(
new Kafka()
.version("0.11")
.topic("user_logs")
.property("bootstrap.servers", "localhost:9092")
.property("group.id", "test-group")
.startFromEarliest()
.sinkPartitionerRoundRobin()
.format(new Json().ignoreParseErrors(false))
.schema(schema)
);
// read from the Kafka table and write into the filesystem table
myKafka.executeInsert(
new Connector("filesystem")
.option("path", "/path/to/whatever")
.option("format", "json")
.schema(schema)
);
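Because from() returns an ordinary Table, a descriptor-based source can be combined with any Table API operation before writing out. A minimal sketch, assuming the expression DSL of the Java Table API (Expressions.$) is imported:
// filter the Kafka source before writing into the filesystem sink
myKafka
    .filter($("score").isGreater(90))
    .executeInsert(
        new Connector("filesystem")
            .option("path", "/path/to/whatever")
            .option("format", "json")
            .schema(schema)
    );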
Proposed Changes
TableDescriptor
The current TableDescriptor will be refactored into:
/**
 * Describes a table to connect. It is the same representation as a SQL CREATE TABLE DDL statement.
 */
@PublicEvolving
public abstract class TableDescriptor {

    /**
     * Specifies the table schema.
     */
    public TableDescriptor schema(Schema schema) {...}

    /**
     * Specifies the partition keys of this table.
     */
    public TableDescriptor partitionedBy(String... fieldNames) {...}

    /**
     * Extends some parts from the original registered table path.
     */
    public TableDescriptor like(String originalTablePath, LikeOption... likeOptions) {...}

    /**
     * Extends some parts from the original table descriptor.
     */
    public TableDescriptor like(TableDescriptor originalTableDescriptor, LikeOption... likeOptions) {...}

    /**
     * Specifies the connector options of this table; subclasses should override this method.
     */
    protected abstract Map<String, String> toConnectorOptions();
}

public class Connector extends TableDescriptor {

    private final Map<String, String> options = new HashMap<>();

    public Connector(String identifier) {
        this.options.put(CONNECTOR.key(), identifier);
    }

    public Connector option(String key, String value) {
        options.put(key, value);
        return this;
    }

    @Override
    protected Map<String, String> toConnectorOptions() {
        return new HashMap<>(options);
    }
}

public class Kafka extends TableDescriptor {

    private final Map<String, String> options = new HashMap<>();

    public Kafka() {
        this.options.put(CONNECTOR.key(), "kafka");
    }

    public Kafka version(String version) {
        this.options.put(CONNECTOR.key(), "kafka-" + version);
        return this;
    }

    public Kafka topic(String topic) {
        this.options.put("topic", topic);
        return this;
    }

    public Kafka format(FormatDescriptor formatDescriptor) {
        this.options.putAll(formatDescriptor.toFormatOptions());
        return this;
    }

    ...
}
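With this design, connector developers are not required to ship a dedicated descriptor class (the general purpose Connector always works), but they can still provide one for a fluent experience. A minimal sketch of such a custom descriptor following the pattern above; the class name, the "my-connector" identifier, and the endpoint option are hypothetical:
public class MyConnector extends TableDescriptor {

    private final Map<String, String> options = new HashMap<>();

    public MyConnector() {
        // "my-connector" is a hypothetical connector identifier
        this.options.put(CONNECTOR.key(), "my-connector");
    }

    public MyConnector endpoint(String endpoint) {
        // "endpoint" is a hypothetical connector option
        this.options.put("endpoint", endpoint);
        return this;
    }

    @Override
    protected Map<String, String> toConnectorOptions() {
        return new HashMap<>(options);
    }
}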
Schema
The current Rowtime class will be removed. The current Schema class will be refactored into:
/**
 * Describes a schema of a table.
 */
@PublicEvolving
public class Schema {

    /**
     * Adds a column with the column name and the data type.
     */
    public Schema column(String columnName, DataType columnType) {...}

    /**
     * Adds a computed column with the column name and the SQL expression string.
     */
    public Schema computedColumn(String columnName, String sqlExpression) {...}

    /**
     * Adds a processing-time column with the given column name.
     */
    public Schema proctime(String columnName) {...}

    /**
     * Specifies the primary key constraint for a set of given columns.
     */
    public Schema primaryKey(String... columnNames) {...}

    /**
     * Specifies the watermark strategy for the rowtime attribute.
     */
    public SchemaWithWatermark watermarkFor(String rowtimeColumn) {...}

    public static class SchemaWithWatermark {

        /**
         * Specifies a custom watermark strategy using the given SQL expression string.
         */
        public Schema as(String watermarkSqlExpr) {...}

        /**
         * Specifies a watermark strategy for situations with monotonously ascending timestamps.
         */
        public Schema ascendingTimestamps() {...}

        /**
         * Specifies a watermark strategy for situations where records are out of order, but you can place
         * an upper bound on how far the events are out of order. An out-of-order bound B means that
         * once an event with timestamp T was encountered, no events older than {@code T - B} will
         * follow any more.
         */
        public Schema boundedOutOfOrderTimestamps(Duration maxOutOfOrderness) {...}
    }
}
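For completeness, a minimal sketch of the two watermark strategies not shown in the earlier examples, reusing column names from above:
// custom watermark expression as an alternative to boundedOutOfOrderTimestamps
Schema customWatermark = new Schema()
    .column("log_ts", DataTypes.STRING())
    .computedColumn("my_ts", "TO_TIMESTAMP(log_ts)")
    .watermarkFor("my_ts").as("my_ts - INTERVAL '5' SECOND");

// monotonously ascending timestamps
Schema ascending = new Schema()
    .column("ts", DataTypes.TIMESTAMP(3))
    .watermarkFor("ts").ascendingTimestamps();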
Compatibility, Deprecation, and Migration Plan
- What impact (if any) will there be on existing users?
- If we are changing behavior how will we phase out the older behavior?
- If we need special migration tools, describe them here.
- When will we remove the existing behavior?
Test Plan
Describe in few sentences how the FLIP will be tested. We are mostly interested in system tests (since unit-tests are specific to implementation details). How will we know that the implementation works as expected? How will we know nothing broke?
Rejected Alternatives
If there are alternative ways of accomplishing the same thing, what were they? The purpose of this section is to motivate why the design is the way it is and not some other way.