Status
...
Page properties
...
JIRA: here (<- link to https://issues.apache.org/jira/browse/FLINK-XXXX)
...
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
...
Table of Contents
Motivation
The TableEnvironment#connect API was introduced in Flink 1.5.0 in order to instantiate and configure table sources and sinks, i.e. the Descriptor API. Since then, the SQL DDL has been actively developed and improved, and as a result it is more powerful; many of its features are inaccessible from #connect. Furthermore, the Descriptor API has shown several shortcomings which we want to resolve in this FLIP:
- Connectors have to implement a corresponding descriptor (e.g. new Kafka()) to be usable via #connect, which increases maintenance effort and duplicates information. We hope connectors can be registered without a corresponding descriptor; this eases the development of connectors and can serve as a replacement for registerTableSource/Sink.
- SQL DDL is well-designed and has many rich features, but the Descriptor API lacks many key features, e.g. computed columns, primary keys, partition keys, and so on.
- The underlying implementations of the Descriptor API and the SQL DDL are different, requiring the maintenance of two different code paths.
- There are many known issues about the Descriptor API: FLINK-17548, FLINK-17186, FLINK-15801, FLINK-15943.
Public Interfaces
We propose to drop the existing method TableEnvironment#connect (deprecated in 1.11) and some related interfaces/classes, including:
(drop) TableEnvironment#connect
(drop) ConnectTableDescriptor
(drop) BatchTableDescriptor
(drop) StreamTableDescriptor
(drop) ConnectorDescriptor
(drop) Rowtime
(refactor) TableDescriptor
(refactor) Schema
We propose to introduce a new set of descriptor APIs for Table API.
TableEnvironment#createTemporaryTable()
Code Block
/** creates a temporary table from a descriptor. */
void createTemporaryTable(String tablePath, TableDescriptor tableDescriptor);
The TableDescriptor is a unified interface/class to represent a SQL DDL structure (or a CatalogTable internally). It can be a specific connector descriptor, e.g. Kafka, or a general purpose descriptor, i.e. Connector. All the methods can be chained on the instance, including .schema(), .partitionedBy(), and .like(). We will discuss TableDescriptor in detail in the Proposed Changes section.
A full example will look like this:
Code Block
// register a table using specific descriptor
tEnv.createTemporaryTable(
"MyTable",
new Kafka()
.version("0.11")
.topic("user_logs")
.property("bootstrap.servers", "localhost:9092")
.property("group.id", "test-group")
.startFromEarliest()
.sinkPartitionerRoundRobin()
.format(new Json().ignoreParseErrors(false))
.schema(
new Schema()
.column("user_id", DataTypes.BIGINT())
.column("user_name", DataTypes.STRING())
.column("score", DataTypes.DECIMAL(10, 2))
.column("log_ts", DataTypes.STRING())
.column("part_field_0", DataTypes.STRING())
.column("part_field_1", DataTypes.INT())
.column("proc", proctime()) // define a processing-time attribute with column name "proc"
.column("my_ts", toTimestamp($("log_ts")) // computed column
.watermarkFor("my_ts", $("my_ts").minus(lit(3).seconds())) // defines watermark and rowtime attribute
.primaryKey("user_id"))
.partitionedBy("part_field_0", "part_field_1") // Kafka doesn't support partitioned table yet, this is just an example for the API
);
As discussed above, #connect has been deprecated since Flink 1.11. In this FLIP, we propose a new API to programmatically define sources and sinks on the Table API without having to switch to SQL DDL.
Public Interfaces
Interface | Change | Comment |
---|---|---|
TableEnvironment | ||
#connect | Remove | Deprecated since Flink 1.11 |
#createTable(path, TableDescriptor) | New | |
#createTemporaryTable(path, TableDescriptor) | New | |
#from(TableDescriptor) | New | |
Table | ||
#executeInsert(TableDescriptor) | New | |
StatementSet | ||
#addInsert(TableDescriptor, Table) | New | |
Other | ||
ConnectTableDescriptor | Remove | |
BatchTableDescriptor | Remove | |
StreamTableDescriptor | Remove | |
ConnectorDescriptor | Remove | |
TableDescriptor | Refactor | |
Rowtime | Remove | |
TableEnvironment#createTable & TableEnvironment#createTemporaryTable
In order for users to register sources and sinks via Table API, we introduce two new methods:
Code Block
/**
* Creates a new table from the given descriptor.
*
* The table is created in the catalog defined by the given path.
*/
void createTable(String path, TableDescriptor descriptor);
/**
* Creates a new temporary table from the given descriptor.
*
* Temporary objects can shadow permanent ones. If a permanent object in a given path exists,
* it will be inaccessible in the current session. To make the permanent object available again
* one can drop the corresponding temporary object.
*/
void createTemporaryTable(String path, TableDescriptor descriptor);
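For illustration, the following sketch shows the shadowing behaviour described in the Javadoc above (the table path, the descriptors, and the use of dropTemporaryTable are illustrative assumptions, not part of this proposal):
Code Block
// Illustrative only: "cat.db.Orders" and the two descriptors are made-up names.
tEnv.createTable("cat.db.Orders", permanentDescriptor);          // permanent table in the catalog
tEnv.createTemporaryTable("cat.db.Orders", temporaryDescriptor); // shadows the permanent table
tEnv.from("cat.db.Orders");                                      // resolves to the temporary table
tEnv.dropTemporaryTable("cat.db.Orders");                        // the permanent table is visible again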
The TableDescriptor interface is a (generic) representation of the structure used in the SQL DDL, or CatalogTable, respectively. It implements a fluent API to allow chaining and make it easy to use. Options are either specified by referring to an actual ConfigOption instance (preferred), or by string. The latter is necessary, in particular, for options which are not represented through ConfigOption instances, e.g. if they contain placeholders such as "field.#.min". The interface also offers quality-of-life methods for specifying formats such that prefixing format options is handled by the descriptor itself, which allows using ConfigOption instances for format options.
Code Block
TableDescriptor {
// Create a builder
static TableDescriptorBuilder forConnector(String connector);
Optional<Schema> getSchema();
Map<String, String> getOptions();
Optional<String> getComment();
List<String> getPartitionKeys();
Optional<TableLikeDescriptor> getLikeDescriptor();
}
TableDescriptorBuilder<SELF> {
SELF schema(Schema schema);
SELF comment(String comment);
<T> SELF option(ConfigOption<T> configOption, T value);
SELF option(String key, String value);
SELF format(String format);
SELF format(ConfigOption<?> formatOption, String format);
SELF format(FormatDescriptor formatDescriptor);
SELF format(ConfigOption<?> formatOption, FormatDescriptor formatDescriptor);
SELF partitionedBy(String... partitionKeys);
SELF like(String tableName, LikeOption... likeOptions);
TableDescriptor build();
}
TableLikeDescriptor {
String getTableName();
List<TableLikeOption> getLikeOptions();
}
FormatDescriptor {
static FormatDescriptorBuilder forFormat(String format);
String getFormat();
Map<String, String> getOptions();
}
FormatDescriptorBuilder<SELF> {
<T> SELF option(ConfigOption<T> configOption, T value);
SELF option(String key, String value);
FormatDescriptor build();
}
interface LikeOption {
enum INCLUDING implements LikeOption {
ALL,
CONSTRAINTS,
GENERATED,
OPTIONS,
PARTITIONS,
WATERMARKS
}
enum EXCLUDING implements LikeOption {
ALL,
CONSTRAINTS,
GENERATED,
OPTIONS,
PARTITIONS,
WATERMARKS
}
enum OVERWRITING implements LikeOption {
GENERATED,
OPTIONS,
WATERMARKS
}
}
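As mentioned above, prefixing of format options is handled by the descriptor itself. A minimal sketch of this behaviour (the JsonOptions class and its IGNORE_PARSE_ERRORS constant are assumptions for illustration):
Code Block
// Sketch only: assumes a ConfigOption constant JsonOptions.IGNORE_PARSE_ERRORS exists for the "json" format.
TableDescriptor descriptor = TableDescriptor.forConnector("kafka")
    .format(FormatDescriptor.forFormat("json")
        .option(JsonOptions.IGNORE_PARSE_ERRORS, true)
        .build())
    .build();
// descriptor.getOptions() now contains "format" -> "json" and the
// prefixed key "json.ignore-parse-errors" -> "true".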
The following is a simple example of how these APIs can be used:
Code Block
tEnv.createTable(
"cat.db.MyTable",
TableDescriptor.forConnector("kafka")
.comment("This is a comment")
.schema(Schema.newBuilder()
.column("f0", DataTypes.BIGINT())
.columnByExpression("f1", "2 * f0")
.columnByMetadata("f3", DataTypes.STRING())
.column("t", DataTypes.TIMESTAMP(3))
.watermark("t", "t - INTERVAL '1' MINUTE")
.primaryKey("f0")
.build())
.partitionedBy("f0")
.option(KafkaOptions.TOPIC, topic)
.option("properties.bootstrap.servers", "…")
.format("json")
.build()
);
tEnv.createTemporaryTable(
"MyTemporaryTable",
TableDescriptor.forConnector("kafka")
// …
.like("cat.db.MyTable")
);
LIKE clause for Descriptor API
We propose to support a .like(...) method on the TableDescriptor to provide the same functionality as the LIKE clause in the CREATE TABLE DDL (FLIP-110). You can refer to FLIP-110 for more details about the like options.
Here is a simple example of deriving a table from an existing one:
Code Block
tEnv.createTemporaryTable(
"OrdersInKafka",
new Kafka()
.topic("user_logs")
.property("bootstrap.servers", "localhost:9092")
.property("group.id", "test-group")
.format(new Json().ignoreParseErrors(false))
.schema(
new Schema()
.column("user_id", DataTypes.BIGINT())
.column("score", DataTypes.DECIMAL(10, 2))
.column("log_ts", DataTypes.TIMESTAMP(3))
.column("my_ts", toTimestamp($("log_ts"))
)
);
tEnv.createTemporaryTable(
"OrdersInFilesystem",
new Connector("filesystem")
.option("path", "path/to/whatever")
.schema(
new Schema()
.watermarkFor("my_ts", $("my_ts").minus(lit(3).seconds())))
.like("OrdersInKafka", LikeOption.EXCLUDING.ALL, LikeOption.INCLUDING.GENERATED)
);
The above "OrdersInFilesystem" table will be equivalent to:
Code Block
tEnv.createTemporaryTable(
"OrdersInFilesystem",
new Connector("filesystem")
.option("path", "path/to/whatever")
.schema(
new Schema()
.column("user_id", DataTypes.BIGINT())
.column("score", DataTypes.DECIMAL(10, 2))
.column("log_ts", DataTypes.TIMESTAMP(3))
.column("my_ts", toTimestamp($("log_ts"))
.watermarkFor("my_ts", $("my_ts").minus(lit(3).seconds()))
)
);
TableEnvironment#from() and Table#executeInsert()
Additionally, we would like to propose two new methods to improve usability for Table API users.
Code Block
interface TableEnvironment {
/** reads a table from the given descriptor */
Table from(TableDescriptor tableDescriptor);
// we already have a "from(String)" method to get registered table from catalog
}
interface Table {
/** Writes the Table to a sink that is specified by the given descriptor. */
TableResult executeInsert(TableDescriptor tableDescriptor);
// we already have a "executeInsert(String)" method to write into a registered table in catalog
}
With the above two methods, Table API users can leverage the same TableDescriptor definition, skip the table registration step, and use sources/sinks out of the box. For example:
Code Block
Schema schema = new Schema()
.column("user_id", DataTypes.BIGINT())
.column("score", DataTypes.DECIMAL(10, 2))
.column("ts", DataTypes.TIMESTAMP(3));
Table myKafka = tEnv.from(
new Kafka()
.version("0.11")
.topic("user_logs")
.property("bootstrap.servers", "localhost:9092")
.property("group.id", "test-group")
.startFromEarliest()
.sinkPartitionerRoundRobin()
.format(new Json().ignoreParseErrors(false))
.schema(schema)
);
// reading from kafka table and write into filesystem table
myKafka.executeInsert(
new Connector("filesystem")
.option("path", "/path/to/whatever")
.option("format", "json")
.schema(schema)
);
Proposed Changes
We will discuss the new interfaces/classes in detail in this section. All the classes will be located in the org.apache.flink.table.descriptors package in the flink-table-common module.
TableDescriptor is an abstract class representing a SQL DDL structure or a CatalogTable. It can be divided into several parts: schema, partition keys, and options. The TableDescriptor determines how to define the schema and partition keys, but leaves the options to be implemented by subclasses. Specific connectors can extend TableDescriptor and provide handy methods to set connector options (e.g. Kafka#topic(..)). We also propose to provide a built-in and general implementation of TableDescriptor, i.e. Connector. The Connector class provides a general option(String key, String value) method, so it can support arbitrary custom connector implementations (based on FLIP-95). The Connector class reduces the development effort for custom connectors, since no specific descriptor has to be implemented.
TableDescriptor
The current TableDescriptor will be refactored into:
Code Block
/**
* Describes a table to connect to. It is the same representation as a SQL CREATE TABLE DDL.
*/
@PublicEvolving
public abstract class TableDescriptor {
/**
* Specifies the table schema.
*/
public final TableDescriptor schema(Schema schema) {...}
/**
* Specifies the partition keys of this table.
*/
public final TableDescriptor partitionedBy(String... columnNames) {...}
/**
* Extends some parts from the original registered table path.
*/
public final TableDescriptor like(String originalTablePath, LikeOption... likeOptions) {...}
/**
* Extends some parts from the original table descriptor.
*/
public final TableDescriptor like(TableDescriptor originalTableDescriptor, LikeOption... likeOptions) {...}
/**
* Specifies the connector options of this table. Subclasses should override this method.
*/
protected abstract Map<String, String> connectorOptions();
}
Code Block
public class Connector extends TableDescriptor {
private final Map<String, String> options = new HashMap<>();
public Connector(String identifier) {
this.options.put(CONNECTOR.key(), identifier);
}
public Connector option(String key, String value) {
options.put(key, value);
return this;
}
@Override
protected Map<String, String> connectorOptions() {
return new HashMap<>(options);
}
}
Code Block
public class Kafka extends TableDescriptor {
private final Map<String, String> options = new HashMap<>();
public Kafka() {
this.options.put(CONNECTOR.key(), "kafka");
}
public Kafka version(String version) {
this.options.put(CONNECTOR.key(), "kafka-" + version);
return this;
}
public Kafka topic(String topic) {
this.options.put("topic", topic);
return this;
}
public Kafka format(FormatDescriptor formatDescriptor) {
this.options.putAll(formatDescriptor.toFormatOptions());
return this;
}
...
}
Schema
The current Rowtime class will be removed. The current Schema class will be refactored into:
Code Block
/**
* Describes a schema of a table.
*/
@PublicEvolving
public class Schema {
/**
* Adds a column with the column name and the data type.
*/
public Schema column(String columnName, DataType columnType) {...}
/**
* Adds a computed column with the column name and the column Expression.
*/
public Schema column(String columnName, Expression columnExpr) {...}
/**
* Specifies the primary key constraint for a set of given columns.
*/
public Schema primaryKey(String... columnNames) {...}
/**
* Specifies the watermark strategy for rowtime attribute.
*/
public SchemaWithWatermark watermarkFor(String rowtimeColumn, Expression watermarkExpr) {...}
}
Implementation
I propose to only support this in the Blink planner, as we are going to drop the old planner in the near future and the old planner doesn't support FLIP-95 connectors.
The descriptors TableDescriptor/Schema can be used in TableEnvironment#from(), TableEnvironment#createTemporaryTable(), and Table#executeInsert(). The same applies to StreamTableEnvironment, but BatchTableEnvironment will not support this as it is implemented in the old planner.
The descriptors TableDescriptor/Schema only define the meta information (just like a DDL string) used to build the CatalogTable. The implementation of TableEnvironment#createTemporaryTable(path, descriptor) will translate the descriptor into a CatalogTable. TableDescriptor stores the meta information in package-visible member fields/methods, e.g. schema, partitionedKeys, connectorOptions(), and so does the Schema class.
TableEnvironmentImpl#createTemporaryTable will create a new instance of TableDescriptorRegistration to register the descriptor as a CatalogTable in the catalog. It is an @Internal class located in org.apache.flink.table.descriptors, so that TableDescriptorRegistration can access the member fields of TableDescriptor/Schema. TableDescriptorRegistration will convert the schema into a TableSchema (with the help of CatalogTableSchemaResolver), and convert partitionedKeys, options, and the tableSchema into a CatalogTableImpl.
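A minimal sketch of this translation, assuming the @Internal helper described above (the method shape and the CatalogTableImpl constructor usage are illustrative, not final):
Code Block
// Sketch only: assumes the schema has already been resolved into a TableSchema
// (e.g. with the help of CatalogTableSchemaResolver).
@Internal
final class TableDescriptorRegistration {
    CatalogTable toCatalogTable(
            TableSchema resolvedSchema,
            List<String> partitionKeys,
            Map<String, String> options,
            String comment) {
        return new CatalogTableImpl(resolvedSchema, partitionKeys, options, comment);
    }
}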
TableEnvironment#from(descriptor) will first register the descriptor under a system-generated table path (just like TableImpl#toString does) and then scan that table path to derive the Table. Table#executeInsert() works in a similar way.
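For illustration, the behaviour of from(descriptor) could look roughly like the following sketch (the generated name pattern and the counter field are assumptions):
Code Block
// Sketch only: registers the descriptor under a generated temporary path and scans it.
public Table from(TableDescriptor descriptor) {
    String path = "UnnamedTable$" + uniqueId.incrementAndGet(); // system-generated, similar to TableImpl#toString
    createTemporaryTable(path, descriptor);
    return from(path);
}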
Compatibility, Deprecation, and Migration Plan
This is indeed an incompatible interface change, because we propose to drop the existing API and introduce new ones. But I think this is fine, as TableEnvironment#connect has been deprecated in 1.11. For users who are still using TableEnvironment#connect in 1.11, we have recommended in the Javadoc to use the SQL CREATE TABLE DDL (TableEnvironment#executeSql(String)) instead.
We have a migration plan for users who are still using TableEnvironment#connect and want to migrate to the new Descriptor APIs. The following tables list the API changes:
Schema API Changes
...
ConnectTableDescriptor API Changes
...
cat.db.MyTable")
); |
TableEnvironment#from
We propose introducing TableEnvironment#from in order to create a Table from a given descriptor. This is in line with existing methods, e.g. from(String).
Code Block
/**
* Returns a {@link Table} backed by the given descriptor.
*/
Table from(TableDescriptor descriptor);
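For example (the connector and schema below are illustrative):
Code Block
Table table = tEnv.from(
    TableDescriptor.forConnector("datagen")
        .schema(Schema.newBuilder()
            .column("f0", DataTypes.BIGINT())
            .build())
        .build());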
Table#executeInsert
We propose introducing Table#executeInsert in order to write directly to a sink defined by a descriptor. This is in line with existing methods, e.g. executeInsert(String).
Code Block
/**
* Declares that the pipeline defined by this table should be written to a table defined by the given descriptor.
*
* If no schema is defined in the descriptor, it will be inferred automatically.
*/
TableResult executeInsert(TableDescriptor descriptor);
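For example (the blackhole connector is used purely for illustration; the sink schema is inferred from the table):
Code Block
Table table = tEnv.from("cat.db.MyTable");
table.executeInsert(
    TableDescriptor.forConnector("blackhole")
        .build());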
StatementSet#addInsert
Similarly to Table#executeInsert, we propose extending StatementSet#addInsert to take a descriptor.
Code Block
/**
* Adds the given table as a sink defined by the descriptor.
*/
StatementSet addInsert(TableDescriptor descriptor, Table table);
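For example (the descriptor and source table are illustrative):
Code Block
StatementSet statementSet = tEnv.createStatementSet();
statementSet.addInsert(
    TableDescriptor.forConnector("blackhole").build(),
    tEnv.from("cat.db.MyTable"));
statementSet.execute();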
Package Structure
The new descriptor API will reside in flink-table-api-java in the org.apache.flink.table.descriptors package.
In order to make discovery of ConfigOption instances easier, we propose to move *Options classes from all built-in connectors (e.g. KafkaOptions, …) into a common package. Then users can easily discover those classes for any connectors on their current classpath. As this involves declaring these classes to be public (evolving) API, minor refactorings will be necessary to ensure they do not contain any internals.
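For example, with such a common package a user could reference connector options directly (the exact class and constant names below are assumptions and may differ after the refactoring):
Code Block
// Assumed option constants for illustration.
TableDescriptor.forConnector("kafka")
    .option(KafkaOptions.TOPIC, "user_logs")
    .option(KafkaOptions.PROPS_BOOTSTRAP_SERVERS, "localhost:9092")
    .format("json")
    .build();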
Compatibility, Deprecation, and Migration Plan
The proposed changes drop an existing API and replace it with a new, incompatible API. However, the old API has been deprecated since Flink 1.11 and is lacking support for many features, thus we are not expecting this to affect many users, as the current recommendation has been to switch to SQL DDL. For affected users, the migration requires relatively small changes and all existing features can be covered.
...
Rejected Alternatives
Keep and follow the original TableEnvironment#connect API
...
Code Block
tableEnv.connect(
new Kafka() // can be replaced by new Connector("kafka-0.11")
.version("0.11")
.topic("myTopic")
.property("bootstrap.servers", "localhost:9092")
.property("group.id", "test-group")
.startFromEarliest()
.sinkPartitionerRoundRobin()
.format(new Json().ignoreParseErrors(false)))
.schema(
new Schema()
.column("user_id", DataTypes.BIGINT())
.column("user_name", DataTypes.STRING())
.column("score", DataTypes.DECIMAL(10, 2))
.column("log_ts", DataTypes.TIMESTAMP(3))
.column("part_field_0", DataTypes.STRING())
.column("part_field_1", DataTypes.INT())
.column("proc", proctime())
.column("my_ts", toTimestamp($("log_ts")))
.watermarkFor("my_ts", $("my_ts").minus(lit(3).seconds()))
.primaryKey("user_id"))
.partitionedBy("part_field_0", "part_field_1")
.createTemporaryTable("MyTable");
...