...

Public Interfaces

We propose to drop the existing method TableEnvironment#connect (deprecated in 1.11) and some related interfaces/classes, including:

  • (drop) TableEnvironment#connect
  • (drop) ConnectTableDescriptor
  • (drop) BatchTableDescriptor
  • (drop) StreamTableDescriptor
  • (drop) ConnectorDescriptor
  • (drop) Rowtime
  • (refactor) TableDescriptor
  • (refactor) Schema


And introduce a new set of descriptor APIs for Table API.

TableEnvironment#createTemporaryTable


Code Block
languagejava
/** Creates a temporary table from a descriptor. */
void createTemporaryTable(String tablePath, TableDescriptor tableDescriptor);


The TableDescriptor is a unified interface/class that represents a SQL DDL structure (or a CatalogTable internally). It can be a specific connector descriptor, e.g. Kafka, or a general purpose descriptor, i.e. Connector. All the methods can be chained on the instance, including .schema(), .partitionedBy(), and .like(). We will discuss TableDescriptor in detail in the Proposed Changes section.

...

Code Block
languagejava
tEnv.createTemporaryTable(
	"MyTable",
	new Kafka()
		.version("0.11")
		.topic("user_logs")
		.property("bootstrap.servers", "localhost:9092")
		.property("group.id", "test-group")
		.startFromEarliest()
		.sinkPartitionerRoundRobin()
		.format(new Json().ignoreParseErrors(false))
		.schema(
			new Schema()
				.column("user_id", DataTypes.BIGINT())
				.column("user_name", DataTypes.STRING())
				.column("score", DataTypes.DECIMAL(10, 2))
				.column("log_ts", DataTypes.STRING())
				.column("part_field_0", DataTypes.STRING())
				.column("part_field_1", DataTypes.INT())
				.proctime("proc")
				.computedColumn("my_ts", "TO_TIMESTAMP(log_ts)") // computed column
				.watermarkFor("my_ts").boundedOutOfOrderTimestamps(Duration.ofSeconds(5))
				.primaryKey("user_id"))
		.partitionedBy("part_field_0", "part_field_1")
);

Code Block
languagejava
tEnv.createTemporaryTable(
	"MyTable",
	new Connector("kafka-0.11")
		.option("topic", "user_logs")
		.option("properties.bootstrap.servers", "localhost:9092")
		.option("properties.group.id", "test-group")
		.option("scan.startup.mode", "earliest")
		.option("format", "json")
		.option("json.ignore-parse-errors", "true")
		.option("sink.partitioner", "round-robin")
		.schema(
			new Schema()
				.column("user_id", DataTypes.BIGINT())
				.column("user_name", DataTypes.STRING())
				.column("score", DataTypes.DECIMAL(10, 2))
				.column("log_ts", DataTypes.STRING())
				.column("part_field_0", DataTypes.STRING())
				.column("part_field_1", DataTypes.INT())
				.proctime("proc")
				.computedColumn("my_ts", "TO_TIMESTAMP(log_ts)") // computed column
				.watermarkFor("my_ts").boundedOutOfOrderTimestamps(Duration.ofSeconds(5))
				.primaryKey("user_id"))
		.partitionedBy("part_field_0", "part_field_1")
);


TableEnvironment#from and Table#executeInsert


Additionally, we propose two new methods to improve usability for Table API users.
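The exact signatures are elided here; as an assumption inferred from the usage example below, they would look roughly like this (method shapes and return types are guesses, not quoted from the proposal):

Code Block
languagejava
// Assumed sketch, inferred from the example below -- not the proposal's verbatim API.

/** On TableEnvironment: creates a Table from a descriptor without registering it. */
Table from(TableDescriptor tableDescriptor);

/** On Table: writes this table into the table described by the given descriptor. */
TableResult executeInsert(TableDescriptor tableDescriptor);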

...

With the above two methods, Table API users can reuse the same TableDescriptor definition, skip the table registration step, and use the source/sink out of the box. For example:

Code Block
languagejava
Schema schema = new Schema()
	.column("user_id", DataTypes.BIGINT())
	.column("score", DataTypes.DECIMAL(10, 2))
	.column("ts", DataTypes.TIMESTAMP(3));
Table myKafka = tEnv.from(
	new Kafka()
		.version("0.11")
		.topic("user_logs")
		.property("bootstrap.servers", "localhost:9092")
		.property("group.id", "test-group")
		.startFromEarliest()
		.sinkPartitionerRoundRobin()
		.format(new Json().ignoreParseErrors(false))
		.schema(schema)
);
// read from the Kafka table and write into the filesystem table
myKafka.executeInsert(
	new Connector("filesystem")
		.option("path", "/path/to/whatever")
		.option("format", "json")
		.schema(schema)
);


Proposed Changes

TableDescriptor

The current TableDescriptor will be refactored into:

Code Block
languagejava
titleTableDescriptor
/**
 * Describes a table to connect. It is the same representation as a SQL CREATE TABLE DDL statement.
 */
@PublicEvolving
public abstract class TableDescriptor {

	/**
	 * Specifies the table schema.
	 */
	public TableDescriptor schema(Schema schema) {...}

	/**
	 * Specifies the partition keys of this table.
	 */
	public TableDescriptor partitionedBy(String... fieldNames) {...}

	/**
	 * Extends some parts from the original registered table path.
	 */
	public TableDescriptor like(String originalTablePath, LikeOption... likeOptions) {...}

	/**
	 * Extends some parts from the original table descriptor.
	 */
	public TableDescriptor like(TableDescriptor originalTableDescriptor, LikeOption... likeOptions) {...}

	/**
	 * Specifies the connector options of this table. Subclasses should override this method.
	 */
	protected abstract Map<String, String> connectorOptions();
}

public class Connector extends TableDescriptor {

	private final Map<String, String> options = new HashMap<>();

	public Connector(String identifier) {
		this.options.put(CONNECTOR.key(), identifier);
	}

	public Connector option(String key, String value) {
		options.put(key, value);
		return this;
	}

	@Override
	protected Map<String, String> connectorOptions() {
		return new HashMap<>(options);
	}
}

public class Kafka extends TableDescriptor {

	private final Map<String, String> options = new HashMap<>();

	public Kafka() {
		this.options.put(CONNECTOR.key(), "kafka");
	}

	public Kafka version(String version) {
		this.options.put(CONNECTOR.key(), "kafka-" + version);
		return this;
	}

	public Kafka topic(String topic) {
		this.options.put("topic", topic);
		return this;
	}

	public Kafka format(FormatDescriptor formatDescriptor) {
		this.options.putAll(formatDescriptor.toFormatOptions());
		return this;
	}

   ...
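To illustrate the extension pattern, a user-defined descriptor can follow the same shape as Connector and Kafka above. The FileSystem class below is a hypothetical sketch for illustration only, not part of this proposal:

Code Block
languagejava
// Hypothetical sketch (not part of this proposal): a connector-specific
// descriptor for the filesystem connector, following the same pattern as Kafka.
public class FileSystem extends TableDescriptor {

	private final Map<String, String> options = new HashMap<>();

	public FileSystem() {
		this.options.put(CONNECTOR.key(), "filesystem");
	}

	/** Sets the directory or file path to read from or write to. */
	public FileSystem path(String path) {
		this.options.put("path", path);
		return this;
	}

	/** Sets the format identifier, e.g. "json" or "csv". */
	public FileSystem format(String format) {
		this.options.put("format", format);
		return this;
	}

	@Override
	protected Map<String, String> connectorOptions() {
		return new HashMap<>(options);
	}
}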



Schema

The current Rowtime class will be removed. The current Schema class will be refactored into:

Code Block
languagejava
titleSchema
/**
 * Describes a schema of a table.
 */
@PublicEvolving
public class Schema {

	/**
	 * Adds a column with the column name and the data type.
	 */
	public Schema column(String columnName, DataType columnType) {...}

	/**
	 * Adds a computed column with the column name and the SQL expression string.
	 */
	public Schema computedColumn(String columnName, String sqlExpression) {...}

	/**
	 * Adds a processing-time column with the given column name.
	 */
	public Schema proctime(String columnName) {...}

	/**
	 * Specifies the primary key constraint for a set of given columns.
	 */
	public Schema primaryKey(String... columnNames) {...}

	/**
	 * Specifies the watermark strategy for rowtime attribute.
	 */
	public SchemaWithWatermark watermarkFor(String rowtimeColumn) {...}

	public static class SchemaWithWatermark {

		/**
		 * Specifies a custom watermark strategy using the given SQL expression string.
		 */
		public Schema as(String watermarkSqlExpr) {...}

		/**
		 * Specifies a watermark strategy for situations with monotonously ascending timestamps.
		 */
		public Schema ascendingTimestamps() {...}

		/**
		 * Specifies a watermark strategy for situations where records are out of order, but you can place
		 * an upper bound on how far the events are out of order. An out-of-order bound B means that
		 * once an event with timestamp T has been encountered, no events older than {@code T - B} will
		 * follow.
		 */
		public Schema boundedOutOfOrderTimestamps(Duration maxOutOfOrderness) {...}
	}
}
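As a usage sketch (assuming the fluent API above), a schema with a computed rowtime column and a custom watermark expression could be declared as follows; the column names are illustrative only:

Code Block
languagejava
// Usage sketch based on the Schema API above; column names are illustrative.
Schema schema = new Schema()
	.column("user_id", DataTypes.BIGINT())
	.column("log_ts", DataTypes.STRING())
	.computedColumn("ts", "TO_TIMESTAMP(log_ts)")
	.primaryKey("user_id")
	.watermarkFor("ts").as("ts - INTERVAL '5' SECOND");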

...