
Status

Current state: Accepted

Discussion thread:  http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-69-Flink-SQL-DDL-Enhancement-td33090.html

JIRA:  https://issues.apache.org/jira/browse/FLINK-14686

Released:  TBD

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation:

In Flink 1.9, we introduced some major features, such as complete catalog support [1] and SQL DDL support [2]. These features are a critical step toward letting Flink manage data and metadata like a classic RDBMS, and they make it easier for developers to build real-time/offline warehouses and similar systems on top of Flink.

Despite that big advancement, Flink SQL DDL still lacks support for managing metadata and data, mainly in the TableEnvironment API and partially in the SQL Client:

  1. No support for show/describe/use catalog(s)
  2. No support for create/drop/alter/use/show/describe database(s)
  3. No support for show/describe table(s) or CREATE TABLE AS SELECT
  4. No support for create/drop/show/describe view(s)

Goals:

We would like to achieve the following goals in this FLIP.

  • Add Catalog DDL enhancement support
  • Add Database DDL enhancement support
  • Add Table DDL enhancement support

Note:

The following issues are out of the scope of this FLIP. They will be taken care of by other teams/committers.

  1. DDL related to View(s)
  2. DDL related to Partition(s)
  3. DDL related to Function(s) (a separate FLIP-79 addresses this issue)
  4. Unifying the way the SQL CLI and TableEnvironment parse SQL through a common Flink SQL parser module

Proposed DDL Changes:

A few of the DDLs below are already supported in the SQL Client. We want them to be migrated properly to the Flink SQL parser approach.

Database DDL:

We propose to add the following DDLs related to database operations.

  • createDatabaseStatement:

CREATE DATABASE [ IF NOT EXISTS ] [ catalogName.] dataBaseName
[ COMMENT database_comment ]
[ WITH ( name=value [, name=value]* ) ]

IF NOT EXISTS:

If a database with the same name already exists, nothing will happen.

  • dropDatabaseStatement:

DROP DATABASE [ IF EXISTS ] [ catalogName.] dataBaseName
[ RESTRICT | CASCADE ]

IF EXISTS

If the database to drop does not exist, nothing happens.

RESTRICT

Dropping a non-empty database triggers an exception. Enabled by default.

CASCADE

Dropping a non-empty database also drops all associated tables and functions.

The Catalog.dropDatabase method should add a new cascade flag:

/**
 * Drop a database.
 *
 * @param name              Name of the database to be dropped.
 * @param ignoreIfNotExists Flag to specify behavior when the database does not exist:
 *                          if set to false, throw an exception,
 *                          if set to true, do nothing.
 * @param cascade           Flag to specify behavior when the database is not empty:
 *                          if set to false, throw an exception,
 *                          if set to true, delete all related tables and functions.
 * @throws DatabaseNotExistException if the given database does not exist
 * @throws DatabaseNotEmptyException if the database is not empty and cascade is false
 * @throws CatalogException in case of any runtime exception
 */
void dropDatabase(String name, boolean ignoreIfNotExists, boolean cascade)
    throws DatabaseNotExistException, DatabaseNotEmptyException, CatalogException;
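To illustrate the intended IF EXISTS and RESTRICT/CASCADE behavior, here is a minimal, self-contained sketch that uses an in-memory map as a stand-in catalog. The class, field, and exception names are illustrative only, not the real Flink Catalog API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in for a catalog; not the real Flink Catalog interface.
public class InMemoryCatalog {
    // database name -> tables that live in it
    private final Map<String, List<String>> databases = new HashMap<>();

    public void createDatabase(String name) {
        databases.putIfAbsent(name, new ArrayList<>());
    }

    public void createTable(String db, String table) {
        databases.get(db).add(table);
    }

    public void dropDatabase(String name, boolean ignoreIfNotExists, boolean cascade) {
        List<String> tables = databases.get(name);
        if (tables == null) {
            if (ignoreIfNotExists) {
                return; // IF EXISTS: database absent, silently do nothing
            }
            throw new IllegalStateException("Database " + name + " does not exist");
        }
        if (!tables.isEmpty() && !cascade) {
            // RESTRICT (the default): refuse to drop a non-empty database
            throw new IllegalStateException("Database " + name + " is not empty");
        }
        // CASCADE (or empty database): drop it together with all its tables
        databases.remove(name);
    }

    public boolean exists(String name) {
        return databases.containsKey(name);
    }
}
```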

  • alterDatabaseStatement:

ALTER DATABASE [ catalogName.] dataBaseName
SET ( name=value [, name=value]* )

Set one or more properties in the specified database. If a particular property is already set in the database, override the old value with the new one.
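The override semantics are the same as Java's Map.putAll: keys named in the SET clause are overwritten, and all other keys are left untouched. A minimal sketch (the class and method names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper illustrating ALTER DATABASE ... SET (k=v, ...) semantics.
public class AlterDatabaseDemo {
    // Keys present in toSet overwrite the current values; all other keys are kept.
    static Map<String, String> applySet(Map<String, String> current,
                                        Map<String, String> toSet) {
        Map<String, String> result = new HashMap<>(current);
        result.putAll(toSet);
        return result;
    }
}
```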

  • useDatabaseStatement:

USE [ catalogName.] dataBaseName 

Set the current database. All subsequent commands that do not explicitly specify a database will use this one. If the provided database does not exist, an exception is thrown.

Note: already supported in the SQL Client, but needs support in TableEnvironment.

  • showDatabasesStatement:

SHOW DATABASES

Return all databases in the current catalog.

Note: already supported in the SQL Client, but needs support in TableEnvironment.

  • descDatabaseStatement:

DESCRIBE DATABASE [ EXTENDED ] [ catalogName.] dataBaseName

Return the metadata of an existing database (name, comment). 

EXTENDED:

Return the metadata of an existing database (name, comment, and properties). 

Usage example:

FLINK SQL> describe database hive.default;
database name: default
comment: hive default database
FLINK SQL > describe database extended hive.default;
database name: default
comment: hive default database
properties:
create-time: 2019-09-26 10:00:00

Table DDL:

We propose to add the following DDLs related to table operations.

  • showTablesStatement:

SHOW TABLES

Return all tables in the current database.

Note: already supported in the SQL Client, but needs support in TableEnvironment.

  • descTableStatement:

DESCRIBE [ EXTENDED ] [[catalogName.] dataBaseName].tableName

Return the metadata of an existing table (column names, data types, and comments). 

EXTENDED

Display detailed information about the table, including table type, properties and so on.

Note: already supported in the SQL Client, but needs support in TableEnvironment.

  • alterTableStatement:

ALTER TABLE [[catalogName.] dataBaseName].tableName
RENAME TO newTableName

Rename an existing table.

ALTER TABLE [[catalogName.] dataBaseName].tableName
SET ( name=value [, name=value]* )

Set one or more properties of an existing table. If a particular property is already set on the table, override the old value with the new one.
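A minimal in-memory sketch of both ALTER TABLE variants above, with a plain map standing in for the catalog; all class and method names here are illustrative, not the real Flink API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in catalog demonstrating RENAME TO and SET semantics.
public class AlterTableDemo {
    // table name -> table properties
    private final Map<String, Map<String, String>> tables = new HashMap<>();

    public void createTable(String name, Map<String, String> props) {
        tables.put(name, new HashMap<>(props));
    }

    // ALTER TABLE t RENAME TO newName
    public void renameTable(String name, String newName) {
        if (!tables.containsKey(name)) {
            throw new IllegalStateException("Table " + name + " does not exist");
        }
        if (tables.containsKey(newName)) {
            throw new IllegalStateException("Table " + newName + " already exists");
        }
        tables.put(newName, tables.remove(name));
    }

    // ALTER TABLE t SET (k=v, ...): overwrite named keys, keep the rest
    public void setProperties(String name, Map<String, String> toSet) {
        tables.get(name).putAll(toSet);
    }

    public Map<String, String> getTable(String name) {
        return tables.get(name);
    }
}
```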

Proposed TableEnvironment SQL API Changes:

Notice: 

This is no longer in the scope of this FLIP; we have a dedicated FLIP-84 to discuss the API changes. This section has therefore been removed; anyone interested can refer to the archived version of the Table API changes.

Implementation Changes:

Support these DDLs in TableEnvironment. In addition, a few DDLs currently implemented by the SQL Client should be migrated to the Calcite parser.

Implementation Plan:

  1. Implement all parser logic in the flink-sql-parser module
  2. After FLIP-84 is completed, add DDL support one by one through `executeString(String sql)` or a similar interface.
  3. After steps 1 and 2 are finished, the SQL CLI can easily support the corresponding DDLs by simply calling TableEnvironment's SQL execution method.

Compatibility, Deprecation, and Migration Plan

  • What impact (if any) will there be on existing users?

Not Applicable.

  • If we are changing behavior how will we phase out the older behavior?

Not Applicable.

  • If we need special migration tools, describe them here.

Not Applicable.

  • When will we remove the existing behavior?

Not Applicable.

Test Plan

  1. New syntax added to the Calcite parser can be tested by adding test cases to FlinkSqlParserImplTest.
  2. DDLs supported in TableEnvironment can be verified by unit tests.

Rejected Alternatives

No rejected alternatives.

Reference:

  1. https://issues.apache.org/jira/browse/FLINK-11275
  2. https://issues.apache.org/jira/browse/FLINK-10232
  3. Hive DDL: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
  4. Spark DDL: https://docs.databricks.com/spark/latest/spark-sql/index.html
  5. Flink SQL DDL design: https://docs.google.com/document/d/1TTP-GCC8wSsibJaSUyFZ_5NBAHYEB1FVmPpP7RgDGBA/edit

