Status

Current state: Under Discussion

Discussion thread: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-69-Flink-SQL-DDL-Enhancement-td33090.html

JIRA: FLINK-14686

Release: 1.10

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation:

In Flink 1.9, we introduced some great features such as complete catalog support [1] and SQL DDL support [2]. These features are a critical step toward letting Flink manage data and metadata like a classic RDBMS, and they make it easier for developers to build real-time/offline warehouses or similar systems on top of Flink.

Despite that big advancement, Flink SQL DDL still lacks support for several metadata and data management operations, mainly in the TableEnvironment API and partially in the SQL Client:

  1. Don’t support show/describe/use catalog(s)
  2. Don’t support create/drop/alter/use/show/desc database(s)
  3. Don’t support show/desc table(s) and create table as select
  4. Don’t support create/drop/show function(s)
  5. Don’t support create/drop/show/desc view(s)

Goals:

We would like to achieve the following goals in this FLIP.

  • Add Catalog DDL enhancement support
  • Add Database DDL enhancement support
  • Add Table DDL enhancement support
  • Add Function DDL enhancement support

Note:

The following issues are out of the scope of this FLIP. They will be taken care of by other teams/committers.

  1. DDL related to View(s)
  2. DDL related to Partition(s)
  3. Make the SQL Client and tEnv.sqlUpdate support the newly added DDLs in a unified way

Public Interfaces

Briefly list any new interfaces that will be introduced as part of this proposal or any existing interfaces that will be removed or changed. The purpose of this section is to concisely call out the public contract that will come along with this feature.

  1. DDL related to Function(s) (another separate FLIP-79 addresses this issue)
  2. Unify the way the SQL CLI and TableEnvironment parse SQL through a common Flink SQL Parser module

Proposed DDL Changes:

Several of the DDLs below are already supported in the SQL Client. We want them to be migrated to the Flink SQL Parser approach.


Catalog DDL:

We propose to add the following DDLs related to catalog operations.

  • showCatalogsStatement

Code Block
languagesql
SHOW CATALOGS

Return all catalogs.

note: already supported in the SQL Client but needs support in TableEnvironment.

  • describeCatalogStatement

Code Block
languagesql
DESCRIBE CATALOG catalogName

Return the default database name and the explainCatalog (newly added method in the Catalog interface) content of an existing catalog.

Newly added method in Catalog.java:

Code Block
languagejava
/**
 * Get a user defined catalog description.
 * @return a user-implemented catalog detailed explanation
 */
default String explainCatalog() {
    return String.format("CatalogClass:%s", this.getClass().getCanonicalName());
}

use eg:

Flink SQL> describe catalog hiveCatalog;
default database: default
hiveVersion: 2.3.4
...

  • useCatalogStatement

Code Block
languagesql
USE CATALOG catalogName

Set the current catalog.

note: already supported in the SQL Client but needs support in tEnv.sqlUpdate().
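To illustrate the intended behavior of the proposed explainCatalog default method, here is a minimal, self-contained sketch. The Catalog interface below is a stripped-down stand-in for Flink's org.apache.flink.table.catalog.Catalog, and PlainCatalog/DemoHiveCatalog are hypothetical classes invented for the example:

```java
// Stripped-down stand-in for the Catalog interface, showing only the
// proposed default method; the real interface lives in flink-table-common.
interface Catalog {
    /**
     * Get a user defined catalog description.
     * @return a user-implemented catalog detailed explanation
     */
    default String explainCatalog() {
        return String.format("CatalogClass:%s", this.getClass().getCanonicalName());
    }
}

// A catalog that does not override the method falls back to the default.
class PlainCatalog implements Catalog {}

// A catalog may override the default to expose extra details, the way a
// Hive catalog might report its default database and Hive version.
class DemoHiveCatalog implements Catalog {
    @Override
    public String explainCatalog() {
        return "default database: default\nhiveVersion: 2.3.4";
    }
}

class ExplainCatalogDemo {
    public static void main(String[] args) {
        System.out.println(new PlainCatalog().explainCatalog());   // prints CatalogClass:PlainCatalog
        System.out.println(new DemoHiveCatalog().explainCatalog());
    }
}
```

DESCRIBE CATALOG would surface exactly this string, which is why the default implementation falls back to the catalog's class name when a catalog author provides nothing better.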

Database DDL:


We propose to add the following DDLs related to database operations.

  • createDatabaseStatement:

Code Block
languagesql
CREATE DATABASE [ IF NOT EXISTS ] [ catalogName.] dataBaseName
  [ COMMENT database_comment ]
  [ WITH ( name=value [, name=value]*) ]

IF NOT EXISTS:

If a database with the same name already exists, nothing will happen.
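The IF NOT EXISTS contract can be sketched with a toy in-memory catalog. ToyCatalog is a hypothetical class written only for this illustration; it is not part of the Flink API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy in-memory catalog illustrating the CREATE DATABASE [IF NOT EXISTS]
// contract; not part of the Flink API.
class ToyCatalog {
    private final Map<String, Map<String, String>> databases = new HashMap<>();

    /** Mirrors CREATE DATABASE [IF NOT EXISTS] db WITH (props). */
    void createDatabase(String name, Map<String, String> props, boolean ignoreIfExists) {
        if (databases.containsKey(name)) {
            if (ignoreIfExists) {
                return; // IF NOT EXISTS: nothing happens
            }
            throw new IllegalStateException("Database " + name + " already exists");
        }
        databases.put(name, new HashMap<>(props));
    }

    boolean databaseExists(String name) {
        return databases.containsKey(name);
    }
}
```

The ignoreIfExists flag corresponds directly to whether IF NOT EXISTS was written in the statement: the parser sets it, and the catalog then either silently returns or throws.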

  • dropDatabaseStatement:

Code Block
languagesql
DROP DATABASE [ IF EXISTS ] [ catalogName.] dataBaseName [ (RESTRICT|CASCADE) ]

IF EXISTS

If the database to drop does not exist, nothing happens.

RESTRICT

Dropping a non-empty database triggers an exception. Enabled by default.

CASCADE

Dropping a non-empty database also drops all associated tables and functions.
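The IF EXISTS / RESTRICT / CASCADE behaviors above can be sketched with another toy in-memory catalog. DropDemoCatalog is a hypothetical class for illustration only, not the actual Flink Catalog implementation:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy catalog tracking which tables belong to which database, used only to
// illustrate the proposed dropDatabase(name, ignoreIfNotExists, cascade) contract.
class DropDemoCatalog {
    private final Map<String, Set<String>> tablesByDb = new HashMap<>();

    void createDatabase(String name) { tablesByDb.putIfAbsent(name, new HashSet<>()); }
    void createTable(String db, String table) { tablesByDb.get(db).add(table); }
    boolean databaseExists(String name) { return tablesByDb.containsKey(name); }

    void dropDatabase(String name, boolean ignoreIfNotExists, boolean cascade) {
        Set<String> tables = tablesByDb.get(name);
        if (tables == null) {
            if (ignoreIfNotExists) return;   // IF EXISTS: nothing happens
            throw new IllegalStateException("Database " + name + " does not exist");
        }
        if (!tables.isEmpty() && !cascade) { // RESTRICT (the default)
            throw new IllegalStateException("Database " + name + " is not empty");
        }
        tablesByDb.remove(name);             // CASCADE drops contained tables too
    }
}
```

This is exactly why the Catalog.dropDatabase signature below needs the extra cascade flag: the parser can tell RESTRICT from CASCADE, but the catalog must be told which behavior to apply.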

The Catalog.dropDatabase method should take a new cascade flag:

Code Block
languagejava

/**
 * Drop a database.
 *
 * @param name              Name of the database to be dropped.
 * @param ignoreIfNotExists Flag to specify behavior when the database does not exist:
 *                          if set to false, throw an exception,
 *                          if set to true, do nothing.
 * @param cascade           Flag to specify behavior when the database is not empty:
 *                          if set to false, throw an exception,
 *                          if set to true, delete all related tables and functions.
 * @throws DatabaseNotExistException if the given database does not exist
 * @throws DatabaseNotEmptyException if the given database is not empty and cascade is false
 * @throws CatalogException in case of any runtime exception
 */
void dropDatabase(String name, boolean ignoreIfNotExists, boolean cascade)
    throws DatabaseNotExistException, DatabaseNotEmptyException, CatalogException;


  • alterDatabaseStatement:

Code Block
languagesql
ALTER DATABASE [ catalogName.] dataBaseName SET ( name=value [, name=value]*)


Set one or more properties in the specified database. If a particular property is already set in the database, override the old value with the new one.
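The SET semantics amount to a map merge in which new values win over old ones. A minimal sketch, with AlterDatabaseDemo as a hypothetical helper name:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrates ALTER DATABASE ... SET (k=v, ...): keys named in the statement
// are overridden, all other existing properties are left untouched.
class AlterDatabaseDemo {
    static Map<String, String> applySet(Map<String, String> current, Map<String, String> updates) {
        Map<String, String> merged = new HashMap<>(current);
        merged.putAll(updates); // new values override old ones
        return merged;
    }
}
```

Note that SET never removes a property; a key absent from the statement simply keeps its previous value.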

  • useDatabaseStatement:

Code Block
languagesql
USE [ catalogName.] dataBaseName

Set the current database. All subsequent commands that do not explicitly specify a database will use this one. If the provided database does not exist, an exception is thrown.

note: already supported in the SQL Client but needs support in TableEnvironment.

  • showDatabasesStatement:

Code Block
languagesql
SHOW DATABASES

Return all databases in the current catalog.

note: already supported in the SQL Client but needs support in TableEnvironment.

  • descDatabaseStatement:

Code Block
languagesql
DESCRIBE DATABASE [ EXTENDED] [ catalogName.] dataBaseName

Return the metadata of an existing database (name, comment).

EXTENDED:

Return the metadata of an existing database (name, comment, and properties).

use eg:

Flink SQL> describe database hive.default;
database name: default
comment: hive default database
Flink SQL> describe database extended hive.default;
database name: default
comment: hive default database
properties:
create-time: 2019-09-26 10:00:00

Table DDL:


We propose to add the following DDLs related to table operations.

  • showTablesStatement:

Code Block
languagesql
SHOW TABLES

Return all tables in the current database.

note: already supported in the SQL Client but needs support in TableEnvironment.

  • descTableStatement:

Code Block
languagesql
DESCRIBE [ EXTENDED] [[catalogName.]dataBasesName].tableName

Return the metadata of an existing table (column names, data types, and comments).

EXTENDED

Display detailed information about the table, including table type, properties and so on.

note: already supported in the SQL Client but needs support in TableEnvironment.

  • alterTableStatement:

Code Block
languagesql
ALTER TABLE [[catalogName.]dataBasesName].tableName RENAME TO newTableName

Rename an existing table.

Code Block
languagesql
ALTER TABLE [[catalogName.]dataBasesName].tableName SET ( name=value [, name=value]*)

Set the properties of an existing table.

Function DDL:

We propose to add the following DDLs related to function operations.

  • createFunctionStatement:

Code Block
languagesql
CREATE FUNCTION [IF NOT EXISTS] [[catalogName.]databaseName.]functionName AS className

Create a function implemented by the given className.

IF NOT EXISTS

If a function with the same name already exists, nothing will happen.
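Resolving the AS className clause ultimately boils down to loading the class by its fully qualified name and instantiating it. A simplified reflection sketch, with FunctionResolverDemo as a hypothetical helper rather than the actual Flink implementation:

```java
// Simplified illustration of resolving CREATE FUNCTION ... AS className:
// load the class by its fully qualified name and instantiate it reflectively.
class FunctionResolverDemo {
    static Object instantiate(String className) {
        try {
            Class<?> clazz = Class.forName(className);
            return clazz.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("Cannot instantiate function class " + className, e);
        }
    }

    public static void main(String[] args) {
        // java.lang.StringBuilder stands in for a user-defined function class.
        Object fn = instantiate("java.lang.StringBuilder");
        System.out.println(fn.getClass().getName()); // prints java.lang.StringBuilder
    }
}
```

In practice the class must of course be on the classpath and implement the expected UDF interface; the sketch only shows the name-to-instance step.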

  • dropFunctionStatement:

Code Block
languagesql
DROP FUNCTION [IF EXISTS] [[catalogName.]databaseName.]functionName

Drop an existing function.

IF EXISTS

If the function does not exist, nothing will happen.

  • alterFunctionStatement:

Code Block
languagesql
ALTER FUNCTION [IF EXISTS] [[catalogName.]databaseName.]functionName RENAME TO new_name

Rename a non-temporary function to a new name.

IF EXISTS

If the function does not exist, nothing will happen.

  • showFunctionsStatement:

Code Block
languagesql
SHOW FUNCTIONS

Return all functions in the current database.

note: already supported in the SQL Client but needs support in tEnv.sqlQuery().

Implementation Changes:

Notice: the TableEnvironment API changes are no longer in the scope of this FLIP; there is a dedicated FLIP-84 to discuss them. That part has been removed here, and anyone interested can refer to the archive of the Table API change discussion.

Support these DDLs in the TableEnvironment; there are also a few DDLs implemented by the SQL Client that should be migrated to using the Calcite parser.

Implementation Plan:

  1. Implement all parser logic in the flink-sql-parser module.
  2. After FLIP-84 is completed, add the DDLs one by one through the `executeString(String sql)` (or similar) interface.
  3. After steps 1 and 2 are finished, the SQL CLI can easily support the corresponding DDLs by simply calling TableEnvironment's execute-SQL method.
  4. Unify the parsing of the SQL CLI and the Flink SQL parser.
  5. Add the DDLs mentioned above one by one to the Flink SQL parser and support executing them both in the SQL CLI and in TableEnvironment.sqlQuery() or TableEnvironment.sqlUpdate().


Compatibility, Deprecation, and Migration Plan

  • What impact (if any) will there be on existing users?

Not Applicable.

  • If we are changing behavior how will we phase out the older behavior?

Not Applicable.

  • If we need special migration tools, describe them here.

Not Applicable.

  • When will we remove the existing behavior?

Not Applicable.

Test Plan

  1. New syntax added to the Calcite parser can be tested in FlinkSqlParserImplTest by adding test cases.
  2. DDLs supported in TableEnvironment can be verified by unit tests.

Rejected Alternatives

No rejected alternatives.

Reference:

  1. https://issues.apache.org/jira/browse/FLINK-11275
  2. https://issues.apache.org/jira/browse/FLINK-10232
  3. Hive DDL: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
  4. Spark DDL: https://docs.databricks.com/spark/latest/spark-sql/index.html
  5. Flink SQL DDL design: https://docs.google.com/document/d/1TTP-GCC8wSsibJaSUyFZ_5NBAHYEB1FVmPpP7RgDGBA/edit