...
Page properties
...
Discussion thread:
JIRA: https://issues.apache.org/jira/browse/FLINK-XXXX
...
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
...
We need to allow users to specify the dialect in the sql-client YAML file. In PlannerContext::getSqlParserConfig, we'll choose which parser to use, and we'll implement a SqlParserImplFactory that creates the parser according to the dialect/conformance in use. Suppose the new parser for Hive is named FlinkHiveSqlParserImpl; PlannerContext::getSqlParserConfig will then be changed accordingly. The SqlParserImplFactory implementation, assuming it's named FlinkSqlParserImplFactory, will be something like this:
Code Block

public class FlinkSqlParserImplFactory implements SqlParserImplFactory {

	private final SqlConformance conformance;

	public FlinkSqlParserImplFactory(SqlConformance conformance) {
		this.conformance = conformance;
	}

	@Override
	public SqlAbstractParserImpl getParser(Reader stream) {
		if (conformance == FlinkSqlConformance.HIVE) {
			return FlinkHiveSqlParserImpl.FACTORY.getParser(stream);
		} else {
			return FlinkSqlParserImpl.FACTORY.getParser(stream);
		}
	}
}
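On the sql-client side, the dialect switch mentioned above would live in the client's YAML file. A minimal sketch, assuming the option key is table.sql-dialect (the exact key and accepted values are an implementation detail of this FLIP):

```yaml
# Hypothetical sql-client configuration fragment:
# switch the parser to the Hive dialect for this session.
configuration:
  table.sql-dialect: hive
```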
In PlannerContext::getSqlParserConfig, we'll use FlinkSqlParserImplFactory to create the config:
Code Block

private SqlParser.Config getSqlParserConfig() {
	return JavaScalaConversionUtil.<SqlParser.Config>toJava(getCalciteConfig(tableConfig).getSqlParserConfig())
		.orElseGet(
			// we use Java lex because back ticks are easier than double quotes in programming
			// and cases are preserved
			() -> {
				SqlConformance conformance = getSqlConformance();
				return SqlParser
					.configBuilder()
					.setParserFactory(new FlinkSqlParserImplFactory(conformance))
					.setConformance(conformance)
					.setLex(Lex.JAVA)
					.setIdentifierMaxLength(256)
					.build();
			});
}
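The dispatch inside getParser is just a conditional on the conformance value. A self-contained sketch of that pattern, with stand-in types replacing the real Calcite/Flink classes (all names below are hypothetical and for illustration only):

```java
import java.util.function.Function;

public class DialectDispatch {

	// Stand-in for FlinkSqlConformance.
	enum Conformance { DEFAULT, HIVE }

	// Pretend parser factories: each just reports which parser it would create.
	static final Function<String, String> DEFAULT_FACTORY = sql -> "FlinkSqlParserImpl";
	static final Function<String, String> HIVE_FACTORY = sql -> "FlinkHiveSqlParserImpl";

	// Mirrors FlinkSqlParserImplFactory#getParser: pick the factory by conformance.
	static String parserFor(Conformance conformance, String sql) {
		Function<String, String> factory =
				conformance == Conformance.HIVE ? HIVE_FACTORY : DEFAULT_FACTORY;
		return factory.apply(sql);
	}

	public static void main(String[] args) {
		System.out.println(parserFor(Conformance.HIVE, "SHOW TABLES"));
		System.out.println(parserFor(Conformance.DEFAULT, "SHOW TABLES"));
	}
}
```

Keeping the choice inside one factory means the planner only ever configures a single SqlParserImplFactory, regardless of how many dialects are added later.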
New or Changed Public Interfaces
...
The following table summarizes the DDLs that will be supported in this FLIP. Unsupported features are also listed so that we can track them and decide whether/how to support them in the future.
 | Supported | Comment | Not Supported | Comment |
---|---|---|---|---|
Database | CREATE | | SHOW DATABASES LIKE | Show databases filtering by a regular expression. Missing Catalog API. |
 | DROP | | | |
 | ALTER | | | |
 | USE | | | |
 | SHOW | | | |
 | | | DESCRIBE | We don't have a TableEnvironment API for this. Perhaps it's easier to implement when FLIP-84 is in place. |
Table | CREATE | Support specifying EXTERNAL, PARTITIONED BY, ROW FORMAT, STORED AS, LOCATION and table properties. Data types will also be in HiveQL syntax, e.g. STRUCT | Bucketed tables | |
 | DROP | | CREATE LIKE | Wait for FLIP-110 |
 | ALTER | Include rename, update table properties, update SerDe properties, update fileformat and update location. | CREATE AS | Missing underlying functionalities, e.g. create the table when the job succeeds. |
 | SHOW | | Temporary tables | Missing underlying functionalities, e.g. removing the files of the temporary table when session ends. |
 | DESCRIBE | | SKEWED BY [STORED AS DIRECTORIES] | Currently we don't use the skew info of a Hive table. |
 | | | STORED BY | We don't support Hive table with a storage handler yet. |
 | | | UNION type | |
 | | | TRANSACTIONAL tables | |
 | | | DROP PURGE | Data will be deleted w/o going to trash. Applies to either a table or partitions. Missing Catalog API. |
 | | | TRUNCATE | Remove all rows from a table or partitions. Missing Catalog APIs. |
 | | | TOUCH, PROTECTION, COMPACT, CONCATENATE, UPDATE COLUMNS | Applies to either a table or partitions. Too Hive-specific or missing underlying functionalities. |
 | | | SHOW TABLES 'regex' | Show tables filtering by a regular expression. Missing Catalog API. |
 | | | FOREIGN KEY, UNIQUE, DEFAULT, CHECK | These constraints are currently not used by the Hive connector. |
Partition | ALTER | Include add, drop, update fileformat and update location. | Exchange, Discover, Retention, Recover, (Un)Archive | Too Hive-specific or missing underlying functionalities. |
 | SHOW | Support specifying partial spec | RENAME | Update a partition's spec. Missing Catalog API. |
 | | | DESCRIBE | We don't have a TableEnvironment API for this. Perhaps it's easier to implement when FLIP-84 is in place. |
 | | | ALTER with partial spec | Alter multiple partitions matching a partial spec. Missing Catalog API. |
Column | ALTER | Change name, type, position, comment for a single column. Add new columns. Replace all columns. | | |
Function | CREATE | | CREATE FUNCTION USING FILE\|JAR… | To support this, we need to be able to dynamically add resources to a session. |
 | DROP | | RELOAD | Hive-specific |
 | SHOW | | SHOW FUNCTIONS LIKE | Show functions filtering by a regular expression. Missing Catalog API. |
View | CREATE | Wait for FLIP-71 | SHOW VIEWS LIKE | Show views filtering by a regular expression. Missing Catalog API. |
 | DROP | Wait for FLIP-71 | | |
 | ALTER | Wait for FLIP-71 | | |
 | SHOW | Wait for FLIP-71 | | |
 | DESCRIBE | Wait for FLIP-71 | | |
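To make the CREATE TABLE row above concrete, here is the kind of Hive-syntax statement this FLIP would accept, exercising EXTERNAL, PARTITIONED BY, ROW FORMAT, STORED AS and LOCATION. The table name, columns and location are invented for illustration; the snippet only assembles the statement text:

```java
public class HiveDdlExample {

	// A Hive-syntax DDL exercising features the table above marks as supported.
	// Table/column names and the location path are made up.
	static final String CREATE_TABLE =
			"CREATE EXTERNAL TABLE logs (\n"
			+ "  ts BIGINT,\n"
			+ "  msg STRING\n"
			+ ")\n"
			+ "PARTITIONED BY (dt STRING)\n"
			+ "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','\n"
			+ "STORED AS TEXTFILE\n"
			+ "LOCATION '/tmp/logs'";

	public static void main(String[] args) {
		System.out.println(CREATE_TABLE);
	}
}
```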
The following table summarizes the DMLs that will be supported in this FLIP. Unsupported features are also listed so that we can track them and decide whether/how to support them in the future.
 | Supported | Comment | Unsupported | Comment |
---|---|---|---|---|
DMLs | INSERT INTO/OVERWRITE PARTITION | Support specifying dynamic partition columns in the specification | Multi-insert | |