Status
Current state: "Under Discussion"
Discussion thread: https://lists.apache.org/thread/mc0lv4gptm7som02hpob1hdp3hb1ps1v
JIRA:
Released: 1.16
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
The current syntax and features of Flink SQL are fairly complete in both stream mode and batch mode, but there are still some usability issues to improve. For example, if a user wants to insert data into a new table, two steps are required:
- First, prepare the DDL statement of the table named t1;
- Second, insert the data into t1;
These two steps seem normal, but if there are many fields, spelling out the DDL statement by hand is tedious and error-prone, and the columns must be written out again in the following INSERT statement. Therefore, we can support CTAS (CREATE TABLE AS SELECT) like MySQL, Oracle, Microsoft SQL Server, Hive, Spark, etc., which will be more user friendly. In addition, the Hive dialect already has some support for CTAS. My suggestion would be to support a variation of the optional SQL-standard feature T172, "AS subquery clause in table definition".
Public API Changes
Based on the research summary and analysis in the appendix, the current status of CREATE TABLE AS SELECT (CTAS) among big data engines is:
- Flink: Flink dialect does not support CTAS. ==> LEVEL-1
- Spark DataSource v1: is atomic (can roll back), but is not isolated. ==> LEVEL-2
- Spark DataSource v2: Guaranteed atomicity and isolation. ==> LEVEL-3
- Hive MR: Guaranteed atomicity and isolation. ==> LEVEL-3
Combining Flink's current situation with the needs of our business, we choose a LEVEL-2 implementation for Flink in batch execution mode. In streaming mode, however, we do not provide atomicity guarantees because the job is long-running; moreover, there is currently no strong need to guarantee atomicity in stream mode.
Syntax
I suggest introducing a CTAS clause with the following syntax:
CREATE TABLE [ IF NOT EXISTS ] table_name [ WITH ( table_properties ) ] [ AS query_expression ]
Example:
CREATE TABLE ctas_hudi
WITH ('connector.type' = 'hudi')
AS SELECT id, name, age FROM hive_catalog.default.test WHERE mod(id, 10) = 0;
The resulting table is equivalent to the following DDL and INSERT statements:
CREATE TABLE ctas_hudi (
    id BIGINT,
    name STRING,
    age INT
) WITH ('connector.type' = 'hudi');

INSERT INTO ctas_hudi SELECT id, name, age FROM hive_catalog.default.test WHERE mod(id, 10) = 0;
Table
Provide a method on the Table interface that Table API users can use to execute CTAS (CREATE TABLE AS SELECT).
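A minimal sketch of the proposed addition, assuming the method lives on the existing Table interface and returns the CreateOrReplaceTable handle introduced below:

@PublicEvolving
public interface Table {

    /**
     * Declares the pipeline defined by this table to create the table
     * at the specified path, returning a handle that can fill table
     * options and produce a TablePipeline.
     */
    CreateOrReplaceTable saveAs(String tablePath);
}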
CreateOrReplaceTable
Propose a public interface CreateOrReplaceTable, used by CTAS (CREATE TABLE AS SELECT) for Table API users.
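A sketch of the proposed interface; the option, create, and createIfNotExist methods are the ones referenced in the rest of this section:

/**
 * A handle created by Table#saveAs that is used to perform
 * CTAS (CREATE TABLE AS SELECT) at the Table API level.
 */
@PublicEvolving
public interface CreateOrReplaceTable {

    /** Adds a table option ('key' = 'value') for the table to be created. */
    CreateOrReplaceTable option(String key, String value);

    /** Creates the table and returns the pipeline that writes the query result into it. */
    TablePipeline create();

    /** Like create(), but does not fail if a table already exists under the given path. */
    TablePipeline createIfNotExist();
}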
The CreateOrReplaceTable interface is introduced newly because if we add the create/createIfNotExist API in the Table interface, the user must call the saveAs API before calling these API, which will cause additional usage costs to the user. This API only support Create Table As Select syntax currently, but in the future, we maybe support Replace Table As Select and Create Or Replace As Table syntax which is also supported by some other batch compute engine.
The recommended way to use CreateOrReplaceTable is shown in the sketch below.
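A usage sketch; the connector options used here are purely illustrative:

TablePipeline tablePipeline = table.saveAs("my_ctas_table")
        .option("connector", "filesystem")   // illustrative options only
        .option("format", "csv")
        .option("path", "/tmp/my_ctas_table/")
        .create();
tablePipeline.execute();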
We save the properties set through the option API and set them on the CatalogBaseTable when the create/createIfNotExist method is executed, so that the TableSink can be generated.
Catalog
Provide a method that is used to infer the options of the CatalogBaseTable; these options will be used to compile the SQL into a JobGraph successfully.
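A sketch of the proposed default method on the Catalog interface; the exact parameter list is an assumption here:

@PublicEvolving
public interface Catalog {

    /**
     * Infers default options for the table to be created by CTAS, based on
     * the Catalog's own configuration, so that the query can be compiled to
     * a JobGraph. By default, options cannot be inferred.
     */
    default CatalogBaseTable inferTableOptions(ObjectPath tablePath, CatalogTable table) {
        throw new UnsupportedOperationException();
    }
}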
Catalog#inferTableOptions makes it convenient for users to customize a Catalog: when the Catalog supports the CTAS function, the options of the table can be inferred automatically to avoid job failures caused by missing information.
Implementation Plan
Because the client process may exit soon (for example in detached mode), we choose to create/drop the table on the JM side, and both the create table and drop table actions are executed through the Catalog. Therefore, a new hook mechanism and a Catalog serialization/deserialization solution need to be introduced. The overall execution process of a CTAS job is as follows:
- The Flink client compiles the SQL and generates an execution plan. In this process, the hook that needs to be executed on the JM side is generated, and the hook, Catalog, and CatalogBaseTable are serialized.
- The job is submitted to the cluster; if in detached mode, the client can exit.
- When the job starts, the hook, Catalog, and CatalogBaseTable are deserialized; the hook calls the Catalog#createTable method to create the CatalogBaseTable.
- Task execution starts.
- If the final status of the job is failed or canceled, the created CatalogBaseTable is dropped by the hook calling the Catalog#dropTable method.
In streaming mode, data is usually written and visible in real time, so streaming mode does not provide an atomicity guarantee, and there is consequently no need to use the JobStatusHook mechanism. Batch mode, however, requires the JobStatusHook mechanism to ensure atomicity. The planner handles both cases (streaming mode without a JobStatusHook, batch mode with a JobStatusHook) so that batch mode achieves atomicity.
Planner
Provide a method for the planner to register a JobStatusHook with the StreamGraph.
public class StreamGraph implements Pipeline {

    private final List<JobStatusHook> jobStatusHooks = new ArrayList<>();

    /** Registers the JobStatusHook. */
    public void registerJobStatusHook(JobStatusHook hook) {
        this.jobStatusHooks.add(hook);
    }
}
The final tasks of the job are all generated by the planner. We want to complete the create/drop table actions through a hook on the JM side, so we need an API to register the hook there.
The CTAS process in the planner consists of the following steps:
step1:
Compile the SQL to generate the CatalogBaseTable (the table to be created) and the CreateTableASOperation.
step2:
Use the Catalog#inferTableOptions interface to fill in the options of the CatalogBaseTable. The specific implementation is determined by the Catalog.
For example, when using JdbcCatalog, if the user does not fill in any table options, JdbcCatalog can set the connector to 'jdbc' and fill in the username, password, and base-url; when using HiveCatalog, if the user does not fill in any table options, HiveCatalog can set the connector to 'hive'. User-implemented catalogs can use the same mechanism to fill in options, as sketched below.
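As an illustration, a JDBC-style catalog could implement the inference roughly like this (a hypothetical override; the option keys follow the JDBC connector's documented 'connector', 'url', 'table-name', 'username', and 'password' options):

@Override
public CatalogBaseTable inferTableOptions(ObjectPath tablePath, CatalogTable table) {
    Map<String, String> options = new HashMap<>(table.getOptions());
    // Fill connector and connection info from the catalog's own configuration.
    options.put("connector", "jdbc");
    options.put("url", baseUrl + getDefaultDatabase());
    options.put("table-name", tablePath.getObjectName());
    options.put("username", username);
    options.put("password", pwd);
    return table.copy(options);
}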
Note that for the InMemoryCatalog, the tables saved in it all exist in external systems, so the table options have to be filled in manually by the user; the Catalog cannot infer them automatically. If the Catalog does not support ManagedTable and the user does not set the connector information, the execution will fail.
step3:
Use the CatalogBaseTable and Catalog objects to construct the JobStatusHook. Because the JobStatusHook is ultimately executed on the JM side, and the CatalogBaseTable needs to be created/dropped through the Catalog inside the hook, the Catalog and CatalogBaseTable are member variables of the hook and must also be serializable so they can be passed to the JM.
step4:
The planner registers the JobStatusHook with the StreamGraph; the JobStatusHook is then serialized and passed to the JM as part of the JobGraph serialization.
For the CatalogBaseTable, we use CatalogPropertiesUtil to serialize/deserialize it; this is a tool that Flink already provides.
Serializing the Catalog is considerably more complex, and we need to introduce a new serialization mechanism for it, which is described in the following section.
Catalog Serialization Solutions:
Option 1: Serialize the options in the Create Catalog DDL
We need to serialize the catalog name and the options used in the CREATE CATALOG DDL; the JM side can then use these options to re-initialize the catalog through Flink's ServiceLoader mechanism (using FactoryUtil#createCatalog to get the catalog). The InMemoryCatalog is a special case here: since the tables in the InMemoryCatalog already exist in the external system, the metadata in the InMemoryCatalog is only used by the job itself and is only stored in memory. However, the database-related information in the InMemoryCatalog needs to be serialized and passed to the JM, otherwise the database may not exist when the JM creates the table.
Here is an example of the catalog serialization process for a catalog created via DDL:
CREATE CATALOG my_catalog WITH(
    'type' = 'jdbc',
    'default-database' = '...',
    'username' = '...',
    'password' = '...',
    'base-url' = '...'
);
1) The planner registers the catalog with the CatalogManager and also registers the properties from the WITH clause with the CatalogManager.
2) When serializing the catalog, we only need to serialize and save the catalog name (my_catalog) and its properties, like this:
my_catalog
{'type'='jdbc', 'default-database'='...', 'username'='...', 'password'='...', 'base-url'='...'}
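On the JM side, the catalog can then be re-created from the saved name and properties; a minimal sketch (the exact FactoryUtil#createCatalog signature is an assumption here):

public static Catalog recreateCatalog(
        String catalogName,               // e.g. "my_catalog"
        Map<String, String> options,      // e.g. {'type'='jdbc', ...}
        ReadableConfig config,
        ClassLoader classLoader) {
    // Re-initialize the catalog via Flink's factory discovery (ServiceLoader) mechanism.
    Catalog catalog = FactoryUtil.createCatalog(catalogName, options, config, classLoader);
    catalog.open();
    return catalog;
}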
The advantages of this solution are its simple design, ease of compatibility, and reduced implementation complexity for users; it does not require complex serialization and deserialization tools. The disadvantage is that it does not cover the usage scenario of TableEnvironment#registerCatalog.
Regarding the disadvantage, we can introduce a CatalogDescriptor (like TableDescriptor) for the Table API in the future, used to register catalogs, so that Flink can get the properties of a Catalog through its CatalogDescriptor. The interface pseudo-code in TableEnvironment is as follows:
void registerCatalog(String catalogName, CatalogDescriptor catalogDescriptor);
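A hypothetical shape for CatalogDescriptor, mirroring TableDescriptor (the class is future work; names and fields here are assumptions):

@PublicEvolving
public class CatalogDescriptor {

    // Catalog options, as they would appear in the WITH clause of a CREATE CATALOG DDL.
    private final Map<String, String> options;

    private CatalogDescriptor(Map<String, String> options) {
        this.options = options;
    }

    public static CatalogDescriptor of(Map<String, String> options) {
        return new CatalogDescriptor(options);
    }

    public Map<String, String> getOptions() {
        return options;
    }
}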
Note: this solution only works if the Catalog is created using DDL, because only then can we get the Catalog properties from the WITH clause. If a Catalog is registered through the TableEnvironment#registerCatalog method, we cannot get these properties. Therefore, CTAS does not currently support jobs that use TableEnvironment#registerCatalog.
For the HiveCatalog, since hive-conf-dir is a local path, make sure that all nodes in the cluster place the Hive configuration files under the same path. Flink's current Application mode has the same problem.
Runtime
Provide a job-status-change hook mechanism on the JM side.
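A sketch of the proposed interface; the callbacks match the job status transitions described at the end of this section:

/** Hooks provided by users on job status changing. */
@Internal
public interface JobStatusHook extends Serializable {

    /** When the job becomes CREATED; called only once. */
    void onCreated(JobID jobId);

    /** When the job finishes successfully. */
    void onFinished(JobID jobId);

    /** When the job finally fails. */
    void onFailed(JobID jobId, Throwable throwable);

    /** When the job is canceled by the user. */
    void onCanceled(JobID jobId);
}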
Flink's current hook designs cannot meet the needs of CTAS. For example, JobListener lives on the client side, and JobStatusListener lives on the JM side but cannot be serialized. We therefore propose the new interface JobStatusHook, which can be attached to the JobGraph and executed in the JobMaster. The interface will be marked as Internal.
The process of CTAS at runtime:
- When the job starts, the JobGraph is deserialized, and with it the JobStatusHook.
- When deserializing the JobStatusHook, the Catalog and CatalogBaseTable are also deserialized:
- The CatalogBaseTable is deserialized using the CatalogPropertiesUtil#deserializeCatalogTable method.
- For the Catalog, the catalog name and properties are read first, and then FactoryUtil#createCatalog is used to get the catalog instance.
- When the job status changes, the corresponding JobStatusHook method is called:
For example, suppose our JobStatusHook implementation is called CTASJobStatusHook and uses a JdbcCatalog (how the planner serializes the JdbcCatalog was covered in the previous section and is not repeated here). We deserialize the catalog name and properties and use the FactoryUtil#createCatalog method to get the JdbcCatalog instance. Then, when the job status changes, the CTASJobStatusHook methods are called:
- When the job status is CREATED, the runtime module calls CTASJobStatusHook#onCreated, which calls JdbcCatalog#createTable to create the table.
- When the final status of the job is FAILED, the runtime module calls CTASJobStatusHook#onFailed, which calls JdbcCatalog#dropTable to drop the table.
- When the final status of the job is CANCELED, the runtime module calls CTASJobStatusHook#onCanceled, which calls JdbcCatalog#dropTable to drop the table.
- When the final status of the job is FINISHED, the runtime module calls CTASJobStatusHook#onFinished, and no additional operations are needed.
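A minimal sketch of such a hook (serialization details are simplified here; in the real design the Catalog is re-created on the JM from its serialized name and options):

public class CTASJobStatusHook implements JobStatusHook {

    private final Catalog catalog;          // re-created on the JM side
    private final CatalogBaseTable table;   // deserialized via CatalogPropertiesUtil
    private final ObjectPath tablePath;

    public CTASJobStatusHook(Catalog catalog, CatalogBaseTable table, ObjectPath tablePath) {
        this.catalog = catalog;
        this.table = table;
        this.tablePath = tablePath;
    }

    @Override
    public void onCreated(JobID jobId) {
        try {
            catalog.createTable(tablePath, table, false);
        } catch (Exception e) {
            throw new RuntimeException("Failed to create table for CTAS job", e);
        }
    }

    @Override
    public void onFinished(JobID jobId) {
        // Nothing to do: the table was created and the data is visible.
    }

    @Override
    public void onFailed(JobID jobId, Throwable throwable) {
        dropTableQuietly();
    }

    @Override
    public void onCanceled(JobID jobId) {
        dropTableQuietly();
    }

    private void dropTableQuietly() {
        try {
            catalog.dropTable(tablePath, true); // ignore if the table no longer exists
        } catch (Exception ignored) {
        }
    }
}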
Data Visibility
Data visibility is determined by the TableSink and the runtime mode:
Stream mode:
If the external storage system supports transactions or two-phase commit, data visibility is tied to the checkpoint cycle. Otherwise, data is visible immediately after writing, which is consistent with Flink's current behavior.
Batch mode:
- FileSystem Sink: data is written to a temporary directory first and becomes visible after the job finally succeeds (final visibility).
- Two-phase commit Sink: data becomes visible after the job finally succeeds (final visibility).
- Transaction-supporting Sink: transactions are committed after the job finally succeeds (final visibility), or are committed periodically or after a fixed number of records (incremental visibility).
- Other Sinks: data is visible immediately after writing (write visibility).
Managed Table
For Managed Table, please refer to FLIP-188. Table options that do not contain the 'connector' key and value represent a managed table; CTAS also follows this principle. For details, please refer to the Table Store docs: https://nightlies.apache.org/flink/flink-table-store-docs-master/docs/development/create-table.
CTAS supports both Managed Table and Non-Managed Table; users need to be clear about their business needs and set the table options correctly. The Catalog#inferTableOptions API can also automatically infer whether to add the connector option based on whether the Catalog supports ManagedTable.
Compatibility, Deprecation, and Migration Plan
It is a new feature with no implication for backwards compatibility.
Test Plan
Changes will be verified by unit tests.
Rejected Alternatives
Catalog serialize/deserialize APIs
For the Catalog, we could add serialize and deserialize APIs, and each Catalog implementation would serialize its own properties. We would save the class name of the Catalog together with the serialized content, like this:
Catalog ClassName
Catalog serialized data
Since a Catalog class may not have a parameterless constructor, we cannot use Class#newInstance to create an instance; instead, we can use the Objenesis framework. After using Objenesis to get an empty Catalog instance, we obtain the real Catalog instance through the Catalog#deserialize API. This solves the serialization/deserialization problem of the Catalog.
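A sketch of this rejected design, with hypothetical serialize/deserialize methods added to the Catalog interface:

// Hypothetical additions to the Catalog interface (rejected):
public interface Catalog {
    byte[] serialize() throws IOException;
    void deserialize(byte[] data) throws IOException;
}

// Deserialization on the JM side:
Class<?> clazz = Class.forName(catalogClassName);
Catalog catalog = (Catalog) new ObjenesisStd().newInstance(clazz); // empty instance, no constructor call
catalog.deserialize(serializedData);                               // restore the real catalog state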
For example, JdbcCatalog#serialize could save catalogName, defaultDatabase, username, pwd, and baseUrl, and JdbcCatalog#deserialize could re-initialize a JdbcCatalog object from these parameters; HiveCatalog#serialize could save catalogName, defaultDatabase, hiveConf, and hiveVersion, and HiveCatalog#deserialize could re-initialize a HiveCatalog object from these parameters; InMemoryCatalog#serialize would only need to save catalogName and defaultDatabase, and InMemoryCatalog#deserialize could re-initialize an InMemoryCatalog object from these two parameters.
The tables in the InMemoryCatalog already exist in the external system; the metadata held in the InMemoryCatalog is only used by the job itself and is held only in memory. Therefore, the metadata in the InMemoryCatalog would not need to be serialized and passed to the JM; on the JM, we would only need to initialize a new InMemoryCatalog.
The serialization tools for this solution are complex to implement, and user-defined Catalogs would be expensive to adapt, so it was abandoned.
References
- Support SELECT clause in CREATE TABLE(CTAS)
- MySQL CTAS syntax
- Microsoft Azure Synapse CTAS
- LanguageManual DDL#Create/Drop/ReloadFunction
- Spark Create Table Syntax
Appendix
Research on other engines
I investigated the implementations of other big data engines, such as Hive and Spark:
Hive (MR): atomic
Hive MR runs in client mode: the client is responsible for parsing, compiling, optimizing, executing, and finally cleaning up.
Hive executes the CTAS command as follows:
- Execute the query first, and write the query result to a temporary directory.
- If all MR tasks are executed successfully, create the table and load the data.
- If the execution fails, the table is not created.
Spark (DataSource v1): non-atomic
Spark has a role called the driver, which is responsible for compiling tasks, applying for resources, scheduling task execution, tracking task operation, etc.
Spark executes CTAS steps as follows:
- Create a sink table based on the schema of the query result.
- Execute the Spark tasks and write the result to a temporary directory.
- If all Spark tasks are executed successfully, use the Hive API to load the data into the sink table created in the first step.
- If the execution fails, the driver drops the sink table created in the first step.
Spark (DataSource v2, not yet complete; the Hive catalog is not supported yet): optional atomicity
Non-atomic
The non-atomic implementation is consistent with the DataSource v1 logic. For details, see CreateTableAsSelectExec.
Atomic
The atomic implementation (for details, see AtomicCreateTableAsSelectExec) is supported by StagingTableCatalog and StagedTable.
StagedTable supports commit and abort.
StagingTableCatalog is in-memory; when executing CTAS, the steps are as follows:
- Create a StagedTable based on the schema of the query result; it is not yet visible in the catalog.
- Execute the Spark tasks and write the result into the StagedTable.
- If all Spark tasks are executed successfully, call StagedTable#commitStagedChanges(); the table then becomes visible in the catalog.
- If the execution fails, call StagedTable#abortStagedChanges().
Research summary
Since we want to unify the semantics and implementation of streaming and batch mode, we finally decided to follow the implementation of Spark DataSource v1.
Reasons:
- Streaming mode requires the table to be created first (metadata sharing), so that downstream jobs can consume it in real time.
- In most cases, streaming jobs do not need to clean up data even if the job fails (e.g. with Redis, data cannot be cleaned up unless all written keys are recorded).
- Batch jobs try to ensure final atomicity (if the job succeeds, the data is visible; otherwise, drop the metadata and delete the temporary data).