
Status

Current state: "Under Discussion"

...

Discussion thread: thread/yzg9gz3w3fmq2x58tdbmsjthl7obg6lt

JIRA: FLINK-27344

Release


Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

With the efforts in FLIP-24 - SQL Client and FLIP-91: Support SQL Client Gateway, Flink SQL client supports submitting SQL jobs but lacks further support for their lifecycles afterward, which is crucial for streaming use cases. That means Flink SQL client users have to turn to other clients (e.g. CLI) or APIs (e.g. REST API) to manage the jobs, like triggering savepoints or canceling jobs, which makes the user experience of the SQL client incomplete.

Therefore, this proposal aims to complete the capability of the SQL client by adding job lifecycle statements. With these statements, users could manage SQL jobs and savepoints through pure SQL in the SQL client.

Public Interfaces

  • New Flink SQL Statements

Proposed Changes

Architecture Overview

The overall architecture of the Flink SQL client/gateway would be as follows:

Most parts remain unchanged; only the SQL Parser and Planner need to be modified to support the new statements, and a new component, ClusterClientFactory, is introduced in the Executor to enable direct access to Flink clusters.

...

SQL Job Lifecycle Statements

SQL job lifecycle statements mainly interact with deployments (clusters and jobs) and have few connections with Table/SQL concepts, thus it'd be better to keep them SQL-client-only, like the jar statements.

Note:

  1. The keyword for Flink SQL jobs was `QUERY` and is now updated to `JOB`.
  2. All <job_id> and <savepoint_path> values should be string literals (wrapped in single quotes), otherwise they are hard to parse (a quoting sketch follows below).
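
For illustration, a minimal sketch of the quoting convention (the job ID and savepoint path are placeholder values reused from the examples later on this page):

Code Block
languagesql
titleExample: job IDs and savepoint paths as string literals
STOP JOB 'cca7bc1061d61cf15238e92312c2fc20';
DROP SAVEPOINT 'hdfs://mycluster/flink-savepoints/savepoint-cca7bc-bb1e257f0dab';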

SHOW RUNNING FLINK SQL JOBS

This statement lists the SQL jobs in the Flink cluster, which is similar to `flink list` in CLI.

Code Block
languagesql
titleSyntax: SHOW JOBS
SHOW JOBS

The result contains the following columns: job_id (namely the Flink job ID), job_name (namely the job name), status, start_time, end_time, duration, and web_url (a link to the job's web UI).

Code Block
languagesql
titleResult: SHOW JOBS
+----------------------------------+----------+---------+---------------------+---------------------+-------------+-----------------------+
|              job_id              | job_name | status  |     start_time      |      end_time       |  duration   |        web_url        |
+----------------------------------+----------+---------+---------------------+---------------------+-------------+-----------------------+
| cca7bc1061d61cf15238e92312c2fc20 |  query1  | RUNNING | 2022-05-01 10:20:33 | 2022-05-01 20:45:35 | 10h 25m  2s | http://127.0.0.1:8081 |
| 0f6413c33757fbe0277897dd94485f04 |  query2  | FAILED  | 2022-05-01 14:04:24 | 2022-05-01 19:09:47 |  5h  5m 23s | http://127.0.0.1:8081 |
+----------------------------------+----------+---------+---------------------+---------------------+-------------+-----------------------+

STOP A RUNNING FLINK SQL JOB

This statement stops a non-terminated SQL job, which is similar to `flink stop` and `flink cancel` in CLI.

Code Block
languagesql
titleSyntax: STOP JOB
STOP JOB '<job_id>' [WITH SAVEPOINT] [WITH DRAIN]

The result would be the savepoint path.

Code Block
languagesql
titleResult: STOP JOB
+--------------------------------------------------------------------+
|                           savepoint_path                            |
+--------------------------------------------------------------------+
| hdfs:/tmp/mycluster/flink-savepoints/savepoint-cca7bc-bb1e257f0dab  |
+--------------------------------------------------------------------+

There are two related options to control the fine-grained behavior:

1. WITH SAVEPOINT

If specified, the stop statement stops a SQL job with a savepoint, which is similar to `flink stop` in CLI.

Otherwise, the stop statement stops a SQL job ungracefully, just like `flink cancel` in CLI. Since an ungraceful stop doesn't trigger a savepoint, the result would be a simple OK, like the one returned by DDL.

2. WITH DRAIN

If specified, the stop statement stops a SQL job and increases the watermark to MAX_WATERMARK to trigger all the timers, which is similar to `flink stop --drain` in CLI (see the usage sketch below).
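
For illustration, a minimal usage sketch of the two options (the job ID is the one used in the complete usage example below; actual results depend on the deployment):

Code Block
languagesql
titleExample: STOP JOB with options
-- graceful stop: trigger a savepoint and fire all remaining timers before stopping
STOP JOB '6b1af540c0c0bb3fcfcad50ac037c862' WITH SAVEPOINT WITH DRAIN;
-- ungraceful stop: no savepoint is taken and the result is a simple OK
STOP JOB '6b1af540c0c0bb3fcfcad50ac037c862';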

CREATE A SAVEPOINT

This statement triggers a savepoint for the specified SQL job, which is similar to `flink savepoint` in CLI.

Code Block
languagesql
titleSyntax: CREATE SAVEPOINT
CREATE SAVEPOINT FOR JOB '<job_id>'

The result would be the savepoint path.

Code Block
languagesql
titleResult: CREATE SAVEPOINT
+------------------------------------------------------------------|
|                            savepoint_path                        |
+------------------------------------------------------------------|
| hdfs://mycluster/flink-savepoints/savepoint-cca7bc-bb1e257f0dab  |
+------------------------------------------------------------------|

SHOW SAVEPOINTS

This statement shows all savepoints in a best-effort manner (since the savepoints are managed by users and outlive Flink clusters, the job manager may not know about all savepoints).

Code Block
languagesql
titleSyntax: SHOW SAVEPOINTS
SHOW SAVEPOINTS

The result would be savepoint paths.

Code Block
+------------------------------------------------------------------|
|                            savepoint_path                        |
+------------------------------------------------------------------|
| hdfs:/tmp/mycluster/flink-savepoints/savepoint-cca7bc-bb1e257f0dab  |
+------------------------------------------------------------------|
| hdfs://mycluster/flink-savepoints/savepoint-ca62ea-ce73f92adba2  |
+------------------------------------------------------------------|

...

DROP A SAVEPOINT

This statement deletes the specified savepoint, which is similar to `flink savepoint --dispose` in CLI.

Code Block
languagesql
titleSyntax: DROP SAVEPOINT
DROP SAVEPOINT '<savepoint_path>'

The result would be a simple OK.

COMPLETE USAGE EXAMPLE

Code Block
languagesql
Flink SQL> INSERT INTO tbl_a SELECT * FROM tbl_b;
[INFO] Submitting SQL update statement to the cluster...
[INFO] SQL update statement has been successfully submitted to the cluster:
Job ID: 6b1af540c0c0bb3fcfcad50ac037c862

Flink SQL> SHOW JOBS;
+----------------------------------+--------------------+---------+---------------------+---------------------+-------------+----------------------+
|           job_id                 |       job_name     | status  |    start_time       |      end_time       |   duration  |       web_url        |
+----------------------------------+--------------------+---------+---------------------+---------------------+-------------+----------------------+
| 6b1af540c0c0bb3fcfcad50ac037c862 | INSERT INTO tbl_a..| RUNNING | 2022-05-01 10:20:33 | 2022-05-01 10:20:53 |  0h 0m 20s  | http://127.0.0.1:8081|
+----------------------------------+--------------------+---------+---------------------+---------------------+-------------+----------------------+

Flink SQL > CREATE SAVEPOINT FOR JOB '6b1af540c0c0bb3fcfcad50ac037c862';
+------------------------------------------------------------------|
|                            savepoint_path                        |
+------------------------------------------------------------------|
| hdfs://mycluster/flink-savepoints/savepoint-cca7bc-bb1e257f0dab  |
+------------------------------------------------------------------|

Flink SQL > STOP JOB '6b1af540c0c0bb3fcfcad50ac037c862';
[INFO] The specified job is stopped.

Flink SQL > DROP SAVEPOINT 'hdfs://mycluster/flink-savepoints/savepoint-cca7bc-bb1e257f0dab';
[INFO] The specified savepoint is dropped.


SQL Parser & Planner

To support the new statements, we need to introduce new SQL operators for the SQL parser and new operations for the planner.

SQL operator           | SQL operation
-----------------------|--------------------------
SqlShowJobs            | ShowJobsOperation
SqlStopJob             | StopJobOperation
SqlShowSavepoints      | ShowSavepointsOperation
SqlCreateSavepoint     | CreateSavepointOperation
SqlDropSavepoint       | DropSavepointOperation

Executor

The Executor would need to convert the job lifecycle operations into ClusterClient commands.

SQL operation            | Cluster Client Command
-------------------------|---------------------------------------------------------
ShowJobsOperation        | ClusterClient#listJobs
StopJobOperation         | ClusterClient#stopWithSavepoint / ClusterClient#cancel
ShowSavepointsOperation  | ClusterClient
CreateSavepointOperation | ClusterClient#triggerSavepoint
DropSavepointOperation   | ClusterClient#disposeSavepoint

In addition, to interact with the clusters, the Executor should be able to create a ClusterClient through the ClusterClientFactory, thus a ClusterClientServiceLoader would be added to the Executor.

Implementation Plan

The implementation plan would be simple:

  1. Support the new statements and operations in SQL parser and Planner.
  2. Extend Executor to support the new operations.

Compatibility, Deprecation, and Migration Plan

This FLIP introduces new SQL keywords, which may cause trouble for existing SQL statements. Users need to escape the new keywords (e.g. with backticks) if they use them as SQL identifiers, as shown in the example after the list below.

The new keywords are:

    • JOB (new)
    • JOBS (new)
    • STOP (new)
    • DRAIN (new)
    • SAVEPOINT (already reserved)
    • SAVEPOINTS (already reserved)
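
For illustration, a hypothetical sketch of the escaping that would be needed (the table and column names below are made up; Flink SQL escapes identifiers with backticks):

Code Block
languagesql
titleExample: escaping the new keywords
-- columns named after new or already reserved keywords must be escaped with backticks
SELECT `job`, `savepoint` FROM job_audit WHERE `stop` = FALSE;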

Rejected Alternatives

Book Keep Query Status in SQL Gateway

An alternative approach to query monitoring is that the SQL client or gateway keeps track of every query and is responsible for updating the query status through polling or callbacks. In that way, the query status is better maintained, and we wouldn't lose track of the queries in cases where they're cleaned up by the cluster or the cluster is unavailable.

...

  1. Table/SQL API should provide the same capabilities as its peer DataStream API, thus the implementation of the SHOW JOBS statement should be aligned with `flink list` in CLI as well.
  2. Maintaining query status at the client/gateway side requires additional work but brings little extra user value, since the client/gateway doesn’t persist metadata at the moment.

Savepoint Syntax: SAVEPOINT / RELEASE SAVEPOINT

An alternative syntax for the savepoint statements would be:

Code Block
languagesql
SAVEPOINT '<job_id>'
RELEASE SAVEPOINT '<savepoint_path>'

But there are mainly two concerns:

  • Generally speaking, SAVEPOINT is more appropriate to be followed by a savepoint identifier instead of a job identifier.
  • The statements are often used within database transaction blocks, so it would be somewhat unnatural to use them alone (see the sketch below).
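
For context, a sketch of the conventional transactional usage that motivates the second concern (standard database SQL such as PostgreSQL, not Flink SQL; the table name is made up):

Code Block
languagesql
titleExample: transactional SAVEPOINT in traditional databases
BEGIN;
INSERT INTO accounts VALUES (1, 100);
SAVEPOINT before_adjustment;               -- named savepoint inside the transaction
UPDATE accounts SET balance = 0 WHERE id = 1;
ROLLBACK TO SAVEPOINT before_adjustment;   -- undo back to the savepoint
RELEASE SAVEPOINT before_adjustment;       -- discard the savepoint
COMMIT;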