Status
Current state: "Under Discussion"
Discussion thread: here (<- link to https://mail-archives.apache.org/mod_mbox/flink-dev/)
JIRA: here (<- link to https://issues.apache.org/jira/browse/FLINK-XXXX)
Released: <Flink Version>
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
Currently, Flink can read from and write to ClickHouse through the Flink JDBC connector, but this approach is neither flexible nor easy to use, especially when writing data to ClickHouse with Flink SQL.
The ClickHouse JDBC project provides a BalancedClickhouseDataSource component that adapts to ClickHouse clusters and has been validated in production environments; building on it can remedy the inflexibility of the Flink JDBC connector.
Things to confirm
About ClickHouseDynamicTableSource
It should implement:
- ScanTableSource
- LookupTableSource
- SupportsLimitPushDown: to avoid scanning unnecessarily large amounts of data
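As a rough illustration against Flink's current table connector API, the source could be shaped as follows. The class name matches this proposal, but the method bodies are placeholders, not a finalized design:

```java
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.connector.source.LookupTableSource;
import org.apache.flink.table.connector.source.ScanTableSource;
import org.apache.flink.table.connector.source.abilities.SupportsLimitPushDown;

/** Sketch only: method bodies are illustrative placeholders. */
public class ClickHouseDynamicTableSource
        implements ScanTableSource, LookupTableSource, SupportsLimitPushDown {

    private long limit = -1; // -1 means no LIMIT was pushed down

    @Override
    public ChangelogMode getChangelogMode() {
        // ClickHouse scans produce plain inserts only.
        return ChangelogMode.insertOnly();
    }

    @Override
    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext context) {
        // Build a source that reads from ClickHouse, translating the
        // pushed-down limit into a ClickHouse LIMIT clause. Omitted here.
        throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public LookupRuntimeProvider getLookupRuntimeProvider(LookupContext context) {
        // Build a TableFunction that issues point queries against
        // ClickHouse for lookup joins. Omitted here.
        throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public void applyLimit(long limit) {
        // SupportsLimitPushDown: remember the limit for the scan.
        this.limit = limit;
    }

    @Override
    public DynamicTableSource copy() {
        ClickHouseDynamicTableSource copy = new ClickHouseDynamicTableSource();
        copy.limit = this.limit;
        return copy;
    }

    @Override
    public String asSummaryString() {
        return "ClickHouse table source";
    }
}
```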
About ClickHouseDynamicTableSink
It should implement:
- DynamicTableSink
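Analogously, a minimal sketch of the sink (again with placeholder bodies rather than a finalized design):

```java
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.sink.DynamicTableSink;

/** Sketch only: method bodies are illustrative placeholders. */
public class ClickHouseDynamicTableSink implements DynamicTableSink {

    @Override
    public ChangelogMode getChangelogMode(ChangelogMode requestedMode) {
        // ClickHouse has no transactional updates; treat the input
        // stream as append-only for now.
        return ChangelogMode.insertOnly();
    }

    @Override
    public SinkRuntimeProvider getSinkRuntimeProvider(Context context) {
        // Return an OutputFormat/SinkFunction that batches rows and
        // flushes them to ClickHouse. Omitted here.
        throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public DynamicTableSink copy() {
        return new ClickHouseDynamicTableSink();
    }

    @Override
    public String asSummaryString() {
        return "ClickHouse table sink";
    }
}
```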
The following scenarios also need to be considered:
- Support writing to both distributed tables and local tables
  - Write into a distributed table
    - We do not need to care about load balancing across the ClickHouse cluster; the distributed table engine handles the distribution itself.
    - With asynchronous writes to a distributed table, data may be lost; with synchronous writes, there may be a certain delay.
    - Writing into a distributed table also puts pressure on the network and I/O load of the ClickHouse cluster. This limitation comes from ClickHouse itself and has nothing to do with how the connector is implemented.
  - Write into local tables
    - The write frequency is controlled via the batch size and the flush interval, to balance part-merge pressure in ClickHouse against data freshness.
    - The BalancedClickhouseDataSource can in theory balance the load across ClickHouse instances through random routing, but it only detects instances via periodic pings, masks currently unreachable instances, and has no failover. Therefore, once we write to a failed node, the data is lost. We can introduce a retry mechanism with configurable parameters to minimize such losses.
    - Enhance the routing strategies to ClickHouse instances, e.g. round-robin or shard-key hash (a simplified sketch of the batching, retry, and routing logic follows this list).
- ClickHouse does not support transactions, so we do not need to consider the consistency of data writing.
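To make the batch-size/flush-interval control, retry mechanism, and routing strategies above concrete, here is a simplified, framework-independent sketch. All names here (ClickHouseLocalTableWriter, RoutingStrategy, the constructor parameters) are illustrative assumptions for discussion, not a finalized API; the real logic would live inside the sink's runtime implementation and write through clickhouse-jdbc.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

/**
 * Illustrative sketch only: shows how batched writes to local tables
 * could be flushed by size or interval, retried on failure, and routed
 * to a shard. Names and semantics are assumptions for this proposal.
 */
public class ClickHouseLocalTableWriter {

    /** Candidate routing strategies discussed above. */
    public enum RoutingStrategy { RANDOM, ROUND_ROBIN, SHARD_KEY_HASH }

    private final List<String> shardUrls;  // one JDBC URL per local-table shard
    private final int batchSize;           // flush when this many rows are buffered
    private final long flushIntervalMs;    // flush at least this often
    private final int maxRetries;          // retries per flush before failing
    private final RoutingStrategy strategy;

    private final List<String> buffer = new ArrayList<>();
    private long lastFlushTime = System.currentTimeMillis();
    private int roundRobinIndex;

    public ClickHouseLocalTableWriter(
            List<String> shardUrls,
            int batchSize,
            long flushIntervalMs,
            int maxRetries,
            RoutingStrategy strategy) {
        this.shardUrls = shardUrls;
        this.batchSize = batchSize;
        this.flushIntervalMs = flushIntervalMs;
        this.maxRetries = maxRetries;
        this.strategy = strategy;
    }

    /** Buffers a row; flushes when the batch is full or the interval elapsed. */
    public void write(String row, int shardKeyHash) throws Exception {
        buffer.add(row);
        boolean intervalElapsed =
                System.currentTimeMillis() - lastFlushTime >= flushIntervalMs;
        if (buffer.size() >= batchSize || intervalElapsed) {
            flush(shardKeyHash);
        }
    }

    /** Flushes the buffered batch, retrying instead of silently losing data. */
    private void flush(int shardKeyHash) throws Exception {
        for (int attempt = 0; ; attempt++) {
            String target = selectShard(shardKeyHash);
            try {
                insertBatch(target, buffer);
                buffer.clear();
                lastFlushTime = System.currentTimeMillis();
                return;
            } catch (Exception e) {
                if (attempt >= maxRetries) {
                    throw e; // fail the job rather than drop the batch
                }
                // Retry; for RANDOM/ROUND_ROBIN the re-selection in the
                // next iteration may route around the failed instance.
            }
        }
    }

    private String selectShard(int shardKeyHash) {
        switch (strategy) {
            case ROUND_ROBIN:
                roundRobinIndex = (roundRobinIndex + 1) % shardUrls.size();
                return shardUrls.get(roundRobinIndex);
            case SHARD_KEY_HASH:
                return shardUrls.get(Math.floorMod(shardKeyHash, shardUrls.size()));
            default: // RANDOM
                return shardUrls.get(
                        ThreadLocalRandom.current().nextInt(shardUrls.size()));
        }
    }

    private void insertBatch(String jdbcUrl, List<String> rows) {
        // Placeholder: a real implementation would execute a batched
        // INSERT via clickhouse-jdbc against the local table at jdbcUrl.
    }
}
```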
Public Interfaces
Briefly list any new interfaces that will be introduced as part of this proposal or any existing interfaces that will be removed or changed. The purpose of this section is to concisely call out the public contract that will come along with this feature.
A public interface is any change to the following:
- DataStream and DataSet API, including classes related to that, such as StreamExecutionEnvironment
- Classes marked with the @Public annotation
- On-disk binary formats, such as checkpoints/savepoints
- User-facing scripts/command-line tools, i.e. bin/flink, Yarn scripts, Mesos scripts
- Configuration settings
- Exposed monitoring information
Proposed Changes
- Introduce ClickHouse SQL Connector.
- Introduce ClickHouse Catalog.
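To illustrate the intended user experience, here is a hypothetical usage example via the Table API. The connector identifier and option keys ('connector' = 'clickhouse', 'url', 'table-name') are assumptions that this proposal would need to finalize:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ClickHouseConnectorExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Hypothetical DDL: connector name and options are placeholders.
        tEnv.executeSql(
                "CREATE TABLE sink_table (\n"
                + "  id BIGINT,\n"
                + "  name STRING\n"
                + ") WITH (\n"
                + "  'connector' = 'clickhouse',\n"
                + "  'url' = 'clickhouse://host1:8123,host2:8123',\n"
                + "  'table-name' = 'my_table'\n"
                + ")");
    }
}
```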
Compatibility, Deprecation, and Migration Plan
- Introduce the ClickHouse SQL connector for users.
- It is a new feature, so we do not need to phase out any older behavior.
- We do not need special migration tools.
Test Plan
We plan to add unit tests and integration tests based on Testcontainers.
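For example, an integration test could be sketched as follows, assuming the Testcontainers ClickHouse module and JUnit 5; the image tag is just an example, and a real test would run the connector job against the container rather than talking JDBC directly:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.ClickHouseContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class ClickHouseConnectorITCase {

    @Container
    private static final ClickHouseContainer CLICKHOUSE =
            new ClickHouseContainer("clickhouse/clickhouse-server:22.3");

    @Test
    void testRoundTrip() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                        CLICKHOUSE.getJdbcUrl(),
                        CLICKHOUSE.getUsername(),
                        CLICKHOUSE.getPassword());
                Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE t (id UInt32, name String) ENGINE = Memory");
            // ... run the connector job and assert on the table contents.
        }
    }
}
```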