
Status

Current state: "Under Discussion"

Discussion thread: here (<- link to https://mail-archives.apache.org/mod_mbox/flink-dev/)

JIRA: here (<- link to https://issues.apache.org/jira/browse/FLINK-XXXX)

Released: <Flink Version>

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Currently, Flink can write to and read from ClickHouse through flink-connector-jdbc, but that connector is neither flexible nor easy to use, especially when writing data to ClickHouse with Flink SQL.

The ClickHouse JDBC project provides a BalancedClickhouseDataSource component that adapts to ClickHouse clusters and has been well proven in production environments; it addresses the flexibility limitations of flink-connector-jdbc.
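For illustration, a minimal sketch of how BalancedClickhouseDataSource is typically used; the host names and the actualization interval are placeholders, and the scheduleActualization call enables the periodic ping that shields unreachable instances:

import java.sql.Connection;
import java.util.concurrent.TimeUnit;

import ru.yandex.clickhouse.BalancedClickhouseDataSource;
import ru.yandex.clickhouse.settings.ClickHouseProperties;

public class BalancedDataSourceExample {
    public static void main(String[] args) throws Exception {
        // Comma-separated host list; each connection is routed to a
        // randomly chosen healthy instance.
        BalancedClickhouseDataSource dataSource =
                new BalancedClickhouseDataSource(
                        "jdbc:clickhouse://node1:8123,node2:8123,node3:8123/default",
                        new ClickHouseProperties());

        // Ping all instances every 10 seconds and shield the unreachable ones.
        dataSource.scheduleActualization(10, TimeUnit.SECONDS);

        try (Connection connection = dataSource.getConnection()) {
            connection.createStatement().execute("SELECT 1");
        }
    }
}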

Things to confirm

About ClickHouseDynamicTableSource

It should implement:

  1. ScanTableSource
  2. LookupTableSource
  3. SupportsLimitPushDown: to avoid scanning unexpectedly large amounts of data
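A minimal sketch of what the source could look like. ClickHouseOptions, ClickHouseRowDataInputFunction, and ClickHouseRowDataLookupFunction are hypothetical names for the options holder and the runtime functions; only the Flink interfaces are fixed:

import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.connector.source.LookupTableSource;
import org.apache.flink.table.connector.source.ScanTableSource;
import org.apache.flink.table.connector.source.SourceFunctionProvider;
import org.apache.flink.table.connector.source.TableFunctionProvider;
import org.apache.flink.table.connector.source.abilities.SupportsLimitPushDown;

public class ClickHouseDynamicTableSource
        implements ScanTableSource, LookupTableSource, SupportsLimitPushDown {

    // Hypothetical holder for url, table name, credentials, etc.
    private final ClickHouseOptions options;
    private long limit = -1L;

    public ClickHouseDynamicTableSource(ClickHouseOptions options) {
        this.options = options;
    }

    @Override
    public ChangelogMode getChangelogMode() {
        return ChangelogMode.insertOnly();
    }

    @Override
    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext context) {
        // Hypothetical bounded source function; a pushed-down limit (if any)
        // is translated into a LIMIT clause of the generated query.
        return SourceFunctionProvider.of(
                new ClickHouseRowDataInputFunction(options, limit), true);
    }

    @Override
    public LookupRuntimeProvider getLookupRuntimeProvider(LookupContext context) {
        // Hypothetical lookup function for point queries on the join keys.
        return TableFunctionProvider.of(
                new ClickHouseRowDataLookupFunction(options, context.getKeys()));
    }

    @Override
    public void applyLimit(long limit) {
        this.limit = limit;
    }

    @Override
    public DynamicTableSource copy() {
        ClickHouseDynamicTableSource source = new ClickHouseDynamicTableSource(options);
        source.limit = limit;
        return source;
    }

    @Override
    public String asSummaryString() {
        return "ClickHouse table source";
    }
}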

About ClickHouseDynamicTableSink

It should implement:

  1. DynamicTableSink

The following scenarios also need to be considered (a sketch of the sink follows this list):

  1. Support writing to both distributed tables and local tables
    1. Writing into a distributed table
      1. We do not need to handle load balancing ourselves; the distributed table engine routes writes to the underlying shards.
      2. With asynchronous inserts into a distributed table, data may be lost; with synchronous inserts, writes incur additional latency.
      3. Writing into a distributed table also puts extra pressure on the network and I/O load of the ClickHouse cluster. This limitation is inherent to ClickHouse itself and is independent of how the connector is implemented.
    2. Writing into local tables
      1. The write frequency is controlled via a batch size and a batch interval, balancing part-merge pressure in ClickHouse against data freshness.
      2. The BalancedClickhouseDataSource can in theory balance the load across ClickHouse instances through random routing, but it only detects instances via a periodic ping, shields the currently unreachable ones, and has no failover. Once we write to a node that has just failed, the data is lost. We can introduce a retry mechanism with configurable parameters to minimize this risk.
      3. Enhance the routing strategies to ClickHouse instances, e.g. round-robin and shard-key hashing.
  2. ClickHouse does not support transactions, so we do not need to consider transactional consistency for writes.
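Under these considerations, a minimal sketch of the sink. ClickHouseOptions and ClickHouseBatchSinkFunction are hypothetical names; the batching and retry behavior described in the comments reflects the scenarios above:

import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.sink.DynamicTableSink;
import org.apache.flink.table.connector.sink.SinkFunctionProvider;

public class ClickHouseDynamicTableSink implements DynamicTableSink {

    // Hypothetical holder for url, table name, batch size, flush interval,
    // max retries and the distributed/local write mode discussed above.
    private final ClickHouseOptions options;

    public ClickHouseDynamicTableSink(ClickHouseOptions options) {
        this.options = options;
    }

    @Override
    public ChangelogMode getChangelogMode(ChangelogMode requestedMode) {
        // No transactions in ClickHouse, so the sink stays append-only.
        return ChangelogMode.insertOnly();
    }

    @Override
    public SinkRuntimeProvider getSinkRuntimeProvider(Context context) {
        // Hypothetical sink function that buffers rows and flushes when either
        // the batch size or the flush interval is reached, retrying a failed
        // flush up to the configured number of times before failing the job.
        return SinkFunctionProvider.of(new ClickHouseBatchSinkFunction(options));
    }

    @Override
    public DynamicTableSink copy() {
        return new ClickHouseDynamicTableSink(options);
    }

    @Override
    public String asSummaryString() {
        return "ClickHouse table sink";
    }
}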

Public Interfaces

Briefly list any new interfaces that will be introduced as part of this proposal or any existing interfaces that will be removed or changed. The purpose of this section is to concisely call out the public contract that will come along with this feature.

A public interface is any change to the following:

  • DataStream and DataSet API, including classes related to that, such as StreamExecutionEnvironment
  • Classes marked with the @Public annotation
  • On-disk binary formats, such as checkpoints/savepoints
  • User-facing scripts/command-line tools, i.e. bin/flink, Yarn scripts, Mesos scripts
  • Configuration settings
  • Exposed monitoring information

Proposed Changes

  1. Introduce ClickHouse SQL Connector.
  2. Introduce ClickHouse Catalog.
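As a usage illustration, a Flink SQL table backed by the proposed connector could be declared as follows. The 'clickhouse' connector identifier, all option keys, and the table names are hypothetical and would be finalized in the discussion thread:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ClickHouseSqlExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Hypothetical DDL; option keys are illustrative, not final.
        tEnv.executeSql(
                "CREATE TABLE ch_orders (\n"
                        + "  order_id BIGINT,\n"
                        + "  amount DOUBLE\n"
                        + ") WITH (\n"
                        + "  'connector' = 'clickhouse',\n"
                        + "  'url' = 'clickhouse://node1:8123,node2:8123',\n"
                        + "  'table-name' = 'orders',\n"
                        + "  'sink.batch-size' = '1000',\n"
                        + "  'sink.flush-interval' = '1s',\n"
                        + "  'sink.max-retries' = '3'\n"
                        + ")");

        // 'some_source' stands in for any registered source table.
        tEnv.executeSql("INSERT INTO ch_orders SELECT order_id, amount FROM some_source");
    }
}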

Compatibility, Deprecation, and Migration Plan

  • Introduce the ClickHouse SQL connector for users.
  • It is a new feature, so there is no older behavior to phase out.
  • No special migration tools are needed.

Test Plan

We can add unit tests and Testcontainers-based integration tests; a sketch of such an integration test follows.
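A minimal sketch of a Testcontainers-based integration test; the Docker image tag and the table DDL are illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.ClickHouseContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import static org.junit.jupiter.api.Assertions.assertTrue;

@Testcontainers
class ClickHouseConnectorITCase {

    @Container
    private static final ClickHouseContainer CLICKHOUSE =
            new ClickHouseContainer("yandex/clickhouse-server:21.8");

    @Test
    void testRoundTrip() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                CLICKHOUSE.getJdbcUrl(), CLICKHOUSE.getUsername(), CLICKHOUSE.getPassword())) {
            conn.createStatement().execute(
                    "CREATE TABLE t (id Int64, name String) ENGINE = MergeTree() ORDER BY id");
            // Here a Flink job would write into / read from CLICKHOUSE.getJdbcUrl()
            // through the new connector; we then assert on the table contents.
            conn.createStatement().execute("INSERT INTO t VALUES (1, 'a')");
            ResultSet rs = conn.createStatement().executeQuery("SELECT count(*) FROM t");
            assertTrue(rs.next());
        }
    }
}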

Rejected Alternatives


Related References

  1. BalancedClickhouseDataSource.java
  2. https://clickhouse.com/docs/en/engines/table-engines/

