
Status

Motivation


Table Store supports streaming and batch data processing, and Flink ETL jobs can read data from and write data to Table Store in both streaming and batch mode. The architecture is as follows.

Main processes

  1. Each Flink ETL job is independent; it manages and performs checkpoints in its own JobManager.
  2. Each ETL job independently generates snapshot data in Table Store according to its checkpoints.
  3. Flink OLAP/Batch jobs read snapshots of tables from Table Store and perform complex computations such as joins and aggregations.

In the whole process of streaming data processing (general streaming and batch ETL), there are mainly the following four HOW questions:

  • HOW to manage the relationship between ETL jobs?

Multiple ETL jobs and tables in Table Store form a topology. For example, there are dependencies between Table1, Table2 and Table3. Currently, users cannot get the relationship information between these tables, and when the data in a table is late, it is difficult to trace the root cause based on job dependencies.

  • HOW to define data correctness in queries?


As shown above, the Flink ETL jobs will generate V11, V21 and V31 in Table1, Table2 and Table3 respectively for V1 in the CDC stream. Suppose the following case: there is a base table in a database, and a user creates cascaded views or materialized views Table1, Table2 and Table3 based on it. When the user executes complex queries on Table1, Table2 and Table3, the query results are consistent with the base table. If the user instead creates the three tables in Table Store and incrementally updates their data with Flink ETL jobs in real time, these tables can be regarded as materialized views that are incrementally updated in real time. In this streaming process, how do we define data correctness when the user performs a join query across these three tables? The case in point is how to ensure that V11, V21 and V31 are either all read or all not read in one query.

  • HOW to define the E2E data processing delay of ETL jobs and tables in the topology above?

Flink ETL jobs update the tables above in real time, and there are dependencies between them. While the data is flowing, how do we define the data delay in these tables? For the above example, how do we define the E2E delay of streaming data from CDC to Table2? How much does the delay of each ETL job contribute to the E2E delay, and which ETL job needs to be optimized?

  • HOW to revise the data in tables updated by streaming jobs?

When one of the tables needs to be revised, how can it be revised during the streaming process while guaranteeing the correctness of the data? For instance, if the data in Table1 needs to be revised, what should users do in the topology to ensure that the data is neither lost nor duplicated?

To answer the above questions, we introduce a Timestamp Barrier in Flink to align data, and a MetaService in Table Store to coordinate Flink ETL jobs, manage the relationships and dependencies between ETL jobs and tables, and support data consistency in Table Store.

Proposed Design

Architecture


We can regard each Flink ETL job as a single node with complex computation, and each table in Table Store as a data stream. Flink ETL jobs and tables form a huge streaming job, which we call the ETL Topology. We set up a MetaService node to manage the ETL Topology. The main architecture is:

There are two core points in the architecture: Timestamp Barrier Mechanism and MetaService 

  • Timestamp Barrier Mechanism

We need a barrier mechanism in Flink to guarantee the data consistency.

  1. Each ETL source needs to be assigned a unified Timestamp Barrier.
  2. Stateful and temporal operators in Flink ETL jobs align and compute data according to the barrier.
  3. The sink operator in each ETL job commits data with the barrier to sink tables in Table Store.
  • MetaService component

MetaService is the coordinator of the ETL Topology; its capabilities are as follows:

1> Coordinate the Global Timestamp Barrier in ETL Topology

  1. As the coordinator of the ETL Topology, MetaService interacts with the source ETL jobs and generates a global Timestamp Barrier.
  2. The Timestamp Barrier is transmitted between ETL job nodes through tables, so these job nodes can create globally consistent snapshots in Table Store according to the barrier.

2> Manage dependencies between ETL jobs and tables

  1. MetaService manages the relationships between ETL jobs and tables in the ETL Topology. Users can query these dependencies from MetaService.
  2. MetaService manages Timestamp Barrier  in each ETL job, including barrier progress, completed barriers, etc.
  3. MetaService manages the Timestamp Barrier and snapshot of tables, including the latest completed snapshots, the relationship between barriers and snapshots.

3> Manage Data Consistency in Query On Table Store

  1. MetaService supports data consistency in query based on the management of dependencies between tables.
  2. MetaService determines the compaction and expiration of snapshots for each table according to snapshots being used by the OLAP and ETL jobs.
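To make the coordinator role above more concrete, the following is a minimal sketch of what a MetaService interface could look like. It is illustrative only: the interface, method names and signatures are assumptions of this document, not a finalized API.

import java.util.List;
import java.util.Map;

/** Illustrative sketch of the MetaService coordinator API (all names are assumptions). */
public interface MetaService {

    /** Register an ETL job with its source and sink tables so the ETL Topology can be maintained. */
    void registerJob(String jobId, List<String> sourceTables, String sinkTable);

    /** Request the timestamp barrier a newly started job should begin from. */
    long requestStartTimestampBarrier(String jobId);

    /** Report that a job has committed all data belonging to the given timestamp barrier. */
    void reportCompletedTimestampBarrier(String jobId, long timestampBarrier);

    /** Resolve a consistent snapshot id per table for an OLAP/Batch query, given a consistency type. */
    Map<String, Long> resolveConsistentSnapshots(List<String> tables, String consistencyType);

    /** Pin snapshots that are being read so they are not expired or compacted away. */
    void pinSnapshots(String jobId, Map<String, Long> tableToSnapshotId);
}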

User Interfaces

User Interaction

Setup MetaService

In the first phase, we'd like to start a standalone MetaService with storage path and REST port in configuration.

Use Cases

We add a new metastore type: table-store, which manages the Catalog and data consistency in Table Store. Users can create a Catalog with metastore table-store in Sql-Client, and specify the MetaService address and consistency type with the uri and consistency options. A Flink ETL job which reads from and writes to Table Store will be managed by MetaService to ensure data consistency. In the first stage, the table-store metastore only supports FileSystemCatalog and will support HiveCatalog later. The use cases are shown as follows.

-- create a catalog with MetaService
CREATE CATALOG my_catalog WITH (
 'type'='table-store',
 'warehouse'='file:/tmp/table_store',
 'metastore' = 'table-store',
 'uri'='http://<meta-service-host-name>:<port>',
 'consistency'='strong' );

USE CATALOG my_catalog;

-- create three tables in my_catalog which will be managed by MetaService
CREATE TABLE word_value (
 word STRING PRIMARY KEY NOT ENFORCED,
 val BIGINT );

CREATE TABLE word_count (
 word STRING PRIMARY KEY NOT ENFORCED,
 cnt BIGINT );

CREATE TABLE word_sum (
 word STRING PRIMARY KEY NOT ENFORCED,
 val_sum BIGINT );

Users can create a source table and three streaming jobs. The jobs write data to the three tables.

-- create a word data generator table
CREATE TEMPORARY TABLE word_table (
 word STRING,
 val BIGINT ) WITH (
 'connector' = 'datagen',
 'fields.word.length' = '1');

-- table store requires checkpoint interval in streaming mode 
SET 'execution.checkpointing.interval' = '10 s'; 

-- write streaming data to word_value, word_count and word_sum tables 
INSERT INTO word_value SELECT word, val FROM word_table;
INSERT INTO word_count SELECT word, count(*) FROM word_value GROUP BY word;
INSERT INTO word_sum SELECT word, sum(val) FROM word_value GROUP BY word;

Users can query data from the three tables.

-- use tableau result mode 
SET 'sql-client.execution.result-mode' = 'tableau'; 

-- switch to batch mode 
RESET 'execution.checkpointing.interval'; 
SET 'execution.runtime-mode' = 'batch'; 

-- olap query the table 
SELECT
   T1.word,
   T1.cnt as t1cnt,
   T1.sum_val as t1sum_val,
   T2.cnt as t2cnt,
   T3.val_sum as t3sum_val
 FROM
 (SELECT word, count(*) as cnt, sum(val) as sum_val
   FROM word_value GROUP BY word) T1
 JOIN word_count T2 ON T1.word = T2.word
 JOIN word_sum T3 ON T2.word = T3.word;

Since data is continuously streamed between jobs and tables, without a consistency guarantee the results t1cnt and t2cnt, and t1sum_val and t3sum_val, may differ; when MetaService guarantees data consistency, t1cnt will equal t2cnt and t1sum_val will equal t3sum_val.

Query consistency information

MetaService stores consistency information in Table Store as the following tables:

  • ETL job with source tables
CREATE TABLE __META_ETL_SOURCE (
 job_id STRING,     -- The id of streaming etl job
 table_name STRING, -- The source table of the etl job
 PRIMARY KEY(job_id, table_name));
  • ETL job with sink table
CREATE TABLE __META_ETL_SINK (
 job_id STRING,     -- The id of streaming etl job
 table_name STRING, -- The sink table of the etl job
 PRIMARY KEY(table_name));
  • Table name with version
CREATE TABLE __META_TABLE_VERSION (
 table_name STRING, -- The table name
 version INT,       -- The version of the table
 table_type STRING, -- Root or Intermediate
 PRIMARY KEY(table_name));


Users can query dependencies and versions from these tables. For example, the following query finds the sink tables of the ETL jobs that read from Table1:

SELECT T.table_name
 FROM __META_ETL_SOURCE S
 JOIN __META_ETL_SINK T ON S.job_id=T.job_id
 WHERE S.table_name='Table1'

Data Consistency Type


When a query is submitted, it gets different versions of tables from MetaService according to different delay requirements.


Based on the tables shown above, suppose there are three queries:

  1. Query1: SELECT * FROM table1

  2. Query2: SELECT * FROM table1 JOIN table2

  3. Query3: SELECT * FROM table1 JOIN table2 JOIN table3

  • Strong Consistency

It guarantees strong data consistency among the queries above. A query gets the minimum version of all the related tables according to the source tables and the dependencies between them, which ensures data consistency between related tables. For the examples above, Query1, Query2 and Query3 will get Min(table1 version, table2 version) for table1 and table2, and Min(table3 version) for table3 (see the sketch after this list).

  • Weak Consistency

It does not guarantee data consistency among the queries above, only the data consistency of a single query. In this case, each query gets the latest version of each table, which gives better data freshness.
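To make the strong-consistency rule concrete, the following minimal sketch resolves the version to read for each table as the minimum completed version within each group of mutually dependent tables. The class and method names are assumptions for illustration only.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class ConsistentVersionResolver {

    /**
     * For strong consistency, every group of mutually dependent tables referenced by a query
     * is resolved to the minimum completed version within that group, so a join never mixes
     * versions that were produced from different input data.
     */
    public static Map<String, Long> resolve(
            List<List<String>> dependencyGroups, Map<String, Long> completedVersions) {
        Map<String, Long> result = new HashMap<>();
        for (List<String> group : dependencyGroups) {
            long minVersion = Long.MAX_VALUE;
            for (String table : group) {
                minVersion = Math.min(minVersion, completedVersions.get(table));
            }
            for (String table : group) {
                result.put(table, minVersion);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Example from the text: table1 and table2 are in one dependency group, table3 in another.
        Map<String, Long> completed = Map.of("table1", 12L, "table2", 10L, "table3", 7L);
        System.out.println(resolve(List.of(List.of("table1", "table2"), List.of("table3")), completed));
        // prints e.g. {table1=10, table2=10, table3=7} (map order may vary)
    }
}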

Design of Data Consistency

Global Timestamp Barrier Mechanism

The coordination of Timestamp Barrier is divided into two parts: the barrier within one ETL job and across ETL jobs.

1. Timestamp Barrier within one ETL job

There are two types of tables in the ETL Topology:

  • Root Table: sink tables in Table Store to which ETL jobs write results after consuming external sources (Kafka, CDC, etc.).

  • Intermediate Table: sink tables in Table Store to which ETL jobs write results after consuming Root Tables and other Intermediate Tables.

ETL jobs which consume the same external source can't be managed by a global timestamp barrier. For example, two ETL jobs may consume the same Kafka topic with a system-timestamp barrier and write results to Table1 and Table2 in Table Store. So we only guarantee the consistency of ETL jobs and tables within Table Store: users must first load external data into Table Store with a Flink ETL job which generates the timestamp barrier for it, and then we guarantee data consistency based on the ETL Topology.

Correspondingly, there are two ETL types: Root ETL reads data from external sources and writes data to Root Tables, while Intermediate ETL reads data from Root Tables and Intermediate Tables. The main difference between them is the way they generate the Timestamp Barrier.

The JobManager of a Root ETL job generates a new unified Timestamp Barrier itself according to the sources, with different strategies such as system timestamp or event timestamp, and writes it into the table in Table Store. The overall process is as follows.

An Intermediate ETL job cannot generate a new Timestamp Barrier itself. It must read the Timestamp Barrier from data in Table Store, report it to the JobManager, and then broadcast it to the downstream tasks.
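As a rough illustration of the difference between the two ETL types, the sketch below contrasts a Root ETL that derives new barriers from the system clock with an Intermediate ETL that only forwards barriers read from the upstream table. The TimestampBarrierGenerator abstraction and the class names are assumptions for illustration, not Flink or Table Store APIs.

/** Illustrative only: how Root and Intermediate ETL jobs could obtain the next timestamp barrier. */
public interface TimestampBarrierGenerator {
    long nextBarrier();
}

/** Root ETL: the JobManager creates a new unified barrier itself, here from the system clock. */
class SystemTimeBarrierGenerator implements TimestampBarrierGenerator {
    private final long intervalMs;
    private long lastBarrier;

    SystemTimeBarrierGenerator(long intervalMs) {
        this.intervalMs = intervalMs;
    }

    @Override
    public long nextBarrier() {
        // Round the current time down to the barrier interval so all sources agree on one value.
        lastBarrier = Math.max(lastBarrier, System.currentTimeMillis() / intervalMs * intervalMs);
        return lastBarrier;
    }
}

/** Intermediate ETL: it never invents barriers, it forwards the barrier read from the upstream table. */
class ForwardingBarrierGenerator implements TimestampBarrierGenerator {
    private volatile long barrierReadFromUpstreamTable;

    void onBarrierReadFromTable(long barrier) {
        this.barrierReadFromUpstreamTable = barrier;
    }

    @Override
    public long nextBarrier() {
        return barrierReadFromUpstreamTable;
    }
}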

The Timestamp Barrier is transmitted in the data stream between subtasks after all the records that belong to it, and each record processed by an operator has a timestamp field equal to its Timestamp Barrier. Besides the source, there are three types of operators, described below (a minimal sketch of the stateful case follows the list).

  • Stateless operator. The operator processes every input record and outputs the result just as it does today. It does not need to align data with the Timestamp Barrier; when it receives a Timestamp Barrier, it simply broadcasts the barrier to downstream tasks.
  • Stateful operator and temporal operator. Records within the same Timestamp Barrier are unordered, so stateful and temporal operators must align them according to their timestamp field. These operators execute their computation when they have collected the timestamp barrier from all input channels, and then broadcast it to downstream tasks. There is a sequence relationship between timestamp barriers, and records across timestamp barriers are ordered. This means that an operator computes and outputs results for a timestamp barrier based on the result of the previous timestamp barrier.
  • Sink operator. The sink streams output results to Table Store and commits the results once it has collected the timestamp barrier from all input channels. The source of a downstream ETL job can prefetch data from Table Store, but should only produce data after the upstream sink has committed.
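The following is a minimal sketch of the stateful case described in the list above: records of the current timestamp barrier are buffered, and the operator computes only after the barrier has arrived from every input channel. It is not Flink's real operator API; the class and method names are assumptions.

import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of aligning records by timestamp barrier in a stateful operator. */
public class AlignedAggregateSketch<T> {
    private final int inputChannels;
    private final List<T> bufferForCurrentBarrier = new ArrayList<>();
    private int barrierArrivals;

    public AlignedAggregateSketch(int inputChannels) {
        this.inputChannels = inputChannels;
    }

    /** Records within one barrier may arrive in any order, so they are simply buffered. */
    public void processRecord(T record) {
        bufferForCurrentBarrier.add(record);
    }

    /** Returns the records to compute on once the barrier arrived on all channels, otherwise null. */
    public List<T> processBarrier(long timestampBarrier) {
        barrierArrivals++;
        if (barrierArrivals < inputChannels) {
            return null; // still waiting for the barrier on the other input channels
        }
        barrierArrivals = 0;
        List<T> toCompute = new ArrayList<>(bufferForCurrentBarrier);
        bufferForCurrentBarrier.clear();
        // The caller computes for this barrier on top of the result of the previous barrier,
        // then broadcasts the barrier downstream.
        return toCompute;
    }
}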

2. Timestamp Barrier across ETL jobs

Root Tables are the sources of the ETL Topology, and Intermediate Tables are its streaming edges and sinks. Each vertex in the topology is an independent Flink job, whose JobManager schedules and reads snapshots from each table.

Each JobManager interacts with MetaService, and creates and sends global timestamp barriers to its sources. The sources collect and broadcast the timestamp barriers. An ETL job generates snapshots in its sink tables with the timestamp barrier, so the downstream ETL job can read the timestamp barrier directly, which ensures the timestamp barrier is transmitted across jobs.

The overall process of the global timestamp barrier is as follows.

There are two layers in the global timestamp barrier mechanism: MetaService and the JobManager. MetaService regards each ETL job as a single node and manages the global timestamp barrier in the ETL Topology; each JobManager interacts with MetaService and manages the global timestamp barrier within its own ETL job.
There are two parts in the global timestamp barrier processing: the interaction between MetaService and the JobManager, and the interaction between the JobManager and the Source Node.

  • Interaction between MetaService and JobManager

  1. The JobManager of each ETL job requests a start timestamp barrier from MetaService for its sources when the job is started.

  2. When an ETL job completes a timestamp barrier and commits the data to Table Store, it reports the timestamp barrier to MetaService.
  • Interaction between JobManager and Source Node

  1. The JobManager reads and manages snapshots and timestamp barriers from Table Store; when it has collected the timestamp barrier of all tables, it sends the barrier to the source subtasks.

  2. The Source Node processes splits of snapshots. When it receives a timestamp barrier from the JobManager, it broadcasts the timestamp barrier after finishing the specified splits.

The interactions between the JobManager and the Source Node are as follows.
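For illustration, the sketch below shows the source-subtask side of this interaction under assumed method names: once the JobManager sends a timestamp barrier, the subtask finishes the splits assigned to that barrier before broadcasting the barrier downstream.

import java.util.ArrayDeque;
import java.util.Queue;

/** Illustrative sketch of a source subtask: finish the splits of a barrier, then broadcast it. */
public class SourceSubtaskSketch {
    private final Queue<String> pendingSplits = new ArrayDeque<>();

    /** Splits assigned by the SplitEnumerator for the current timestamp barrier. */
    public void addSplit(String split) {
        pendingSplits.add(split);
    }

    /** Called when the JobManager sends a timestamp barrier for the splits assigned so far. */
    public void onTimestampBarrier(long barrier) {
        // 1. Read all remaining records of the splits that belong to this barrier.
        while (!pendingSplits.isEmpty()) {
            readSplit(pendingSplits.poll());
        }
        // 2. Only then broadcast the barrier to downstream tasks.
        broadcastBarrier(barrier);
    }

    private void readSplit(String split) { /* emit the records of the split */ }

    private void broadcastBarrier(long barrier) { /* send the barrier event downstream */ }
}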

Data Consistency Management


MetaService manages checkpoints between jobs and the versions of each table. The main information includes:

  • The topology of ETL jobs and tables

MetaService manages the dependencies between tables and ETL jobs. Based on this relationship information, it supports consistent reading and computing in OLAP, calculates the E2E delay and the delay of each ETL job, and helps users find the bottleneck jobs. When revising data in tables, users can roll back snapshots of tables and state in ETL jobs based on the dependencies.

  • Relationship between timestamps and snapshots of each table

MetaService ensures data consistency among ETL/OLAP jobs and tables by managing the relationship between timestamps and snapshots (a sketch of this bookkeeping follows this list).

Firstly, it is used to ensure the consistency of timestamps and snapshots among ETL jobs that consume the same table. For example, a Root Table is consumed by an ETL job and MetaService creates timestamps on snapshots for it. When a new ETL job consuming this table is started, MetaService will create the same timestamps on snapshots for it according to the previous job.

Secondly, it helps to ensure timestamp barrier consistency between tables when an ETL job consumes several of them. For example, an ETL job consumes Table1 and Table2. When the job is started, it gets snapshot ids for Table1 and Table2 with the same timestamp from MetaService, even when the progress of Table1 and Table2 differs. This ensures that the timestamps of the job can be aligned.

Finally, OLAP/Batch jobs also read snapshots from source tables at the same timestamp, which ensures data consistency in job computation.

  • The completed timestamp of each table

MetaService manages the completed timestamps of each table and guarantees data consistency in OLAP queries. An OLAP query requests the versions of its source tables from MetaService, and MetaService calculates the snapshot ids of the tables based on the dependencies between tables, the completed timestamps and snapshots of each table, and the required consistency type. The OLAP query then reads data from the tables according to the given snapshot ids, which ensures data consistency for it.

  • Information about tables and snapshots being used by the jobs

MetaService manages information about the snapshots being used by ETL jobs or OLAP queries on each table, and then determines which snapshots of tables can be safely deleted or compacted without affecting the jobs that are reading the data, ensuring these jobs can read correct data.

  • Timestamp progress of each ETL job

MetaService manages the start time, finish time and total cost of timestamp barriers for each job, which helps users analyze the E2E delay and optimize the ETL jobs.
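The items above can be summarized as per-table bookkeeping inside MetaService. The sketch below is illustrative only and uses assumed names: it maps completed timestamp barriers to snapshots, resolves the snapshot to read for a given timestamp, and refuses to expire snapshots that are still pinned by running jobs.

import java.util.Map;
import java.util.NavigableMap;
import java.util.Set;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative per-table consistency bookkeeping in MetaService (all names are assumptions). */
public class TableConsistencyState {
    // completed timestamp barrier -> snapshot id that contains all data of that barrier
    private final NavigableMap<Long, Long> timestampToSnapshot = new TreeMap<>();
    // snapshot id -> ids of ETL/OLAP jobs currently reading that snapshot
    private final Map<Long, Set<String>> pinnedBy = new ConcurrentHashMap<>();

    /** Called when a sink commits a snapshot for a completed timestamp barrier. */
    public void onSnapshotCommitted(long timestampBarrier, long snapshotId) {
        timestampToSnapshot.put(timestampBarrier, snapshotId);
    }

    /** Returns the snapshot to read for a timestamp (the latest snapshot not newer than it), or null. */
    public Long snapshotForTimestamp(long timestampBarrier) {
        Map.Entry<Long, Long> entry = timestampToSnapshot.floorEntry(timestampBarrier);
        return entry == null ? null : entry.getValue();
    }

    /** Records that a job is reading the given snapshot. */
    public void pin(long snapshotId, String jobId) {
        pinnedBy.computeIfAbsent(snapshotId, id -> ConcurrentHashMap.newKeySet()).add(jobId);
    }

    /** A snapshot may be expired or compacted only if no running job still reads it. */
    public boolean canExpire(long snapshotId) {
        Set<String> readers = pinnedBy.get(snapshotId);
        return readers == null || readers.isEmpty();
    }
}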

ETL Jobs Failover


Any ETL job in the ETL Topology may fail, but unlike a general Flink streaming job, a single failure should not cause failover of the whole ETL Topology. ETL jobs in the ETL Topology must meet the following conditions:

  • Determinism of reading data

Flink jobs read snapshots from Table Store. When a job fails, it must be able to re-read snapshots according to the previous timestamp recovered from its checkpoint. If the relationship between timestamp and snapshot is deterministic, and the timestamp can be recovered from the checkpoint, the failed job can read the same data from the same snapshot according to the same timestamp, which means the job reads deterministic data from Table Store before and after failover.

  • Determinism of writing data

Flink jobs commit data with timestamp information to Table Store according to their timestamp barriers. Each job commits data only when the specified timestamp is completed, which means the job writes deterministic data to Table Store before and after failover.

  • Orderliness of data and computation

Flink jobs read and write snapshots according to their timestamps. Timestamp barriers are aligned within each job and across multiple jobs. This means that although the data within one timestamp barrier is unordered, the data and computation across timestamp barriers and across multiple jobs are ordered.

Because of this determinism and orderliness, the failover of a single ETL job will not cause the failover of the entire ETL Topology. The JobManager of each ETL job only needs to handle the failover within that job. To do that, we need to support failover of the Timestamp Barrier, which includes:

  1. Recover timestamp barriers from the Checkpoint. The boundaries of checkpoints and timestamp barriers are aligned, so the job can recover the same timestamp barriers for a failed checkpoint. For example, suppose timestamp barriers 1, 2 and 3 are in checkpoint 1, and the ETL job is processing data for checkpoint 2 with timestamps 3 and 4. When the job fails, it will recover from checkpoint 1 and assign the same timestamps 3 and 4 to checkpoint 2.
  2. Replay data for the same timestamp barriers. For the above example, when the job recovers from checkpoint 1 and replays data for timestamps 3 and 4, it must produce the same data as before the failover.

To achieve that, Flink should store <Timestamp Barrier, Offset> and <Checkpoint, Timestamp Barrier> information when a timestamp barrier is generated.
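A minimal sketch of how these two mappings could be kept and used on recovery is shown below; the class, method and callback names are assumptions, not existing Flink classes.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

/**
 * Illustrative sketch of the two mappings described above: <Checkpoint, Timestamp Barrier> to
 * re-assign the same barriers after failover, and <Timestamp Barrier, Offset> to replay exactly
 * the same data for each barrier.
 */
public class BarrierRecoveryState {
    // checkpoint id -> timestamp barriers generated while that checkpoint was in flight
    private final TreeMap<Long, List<Long>> checkpointToBarriers = new TreeMap<>();
    // timestamp barrier -> source offset at which the barrier was generated
    private final Map<Long, Long> barrierToOffset = new TreeMap<>();

    /** Stored when a timestamp barrier is generated. */
    public void onBarrierGenerated(long checkpointId, long barrier, long sourceOffset) {
        checkpointToBarriers.computeIfAbsent(checkpointId, id -> new ArrayList<>()).add(barrier);
        barrierToOffset.put(barrier, sourceOffset);
    }

    /** On recovery from the last completed checkpoint, uncompleted checkpoints keep their barriers. */
    public void recoverFrom(long lastCompletedCheckpoint, SourceReplayer replayer) {
        checkpointToBarriers.tailMap(lastCompletedCheckpoint, false)
                .forEach((checkpointId, barriers) -> {
                    for (long barrier : barriers) {
                        // Replay from the recorded offset so the job produces the same data as before.
                        replayer.replay(barrier, barrierToOffset.get(barrier));
                    }
                });
    }

    /** Minimal source abstraction assumed for this sketch. */
    public interface SourceReplayer {
        void replay(long timestampBarrier, long fromOffset);
    }
}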

After implementing this function, in addition to the above failover handling, the determinism of snapshots, timestamps and checkpoints in Table Store makes it easy to revise table data. For example, when we need to revise data in Table3, we can roll back all downstream cascaded ETL jobs and tables to a specified checkpoint.

  • Incremental processing

All table snapshots are aligned according to a unified checkpoint. When the data of a specified table needs to be revised, we just need to roll back all its downstream tables to a unified snapshot, reset the state of the streaming jobs to the specified checkpoint, and then restart the jobs to consume the incremental data.

  • Full processing

Due to reasons such as the state TTL of the ETL jobs, we may not be able to perform incremental processing. In this case, we can perform full processing: clear the data and ETL state of all downstream tables and jobs, and then restart the jobs to consume the data in full.
Incremental processing is shown below.
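In addition to the figure, the following sketch outlines the incremental-processing steps in code form. It assumes a hypothetical MetaServiceClient with rollback and restart operations; none of these names exist yet.

import java.util.List;

/** Illustrative steps for incremental reprocessing after revising a table (names are assumptions). */
public class IncrementalRevision {

    public void revise(String revisedTable, long targetTimestampBarrier, MetaServiceClient metaService) {
        // 1. Find all downstream tables and ETL jobs of the revised table from the ETL Topology.
        List<String> downstreamTables = metaService.downstreamTables(revisedTable);
        List<String> downstreamJobs = metaService.downstreamJobs(revisedTable);
        // 2. Roll every downstream table back to the snapshot of the unified timestamp barrier.
        for (String table : downstreamTables) {
            metaService.rollbackTable(table, targetTimestampBarrier);
        }
        // 3. Reset each downstream job to the checkpoint aligned with that barrier and restart it,
        //    so it consumes the revised data incrementally.
        for (String job : downstreamJobs) {
            metaService.resetAndRestartJob(job, targetTimestampBarrier);
        }
    }

    /** Minimal client abstraction assumed for this sketch. */
    public interface MetaServiceClient {
        List<String> downstreamTables(String table);
        List<String> downstreamJobs(String table);
        void rollbackTable(String table, long timestampBarrier);
        void resetAndRestartJob(String jobId, long timestampBarrier);
    }
}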

Start And Stop ETL Jobs

  • Register Tables

A Flink ETL job needs to register its source and sink tables with MetaService when it is submitted. At present, the client creates the specified TableStoreSource and TableStoreSink from Table Store while generating the Flink execution plan. In this process, we can register the job id and table information with MetaService.

MetaService creates the relationship between the source and sink tables by job id. After an ETL job generates its plan, it may still fail to be submitted to the cluster due to exceptions such as network or resource problems. Therefore the registered table information must not become accessible until the job has been submitted to the cluster and its SplitEnumerator has also registered itself with MetaService.
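A minimal sketch of this two-phase visibility, with assumed names: registrations made at plan-generation time stay pending until the SplitEnumerator of the successfully submitted job activates them.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative two-phase registration; only activated registrations are visible to queries. */
public class JobRegistry {
    private final Map<String, List<String>> pendingTablesByJob = new ConcurrentHashMap<>();
    private final Map<String, List<String>> activeTablesByJob = new ConcurrentHashMap<>();

    /** Phase 1: called by the client while the Flink execution plan is generated. */
    public void register(String jobId, List<String> sourceAndSinkTables) {
        pendingTablesByJob.put(jobId, sourceAndSinkTables);
    }

    /** Phase 2: called when the job's SplitEnumerator registers itself after a successful submission. */
    public void activate(String jobId) {
        List<String> tables = pendingTablesByJob.remove(jobId);
        if (tables != null) {
            activeTablesByJob.put(jobId, tables);
        }
    }

    /** Only active registrations are visible to dependency queries. */
    public List<String> visibleTables(String jobId) {
        return activeTablesByJob.getOrDefault(jobId, List.of());
    }
}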

  • Query Data Versions

ETL and OLAP jobs must get the snapshot ids of tables from MetaService according to the consistency requirement when they are submitted to the cluster. Flink jobs can get the versions of tables when they create them in the FlinkCatalog. The main processes are as follows.

  • Stop ETL Job

The relationship between the source and sink tables of an ETL job should be deleted when the job terminates. We can add a listener, JobTerminatedListener, in Flink, and notify the JobManager to send a delete event to MetaService when the job terminates.
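JobTerminatedListener does not exist in Flink today; the sketch below shows one possible shape of the listener and a cleanup implementation that notifies MetaService. The MetaServiceClient abstraction is an assumption of this sketch.

/** Hypothetical listener invoked by the JobManager when a job reaches a terminal state. */
public interface JobTerminatedListener {
    void onJobTerminated(String jobId);
}

/** Example listener that asks MetaService to drop the job's source/sink relationships. */
class MetaServiceCleanupListener implements JobTerminatedListener {
    private final MetaServiceClient client;

    MetaServiceCleanupListener(MetaServiceClient client) {
        this.client = client;
    }

    @Override
    public void onJobTerminated(String jobId) {
        // Delete the relationship between the job's source and sink tables in MetaService.
        client.deleteJobRelationships(jobId);
    }

    /** Minimal client abstraction assumed for this sketch. */
    interface MetaServiceClient {
        void deleteJobRelationships(String jobId);
    }
}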

Summary

The main work for the Timestamp Barrier, and the differences between the Timestamp Barrier and the existing Watermark in Flink, are summarized below.


Timestamp Barrier vs. Watermark

  • Generation
    Timestamp Barrier: The JobManager must coordinate all source subtasks and generate a unified timestamp barrier from system time or event time for them.
    Watermark: Each source subtask generates its own timestamp (watermark event) from system time or event time.

  • Checkpoint
    Timestamp Barrier: Store <checkpoint, timestamp barrier> when the timestamp barrier is generated, so that the job can recover the same timestamp barrier for an uncompleted checkpoint.
    Watermark: None.

  • Replay data
    Timestamp Barrier: Store <timestamp barrier, offset> for the source when it broadcasts a timestamp barrier, so that the source can replay the same data for the same timestamp barrier.
    Watermark: None.

  • Align data
    Timestamp Barrier: Align data for stateful operators (aggregation, join, etc.) and temporal operators (window).
    Watermark: Align data for temporal operators (window) only.

  • Computation
    Timestamp Barrier: An operator computes for a specific timestamp barrier based on the results of the previous timestamp barrier.
    Watermark: The window operator only computes results within the window range.

  • Output
    Timestamp Barrier: An operator outputs or commits results when it has collected all the timestamp barriers, including operators with data buffers or async operations.
    Watermark: The window operator supports "emit" output.


The main work in Flink and Table Store is summarized below.


Table Store

  • MetaService
    1. Manage the relationship between ETL jobs and tables in Table Store, including source tables and sink tables.
    2. Manage the finished timestamp barrier of each table in Table Store.
    3. Handle the interaction between Flink and MetaService, such as registering ETL jobs, getting the consistency version of tables, etc.

  • Catalog
    1. Register source and sink tables with the ETL job id.
    2. Create tables based on a consistency version from MetaService.

  • Source and SplitEnumerator
    1. The SplitEnumerator manages the snapshots, splits and timestamp barriers for a specific table.
    2. The Source reads data and timestamp barriers from splits and broadcasts the timestamp barriers.
    3. Notify MetaService to update the completed timestamp barrier of tables.
    4. Notify MetaService to clean up the information of terminated ETL jobs.

  • Sink
    1. Write data to Table Store and commit data with the timestamp barrier.

Flink

  • Timestamp Barrier Mechanism: the detailed main work is described in the table above.

  • Planner
    1. Register the job with MetaService to create the relationship between source and sink tables.
    2. Create tables based on a consistency version from MetaService.

  • JobManager
    1. Add a listener and call it back when the job ends.

The Next Step

This is an overall FLIP for data consistency in streaming and batch ETL. Next, we would like to create a FLIP for each functional module with a detailed design. For example:

  1. Timestamp Barrier Coordination and Generation
  2. Timestamp Barrier Checkpoint and Recovery
  3. Timestamp Barrier Replay Data Implementation
  4. Timestamp Barrier Alignment and Computation In Operator
  5. Introduce MetaService in Table Store, etc.

Rejected Alternatives

Data consistency management


What we need in Flink is a Timestamp Barrier mechanism to align data in stateful and temporal operators. As shown above, the existing Watermark cannot align data. At present, the Aligned Checkpoint is the only mechanism that can align data in stateful operators, such as aggregation and join operators in Flink. But there are some problems with using Checkpoint for data consistency:

  • Flink uses Checkpoint as a fault-tolerance mechanism; it supports aligned checkpoints, unaligned checkpoints, and may even support task-local checkpoints in the future.
  • Even with Aligned Checkpoint, data consistency cannot be guaranteed for some operators, such as temporal operators with timestamps or data buffers.

Data consistency coordinator


By coordinating timestamp barriers between jobs, the consistency of data among the sink tables of multiple ETL jobs can be ensured during query. Besides a global timestamp barrier between jobs, we also considered an adaptive timestamp barrier:
each ETL job manages its own timestamp barrier, and MetaService manages the relationships between the timestamp barriers of different ETL jobs.


As shown above, Timestamp30 in Table1 and Timestamp10 in Table2 generate Timestamp3 in Table3, and so on. When users query these tables, MetaService calculates their snapshot ids according to the timestamp barrier relationships in the ETL jobs.
In this way, we can define the data consistency of queries, but it is difficult to define the data processing delay between jobs and tables. For example, it is difficult to define the data delay from Table1 and Table2 to Table3. As the number of cascaded layers increases, this definition becomes very complex.

On the other hand, this proposal increases the cost of data operations and management. When the data of a table needs to be rolled back to a specified snapshot for some reason, each downstream table needs to be reset to a different snapshot, which is cumbersome and error-prone. For the above reasons, we choose the global checkpoint mechanism in the first stage.

Roadmap In Future


Data consistency of the ETL Topology is our first phase of work. After completing this part, we plan to promote the capacity building and improvement of Flink + Table Store in the future, mainly in the following aspects.

  1. Materialized View in SQL. Next, we hope to introduce materialized view syntax into Flink to improve user interaction experience. Queries can also be optimized based on materialized views to improve performance.

  2. Improve MetaService capabilities. MetaService is a single point in the system, and it should support failover. In addition, MetaService should support managing Flink ETL jobs and tables in Table Store, being accessed by other computing engines such as Spark, and acting as an agent of the Hive Metastore later.

  3. Improve data consistency semantics. As mentioned above, we need to implement the "Timestamp Barrier" to support data consistency with full semantics, instead of the "Aligned Checkpoint" used in the first stage.
  4. Improve OLAP performance. We have created issues in FLINK-25318 (Improvement of scheduler and execution for Flink OLAP) to manage OLAP improvements in Flink. At the same time, we hope to continue to enhance the online query capability of Table Store and improve the OLAP performance of Flink + Table Store.

  5. Improve data freshness. At present, our consistency design is based on the Flink checkpoint mechanism and supports minute-level delay. In the future, we hope to support second-level or even millisecond-level freshness while still ensuring data consistency.

By promoting the above optimization and implementation, we hope that Flink + Table Store can support the full StreamingWarehouse capability. Users can create materialized views and execute OLAP queries in the system, just like using databases and data warehouses, and output data to the application layer (such as KV) as required.
