Status

Discussion thread: https://lists.apache.org/thread/185mbcwnpokfop4xcb22r9bgfp2m68fx

Vote thread: https://lists.apache.org/thread/5f9806kw1q01536wzwgx1psh7jhmfmjw

JIRA: FLINK-32402

Release: 1.18

Motivation

A Flink ETL job consumes data from a Source Table and produces results to a Sink Table, creating a relationship between the two. Flink needs a mechanism for users to report these relationships to external systems, such as the meta systems Datahub [1] and Atlas [2], and the meta store we mentioned in FLIP-276 [3].

This FLIP aims to introduce listener interfaces in Flink; users can implement them to report meta data and lineage to external systems. The main information is as follows:

1. Source and Sink information, such as table name, fields, partition keys, primary keys, watermarks, configurations

2. Job information, such as job id/name, job type and job configuration

3. Relationship between Source/Sink and jobs, such as source and sink tables and their column lineages

4. Job execution status changed information, such as job status and exceptions

This FLIP focuses on the customized meta data listener; the customized job lineage listener will be introduced in FLIP-314 [4].

Public Interfaces

CatalogModificationListener

DDL operations such as create/alter/drop tables will generate different events and notify CatalogModificationListener. All events for CatalogModificationListener extend the basic CatalogModificationEvent, and listeners can get the catalog context from it. Some general events for database/table are defined as follows, and more events can be implemented based on requirements in the future.

Code Block
/**
 * Different events will be fired when a catalog/database/table is modified. The customized listener can get and
 * report specific information from the event according to the event type.
 */
@PublicEvolving
public interface CatalogModificationListener {
    /** The event will be fired when the database/table is modified. */
    void onEvent(CatalogModificationEvent event);
}

/* Basic interface for catalog modification. */
@PublicEvolving
public interface CatalogModificationEvent {
    /* Context for the event. */
    CatalogContext context();
}

/* Context for catalog modification and job lineage events. */
@PublicEvolving
public interface CatalogContext {
    /* The name of catalog. */
    String getCatalogName();

    /* Class of the catalog. */
    Class<? extends Catalog> getClass();

    /* Identifier for the catalog from catalog factory, such as jdbc/iceberg/paimon. */
    Optional<String> getFactoryIdentifier();

    /* Config for catalog. */
    Configuration getConfiguration();
}

/* The basic interface for database related events. */
@PublicEvolving
public interface DatabaseModificationEvent extends CatalogModificationEvent {
    CatalogDatabase database();
}

/* Event for creating database. */
@PublicEvolving
public interface CreateDatabaseEvent extends DatabaseModificationEvent {
    boolean ignoreIfExists();
}

/* Event for altering database. */
@PublicEvolving
public interface AlterDatabaseEvent extends DatabaseModificationEvent {
    CatalogDatabase newDatabase();
    boolean ignoreIfNotExists();
}

/* Event for dropping database. */
@PublicEvolving
public interface DropDatabaseEvent extends DatabaseModificationEvent {
    boolean ignoreIfExists();
}

/**
 * Base table event, provides column list, primary keys, partition keys, watermarks and properties in
 * CatalogBaseTable. The table can be source or sink.
 */
@PublicEvolving
public interface TableModificationEvent extends CatalogModificationEvent {
    ObjectIdentifier identifier();
    CatalogBaseTable table();
}

/* Event for creating table. */
@PublicEvolving
public interface CreateTableEvent extends TableModificationEvent {
    boolean ignoreIfExists();
}

/* Event for altering table, provides all changes for the old table. */
@PublicEvolving
public interface AlterTableEvent extends TableModificationEvent {
    List<TableChange> tableChanges();
    boolean ignoreIfExists();
}

/* Event for dropping table. */
@PublicEvolving
public interface DropTableEvent extends TableModificationEvent {
    boolean ignoreIfExists();
}

/* Factory for catalog modification listener. */
@PublicEvolving
public interface CatalogModificationListenerFactory {
    CatalogModificationListener createListener(Context context);

    @PublicEvolving
    interface Context {
        Configuration getConfiguration();

        ClassLoader getUserClassLoader();

        /*
         * Get an Executor pool for the listener to run async operations that can potentially be IO-heavy.
         */
        Executor getIOExecutor();
    }
}
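
For illustration, a customized listener might report table DDLs to an external meta system. The following is a minimal sketch; MetaServiceClient and its methods are hypothetical, and only the interfaces above come from this FLIP.

Code Block
/* A minimal sketch of a customized listener; MetaServiceClient is a hypothetical external client. */
public class MetaReportListener implements CatalogModificationListener {
    private final MetaServiceClient client = new MetaServiceClient();

    @Override
    public void onEvent(CatalogModificationEvent event) {
        String catalogName = event.context().getCatalogName();
        if (event instanceof CreateTableEvent) {
            CreateTableEvent createEvent = (CreateTableEvent) event;
            // Report the new table and its schema to the external meta system.
            client.reportTable(catalogName, createEvent.identifier(), createEvent.table());
        } else if (event instanceof DropTableEvent) {
            DropTableEvent dropEvent = (DropTableEvent) event;
            // Mark the table as dropped in the external meta system.
            client.removeTable(catalogName, dropEvent.identifier());
        }
    }
}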

Users may create different catalogs on the same physical catalog, for example, create two hive catalogs named hive_catalog1 and hive_catalog2 for the same metastore. The tables hive_catalog1.my_database.my_table and hive_catalog2.my_database.my_table are the same table in the hive metastore.

In addition, there are two table types: persistent tables and temporary tables. A persistent table can be identified by the catalog and database above, while a temporary table can only be identified by the properties in its DDL. Different temporary tables with the same connector type and related properties are the same physical table in the external system, such as two tables for the same topic in Kafka.



Users can identify the physical connector by CatalogContext and options in CatalogBaseTable through the following steps:

1. Get the connector name.

Users can get the value of the 'connector' option from the options in CatalogBaseTable for temporary tables. If it doesn't exist, users can get the factory identifier from CatalogContext as the connector name. If neither of the above exists, users can define the connector name themselves through Class<? extends Catalog>.

2. Users can get different properties based on the connector name from the table options and create a connector identifier. Flink has many connectors; we give the example of Kafka options below, and users can create a Kafka identifier with servers, group and topic as needed.

Code Block
/* Kafka storage identifier options. */
"properties.bootstrap.servers" for Kafka bootstrap servers
"topic" for Kafka Topic
"properties.group.id" for Kafka group id
"topic-pattern" for Kafka topic pattern

For sensitive information, users can encode and desensitize it in their customized listeners.
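
As a sketch of step 2, a unique identifier for a Kafka table could be assembled from these options. The method and URI format below are illustrative assumptions, not part of this FLIP.

Code Block
/* Illustrative only: build a connector identifier for a Kafka table from its options. */
public static String kafkaIdentifier(CatalogContext context, CatalogBaseTable table) {
    Map<String, String> options = table.getOptions();
    // Step 1: prefer the 'connector' option, then fall back to the catalog factory identifier.
    String connector = options.getOrDefault(
            "connector", context.getFactoryIdentifier().orElse("unknown"));
    // Step 2: pick the connector-specific options which identify the physical table.
    String servers = options.get("properties.bootstrap.servers");
    String topic = options.getOrDefault("topic", options.get("topic-pattern"));
    return connector + "://" + servers + "/" + topic;
}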

Config Customized Listener

Users should add their listeners to the classpath of the client and the Flink cluster, and configure the listener factories in the following option:

Code Block
# Config for catalog modification listeners.
table.catalog-modification.listeners: {table catalog listener factory1},{table catalog listener factory2}
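
For illustration, a factory for the sketched listener above might look as follows; how the configured factories are discovered from the option value is left to the implementation.

Code Block
/* Illustrative factory that creates the sketched MetaReportListener from the factory context. */
public class MetaReportListenerFactory implements CatalogModificationListenerFactory {
    @Override
    public CatalogModificationListener createListener(Context context) {
        // Custom settings can be read from the Flink configuration if needed.
        Configuration configuration = context.getConfiguration();
        return new MetaReportListener();
    }
}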

Proposed Changes

Use case of job lineage

Users may create different tables on the same storage, such as the same Kafka topic. Suppose there is one Kafka topic, two Paimon tables and one MySQL table. Users create these tables and submit three Flink SQL jobs as follows.

a) Create Kafka and Paimon tables for Flink SQL job

Code Block
-- Create a table my_kafka_table1 for kafka topic 'kafka_topic'
CREATE TABLE my_kafka_table1 (
  val1 STRING,
  val2 STRING,
  val3 STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'kafka_topic',
  'properties.bootstrap.servers' = 'kafka_bootstrap_server',
  'properties.group.id' = 'kafka_group',
  'format' = 'json'
);

-- Create a Paimon catalog and table for warehouse 'paimon_path1'
CREATE CATALOG paimon_catalog WITH (
    'type'='paimon',
    'warehouse'='paimon_path1'
);
USE CATALOG paimon_catalog;
CREATE TABLE my_paimon_table (
    val1 STRING,
    val2 STRING,
    val3 STRING
) WITH (...);

-- Insert data to Paimon table from Kafka
INSERT INTO my_paimon_table SELECT ... FROM default_catalog.default_database.my_kafka_table1 WHERE ...;

b) Create another Kafka and Paimon tables for Flink SQL job

Code Block
-- Create another table my_kafka_table2 for kafka topic 'kafka_topic', which is the same as my_kafka_table1 above
CREATE TABLE my_kafka_table2 (
  val1 STRING,
  val2 STRING,
  val3 STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'kafka_topic',
  'properties.bootstrap.servers' = 'kafka_bootstrap_server',
  'properties.group.id' = 'kafka_group',
  'format' = 'json'
);

-- Create a Paimon catalog with the same name 'paimon_catalog' for different warehouse 'paimon_path2'
CREATE CATALOG paimon_catalog WITH (
    'type'='paimon',
    'warehouse'='paimon_path2'
);
USE CATALOG paimon_catalog;
CREATE TABLE my_paimon_table (
    val1 STRING,
    val2 STRING,
    val3 STRING
) WITH (...);

-- Insert data to Paimon table from Kafka
INSERT INTO my_paimon_table SELECT ... FROM default_catalog.default_database.my_kafka_table2 WHERE ...;

c) Create Mysql table for Flink SQL job

Code Block
-- Create two catalogs for warehouses 'paimon_path1' and 'paimon_path2'; there are two different 'my_paimon_table' tables
CREATE CATALOG paimon_catalog1 WITH (
    'type'='paimon',
    'warehouse'='paimon_path1'
);
CREATE CATALOG paimon_catalog2 WITH (
    'type'='paimon',
    'warehouse'='paimon_path2'
);

-- Create mysql table
CREATE TABLE mysql_table (
    val1 STRING,
    val2 STRING,
    val3 STRING
) WITH (
    'connector' = 'jdbc',
    'url' = 'jdbc:mysql://HOST:PORT/my_database',
    'table-name' = 'my_mysql_table',
    'username' = '***',
    'password' = '***'
);

-- Insert data to mysql table from two paimon tables
INSERT INTO mysql_table 
    SELECT ... FROM paimon_catalog1.default.my_paimon_table
        JOIN paimon_catalog2.default.my_paimon_table
    ON ... WHERE ...;

After completing the above operations, we get one Kafka topic, two Paimon tables and one MySQL table, each identified by a connector identifier. These tables are associated through the Flink jobs, and users can report the tables and their relationships to Datahub, as shown in the example below (job lineage will be supported in FLIP-314).

[Figure: the example Kafka topic, Paimon tables and MySQL table with their relationships, as reported to Datahub]
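
Under the identifier scheme sketched earlier (the format is illustrative), the physical tables in this example would be distinguished as follows:

Code Block
kafka://kafka_bootstrap_server/kafka_topic         -- shared by my_kafka_table1 and my_kafka_table2
paimon://paimon_path1/default.my_paimon_table      -- my_paimon_table in warehouse 'paimon_path1'
paimon://paimon_path2/default.my_paimon_table      -- my_paimon_table in warehouse 'paimon_path2'
jdbc:mysql://HOST:PORT/my_database/my_mysql_table  -- mysql_table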

Changes for CatalogModificationListener

TableEnvironmentImpl creates the customized CatalogModificationListener according to the option table.catalog-modification.listeners, and builds the CatalogManager with the listeners. Some other components such as the SQL Gateway can create the CatalogManager with the listeners themselves. Currently all table related operations such as create/alter are in CatalogManager, but database operations are not. We can add database modification operations in CatalogManager and notify the specified listeners for both tables and databases.

Code Block
/* Listeners and related operations in the catalog manager. */
public final class CatalogManager {
    private final List<CatalogModificationListener> listeners;

    /* Create catalog manager with listener list. */
    private CatalogManager(
            String defaultCatalogName,
            Catalog defaultCatalog,
            DataTypeFactory typeFactory,
            ManagedTableListener managedTableListener,
            List<CatalogModificationListener> listeners);

    /* Notify the listeners with given catalog event. */
    private void notify(CatalogModificationEvent event) {
        listeners.forEach(listener -> listener.onEvent(event));
    }

    /* Notify listener for tables. */
    public void createTable/dropTable/alterTable(...) {
        ....;
        notify(Create Different Table Modification Event With Context);
    }

    /* Add database ddls and notify listener for databases. */
    public void createDatabase/dropDatabase/alterDatabase(...) {
        ....;
        notify(Create Different Database Modification Event With Context);
    }

    /* Add listeners in Builder for catalog manager. */
    public static final class Builder {
        Builder listeners(List<CatalogModificationListener> listeners);
    }
}


Listener Execution

Multiple listeners are independent, and the client/JobManager will notify the listeners synchronously. It is highly recommended NOT to perform any blocking operation inside the listeners. If blocking operations are required, users need to perform asynchronous processing in their customized listeners.
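
To follow that recommendation, a listener can hand IO-heavy work to the executor exposed by the factory context. A minimal sketch, where the reporting call itself is hypothetical:

Code Block
/* Sketch: offload potentially blocking reporting to the IO executor from the factory context. */
public class AsyncReportListener implements CatalogModificationListener {
    private final Executor ioExecutor;

    public AsyncReportListener(Executor ioExecutor) {
        this.ioExecutor = ioExecutor;
    }

    @Override
    public void onEvent(CatalogModificationEvent event) {
        // Return immediately; the event is processed on the async pool.
        ioExecutor.execute(() -> reportToExternalSystem(event));
    }

    private void reportToExternalSystem(CatalogModificationEvent event) {
        // Hypothetical IO-heavy call to an external meta system.
    }
}

A factory would pass Context#getIOExecutor() to the constructor when creating such a listener.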


[1] https://datahub.io/

[2] https://atlas.apache.org/#/

[3] FLIP-276: Data Consistency of Streaming and Batch ETL in Flink and Table Store

[4] FLIP-314: Support Customized Job Lineage Listener