
Status

Current state: "Under Discussion"

Discussion thread: ...

Vote thread: ...

JIRA: FLINK-15350

Release: ...
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Currently, users have to manually create schemas in Flink sources/sinks that mirror tables in their relational databases, in use cases like direct JDBC read/write and consuming CDC. Many users have complained about this process, as the manual work is unnecessary and redundant. Users also have to manually keep their Flink jobs in sync with database schema changes and figure out type matches themselves. Any mismatch leads to a failing Flink job at runtime instead of at compile time. All of this is quite unpleasant and results in a broken user experience.

...

We want to provide a JDBC catalog interface for Flink to connect to all kinds of relational databases, enabling Flink SQL to 1) retrieve table schemas automatically without requiring users to write DDL, and 2) check at compile time for any potential schema errors. It will greatly streamline the user experience when using Flink with popular relational databases like Postgres, MySQL, MariaDB, AWS Aurora, etc.

Proposal

We propose to add a `JDBCCatalog` user-facing catalog and a `PostgresJDBCCatalog` implementation. With this foundation in place, implementations for other relational databases can be added easily later.

Design

`JDBCCatalog` will be a read-only catalog that supports the following operations on databases and tables:

...

Users can only read these objects from databases; this FLIP does not add support for manipulating them, as sketched below.
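
To make the read-only contract concrete, here is a minimal sketch, assuming the write methods of Flink's `Catalog` interface are simply rejected; the exact operations covered are elided above, and the exception choice is an assumption, not part of the proposal.

Code Block
languagejava
// A sketch of the read-only behavior: mutating calls are rejected.
// (Signatures abbreviated; the exception choice is an assumption.)
public void createTable(ObjectPath path, CatalogBaseTable table, boolean ignoreIfExists) {
	throw new UnsupportedOperationException("JDBCCatalog is read-only in this FLIP");
}

public void dropTable(ObjectPath path, boolean ignoreIfNotExists) {
	throw new UnsupportedOperationException("JDBCCatalog is read-only in this FLIP");
}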

Configurations

To configure a `JDBCCatalog`, users only need to provide

...

A Flink client can have multiple `JDBCCatalog`s to connect to different database instances.
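
For illustration, a sketch of configuring two catalogs against different instances; all names, credentials, and URLs here are made up, and the constructor follows the signature shown in the Design section below.

Code Block
languagejava
// Hypothetical: two JDBCCatalogs registered against different Postgres instances.
Catalog pg1 = new JDBCCatalog("mypg1", "db1", "user1", "pwd1", "jdbc:postgresql://host1:5432/");
Catalog pg2 = new JDBCCatalog("mypg2", "db2", "user2", "pwd2", "jdbc:postgresql://host2:5432/");

tEnv.registerCatalog("mypg1", pg1);
tEnv.registerCatalog("mypg2", pg2);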

Interfaces, Classes, and Modules

All classes will be in the flink-jdbc module.


Code Block
languagejava
interface JDBCDialect {
	// new APIs
	String getListTablesStatement(String database);

	String getListDatabasesStatement();

	String getTableMetadataStatement();
}

class PostgresDialect implements JDBCDialect {
	// new impls
	String getListTablesStatement(String database) { ... }

	String getListDatabasesStatement() { ... }

	String getTableMetadataStatement() { ... }
}

class DerbyDialect implements JDBCDialect {
	String getListTablesStatement(String database) { /* throw unsupported exception */ }

	String getListDatabasesStatement() { /* throw unsupported exception */ }

	String getTableMetadataStatement() { /* throw unsupported exception */ }
}

...
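
For concreteness, here is a hedged sketch of what the Postgres implementations could return; the exact SQL is an assumption based on Postgres' standard system catalogs, not part of the proposal.

Code Block
languagejava
// A sketch of possible PostgresDialect statements (assumed SQL, for illustration only).
class PostgresDialect implements JDBCDialect {

	public String getListDatabasesStatement() {
		// pg_database is Postgres' built-in catalog of databases
		return "SELECT datname FROM pg_database WHERE datistemplate = false";
	}

	public String getListTablesStatement(String database) {
		// lists schema-qualified tables visible in the connected database
		return "SELECT schemaname || '.' || tablename FROM pg_catalog.pg_tables";
	}

	// getTableMetadataStatement() omitted here; see the JDBC-metadata sketch further below
}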



Code Block
languagejava
public class JDBCCatalog extends AbstractCatalog {
	JDBCDialect dialect;
	Catalog internal;

	public JDBCCatalog(String catalogName, String defaultDatabase, String userName, String pwd, String baseUrl) {
		JDBCDialect d = JDBCDialects.get(baseUrl).get();

		switch(d) {
			case postgres:
				internal = new PostgresJDBCCatalog(...);
				break;
			case derby:
				...
		}
	}

	void open() {
		// verify jdbc connection to fail earlier
	}

	void close();

	Optional<TableFactory> getTableFactory() {
		return Optional.of(new JDBCTableSourceSinkFactory());
	}

	boolean databaseExists(String dbname) {
		// just try to make a connection; true if succeeded, false otherwise
	}

	List<String> listDatabases() {
		return internal.listDatabases();
	}

	CatalogTable getTable(ObjectPath op) {
		return internal.getTable(op);
	}

	List<String> listTables(String db) {
		return internal.listTables(db);
	}
}

public class PostgresJDBCCatalog extends JDBCCatalog {
	JDBCDialect dialect;

	// cannot be instantiated by users
	protected PostgresJDBCCatalog(String catalogName, String defaultDatabase, String userName, String pwd, String baseUrl) {
		dialect = new PostgresDialect();
	}

	boolean tableExists(ObjectPath path) {
		// execute and parse result
	}

	List<String> listDatabases() {
		// execute dialect.getListDatabasesStatement() and parse result
	}

	CatalogTable getTable(ObjectPath op) {
		// execute dialect.getTableMetadataStatement() and parse result
		// map data types and build catalog table
	}

	List<String> listTables(String db) {
		// execute dialect.getListTablesStatement() and parse result
	}
}
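
For `getTable`, one plausible way to derive the schema is plain JDBC metadata rather than dialect-specific SQL. The following hedged sketch uses only standard `java.sql` calls; it assumes the `baseUrl`/`userName`/`pwd` fields from the constructor, and `fromJDBCType` is a hypothetical helper applying the type mapping table below.

Code Block
languagejava
// A hedged sketch of deriving the table schema via standard JDBC metadata.
// Uses java.sql.{Connection, DriverManager, DatabaseMetaData, ResultSet}.
CatalogTable getTable(ObjectPath op) throws Exception {
	try (Connection conn = DriverManager.getConnection(baseUrl + op.getDatabaseName(), userName, pwd)) {
		DatabaseMetaData meta = conn.getMetaData();
		TableSchema.Builder builder = TableSchema.builder();
		try (ResultSet rs = meta.getColumns(null, null, op.getObjectName(), null)) {
			while (rs.next()) {
				// fromJDBCType is a hypothetical helper; see the mapping sketch below
				builder.field(
					rs.getString("COLUMN_NAME"),
					fromJDBCType(rs.getInt("DATA_TYPE"), rs.getInt("COLUMN_SIZE"), rs.getInt("DECIMAL_DIGITS")));
			}
		}
		return new CatalogTableImpl(builder.build(), new HashMap<String, String>(), "");
	}
}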


The base URL passed to the constructor should not include a database name, like "jdbc:postgresql://localhost:5432/" or "jdbc:postgresql://localhost:5432". The reason is that it's normal for users to switch databases within a catalog, while Postgres doesn't allow changing databases within a session; thus we need to establish multiple sessions based on the base URL for the different databases in a Postgres instance.

The full URL used when connecting to a database will include the database name, like "jdbc:postgresql://localhost:5432/db".
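
A small, hypothetical helper showing the URL composition described above:

Code Block
languagejava
// Hypothetical helper: build the per-database connection URL from the base URL.
String fullUrl(String baseUrl, String database) {
	String base = baseUrl.endsWith("/") ? baseUrl : baseUrl + "/";
	return base + database;  // e.g. "jdbc:postgresql://localhost:5432/" + "db"
}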

Using the catalog

Table API


Code Block
// java
tEnv.registerCatalog(name, new JDBCCatalog(...));
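
A slightly fuller, hypothetical usage sketch; the catalog, database, and table names here are made up.

Code Block
languagejava
// Hypothetical end-to-end Table API usage.
tEnv.registerCatalog("mypg", new JDBCCatalog("mypg", "mydb", "user", "pwd", "jdbc:postgresql://localhost:5432/"));
tEnv.useCatalog("mypg");  // make it the current catalog for name resolution
Table result = tEnv.sqlQuery("SELECT * FROM mydb.mytable");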

SQL CLI yaml configs

Code Block
catalogs:
  - name: ...
    type: jdbc
    username: xxx
    password: yyy
    base-url: jdbc:<db_type>://<ip>:<port>
    default-database: ... # optional; DBMS-specific; defaults to the access id (username) in Postgres

Versions

We rely on the driver itself for backward compatibility. Users can also build jars with their own driver versions.

Postgres has an additional namespace, `schema`, besides database. A Postgres instance can have multiple databases; each database can have multiple schemas, with a default one named "public"; each schema can have multiple tables.

...

1) A JDBC connection to Postgres has to be made to a specific database, without a schema name. If no database is specified in the URL, the default database is the username.

2) When querying a table in Postgres, users can use either <schema.table> or just <table>. The schema is optional and defaults to "public".

...

The full path of a Postgres table in Flink would be "<catalog>.<db>.`<schema.table>`" if the schema is specified.
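
For example (all names hypothetical), a query against a table in a non-default schema escapes the schema-qualified name as a single identifier:

Code Block
languagejava
// Catalog "mypg", database "mydb", schema "myschema", table "mytable".
Table t = tEnv.sqlQuery("SELECT * FROM mypg.mydb.`myschema.mytable`");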


Flink Type                       Postgres Type

smallint                         smallint
int                              integer
bigint                           bigint
float                            real
double                           double precision
decimal(p,s)                     numeric(p,s)
boolean                          boolean
timestamp                        timestamp without timezone
timestamp with local timezone    timestamp with timezone (depending on pg server timezone)
date                             date
time                             time without timezone
time with timezone               time with timezone
interval                         interval
string                           text
varchar(n)                       character varying
char                             char, character
binary/varbinary/bytes           bytea
array                            [] (array of all the above primitive types)
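
As an illustration, the mapping above could be implemented against `java.sql.Types` roughly as follows; this is a partial, hypothetical sketch, not an exhaustive or authoritative implementation.

Code Block
languagejava
// Partial, illustrative mapping from JDBC type codes to Flink DataTypes,
// following the table above. Precision/scale come from JDBC column metadata.
DataType fromJDBCType(int sqlType, int precision, int scale) {
	switch (sqlType) {
		case Types.SMALLINT: return DataTypes.SMALLINT();
		case Types.INTEGER:  return DataTypes.INT();
		case Types.BIGINT:   return DataTypes.BIGINT();
		case Types.REAL:     return DataTypes.FLOAT();
		case Types.DOUBLE:   return DataTypes.DOUBLE();
		case Types.NUMERIC:  return DataTypes.DECIMAL(precision, scale);
		case Types.BOOLEAN:  return DataTypes.BOOLEAN();
		case Types.DATE:     return DataTypes.DATE();
		case Types.VARCHAR:  return DataTypes.VARCHAR(precision);
		case Types.BINARY:   return DataTypes.BYTES();
		default:
			throw new UnsupportedOperationException("Unsupported JDBC type: " + sqlType);
	}
}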

...