...
Using the following format:
{
  "data": [
    {
      "id": "102",
      "name": "car battery"
    }
  ],
  "database": "inventory",
  "table": "products",
  "es": 1589374013000,
  "ts": 1589374013680,
  "type": "DELETE"
}
Kafka: Read metadata from Kafka's ConsumerRecord and use it for computation
...
PHYSICAL COLUMNS + FORMAT METADATA COLUMNS + SOURCE METADATA COLUMNS
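The column ordering above can be illustrated with a DDL sketch. Table and column names are hypothetical, and the example assumes the `METADATA [FROM ...] [VIRTUAL]` column syntax proposed in this FLIP:

```sql
CREATE TABLE kafka_table (
  -- physical columns (read from the value payload)
  id STRING,
  name STRING,
  -- format metadata column (prefixed with the format's factory identifier)
  origin_ts TIMESTAMP(3) METADATA FROM 'debezium-json.timestamp' VIRTUAL,
  -- source metadata column (no prefix, declared by the connector)
  record_offset BIGINT METADATA FROM 'offset' VIRTUAL
) WITH (
  'connector' = 'kafka',
  'topic' = 'products',
  'format' = 'debezium-json'
)
```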
Metadata for existing connectors and formats
For completeness, we list the initial set of metadata for existing Flink connectors and formats. The naming follows FLIP-122, which means:
- connector metadata has no prefix
- formats will have a prefix using the factory identifier
- key/value formats are prefixed with `key.` and `value.`
Note: We will use relaxed data type validation (e.g. `LogicalTypeCasts#supportsAvoidingCast`) to make timestamp handling easier, so both TIMESTAMP and TIMESTAMP WITH LOCAL TIME ZONE will work in DDL.
Kafka
Key | Data type | Read/Write | Notes |
---|---|---|---|
topic | STRING | r | We don't allow writing to different topics for now. Maybe we will allow that in the future via a property. |
partition | INT | r | We don't allow writing to different partitions for now. Maybe we will allow that in the future via a property. |
headers | MAP<STRING, BYTES> | r/w | |
leader-epoch | INT | r | |
offset | BIGINT | r | |
timestamp | TIMESTAMP(3) [WITH LOCAL TIME ZONE] | r/w | |
timestamp-type | STRING | r | ['NoTimestampType', 'CreateTime', 'LogAppendTime'] |
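The r/w column determines whether a metadata column may be persisted on `INSERT INTO`. A hedged sketch of how the Kafka keys above could be declared (table and column names are hypothetical; the example assumes the `METADATA [FROM ...] [VIRTUAL]` syntax proposed in this FLIP):

```sql
CREATE TABLE kafka_orders (
  id BIGINT,
  name STRING,
  -- r/w metadata: readable and written back on INSERT INTO
  event_time TIMESTAMP(3) METADATA FROM 'timestamp',
  headers MAP<STRING, BYTES> METADATA,
  -- r (read-only) metadata must be declared VIRTUAL
  part INT METADATA FROM 'partition' VIRTUAL,
  offs BIGINT METADATA FROM 'offset' VIRTUAL
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'format' = 'json'
)
```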
Debezium
Key | Data type | Read/Write | Notes |
---|---|---|---|
debezium-json.schema | STRING | r | Pure JSON string, can be handled with Flink's built-in JSON SQL functions |
debezium-json.timestamp | TIMESTAMP(3) [WITH LOCAL TIME ZONE] | r | |
debezium-json.source | MAP<STRING, STRING> | r | It seems that this is always a property list. So we can make it available as a map for easier access. |
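Since all Debezium metadata keys are read-only, they would all be declared `VIRTUAL`. A sketch under the same syntax assumption (table name, column names, and the `'db'` map key are illustrative):

```sql
CREATE TABLE products (
  id STRING,
  name STRING,
  schema_info STRING METADATA FROM 'debezium-json.schema' VIRTUAL,
  source_props MAP<STRING, STRING> METADATA FROM 'debezium-json.source' VIRTUAL,
  origin_ts TIMESTAMP(3) METADATA FROM 'debezium-json.timestamp' VIRTUAL
) WITH (
  'connector' = 'kafka',
  'topic' = 'products',
  'format' = 'debezium-json'
)
```

Exposing `source` as a map lets queries pull individual properties with the map-access syntax, e.g. `SELECT source_props['db'], origin_ts FROM products`.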
Reading and writing from key and value
...