Status
Google Doc handy
Motivation
Apache Thrift (along with Protocol Buffers) is widely adopted as a de facto standard for high-throughput network traffic protocols. Historically, companies like Pinterest have used Thrift to encode strongly typed Kafka messages and persist them to object storage as sequence files in the data warehouse.
The major benefits of this approach were:
- Versioned thrift schema files served as a schema registry where producers and consumers across languages could encode/decode with the latest schema.
- It minimized the overhead of maintaining translation ETL jobs which flatten schemas or add accessory fields during ingestion.
- It lowered the storage footprint.
Beyond missing out on the optimizations that come with storage format conversion, running jobs against an unsupported thrift format also poses maintenance and upgrade challenges for Flink jobs, given the
- lack of backward-compatible thrift encoding/decoding support in Flink
- lack of Table schema DDL inference support
Proposed Changes
Summary
Overall, the proposed changes touch the following areas:
- adding flink-thrift to the flink-formats subproject, covering both DataStream API and Table API encoding/decoding as well as table DDL
- implementing SupportsProjectionPushDown in KafkaDynamicSource and reconstructing valueDecodingFormat with partial deserialization
- supporting SERDE and SERDEPROPERTIES in the Hive connector (WIP)
- supporting catalog table / view schema inference and compatibility guarantees (WIP)
In a private forked repo, we made a few changes to support large-scale Thrift-formatted Flink DataStream and Flink SQL jobs in production. Diagram 1 below shows the changes needed to support the thrift format in Flink DataStream/SQL jobs. While we would like to discuss each area of work in a good amount of detail, we are also aware that changes such as DDL inference support or schema upgrades will require more discussion over a longer time frame.
Diagram 1. Project dependency graph for supporting the thrift format in Flink. Currently, the open source Flink formats do not support thrift serialization and deserialization.
Full Serialization/Deserialization
Naively, we started by implementing ThriftSerializationSchema and ThriftDeserializationSchema to encode/decode thrift messages. Users can pass them to connectors in DataStream jobs.
public class ThriftSerializationSchema<T extends TBase> implements SerializationSchema<T> {
    private static final Logger LOG = LoggerFactory.getLogger(ThriftSerializationSchema.class);

    public ThriftSerializationSchema(Class<T> recordClazz) {}

    @Override
    public byte[] serialize(T element) {
        byte[] message = null;
        try {
            // TSerializer defaults to the thrift binary protocol
            message = new TSerializer().serialize(element);
        } catch (TException e) {
            LOG.error("Failed to serialize thrift record", e);
        }
        return message;
    }
}
Over time, we observed occasional corrupted thrift binary payloads, both from Kafka topics and from restored state, causing jobs to restart repeatedly until engineers stepped in and redeployed from a newer Kafka timestamp. To minimize production job disruption, we introduced a corruption-tolerant way to encode/decode thrift payloads (a sketch of such a tolerant serializer follows the registration snippet below).
PinterestTBaseSerializer (credit to Yu Yang)
public class ThriftDeserializationSchema<T extends TBase> implements DeserializationSchema<T> {
    private Class<T> thriftClazz;
    private ThriftCodeGenerator codeGenerator;

    public ThriftDeserializationSchema(Class<T> recordClazz, ThriftCodeGenerator codeGenerator) {
        this.thriftClazz = recordClazz;
        this.codeGenerator = codeGenerator;
    }

    @Override
    public T deserialize(byte[] message) throws IOException {
        try {
            T instance = thriftClazz.newInstance();
            new TDeserializer().deserialize(instance, message);
            return instance;
        } catch (InstantiationException | IllegalAccessException | TException e) {
            throw new IOException("Failed to deserialize thrift record", e);
        }
    }

    @Override
    public boolean isEndOfStream(T nextElement) {
        return false;
    }

    @Override
    public TypeInformation<T> getProducedType() {
        return new ThriftTypeInfo<>(thriftClazz, codeGenerator);
    }
}
env.addDefaultKryoSerializer(classOf[Event], classOf[PinterestTBaseSerializer[Event]])
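For reference, a corruption-tolerant TBase Kryo serializer could look roughly like the following sketch. This is illustrative only and not the actual PinterestTBaseSerializer: the thrift payload is length-prefixed on write, and a record that fails to decode on read is logged and returned as null instead of failing the job.
public class TolerantTBaseSerializer<T extends TBase> extends Serializer<T> {
    private static final Logger LOG = LoggerFactory.getLogger(TolerantTBaseSerializer.class);

    @Override
    public void write(Kryo kryo, Output output, T record) {
        try {
            byte[] bytes = new TSerializer().serialize(record);
            // length-prefix the thrift payload so read() knows how many bytes to consume
            output.writeInt(bytes.length);
            output.writeBytes(bytes);
        } catch (TException e) {
            throw new RuntimeException("Failed to serialize thrift record", e);
        }
    }

    @Override
    public T read(Kryo kryo, Input input, Class<T> type) {
        try {
            byte[] bytes = input.readBytes(input.readInt());
            T record = type.newInstance();
            new TDeserializer().deserialize(record, bytes);
            return record;
        } catch (Exception e) {
            // tolerate a corrupted record in state instead of restarting the whole job
            LOG.warn("Skipping corrupted thrift record in state", e);
            return null;
        }
    }
}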
Later, Teja Thotapalli and Lu Niu found that the corruption was actually caused by the default checkpoint local directory pointing to EBS instead of the NVMe drive in AWS. Jun Qin from Ververica has a great post detailing the root-causing steps: The Impact of Disks on RocksDB State Backend in Flink: A Case Study.
Hive MetaStore thrift dependencies shading
We noticed that HMS introduced a specific libthrift version that is not compatible with our internal Flink version, which led to connectivity issues in our Flink SQL jobs. To avoid other users hitting the same dependency issue, we propose shading libthrift and fb303 in the Flink Hive connector and moving those two packages to the project root level Maven config; users can then place their own customized libthrift jar as well as their thrift schema jar into the /lib folder during release.
TBase & Row Type Mapping
In order to support Flink SQL workloads reading from Kafka topics, we propose the following data type mapping from the Thrift type system to the Flink Row type system. In favor of debugging and user readability, we map enum to the string type. (A conversion sketch follows the TType listing below.)
Thrift type | Flink DataType
bool | DataTypes.BOOLEAN()
byte | DataTypes.TINYINT()
i16 | DataTypes.SMALLINT()
i32 | DataTypes.INT()
i64 | DataTypes.BIGINT()
double | DataTypes.DOUBLE()
string | DataTypes.STRING()
enum | DataTypes.STRING()
list | DataTypes.ARRAY()
set | DataTypes.ARRAY()
map | DataTypes.MAP()
struct | DataTypes.ROW()
public final class TType {
public static final byte STOP = 0;
public static final byte VOID = 1;
public static final byte BOOL = 2;
public static final byte BYTE = 3;
public static final byte DOUBLE = 4;
public static final byte I16 = 6;
public static final byte I32 = 8;
public static final byte I64 = 10;
public static final byte STRING = 11;
public static final byte STRUCT = 12;
public static final byte MAP = 13;
public static final byte SET = 14;
public static final byte LIST = 15;
public static final byte ENUM = 16;
public TType() {
}
}
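For illustration, a minimal conversion sketch following the mapping table above could be built on thrift's field metadata (org.apache.thrift.meta_data). The method name toFlinkType and the nested-struct helper toRowType are hypothetical names, not an existing API:
public static DataType toFlinkType(FieldValueMetaData meta) {
    switch (meta.type) {
        case TType.BOOL:   return DataTypes.BOOLEAN();
        case TType.BYTE:   return DataTypes.TINYINT();
        case TType.I16:    return DataTypes.SMALLINT();
        case TType.I32:    return DataTypes.INT();
        case TType.I64:    return DataTypes.BIGINT();
        case TType.DOUBLE: return DataTypes.DOUBLE();
        case TType.STRING: return DataTypes.STRING();
        // enums map to string in favor of debugging and readability
        case TType.ENUM:   return DataTypes.STRING();
        case TType.LIST:   return DataTypes.ARRAY(toFlinkType(((ListMetaData) meta).elemMetaData));
        case TType.SET:    return DataTypes.ARRAY(toFlinkType(((SetMetaData) meta).elemMetaData));
        case TType.MAP:
            MapMetaData map = (MapMetaData) meta;
            return DataTypes.MAP(toFlinkType(map.keyMetaData), toFlinkType(map.valueMetaData));
        case TType.STRUCT:
            // recurse into the nested struct's own field metadata (hypothetical helper)
            return toRowType(((StructMetaData) meta).structClass);
        default:
            throw new UnsupportedOperationException("Unsupported thrift type: " + meta.type);
    }
}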
TBase Field to Row Field Index Matching
To handle a TBase payload and convert it to a Row type, Flink needs a deterministic mapping from each field in the thrift payload to a specific row index. We propose using ASC-sorted thrift_id indexes (1, 4, 9, 11 in the example below) when mapping from TBase to Row and vice versa. If a field is UNSET, we use NULL as a placeholder in order to avoid UNSET fields causing mismatches.
struct Xtruct3
{
1: string string_thing,
4: i32 changed,
9: i32 i32_thing,
11: i64 i64_thing
}
[2] https://raw.githubusercontent.com/apache/thrift/master/test/ThriftTest.thrift
An example of index matching can be found below:
Xtruct3 (string_thing = "boo", changed = 0, i32_thing = unset, i64_thing = -1) | Row <"boo", 0, null, -1>
Note: for runtime performance, we propose caching the sorted field information per thrift type.
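A minimal sketch of such a cache is shown below; FIELD_CACHE and sortedFields are hypothetical names, while FieldMetaData.getStructMetaDataMap and TFieldIdEnum.getThriftFieldId are existing libthrift APIs:
// Hypothetical per-class cache of thrift field metadata sorted by ascending thrift id;
// the sorted position of a field is its row index.
private static final Map<Class<?>, List<Map.Entry<? extends TFieldIdEnum, FieldMetaData>>> FIELD_CACHE =
        new ConcurrentHashMap<>();

public static List<Map.Entry<? extends TFieldIdEnum, FieldMetaData>> sortedFields(Class<? extends TBase> clazz) {
    return FIELD_CACHE.computeIfAbsent(clazz, c -> {
        List<Map.Entry<? extends TFieldIdEnum, FieldMetaData>> entries =
                new ArrayList<>(FieldMetaData.getStructMetaDataMap(clazz).entrySet());
        // ASC thrift id order gives a deterministic field-to-row-index mapping
        entries.sort(Comparator.comparingInt(e -> e.getKey().getThriftFieldId()));
        return entries;
    });
}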
Row Field to TBase Field index matching
We propose the reverse of the approach above for row-to-thrift payload conversion. If the row arity is smaller than the number of TBase schema fields, we propose adding null as a placeholder.
For example, if a user updates the thrift schema with additional fields and Flink SQL jobs are deployed with the new schema but restore state written with the old schema (without new_thing), the placeholders prevent the job from falling into a restart loop or skipping messages.
struct Xtruct3
{
1: string string_thing,
4: i32 changed,
9: i32 i32_thing,
11: i64 i64_thing,
12: string new_thing
}
Xtruct3 (string_thing = "boo", changed = 0, i32_thing = unset, i64_thing = -1) | Row <"boo", 0, null, -1, null>
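A minimal sketch of this reverse conversion, reusing the hypothetical sortedFields cache above (toThrift is likewise a hypothetical helper; conversion of nested values such as Row-to-struct or string-to-enum is omitted for brevity):
public static TBase toThrift(Row row, Class<? extends TBase> clazz) throws Exception {
    TBase instance = clazz.newInstance();
    List<Map.Entry<? extends TFieldIdEnum, FieldMetaData>> entries = sortedFields(clazz);
    for (int i = 0; i < entries.size(); i++) {
        // a row restored from an older schema may have fewer fields than the current thrift schema
        Object value = i < row.getArity() ? row.getField(i) : null;
        if (value != null) {
            instance.setFieldValue(entries.get(i).getKey(), value);
        }
        // a null or missing value simply leaves the thrift field unset
    }
    return instance;
}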
Handling Nested Field in Row
It’s common for thrift schemas to be highly nested and to keep growing as more product features are added. In order to properly decode/encode thrift payloads, we propose a recursive converter from nested thrift structs to Flink Row fields. Based on each sub-struct's schema type mapping along with its field index mapping, this recursive converter can handle very sophisticated nested thrift schemas (more than 7 levels deep, 24k characters of schema string).
case TType.STRUCT:
return getRow((TBase) val);
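// getRow recursively converts a nested thrift struct into a Flink Row, one field per sorted thrift id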
public Row getRow(TBase tBase) {
List<Map.Entry<? extends TFieldIdEnum, FieldMetaData>> entries = ….
// allocate row by thrift field size
Row result = new Row(entries.size());
int i = 0;
String fieldAnnotation = tBase.getClass().toString();
for (Map.Entry<? extends TFieldIdEnum, FieldMetaData> entry : entries) {
if (tBase.isSet(entry.getKey())) {
Object val = tBase.getFieldValue(entry.getKey());
result.setField(i, getPrimitiveValue(entry.getValue().valueMetaData, val,
fieldAnnotation + entry.getKey().getFieldName()));
} else {
result.setField(i, getDefaultValue(entry.getValue().valueMetaData,
fieldAnnotation + entry.getKey().getFieldName()));
}
i++;
}
return result;
}
Hive SequenceFile Table [WIP]
We propose implementing a thrift file system format factory by extending SequenceFileReader over the thrift-encoded payload. Our read path implementation follows the batch version of the Merced system (Scalable and reliable data ingestion at Pinterest).
We propose supporting SERDE and SERDEPROPERTIES in the Hive connector, so users can customize the serde, e.g.:
https://nightlies.apache.org/flink/flink-docs-release-1.11/dev/table/hive/hive_dialect.html
CREATE EXTERNAL TABLE xxx PARTITIONED BY (dt string, hr string)
ROW FORMAT SERDE 'xxx.SafeStringEnumThriftSerDe'
WITH SERDEPROPERTIES(
"thrift_struct" = 'xxx.schemas.event.Event'
) STORED AS SEQUENCEFILE LOCATION '...';
Partial Thrift Deserialization
Along with projection push-down in the Kafka source and FileSource, partial thrift deserialization allows Flink to skip thrift fields that are not used in the query statement. We propose an additional change that skips constructing a TBase instance altogether and instead uses the TBase-field-to-row-field index mapping to write directly into each row field in one pass.
We propose implementing SupportsProjectionPushDown in KafkaDynamicSource so that the ThriftPartialDeserializationSchema constructor limits the list of fields a Kafka source instance needs to load and deserialize into row fields.
Partial deserializer (credit to Bhalchandra Pandit)
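The following is a rough sketch, not the actual partial deserializer, of how unprojected fields can be skipped at the thrift protocol level; projectedFieldIds, rowIndexForFieldId, and readValue are hypothetical helpers:
static void readProjectedFields(byte[] message, Set<Short> projectedFieldIds, Row reuse) throws TException {
    TProtocol protocol = new TBinaryProtocol(new TIOStreamTransport(new ByteArrayInputStream(message)));
    protocol.readStructBegin();
    while (true) {
        TField field = protocol.readFieldBegin();
        if (field.type == TType.STOP) {
            break;
        }
        if (projectedFieldIds.contains(field.id)) {
            // materialize only projected fields, writing them straight into the row in one pass
            reuse.setField(rowIndexForFieldId(field.id), readValue(protocol, field));
        } else {
            // skip unprojected fields without constructing a TBase instance
            TProtocolUtil.skip(protocol, field.type);
        }
        protocol.readFieldEnd();
    }
    protocol.readStructEnd();
}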
Table/View Inference DDL [WIP]
Thrift-format table (Kafka or FileSystem) schemas are inferred from the thrift schema file definition. As users update thrift schema files and build new schema jars, those table/view definitions should evolve on the fly. There are several options to consider.
Option 1: keep the full schema mapping in the Hive metastore as Flink properties
This is a low-change approach where we don't need to run schema inference, but instead keep a helper function or manual updater to alter tables with thrift class property definitions.
The disadvantage of this approach is that the user needs to run a script periodically and manage schema upgrades by altering the thrift-format table DDL from time to time. The following example shows what an option 1 DDL looks like:
CREATE TABLE event_base(
`_time` BIGINT,
`eventType` STRING,
`userId` BIGINT,
`objectId` BIGINT,
rowTime AS CAST(from_unixtime(_time/1000000000) as timestamp),
WATERMARK FOR rowTime AS rowTime - INTERVAL '60' SECOND
) WITH (
'connector.type' = 'kafka',
'connector.version' = 'universal',
'format.type' = 'thrift',
'format.thrift-class'='...schemas.event.Event',
'update-mode' = 'append'
);
Option 2: do not store thrift fields in the Hive metastore; only keep computed fields that cannot be inferred from the thrift schema
This approach splits the schema into a dynamically inferred section (e.g. thrift fields) and a computed section (e.g. watermarks). The pro of this approach is that the user doesn't have to write or update thrift schema fields during a schema upgrade; every job gets the same view of the table schema based on how the thrift schema jar is loaded.
The disadvantage of this approach is that it splits the table schema in two, and we still need to discuss how to support schema inference at the catalog table level. (TBD, might be worth another FLIP.)
The following example shows what an option 2 DDL looks like; the HiveCatalog database only stores fields that are not available in the thrift schema.
CREATE TABLE event_base(
rowTime AS CAST(from_unixtime(_time/1000000000) as timestamp),
WATERMARK FOR rowTime AS rowTime - INTERVAL '60' SECOND
) WITH (
'connector.type' = 'kafka',
'connector.version' = 'universal',
'format.type' = 'thrift',
'format.thrift-class'='....schemas.event.Event',
'update-mode' = 'append'
);
Public Interfaces
DataStream API
We propose including flink-thrift in the flink-formats project, with the following classes to support DataStream API encoding/decoding and storing state in the thrift format:
flink-formats/flink-thrift/
ThriftDeserializationSchema
ThriftSerializationSchema
ThriftPartialDeserializationSchema
PinterestTBaseSerializer
In the flink-thrift pom.xml file, include the thrift version defined in the flink-parent <properties>:
flink-thrift pom.xml
<dependency>
<groupId>org.apache.thrift</groupId>
<artifactId>libthrift</artifactId>
<version>${thrift.version}</version>
</dependency>
flink-parent pom.xml
<thrift.version>0.5.0-p6</thrift.version>
StreamExecutionEnvironment env = …
// user can opt-in to skip corrupted state
env.addDefaultKryoSerializer(Event.class, PinterestTBaseSerializer.class);
FlinkKafkaConsumer<Event> kafkaConsumer =
new FlinkKafkaConsumer<Event>(
topicName,
new ThriftDeserializationSchema<>(Event.class),
Configuration.configurationToProperties(kafkaConsumerConfig));
// user can opt-in to deserialize list of fields
FlinkKafkaConsumer<Event> kafkaConsumer =
new FlinkKafkaConsumer<Event>(
topicName,
new ThriftPartialDeserializationSchema<>(Event.class, <list of fields>),
Configuration.configurationToProperties(kafkaConsumerConfig));
FlinkKafkaProducer<Event> sink =
new FlinkKafkaProducer(topicName,
new ThriftSerializationSchema<Event>(Event.class),
Configuration.configurationToProperties(kafkaProducerConfig),
FlinkKafkaProducer.Semantic.AT_LEAST_ONCE);
Metrics
We propose to introduce the following Gauge metrics (a registration sketch follows the list):
- numFailedDeserializeRecord: the number of messages that failed to deserialize into the specified thrift class
- numFailedSerializeRecord: the number of records that failed to serialize into a thrift payload
- numFailedKryoThriftRecord: the number of state records that failed to parse into the specified thrift class when the user explicitly uses PinterestTBaseSerializer
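As a sketch of how such a metric could be exposed, assuming the failure count is tracked in an AtomicLong inside ThriftDeserializationSchema (open/InitializationContext is the standard Flink DeserializationSchema hook):
private transient AtomicLong numFailedDeserializeRecord;

@Override
public void open(DeserializationSchema.InitializationContext context) throws Exception {
    numFailedDeserializeRecord = new AtomicLong();
    // expose the failure counter as a Gauge on the source's metric group
    context.getMetricGroup().gauge(
            "numFailedDeserializeRecord", (Gauge<Long>) () -> numFailedDeserializeRecord.get());
}
// in deserialize(byte[]): increment numFailedDeserializeRecord whenever a payload fails to decode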
Table API
We propose adding ThriftRowFormatFactory, ThriftToRowConverter and RowToThriftConverter:
flink-formats/flink-thrift/
ThriftRowFormatFactory
ThriftToRowConverter
RowToThriftConverter
@Internal
public class ThriftValidator extends FormatDescriptorValidator {
public static final String FORMAT_TYPE_VALUE = "thrift";
public static final String FORMAT_THRIFT_CLASS = "format.thrift-struct";
}
Table DDL
We propose adding format.type = thrift and format.thrift-struct in the table DDL:
CREATE TABLE event_base_kafka_prod(
`_time` BIGINT,
`eventType` STRING,
`userId` BIGINT,
...
rowTime AS CAST(from_unixtime(_time/1000000000) as timestamp),
WATERMARK FOR rowTime AS rowTime - INTERVAL '60' SECOND
) WITH (
'connector.type' = 'kafka',
'connector.version' = 'universal',
'format.type' = 'thrift',
'format.thrift-struct'='xxx.schemas.event.Event',
...
)
Metrics
We propose to introduce the following Gauge metrics:
- numFailedToRowDeserializeRecord: the number of thrift messages that failed to deserialize into a Row
- numFailedToThriftSerializeRow: the number of rows that failed to serialize into a thrift payload
Schema Compatibility
We propose that the DataStream API leverage the thrift protocol, which already handles the added-field scenario with full compatibility:
- data encoded with the old thrift schema, decoded with the new thrift schema
- data encoded with the new thrift schema, decoded with the old thrift schema
For the Table API implementation, we propose changes to match the DataStream API's full compatibility behavior. However, when a user's schema introduces breaking changes, we propose surfacing metrics for the failed messages.
KafkaDynamicSource
We propose that the Kafka table source implement SupportsProjectionPushDown:
public class KafkaDynamicSource
implements ScanTableSource, SupportsReadingMetadata, SupportsWatermarkPushDown,
SupportsProjectionPushDown {
In the implementation, we only reconstruct a DeserializationSchema that takes the list of projected fields:
@Override
public void applyProjection(int[][] projectedFields, DataType producedDataType) {
// check valueDecodingFormat is ThriftDeserializationSchema
// if so reconstruct DeserializationSchema that does partial deserialize
}
protected final DecodingFormat<DeserializationSchema<RowData>> valueDecodingFormat;
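A rough sketch of applyProjection under this proposal; projectedThriftFields is a hypothetical new member used when the runtime decoder is created, physicalDataType and producedDataType are assumed members of the source, and nested-field projection is intentionally left to the format:
@Override
public void applyProjection(int[][] projectedFields, DataType producedDataType) {
    // field names of the physical value schema, in row order
    List<String> fieldNames = ((RowType) physicalDataType.getLogicalType()).getFieldNames();
    List<String> projected = new ArrayList<>();
    for (int[] fieldPath : projectedFields) {
        // this sketch only considers top-level fields
        projected.add(fieldNames.get(fieldPath[0]));
    }
    // remembered so that the value decoding format can build a partial ThriftDeserializationSchema
    // limited to these fields when creating the runtime decoder
    this.projectedThriftFields = projected;
    this.producedDataType = producedDataType;
}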
Table / View Inference DDL
TBD
Compatibility, Deprecation, and Migration Plan
Thrift support is a less intrusive change; users opt in with a set of configurations. Table/View inference DDL might apply to other catalog table creation and fetching as well.
Test Plan
We plan to follow best practices by adding unit tests and integration tests.
Rejected Alternatives
If there are alternative ways of accomplishing the same thing, what were they? The purpose of this section is to motivate why the design is the way it is and not some other way.