...

Most JSON data that utilizes precise decimal data represents it as a decimal number. Connect, on the other hand, only supports a binary BASE64 string encoding (see example below). This KIP intends to support both representations so that Connect can better integrate with legacy systems (and make the internal topic data easier to read/debug):

...

  • serialize the decimal field "foo" with value "10.2345" with the BASE64 setting: {"foo": "AY/J"}
  • serialize the decimal field "foo" with value "10.2345" with the NUMERIC setting: {"foo": 10.2345}
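
For illustration, here is a minimal sketch showing how both representations above can be derived, assuming the standard Connect Decimal convention of base64-encoding the big-endian bytes of the unscaled value:

```java
import java.math.BigDecimal;
import java.util.Base64;

public class DecimalFormats {
    public static void main(String[] args) {
        BigDecimal value = new BigDecimal("10.2345"); // scale 4, unscaled value 102345

        // BASE64 setting: encode the big-endian bytes of the unscaled value.
        String base64 = Base64.getEncoder()
                .encodeToString(value.unscaledValue().toByteArray());
        System.out.println("{\"foo\": \"" + base64 + "\"}"); // {"foo": "AY/J"}

        // NUMERIC setting: emit the decimal as a plain JSON number.
        System.out.println("{\"foo\": " + value.toPlainString() + "}"); // {"foo": 10.2345}
    }
}
```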

Public Interfaces

A new configuration, json.decimal.serialization.format, will be introduced to the JsonConverter configuration to control whether decimals are produced in numeric or binary format. The valid values will be "BASE64" (the default, to maintain compatibility) and "NUMERIC".
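
A sketch of how the proposed property might be set when configuring the converter programmatically (the property name is as proposed above; `schemas.enable` is an existing JsonConverter setting):

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.connect.json.JsonConverter;

public class ConverterSetup {
    public static void main(String[] args) {
        JsonConverter converter = new JsonConverter();
        Map<String, Object> config = new HashMap<>();
        config.put("schemas.enable", true);
        // Proposed by this KIP; omitting it keeps the compatible BASE64 default.
        config.put("json.decimal.serialization.format", "NUMERIC");
        converter.configure(config, false); // false = value converter, not key
    }
}
```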

...

The changes will be scoped nearly entirely to the JsonConverter, which will be able to deserialize a NumericNode when the schema is defined as a decimal. Namely, the converter will no longer throw an exception if the incoming data is a numeric node but the schema specifies a decimal (logical type). If json.decimal.serialization.format is set to BASE64, the serialization path will remain the same. If it is set to NUMERIC, the JSON value being produced will be a number instead of a text value.
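
A minimal sketch of the lenient deserialization branch described above, using a hypothetical helper (the method name and scale handling are illustrative, not the actual converter code):

```java
import com.fasterxml.jackson.databind.JsonNode;
import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Base64;

public class DecimalDecoding {
    // Hypothetical helper: accept either representation when the schema
    // declares a Decimal logical type with the given scale.
    static BigDecimal toLogicalDecimal(JsonNode value, int scale) {
        if (value.isNumber()) {
            // NUMERIC format: the node already carries the decimal value.
            return value.decimalValue();
        }
        // BASE64 format: decode the big-endian unscaled bytes, then apply the scale.
        byte[] unscaled = Base64.getDecoder().decode(value.textValue());
        return new BigDecimal(new BigInteger(unscaled), scale);
    }
}
```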

Furthermore, the JsonDeserializer will now default floating point deserialization to BigDecimal to avoid losing precision. This may impact performance when deserializing doubles.
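
One way to get this behavior with Jackson (a sketch; the actual JsonDeserializer change may differ) is to enable `USE_BIG_DECIMAL_FOR_FLOATS` on the underlying ObjectMapper:

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class PreciseFloats {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper()
                .enable(DeserializationFeature.USE_BIG_DECIMAL_FOR_FLOATS);
        // The value parses as a DecimalNode (BigDecimal) instead of a
        // DoubleNode, so no precision is lost before schema conversion.
        JsonNode node = mapper.readTree("{\"foo\": 10.2345}");
        System.out.println(node.get("foo").decimalValue()); // 10.2345
    }
}
```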

Compatibility, Deprecation, and Migration Plan

...

  • Legacy Producer, Upgraded Consumer: this scenario is okay, as the upgraded consumer will be able to read the implicit BASE64 format
  • Upgraded Producer with NUMERIC serialization, Upgraded Consumer: this scenario is okay, as the upgraded consumer will be able to read the numeric serialization
  • Upgraded Producer with BASE64 serialization, Legacy Consumer: this scenario is okay, as the upgraded producer will continue to produce BASE64 data exactly as today, which the legacy consumer can read
  • Upgraded Producer with NUMERIC serialization, Legacy Consumer: this is the only scenario that is not okay and will cause issues, since legacy consumers cannot consume NUMERIC data.

...

There is also a concern about the data format changing in the middle of the stream:

  • Legacy → Upgraded BASE64: this will not cause any change in the data in the topic
  • Legacy → Upgraded NUMERIC: this will cause all new values to be serialized using the NUMERIC format and will cause issues unless consumers are upgraded first
  • Upgraded BASE64 → Upgraded NUMERIC: this is identical to the above
  • Upgraded NUMERIC → Upgraded BASE64: this will not cause a new issue since, if the numeric format was already working, all consumers must already be able to read the BASE64 format as well
  • Upgraded NUMERIC → (Rollback) Legacy: this is identical to the above

...

  • The original KIP suggested supporting an additional representation: base10-encoded text (e.g. `{"asText":"10.2345"}`). While it is possible to automatically differentiate NUMERIC from the two text representations, it is not always possible to differentiate base10 text from BASE64. Take, for example, the string "12" - this is both a valid decimal (12) and a valid base64 encoding of a decimal (-4.1, assuming scale 1); see the sketch after this list. Because it is impossible to disambiguate the two without an additional config, migrating from one to the other is nearly impossible: it would require that all consumers stop consuming and all producers stop producing, and that the config be atomically updated on all of them after deploying the new code, or that the full retention period pass first - neither option is viable. The suggestion in the KIP is strictly an improvement over the existing behavior, even if it doesn't support all combinations.
  • Encoding the serialization format in the schema for the Decimal LogicalType. This is appealing because the deserializer would be able to decode based on the schema, and one converter could handle different topics encoded differently as long as the schema is in line. The problem is that this concern is specific to JSON, and changing the LogicalType is not the right place to address it.
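
A small sketch of the ambiguity described in the first item above, assuming a schema scale of 1 (note that `java.util.Base64` accepts the unpadded two-character input):

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Base64;

public class AmbiguousDecimal {
    public static void main(String[] args) {
        String payload = "12";
        // Read as base10 text: the decimal twelve.
        BigDecimal asText = new BigDecimal(payload);
        // Read as base64 binary: one byte 0xD7 = -41 unscaled, i.e. -4.1 at scale 1.
        byte[] unscaled = Base64.getDecoder().decode(payload);
        BigDecimal asBinary = new BigDecimal(new BigInteger(unscaled), 1);
        System.out.println(asText + " vs " + asBinary); // 12 vs -4.1
    }
}
```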