...
However, Kafka doesn’t provide any mechanism to delete data once it has been consumed by downstream jobs. It provides only time-based and size-based log retention policies, both of which are agnostic to consumer behavior. If we set a small time-based retention for intermediate data, the data may be deleted before downstream jobs have consumed it; if we set a large time-based retention, the data occupies a large amount of disk space for a long time. Neither option serves Kafka users well. To address this problem, we propose to add a new admin API that users can call to purge data that is no longer needed.
Note that this KIP is related to and supersedes KIP-47.
Public Interfaces
1) Java API
...
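The exact definition is elided above; as a rough, illustrative sketch of the shape such an API could take (the interface name and result type below are assumptions for illustration, not the KIP's final signature), the call maps each partition to a purge offset and returns a per-partition result:

```java
// Illustrative sketch only; the actual signature is defined elsewhere in
// this KIP. PurgeDataResult is assumed here to carry the partition's
// resulting low_watermark, or an error if the purge could not be performed.
import java.util.Map;
import java.util.concurrent.Future;
import org.apache.kafka.common.TopicPartition;

interface AdminPurgeSketch {
    /**
     * Requests deletion of all messages in each given partition whose offset
     * is smaller than the supplied offset; the future completes once the
     * purge has taken effect on the brokers.
     */
    Future<Map<TopicPartition, PurgeDataResult>> purgeDataBefore(
            Map<TopicPartition, Long> offsetForPartition);
}

/** Assumed result type: the partition's new low_watermark plus any error. */
class PurgeDataResult {
    final long lowWatermark;
    final Exception error;  // null on success

    PurgeDataResult(long lowWatermark, Exception error) {
        this.lowWatermark = lowWatermark;
        this.error = error;
    }
}
```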
- Using committed offsets, instead of an extra API, to trigger the data purge operation: purge data whose offset is smaller than the committed offset of every consumer group that needs to consume from this partition.
The advantage of this approach is that applications would not need to coordinate to determine when purgeDataBefore() can be called, which can be hard to do if there are multiple consumer groups interested in consuming this topic. The disadvantage is that it is less flexible than the purgeDataBefore() API, because it re-uses committed offsets to trigger the purge operation. It also adds complexity to the broker implementation and would be more complex to implement than the purgeDataBefore() API. An alternative is to implement this logic in an external service that calls the purgeDataBefore() API based on the committed offsets of the consumer groups, as sketched below.

The purge request completes only after the leader has waited for the low_watermark of all followers to increase above the cutoff offset.
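To make the external-service alternative concrete, here is a minimal sketch, assuming hypothetical group names and one short-lived consumer per group (KafkaConsumer.committed() only reports offsets for the consumer's own group); a real service would cache the consumers and run this periodically:

```java
// Sketch of the external-service alternative described above: find the
// smallest committed offset across all consumer groups that read a
// partition, then purge everything below it via purgeDataBefore().
// Group names, topic, and scheduling are illustrative assumptions.
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class PurgeService {
    /** Smallest committed offset for tp across the given groups. */
    static long minCommittedOffset(TopicPartition tp, List<String> groups,
                                   String bootstrapServers) {
        long min = Long.MAX_VALUE;
        for (String group : groups) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            props.put(ConsumerConfig.GROUP_ID_CONFIG, group);
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      ByteArrayDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      ByteArrayDeserializer.class.getName());
            // committed() only reports offsets for the consumer's own group,
            // so we open one short-lived consumer per group.
            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                OffsetAndMetadata committed = consumer.committed(tp);
                if (committed == null)
                    return 0L;  // some group has consumed nothing yet: purge nothing
                min = Math.min(min, committed.offset());
            }
        }
        return min == Long.MAX_VALUE ? 0L : min;
    }
}
```

The service would then pass the computed minimum to the proposed API, e.g. purgeDataBefore(Collections.singletonMap(tp, minOffset)), for each partition it manages.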