
...

Currently the producer attempts to handle this error by comparing the last acknowledged offset with the log start offset from the produce response that contained the error. If the last acknowledged offset is smaller than the log start offset, then the producer assumes that the error is spurious. It resets the sequence number to 0 and retries using the existing epoch.
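To make this concrete, the following minimal Java sketch shows the shape of the current check; the class and field names (SpuriousErrorCheck, ProducerState) are illustrative and do not correspond to the actual client code.

Code Block
// Illustrative sketch of the producer's current UNKNOWN_PRODUCER_ID handling.
class SpuriousErrorCheck {

    // The error is assumed spurious if everything we have had acknowledged
    // precedes the log start offset: the state was likely lost to retention.
    static boolean isSpurious(long lastAckedOffset, long logStartOffset) {
        return lastAckedOffset < logStartOffset;
    }

    static void onUnknownProducerId(ProducerState state, long logStartOffset) {
        if (isSpurious(state.lastAckedOffset, logStartOffset)) {
            state.sequence = 0;      // reset and retry with the SAME epoch
        } else {
            state.fatal = true;      // today, cannot safely continue
        }
    }

    static class ProducerState {
        long lastAckedOffset;
        int sequence;
        boolean fatal;
    }
}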

There are two main problems with this approach: 

  1. Resetting the sequence number is safe only if we are guaranteed that the produce request was not written to the log. If we had to retry the request (e.g. due to a disconnect), then we have no way to know whether the first attempt was successful or not. The only option at the moment is to treat this as a fatal error for the producer.
  2. Currently there are no strict guarantees that the log start offset will increase monotonically. In particular, a new leader can be elected without having seen the latest log start offset from the previous leader. The impact for the producer is that state that was once lost may resurface after a leader failover. This can lead to an out of sequence error when the producer has already reset its sequence number. This is a fatal error for the producer.

Resetting the sequence number is fundamentally unsafe because it violates the uniqueness of produced records. Additionally, the lack of validation on the first write of a producer introduces the possibility of non-monotonic updates and hence dangling transactions. In this KIP, we propose to address these problems so that 1) this error condition becomes rare, and 2) it is no longer fatal. For transactional producers, it will be possible to simply abort the current transaction and continue. We also make some effort to simplify error handling in the producer.

Proposed Changes

Our proposal has three parts: 1) safe epoch incrementing, 2) prolonged producer state retention, and 3) simplified client error handling.

...

  1. No epoch is provided: the current epoch will be bumped and the last epoch will be set to -1.
  2. Epoch is provided:
    1. Provided epoch matches current epoch: the last epoch will be set to the current epoch, and the current epoch will be bumped.
    2. Provided epoch matches last epoch: the current epoch will be returned.
    3. Else: return INVALID_PRODUCER_EPOCH
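The following Java sketch restates the epoch update rules above as they might run on the transaction coordinator; the class and method names are illustrative, not actual broker code.

Code Block
// Illustrative sketch of the InitProducerId epoch update rules.
class ProducerEpochState {
    static final short NO_EPOCH = -1;

    short currentEpoch;
    short lastEpoch = NO_EPOCH;

    // Returns the epoch the producer should use, or throws to signal
    // INVALID_PRODUCER_EPOCH.
    synchronized short initProducerId(short providedEpoch) {
        if (providedEpoch == NO_EPOCH) {
            // Case 1: no epoch provided; bump and clear the last epoch.
            lastEpoch = NO_EPOCH;
            return ++currentEpoch;
        } else if (providedEpoch == currentEpoch) {
            // Case 2a: matches current; remember it as last and bump.
            lastEpoch = currentEpoch;
            return ++currentEpoch;
        } else if (providedEpoch == lastEpoch) {
            // Case 2b: a retry of a bump we already applied; return current.
            return currentEpoch;
        } else {
            throw new IllegalStateException("INVALID_PRODUCER_EPOCH");
        }
    }
}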


Prolonged producer state retention: As proposed in KAFKA-7190, we will alter the behavior of the broker to retain the cached producer state even after it has been removed from the log. Previously we attempted to keep the producer state consistent with the contents of the log so that we could rebuild it from the log if needed. However, it is rarely necessary to rebuild producer state, and it is more useful to retain the state we have as long as possible. Here we propose to remove it only when the transactional id expiration time has passed.
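As a rough illustration of the proposed retention behavior, here is a minimal sketch assuming a map-based producer state cache on the broker; ProducerStateCache is a hypothetical name, and transactionalIdExpirationMs stands in for the transactional.id.expiration.ms config.

Code Block
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: producer state outlives the log and expires only
// after the transactional id expiration time has passed.
class ProducerStateCache {
    static class Entry {
        short epoch;
        int lastSequence;
        long lastWriteTimestampMs;
    }

    private final Map<Long, Entry> byProducerId = new ConcurrentHashMap<>();
    private final long transactionalIdExpirationMs;

    ProducerStateCache(long transactionalIdExpirationMs) {
        this.transactionalIdExpirationMs = transactionalIdExpirationMs;
    }

    // Note: deliberately NOT called when records are deleted from the log.
    void expireStaleEntries(long nowMs) {
        byProducerId.values().removeIf(
            e -> nowMs - e.lastWriteTimestampMs >= transactionalIdExpirationMs);
    }
}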

Note that it is possible for some disagreement on the current producer state between the replicas. One example is following a partition reassignment: a new replica will only see the producers which have state in the log, so this may lead to an unexpected UNKNOWN_PRODUCER_ID error if the new replica becomes a leader before any additional writes occur for that producer id. However, any inconsistency between the leader and follower about producer state does not cause any problems from the perspective of replication because followers always assume the leader's state is correct. Furthermore, since we will now have a way to safely bump the epoch on the producer when we see UNKNOWN_PRODUCER_ID, these edge cases are not fatal for the producer.

Simplified error handling: Much of the complexity in the error handling of the idempotent/transactional producer is a result of the UNKNOWN_PRODUCER_ID case. Since we are proposing to cache producer state for as long as the transactional id expiration time even after removal from the log, this should become a rare error, so we propose to simplify our handling of it. The current handling attempts to reason about the log start offset and whether or not the batch had been previously retried. If we are sure it is safe, then we attempt to adjust the sequence number of the failed request (and any inflight requests which followed). Not only is this behavior complex to implement, but continuing with subsequent batches introduces the potential for reordering. Currently there is no easy way to prevent this from happening.
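For contrast, the sketch below shows the shape of the simplified handling: rather than adjusting sequence numbers, the producer either bumps its epoch (idempotent case) or surfaces an abortable error (transactional case). The Mode and Actions types are illustrative, not the actual client classes.

Code Block
// Illustrative sketch of the proposed simplified UNKNOWN_PRODUCER_ID handling.
class UnknownProducerHandler {
    enum Mode { IDEMPOTENT, TRANSACTIONAL }

    interface Actions {
        void requestEpochBump();           // e.g. InitProducerId with the current epoch
        void transitionToAbortableError(); // application aborts and continues
    }

    static void onUnknownProducerId(Mode mode, Actions actions) {
        if (mode == Mode.TRANSACTIONAL) {
            // The epoch is bumped on the next InitProducerId after the abort.
            actions.transitionToAbortableError();
        } else {
            // Safe: sequence numbers restart at 0 under the new epoch.
            actions.requestEpochBump();
        }
    }
}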

...

As mentioned above, we will bump the version of the transaction state message to include the last epoch.

Code Block
Value => Version ProducerId CurrentEpoch LastEpoch TxnTimeoutDuration TxnStatus [TxnPartitions] TxnLastUpdateTime TxnStartTime
  Version => 1 (INT16)
  ProducerId => INT64
  CurrentEpoch => INT16
  LastEpoch => INT16  // NEW
  TxnTimeoutDuration => INT32
  TxnStatus => INT8
  TxnPartitions => [Topic [Partition]]
     Topic => STRING
     Partition => INT32
  TxnLastUpdateTime => INT64
  TxnStartTime => INT64

As described above, the last epoch is initialized based on the epoch provided in the InitProducerId call. For a new producer instance, the value will be -1.
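To make the field layout concrete, the following illustrative snippet packs the fixed-width prefix of the v1 value with Java's ByteBuffer; this is not the broker's actual serialization code, and the partition list and timestamps are omitted for brevity.

Code Block
import java.nio.ByteBuffer;

// Illustrative encoding of the fixed-width prefix of the v1 value.
class TxnStateValueV1 {
    static ByteBuffer encodePrefix(long producerId, short currentEpoch,
                                   short lastEpoch, int txnTimeoutMs, byte txnStatus) {
        ByteBuffer buf = ByteBuffer.allocate(2 + 8 + 2 + 2 + 4 + 1);
        buf.putShort((short) 1);    // Version => 1
        buf.putLong(producerId);    // ProducerId => INT64
        buf.putShort(currentEpoch); // CurrentEpoch => INT16
        buf.putShort(lastEpoch);    // LastEpoch => INT16 (NEW), -1 for a new producer
        buf.putInt(txnTimeoutMs);   // TxnTimeoutDuration => INT32
        buf.put(txnStatus);         // TxnStatus => INT8
        buf.flip();
        return buf;
    }
}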


...

Compatibility, Deprecation, and Migration Plan

The main problem from a compatibility perspective is dealing with the existing producers which reset the sequence number to 0 but continue to use the same epoch. We believe that caching the producer state even after it is no longer retained in the log will make the UNKNOWN_PRODUCER_ID error unlikely in practice, so this resetting logic should be less frequently relied upon. When it is used, the broker will continue to work as expected even with the additional protection. 

The new version of the transaction state message will not be used until the inter-broker version supports it; we expect the usual two rolling bounce process for updating the cluster. One key question is how the producer will interoperate with older brokers which do not support the new InitProducerId request. For idempotent producers, we can safely bump the epoch without broker intervention, but there is no way to do so for transactional producers. We propose in this case to immediately fail pending requests and enter the ABORTABLE_ERROR state. After the transaction is aborted, we will reset the sequence number to 0 and continue. So the only difference is that we skip the epoch bump.
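A sketch of this fallback decision, assuming the producer can detect (e.g. through API version negotiation) whether the broker supports the new InitProducerId semantics; all type and method names here are hypothetical.

Code Block
// Illustrative sketch of the downgrade path for older brokers.
class OldBrokerFallback {
    interface TxnManager {
        void bumpEpochViaInitProducerId(); // new path: broker-assisted bump
        void bumpEpochLocally();           // idempotent producers only
        void failPendingRequests();
        void transitionToAbortableError();
        void resetSequencesToZero();       // after the abort, same epoch
    }

    static void onUnknownProducerId(boolean transactional,
                                    boolean brokerSupportsEpochBump,
                                    TxnManager txn) {
        if (brokerSupportsEpochBump) {
            txn.bumpEpochViaInitProducerId();
        } else if (!transactional) {
            txn.bumpEpochLocally();
        } else {
            txn.failPendingRequests();
            txn.transitionToAbortableError();
            // In reality this happens after the application aborts the
            // transaction; the only difference is that we skip the epoch bump.
            txn.resetSequencesToZero();
        }
    }
}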

Rejected Alternatives

  • We considered fixing this problem in Kafka Streams by being less aggressive with record deletion for repartition topics. This might make the problem less likely, but it does not fix it, and we would like to have a general solution for all EOS users.
  • When the broker has no state for a given producerId, it will only accept new messages as long as they begin with sequence=0. There is no guarantee that such messages aren't duplicates which were previously removed from the log or that the producer id hasn't been fenced. The initial version of this KIP attempted to add some additional validation in such cases to prevent these edge cases. After some discussion, we felt the proposed fix still had some holes and other changes in this KIP made this sufficiently unlikely in any case. This will be reconsidered in a future KIP.