
Status

Current state: Accepted (KAFKA-1650), Under discussion (KAFKA-1839, KAFKA-1840)

Discussion thread: here

JIRA: KAFKA-1650, KAFKA-1839, KAFKA-1840

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

The Mirror Maker has a potential data loss issue, as explained below:

1. The Mirror Maker consumer consumes some messages and puts them into the dataChannel.

2. The Mirror Maker consumer commits the offsets.

3. The Mirror Maker dies with some messages still in the dataChannel that have not yet been successfully delivered by the producer.

4. When the Mirror Maker restarts, all messages that have been consumed and committed by the consumer but not yet delivered by the producer are lost.

 

Currently the Mirror Maker does not ensure that topics in the source and target clusters have the same number of partitions, which is required in many cases.

Sometimes it is useful for the Mirror Maker to do some simple processing on messages (filtering, message format changes, etc.).

Public Interfaces

The no-data-loss solution uses the callback of the new producer to determine the offsets to be committed.

A ConsumerRebalanceListener class is created and can be wired into the ZookeeperConsumerConnector to avoid duplicate messages when a consumer rebalance occurs in the Mirror Maker.

Proposed Changes

The only source of truth for offsets in the consume-then-send pattern is the end user. Offsets should only be committed after confirmation has been received from the end user. With the new producer, this confirmation is the ACK delivered through the producer callback.

Accordingly, the consumer's automatic offset commit should be turned off, and offsets should only be committed after the send associated with a message has completed, i.e., its future/callback has returned successfully.
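A minimal sketch of this pattern, assuming the new Java producer's send(record, callback) API; the OffsetTracker interface here is a hypothetical stand-in for Mirror Maker's internal offset bookkeeping:

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class CallbackBasedMirroring {

    /** Hypothetical hook into Mirror Maker's offset bookkeeping. */
    public interface OffsetTracker {
        void markUnacked(String topic, int partition, long offset);
        void markAcked(String topic, int partition, long offset);
    }

    private final KafkaProducer<byte[], byte[]> producer;
    private final OffsetTracker tracker;

    public CallbackBasedMirroring(KafkaProducer<byte[], byte[]> producer, OffsetTracker tracker) {
        this.producer = producer;
        this.tracker = tracker;
    }

    public void mirror(final String srcTopic, final int srcPartition, final long srcOffset,
                       ProducerRecord<byte[], byte[]> record) {
        // Record the source offset as in-flight before handing the message to the producer.
        tracker.markUnacked(srcTopic, srcPartition, srcOffset);
        producer.send(record, new Callback() {
            @Override
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                if (exception == null) {
                    // Only after the target cluster acks the send does the
                    // source offset become eligible for commit.
                    tracker.markAcked(srcTopic, srcPartition, srcOffset);
                }
                // On failure the offset stays unacked, so it will not be committed
                // and the message will not be lost on restart.
            }
        });
    }
}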

Maintaining message order

For both keyed and non-keyed messages, the TopicPartition's hash is used to decide which data channel queue they go to, so the message order within the data channel is the same as in the source partition.
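As an illustration, the queue selection could look like the following sketch (the number of queues is a configuration value; TopicPartition is taken from the new clients library):

import org.apache.kafka.common.TopicPartition;

public class DataChannelQueueSelector {

    private final int numQueues;

    public DataChannelQueueSelector(int numQueues) {
        this.numQueues = numQueues;
    }

    /**
     * All messages from the same source partition hash to the same queue,
     * so per-partition ordering is preserved inside the data channel.
     */
    public int queueFor(String topic, int partition) {
        int hash = new TopicPartition(topic, partition).hashCode();
        return (hash & 0x7fffffff) % numQueues; // mask keeps the index non-negative
    }
}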

Offset commit thread

The offset commit thread executes periodically. A Map[TopicPartition, UnackedOffsetList] is needed to track the offsets that are ready to be committed.
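A rough sketch of that bookkeeping in Java (the sorted-set map below plays the role of Map[TopicPartition, UnackedOffsetList], and commitOffsets() is a placeholder for the consumer connector's commit call):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListSet;
import org.apache.kafka.common.TopicPartition;

public class OffsetCommitTask implements Runnable {

    // Offsets that have been sent but not yet acked, kept sorted per partition.
    private final Map<TopicPartition, ConcurrentSkipListSet<Long>> unacked =
            new ConcurrentHashMap<>();
    // Highest offset acked so far per partition.
    private final Map<TopicPartition, Long> lastAcked = new ConcurrentHashMap<>();

    public void markUnacked(TopicPartition tp, long offset) {
        unacked.computeIfAbsent(tp, k -> new ConcurrentSkipListSet<>()).add(offset);
    }

    public void markAcked(TopicPartition tp, long offset) {
        ConcurrentSkipListSet<Long> offsets = unacked.get(tp);
        if (offsets != null) {
            offsets.remove(offset);
        }
        lastAcked.merge(tp, offset, Long::max);
    }

    @Override
    public void run() {
        // Executed periodically by a scheduler.
        for (Map.Entry<TopicPartition, Long> entry : lastAcked.entrySet()) {
            TopicPartition tp = entry.getKey();
            ConcurrentSkipListSet<Long> pending = unacked.get(tp);
            // Safe to commit up to the smallest unacked offset; if nothing is
            // pending, commit one past the highest acked offset.
            long commitOffset = (pending == null || pending.isEmpty())
                    ? entry.getValue() + 1
                    : pending.first();
            commitOffsets(tp, commitOffset);
        }
    }

    private void commitOffsets(TopicPartition tp, long offset) {
        // Placeholder: in Mirror Maker this would go through the consumer
        // connector's offset commit API rather than this stub.
    }
}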

On consumer rebalance:

A new consumerRebalanceListener needs to be added to make sure there are no duplicates when a consumer rebalance occurs. Its beforeReleasingPartitions callback will be invoked when the consumer rebalances. It should (see the sketch after this list):

1. clear the dataChannel queue (as those messages will be re-consumed after the rebalance)

2. wait until the producer drains all remaining messages

3. commit the offsets
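A sketch of what the callback could do; the dataChannel, producer, and offset-committer hooks are assumptions, and producer.flush() stands in for "wait until the producer drains all remaining messages":

import java.util.concurrent.BlockingQueue;
import org.apache.kafka.clients.producer.KafkaProducer;

public class MirrorMakerRebalanceListener {

    private final BlockingQueue<?> dataChannel;            // in-flight messages between consumer and producer
    private final KafkaProducer<byte[], byte[]> producer;
    private final Runnable offsetCommitter;                // hook that commits the acked offsets

    public MirrorMakerRebalanceListener(BlockingQueue<?> dataChannel,
                                        KafkaProducer<byte[], byte[]> producer,
                                        Runnable offsetCommitter) {
        this.dataChannel = dataChannel;
        this.producer = producer;
        this.offsetCommitter = offsetCommitter;
    }

    // Invoked by the consumer connector right before it gives up its partitions.
    public void beforeReleasingPartitions() {
        // 1. Drop queued messages; they will be re-consumed after the rebalance.
        dataChannel.clear();
        // 2. Wait until the producer has delivered everything it already accepted.
        producer.flush();
        // 3. Commit the offsets of the messages that have been acked.
        offsetCommitter.run();
    }
}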

Ensure same partition number in source and target topics (KAFKA-1839)

Introduce a customRebalanceListener with the callback beforeStartFetchers(newPartitionAssignmentMap). The callback is invoked when a consumer rebalance occurs (topic creation/deletion, partition expansion). It compares the new topic partition info with the existing one to determine what has changed. If a new topic has been created, the callback creates the topic in the target cluster with the same number of partitions. An equivalent callback already exists in the new consumer (onPartitionsAssigned).
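A sketch of the callback's logic; the TargetClusterAdmin interface is a hypothetical placeholder for whatever admin tooling is used against the target cluster:

import java.util.Map;

public class EnsureTargetPartitionsListener {

    // Hypothetical admin hook for the target cluster; a real implementation would
    // use Kafka's admin tooling to create topics there.
    public interface TargetClusterAdmin {
        boolean topicExists(String topic);
        void createTopic(String topic, int numPartitions);
    }

    private final TargetClusterAdmin targetAdmin;

    public EnsureTargetPartitionsListener(TargetClusterAdmin targetAdmin) {
        this.targetAdmin = targetAdmin;
    }

    // Invoked with the new assignment (topic -> partition count in the source cluster)
    // before the fetchers are restarted after a rebalance.
    public void beforeStartFetchers(Map<String, Integer> newPartitionAssignment) {
        for (Map.Entry<String, Integer> entry : newPartitionAssignment.entrySet()) {
            String topic = entry.getKey();
            int sourcePartitions = entry.getValue();
            if (!targetAdmin.topicExists(topic)) {
                // A new source topic appeared: create it in the target cluster
                // with the same number of partitions.
                targetAdmin.createTopic(topic, sourcePartitions);
            }
        }
    }
}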

Message handler in consumer thread (KAFKA-1840)

Sometimes it is useful for the mirror maker to do some processing on the messages it consumes, such as filtering out some messages or converting message formats. This could be done by adding a message handler to the consumer thread. The interface could be:

public interface MirrorMakerMessageHandler {
    /**
     * The message handler will be executed to handle messages in mirror maker
     * consumer thread before the messages go into data channel. The message handler should
     * not block and should handle all the exceptions by itself.
     */
    public List<MirrorMakerRecord> handle(MirrorMakerRecord record);
}
The default message handler simply returns a singleton list containing the passed-in record.
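As an illustration, a filtering handler built on this interface might look like the sketch below (the RecordPredicate helper is hypothetical):

import java.util.Collections;
import java.util.List;

/**
 * Example handler that filters messages with a caller-supplied predicate.
 * Returning an empty list drops the record; returning a singleton list keeps it.
 */
public class FilteringMessageHandler implements MirrorMakerMessageHandler {

    public interface RecordPredicate {
        boolean accept(MirrorMakerRecord record);
    }

    private final RecordPredicate predicate;

    public FilteringMessageHandler(RecordPredicate predicate) {
        this.predicate = predicate;
    }

    @Override
    public List<MirrorMakerRecord> handle(MirrorMakerRecord record) {
        return predicate.accept(record)
                ? Collections.singletonList(record)
                : Collections.<MirrorMakerRecord>emptyList();
    }
}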

Compatibility, Deprecation, and Migration Plan

The changes are backward compatible.

Rejected Alternatives

None
