
Status

Current state: Under discussion

Discussion thread: here

JIRA

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

The Mirror Maker has a potential data loss issue, as explained below:

1. The Mirror Maker consumer consumes some messages and puts them into the data channel.

2. The Mirror Maker consumer commits the offsets.

3. The Mirror Maker dies with some messages left in the data channel or in the producer cache, not yet successfully delivered by the producer.

4. When the Mirror Maker restarts, all the messages that have been consumed and committed by the consumer but not yet delivered by the producer will be lost.

 

The new mirror maker design tries to solve the above issue as well as enhance the mirror maker with the following features:

1. Currently the Mirror Maker does not ensure that topics in the source and target clusters have the same number of partitions, which is required in many cases.

2. Sometimes it is useful for the Mirror Maker to do some simple processing on messages (filtering, message format changes, etc.)

 

The new mirror maker is designed based on the new consumer.

Public Interfaces

The no-data-loss solution uses the callback of the new producer to determine the offsets to be committed.

For the Mirror Maker, an internal rebalance listener which implements ConsumerRebalanceCallback is wired in by default to avoid duplicates on consumer rebalance. Users can still specify a custom listener class as a command line argument; the internal rebalance listener will call that custom listener after it finishes the default logic.
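A minimal sketch of how the internal listener could chain to a user-specified listener. The interface below is a simplified stand-in for ConsumerRebalanceCallback, and all class names are illustrative, not the actual implementation:

```java
import java.util.List;

// Simplified stand-in for the consumer rebalance callback interface; the real
// Kafka interface and its method signatures may differ.
interface RebalanceCallback {
    void beforeReleasingPartitions(List<Integer> partitions);
}

// Internal listener: runs the mirror maker's default logic first, then
// delegates to an optional user-specified listener.
class InternalRebalanceListener implements RebalanceCallback {
    private final RebalanceCallback customListener; // may be null

    InternalRebalanceListener(RebalanceCallback customListener) {
        this.customListener = customListener;
    }

    @Override
    public void beforeReleasingPartitions(List<Integer> partitions) {
        defaultLogic(partitions);
        if (customListener != null) {
            customListener.beforeReleasingPartitions(partitions);
        }
    }

    private void defaultLogic(List<Integer> partitions) {
        // Default logic: flush the data channel, wait for the producer to
        // drain, and commit the acked offsets (elided in this sketch).
    }
}
```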

Proposed Changes

Proposal 1

  • A single consumer thread distributes messages into different data channel queues based on a hash of the source topic and partition.
  • Multiple producer threads take responsibility for handling messages and producing them to the target cluster.
  • Scale by creating more mirror maker instances or processing threads.

Pros: Decouples the consuming and processing of messages, and allows adjusting the processing/consuming thread ratio.

Cons: A more complicated structure than Proposal 2.
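The queue routing in Proposal 1 could be sketched as follows. This is a simplified illustration; the hash function and class name are assumptions, not the actual implementation. Hashing by source topic-partition keeps all messages from one source partition in the same queue, preserving order:

```java
// Hypothetical routing used by the single consumer thread in Proposal 1.
final class DataChannelRouter {
    private DataChannelRouter() {}

    // Maps a source topic-partition to one of numQueues data channel queues.
    static int queueIndexFor(String topic, int partition, int numQueues) {
        int hash = 31 * topic.hashCode() + partition;
        // Math.floorMod keeps the index non-negative even for negative hashes.
        return Math.floorMod(hash, numQueues);
    }
}
```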

Proposal 2

  • No data channel; each mirror maker thread consumes messages, processes them, and then sends them.
  • Scale by creating more mirror maker threads.

Pros: Simple structure.

Cons: The consumer and producer are coupled, which is less flexible. More TCP connections and a larger memory footprint.

Common in both proposals

The only source of truth for offsets in the consume-then-send pattern is the target cluster. Offsets should only be committed after receiving confirmation from the target cluster. For the new producer, the confirmation is the ACK from the target cluster.

In that sense, the consumer's offset auto-commit should be turned off, and an offset should only be committed after the future associated with its message has completed.

Offset commit thread

The offset commit thread executes periodically. Because consumer offsets are committed per source partition per consumer group, a Map[TopicPartition, UnackedOffsetList] is needed to track the offsets that are ready to be committed.

Theoretically, we could just keep a Map[SourceTopicPartition/TargetTopicPartition, AckedOffset], but in some scenarios that does not work well. For example, suppose we have messages 1,2,3,4,5 from a source partition; messages 1,2,4,5 are produced to destination partition 1, and message 3 is produced to destination partition 2. The map would then hold { [(SourceTopicPartition=1, TargetTopicPartition=1), 5], [(SourceTopicPartition=1, TargetTopicPartition=2), 3] }. From the map itself, we cannot tell whether message 4 has been acked or not. And if no more messages are sent to target partition 2, the committed offset for source partition 1 will be blocked at 3 forever.

The UnackedOffsetList is a raw linked list that keeps the message offsets for each source partition in the same order as they are sent by the producer. An offset is removed from the list when its ack is received. The offset commit thread always commits the smallest offset in the list (i.e., the smallest offset that has not yet been acked).
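The bookkeeping above might be sketched as follows; class and method names are illustrative, and the synchronization is a simplification:

```java
import java.util.LinkedList;

// Sketch of the per-source-partition UnackedOffsetList: offsets are appended
// in send order, removed when the producer ack arrives, and the offset commit
// thread reads the smallest still-unacked offset.
class UnackedOffsetList {
    private final LinkedList<Long> unacked = new LinkedList<>();
    private long maxAcked = -1L;

    // Called when the producer sends a message with this source offset.
    synchronized void addOffset(long offset) {
        unacked.addLast(offset);
    }

    // Called from the producer callback when the ack is received.
    synchronized void ack(long offset) {
        unacked.remove(Long.valueOf(offset));
        maxAcked = Math.max(maxAcked, offset);
    }

    // The offset the commit thread may safely commit: the smallest unacked
    // offset, or one past the highest acked offset when nothing is in flight.
    synchronized long committableOffset() {
        return unacked.isEmpty() ? maxAcked + 1 : unacked.getFirst();
    }
}
```

With the example above, after sending offsets 1–5 and receiving acks for 1, 2, 4, and 5, the committable offset is 3, so a restart re-consumes from message 3 and no acked gap is skipped.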

No data loss option

A no data loss option is provided with the following settings:

  1. For consumer: auto.commit.enabled=false
  2. For producer:
    1. max.in.flight.requests.per.connection=1
    2. retries=Int.MaxValue
    3. acks=-1
    4. block.on.buffer.full=true
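Expressed as client configuration, the settings above might look like the following sketch. The property names are taken verbatim from this page and may differ across client versions:

```java
import java.util.Properties;

// The no-data-loss settings from this section, expressed as client configs.
class NoDataLossConfig {
    static Properties consumerProps() {
        Properties p = new Properties();
        p.setProperty("auto.commit.enabled", "false"); // mirror maker commits offsets itself
        return p;
    }

    static Properties producerProps() {
        Properties p = new Properties();
        p.setProperty("max.in.flight.requests.per.connection", "1"); // one in-flight request per broker
        p.setProperty("retries", String.valueOf(Integer.MAX_VALUE));  // retry retriable errors indefinitely
        p.setProperty("acks", "-1");                                  // wait for all in-sync replicas
        p.setProperty("block.on.buffer.full", "true");                // apply back-pressure instead of dropping
        return p;
    }
}
```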

The following actions will be taken by mirror maker:

  • The mirror maker will only send one request to a broker at any given point.
  • If any exception is caught in a producer/consumer thread, or an OutOfMemory exception is caught in the offset commit thread, the mirror maker will try to commit the acked offsets and then exit immediately.
  • For a RetriableException in the producer, the producer will retry indefinitely. If retrying does not work, the entire mirror maker will eventually block on a full producer buffer.
  • For a non-retriable exception, the producer callback will record the message that was not successfully sent, but let the mirror maker move on. In this case, that message will be lost in the target cluster.
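The callback behavior in the last two bullets could be sketched as follows; this is a simplified stand-in for the new producer's callback API, and the class and method names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the mirror maker producer callback: a successful ack marks the
// source offset as committable; a non-retriable failure is recorded and the
// mirror maker moves on (the message is lost in the target cluster).
class MirrorMakerProducerCallback {
    private final List<Long> ackedOffsets = new ArrayList<>();
    private final List<Long> failedOffsets = new ArrayList<>();

    // Invoked once per message when the producer completes the send.
    void onCompletion(long sourceOffset, Exception exception) {
        if (exception == null) {
            ackedOffsets.add(sourceOffset);
        } else {
            // Retriable exceptions never reach here: the producer retries
            // indefinitely. A non-retriable failure is recorded so the lost
            // message can be reported.
            failedOffsets.add(sourceOffset);
        }
    }

    List<Long> acked() { return ackedOffsets; }
    List<Long> failed() { return failedOffsets; }
}
```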

On consumer rebalance

A new consumer rebalance listener needs to be added to make sure there are no duplicates when a consumer rebalance occurs. The callback beforeReleasingPartitions will be invoked on consumer rebalance. It should:

1. clear the data channel queue (as those messages will be re-consumed after the rebalance)

2. wait until the producer drains all remaining messages

3. commit the offsets
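The three steps above can be sketched as follows, assuming an in-memory data channel and an in-flight message counter maintained by the producer threads; all names are illustrative:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the rebalance callback flow: clear the queue, drain the producer,
// then commit the acked offsets.
class RebalanceFlow {
    final BlockingQueue<byte[]> dataChannel = new LinkedBlockingQueue<>();
    final AtomicInteger inFlightMessages = new AtomicInteger();
    private boolean offsetsCommitted = false;

    void beforeReleasingPartitions() {
        // 1. Clear the data channel queue; those messages will be re-consumed
        //    from the committed offsets after the rebalance.
        dataChannel.clear();
        // 2. Wait until the producer has drained all remaining messages.
        while (inFlightMessages.get() > 0) {
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
        // 3. Commit the acked offsets (the actual commit call is elided).
        offsetsCommitted = true;
    }

    boolean offsetsCommitted() { return offsetsCommitted; }
}
```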

Ensure same partition number in source and target topics

Each consumer will have a local topic cache that records the topics it has ever seen.

When a new topic is created in the source cluster, a consumer rebalance will be triggered. In ConsumerRebalanceCallback:onPartitionAssigned:

For the consumer that consumes partition 0 of a topic:

    1. Check the topic in both the source and target clusters to see if it is a new topic.
    2. If a new topic is detected, create the topic in the target cluster with the same number of partitions as in the source cluster.

For consumers that see topics that do not exist in their local topic cache:

    1. Wait until the topic is seen in the target cluster.

When a partition expansion occurs, a consumer rebalance will also be triggered. The above approach does not handle this case.
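The topic check and creation performed by the partition-0 consumer could be sketched as follows. TopicAdmin here is a hypothetical stand-in for whatever cluster-metadata and topic-creation mechanism the mirror maker uses; it is not a Kafka API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical admin abstraction: partition count lookup and topic creation.
interface TopicAdmin {
    Integer partitionCount(String topic); // null when the topic does not exist
    void createTopic(String topic, int partitions);
}

// Simple in-memory TopicAdmin, used here only to illustrate the flow.
class InMemoryTopicAdmin implements TopicAdmin {
    private final Map<String, Integer> topics = new HashMap<>();

    @Override
    public Integer partitionCount(String topic) {
        return topics.get(topic);
    }

    @Override
    public void createTopic(String topic, int partitions) {
        topics.put(topic, partitions);
    }
}

class TopicSyncOnRebalance {
    // Run only by the consumer that consumes partition 0 of the topic: if the
    // topic is missing in the target cluster, create it with the same number
    // of partitions as in the source cluster.
    static void ensureTopicInTarget(String topic, TopicAdmin source, TopicAdmin target) {
        Integer sourcePartitions = source.partitionCount(topic);
        if (sourcePartitions != null && target.partitionCount(topic) == null) {
            target.createTopic(topic, sourcePartitions);
        }
    }
}
```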

Message handler in consumer thread

Sometimes it is useful for the mirror maker to do some processing on the messages it consumes, such as filtering out some messages or performing format conversions. This can be done by adding a message handler in the consumer thread. The interface could be:

public interface MirrorMakerMessageHandler {
    /**
     * The message handler will be executed to handle messages in mirror maker
     * consumer thread before the messages go into data channel. The message handler should
     * not block and should handle all the exceptions by itself.
     */
    public List<MirrorMakerRecord> handle(MirrorMakerRecord record);
}
The default message handler simply returns a list containing the passed-in record.
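Under the interface above, the default handler could be as simple as the following sketch; the MirrorMakerRecord fields shown are assumptions for illustration:

```java
import java.util.Collections;
import java.util.List;

// Minimal stand-in for MirrorMakerRecord; the real class may carry more fields.
class MirrorMakerRecord {
    final String topic;
    final byte[] key;
    final byte[] value;

    MirrorMakerRecord(String topic, byte[] key, byte[] value) {
        this.topic = topic;
        this.key = key;
        this.value = value;
    }
}

interface MirrorMakerMessageHandler {
    List<MirrorMakerRecord> handle(MirrorMakerRecord record);
}

// Default handler: identity pass-through, no filtering or format conversion.
class DefaultMirrorMakerMessageHandler implements MirrorMakerMessageHandler {
    @Override
    public List<MirrorMakerRecord> handle(MirrorMakerRecord record) {
        return Collections.singletonList(record);
    }
}
```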

Compatibility, Deprecation, and Migration Plan

The changes are backward compatible.

Rejected Alternatives

None
