
Proposers

Approvers

Status

Current state: UNDER DISCUSSION

Discussion thread: here

JIRA: here

Released: <Hudi Version>

Abstract

The goal is to build a Kafka Connect Sink that can ingest/stream records from Apache Kafka to Hudi tables. Since Hudi is a transaction-based data lake platform, we have to overcome a few challenges to coordinate transactions across the tasks and workers in the Kafka Connect framework. In addition, the Hudi platform runs multiple data and file management services and optimizations that have to be coordinated with the write transactions.

To achieve this goal today, we can use the [deltastreamer](https://hudi.apache.org/docs/writing_data/#deltastreamer) tool provided with Hudi, which runs within the Spark engine to pull records from Kafka and ingest them into Hudi tables. Giving users the ability to ingest data via the Kafka Connect framework has a few advantages: existing Connect users can readily ingest their Kafka data into Hudi tables, leveraging the power of Hudi's platform without the overhead of deploying a Spark environment.

Background

To appreciate the design proposed in this RFC, it is important to understand Kafka Connect, a framework to stream data into and out of Apache Kafka. The core components of the Connect framework that are relevant to this RFC are connectors, tasks, and workers. A connector instance is a logical job that manages the copying of data from Kafka to another system, and it manages a set of tasks that actually copy the data. Using multiple tasks allows for parallelism and scalable data copying. Connectors and tasks are logical execution units that are scheduled within workers. In distributed mode, workers run across a cluster to provide scalability and fault tolerance. All workers can be configured with the same `group.id`, and the Connect framework automatically manages the execution of tasks across all available workers. As shown in the figure below, tasks are distributed across workers, and each task manages one or more distinct partitions of the Kafka topic.
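For reference, a distributed-mode Connect worker is typically configured along these lines. This is a minimal sketch with example values; the `group.id`, topic names, and converters shown are placeholders chosen for illustration, not part of this proposal:

```properties
# All workers sharing this group.id form a single Connect cluster
group.id=hudi-connect-cluster
bootstrap.servers=localhost:9092

# Converters for record keys and values (example choice)
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

# Internal topics the cluster uses to store offsets, connector
# configs, and task status across workers
offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
status.storage.topic=connect-status
```

With this configuration, any worker started with the same `group.id` joins the cluster, and the framework schedules connector tasks across all joined workers.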


On system initialization, the workers rebalance the set of tasks so that each worker has a similar amount of work. The system may also rebalance dynamically when the number of partitions or tasks changes. In addition, on the failure of a worker, its tasks are re-assigned to the remaining workers to ensure fault tolerance, as shown in the figure below.
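The balancing behavior above can be sketched as a simple round-robin assignment. This is an illustrative simplification, not the actual Connect rebalance protocol (Connect balances whole tasks via its own group protocol, and the real assignment strategy is more involved); it only shows how work stays evenly spread and how a failed worker's partitions end up on the survivors:

```python
def assign_partitions(partitions, workers):
    """Round-robin assignment of topic partitions to workers.

    A toy model of Connect-style balancing: each worker receives
    roughly the same number of partitions.
    """
    assignment = {w: [] for w in workers}
    for i, p in enumerate(partitions):
        assignment[workers[i % len(workers)]].append(p)
    return assignment


partitions = list(range(6))  # e.g. a topic with 6 partitions
workers = ["worker-1", "worker-2", "worker-3"]

# Initial balanced assignment across all workers
initial = assign_partitions(partitions, workers)

# worker-2 fails: rebalance spreads all partitions over the survivors
survivors = [w for w in workers if w != "worker-2"]
rebalanced = assign_partitions(partitions, survivors)
```

After the simulated failure, every partition is still owned by exactly one surviving worker, which is the fault-tolerance property the figure illustrates.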



Implementation

Rollout/Adoption Plan

  • <What impact (if any) will there be on existing users?>
  • <If we are changing behavior how will we phase out the older behavior?>
  • <If we need special migration tools, describe them here.>
  • <When will we remove the existing behavior?>

Test Plan

<Describe in few sentences how the RFC will be tested. How will we know that the implementation works as expected? How will we know nothing broke?>









