
Table of Contents

Proposers

Approvers

Status

Current state: Under Discussion

Discussion thread: here

JIRA: HUDI-344

Released: <Hudi Version>


Abstract

A feature to snapshot a Hudi dataset and export the latest records to a set of external files (e.g., plain parquet files).

Background

The existing org.apache.hudi.utilities.HoodieSnapshotCopier performs a Hudi-to-Hudi copy that serves backup purposes. To broaden its usability, the Copier could be extended to export to data formats other than a Hudi dataset, such as plain parquet files.

Implementation

The proposed class is org.apache.hudi.utilities.HoodieSnapshotExporter, which serves as the main entry point for snapshot-related work.

Definition of "Snapshot"

To snapshot is to retrieve the records from a Hudi dataset at a particular point in time. Note that data exported from MOR tables may not be fully up-to-date, as the read-optimized (RO) query is used for retrieval, which omits the latest data in the log files.

Arguments


  • --source-base-path: Base path of the source Hudi dataset to be snapshotted. Required.
  • --target-base-path: Base path for the target output files (snapshots). Required.
  • --snapshot-prefix: Snapshot prefix or directory under the target base path, used to segregate different snapshots. Optional; may default to a daily prefix generated at run time, like 2019/11/12/.
  • --output-format: "HUDI" or "PARQUET". Required. When "HUDI", behaves the same as HoodieSnapshotCopier; more data formats may be supported in the future.
  • --output-partition-field: A field to be used for Spark repartitioning. Optional; ignored when the output format is "HUDI" or when --output-partitioner is specified. By default, the output dataset inherits its partition field from the source Hudi dataset. When this argument is specified, the provided value is used for both in-memory Spark repartitioning and the output file partitioning:

Code Block
languagejava
String partitionField = ...; // value of --output-partition-field
df.repartition(df.col(partitionField))
  .write()
  .partitionBy(partitionField)
  .parquet(outputPath);

    When more flexibility is needed for repartitioning, use --output-partitioner.
  • --output-partitioner: A class to facilitate custom repartitioning. Optional; ignored when the output format is "HUDI".
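As an illustration of the daily default for --snapshot-prefix, the run-time prefix could be derived as follows (a sketch; the helper name and exact date pattern are assumptions, not the final implementation):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class SnapshotPrefix {

    // Hypothetical helper: derive a daily prefix like "2019/11/12/" from the run date.
    static String dailyPrefix(LocalDate runDate) {
        return runDate.format(DateTimeFormatter.ofPattern("yyyy/MM/dd")) + "/";
    }

    public static void main(String[] args) {
        // Prints the prefix for a sample run date.
        System.out.println(dailyPrefix(LocalDate.of(2019, 11, 12))); // 2019/11/12/
    }
}
```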

Steps

(Gliffy diagram: RFC-9 snapshotter overview)

  1. Read
    • Regardless of output format, always leverage org.apache.hudi.common.table.view.HoodieTableFileSystemView to perform an RO query for reading
    • Specifically, the data to be read comes from the latest version of the columnar files in the source dataset, up to the latest commit time, as the existing HoodieSnapshotCopier does
  2. Transform
    • Output format "PARQUET"
      • Strip Hudi metadata
      • Allow the user to provide a field for simple Spark repartitioning
      • Allow the user to provide a class for custom repartitioning
    • No transformation is needed for output format "HUDI"; just copy the original files, as the existing HoodieSnapshotCopier does
  3. Write
    • Only the output directory needs to be provided; Spark handles the rest.
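The metadata-stripping transform in step 2 can be sketched as selecting only the non-metadata columns. The column names below are Hudi's standard per-record metadata fields; the helper itself is a hypothetical sketch, not the proposed implementation:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StripHudiMetadata {

    // Hudi's per-record metadata columns, which the exporter would drop
    // for the "PARQUET" output format.
    static final List<String> HOODIE_META_COLUMNS = Arrays.asList(
            "_hoodie_commit_time", "_hoodie_commit_seqno",
            "_hoodie_record_key", "_hoodie_partition_path", "_hoodie_file_name");

    // Keep only the data columns of the source schema.
    static List<String> dataColumns(List<String> sourceColumns) {
        return sourceColumns.stream()
                .filter(c -> !HOODIE_META_COLUMNS.contains(c))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> source = Arrays.asList(
                "_hoodie_commit_time", "_hoodie_record_key", "id", "name");
        System.out.println(dataColumns(source)); // [id, name]
    }
}
```

In Spark terms, the result of dataColumns would feed a select(...) on the DataFrame before writing.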

Rollout/Adoption Plan

  • No impact to existing users, as this is a new, independent utility tool.
  • Once this feature is GA'ed, we can mark HoodieSnapshotCopier as deprecated and suggest users switch to this tool, which provides equivalent copying features.

Test Plan

  • Write tests similar to those for HoodieSnapshotCopier
  • When testing end-to-end, verify that
    • the number of records matches
    • a later snapshot reflects the latest info from the original dataset
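The record-count check in the end-to-end verification could be as simple as comparing the counts of the source dataset and the exported snapshot (the long values below stand in for Spark's df.count(); a sketch only):

```java
public class ExportVerification {

    // Hypothetical check: the exported snapshot must contain exactly as many
    // records as the source dataset it was taken from.
    static void verifyCounts(long sourceCount, long exportedCount) {
        if (sourceCount != exportedCount) {
            throw new AssertionError(
                    "Record count mismatch: " + sourceCount + " vs " + exportedCount);
        }
    }

    public static void main(String[] args) {
        verifyCounts(1000L, 1000L); // passes silently when counts match
    }
}
```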