Proposers
Approvers
- Vinoth Chandar : APPROVED
- Balaji Varadarajan : APPROVED
- Nishith Agarwal : APPROVED
Status
Current state: Under Discussion
Discussion thread: here
JIRA:
Released: <Hudi Version>
Abstract
A feature to snapshot a Hudi dataset and export the latest records to a set of external files (e.g., plain parquet files).
Background
The existing org.apache.hudi.utilities.HoodieSnapshotCopier
performs a Hudi-to-Hudi copy that serves backup purposes. To broaden its usability, the copier could be extended to export to data formats other than Hudi, such as plain parquet files.
Implementation
The proposed class is org.apache.hudi.utilities.HoodieSnapshotExporter
, which serves as the main entry point for snapshot-related work.
Definition of "Snapshot"
To snapshot is to get the most up-to-date records from a Hudi dataset at a particular point in time. Note that this could take longer for MOR tables as it involves merging the latest log files.
Arguments
Argument | Description | Remark
---|---|---
`--source-base-path` | Base path of the source Hudi dataset to be snapshotted | required
`--target-base-path` | Base path for the target output files (snapshots) | required
`--snapshot-prefix` | Snapshot prefix or directory under the target base path, used to segregate different snapshots | optional; may default to a daily prefix computed at run time, like `2019/11/12/`
`--output-format` | `"HUDI"` or `"PARQUET"` | required; when `"HUDI"`, behaves the same as `HoodieSnapshotCopier`; more data formats may be supported in the future
`--output-partition-field` | A field to be used for Spark repartitioning | optional; ignored when the output format is `"HUDI"`. By default the output dataset's partition field is inherited from the source Hudi dataset; when this argument is specified, the provided value is used for both in-memory Spark repartitioning and output file partitioning. If more flexibility is needed for repartitioning, use `--output-partitioner`
`--output-partitioner` | A class to facilitate custom repartitioning | optional; ignored when the output format is `"HUDI"`

When `--output-partition-field` is specified:

```java
String partitionField = // from the argument
df.repartition(df.col(partitionField))
  .write()
  .partitionBy(partitionField)
  .parquet(outputPath);
```
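As a rough illustration of the `--output-partitioner` contract, the sketch below shows one possible shape for the pluggable class and how the exporter could load it reflectively. All names here (`OutputPartitioner`, `SortedPartitioner`, `PartitionerLoader`) are assumptions for this sketch, not the final API; in the real tool the type parameter would likely be Spark's `Dataset<Row>`, but it is kept generic so the example compiles without a Spark dependency.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical SPI for --output-partitioner; names are illustrative only.
interface OutputPartitioner<T> {
    T repartition(T input);
}

// Example user-supplied implementation: orders records so output files
// are laid out deterministically.
class SortedPartitioner implements OutputPartitioner<List<String>> {
    @Override
    public List<String> repartition(List<String> input) {
        List<String> copy = new ArrayList<>(input);
        Collections.sort(copy);
        return copy;
    }
}

public class PartitionerLoader {
    // The exporter could instantiate the class named by the
    // --output-partitioner argument reflectively.
    @SuppressWarnings("unchecked")
    static <T> OutputPartitioner<T> load(String className) throws Exception {
        return (OutputPartitioner<T>) Class.forName(className)
                .getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        OutputPartitioner<List<String>> p = load("SortedPartitioner");
        System.out.println(p.repartition(Arrays.asList("b", "a", "c")));
    }
}
```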
Steps
- Read
  - Output format "PARQUET": leverage the org.apache.hudi.common.table.view.HoodieTableFileSystemView logic to get the latest records (RT query)
  - Output format "HUDI": no RT query is needed; just use an RO query to copy the latest parquet files, as the existing HoodieSnapshotCopier does
- Transform
  - Output format "PARQUET"
    - Strip Hudi metadata
    - Allow the user to provide a field for simple Spark repartitioning
    - Allow the user to provide a class for custom repartitioning
  - Output format "HUDI": no transformation is needed; just copy the original files, as the existing HoodieSnapshotCopier does
- Write
  - Just provide the output directory and Spark shall handle the rest.
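For the "strip Hudi metadata" step, the exporter would drop Hudi's five metadata columns before writing plain parquet (in Spark, roughly a `df.drop(...)` before the write). The helper below is a plain-Java sketch of that filtering; the class and method names are assumptions, but the column names match Hudi's record-level metadata fields.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class HoodieMetaColumns {
    // The five metadata columns Hudi prepends to every record; stripping
    // them yields a plain parquet schema.
    static final List<String> META_COLUMNS = Arrays.asList(
            "_hoodie_commit_time",
            "_hoodie_commit_seqno",
            "_hoodie_record_key",
            "_hoodie_partition_path",
            "_hoodie_file_name");

    // Returns the schema's column names with Hudi metadata columns removed.
    static List<String> stripMetaColumns(List<String> columns) {
        return columns.stream()
                .filter(c -> !META_COLUMNS.contains(c))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> schema = Arrays.asList(
                "_hoodie_commit_time", "_hoodie_record_key", "id", "name");
        System.out.println(stripMetaColumns(schema)); // [id, name]
    }
}
```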
Rollout/Adoption Plan
- No impact to existing users as this is a new independent utility tool.
- Once this feature is GA'ed, we can mark
HoodieSnapshotCopier
as deprecated and suggest users switch to this tool, which provides equivalent copying features.
Test Plan
- Write tests similar to those for
HoodieSnapshotCopier
- When testing end-to-end, we are to verify that
  - the number of records matches
  - a later snapshot reflects the latest info from the original dataset