
Definition

A storage type in which a dataset's newer commits (delta log files) are merged with its base data on the fly when the dataset is read, viewed, or queried.

#todo improve to summarize semantics relative to commits lifecycle, before and after

Design details


In the Merge On Read (MOR) storage model, there are two logical components:

  1. a `Writer` for ingesting data (both inserts and updates) into the dataset
  2. a `Compactor` for creating compacted views (see the configuration sketch after this list)
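
The Writer/Compactor split is largely configuration-driven. Below is a minimal, illustrative sketch (plain Java properties; the key names follow recent Hudi releases and should be verified against the Storage Configs page for your version) of the settings that select the MOR storage type and let the Compactor run inline once enough delta commits accumulate.

```java
import java.util.Properties;

// Illustrative only: selects the MOR storage type and schedules inline compaction.
// Config key names follow recent Hudi releases; verify against your version's docs.
public class MorWriterCompactorConfig {

    public static Properties build() {
        Properties props = new Properties();
        // Store the dataset using the Merge On Read model (the writer appends deltas to log files).
        props.setProperty("hoodie.datasource.write.table.type", "MERGE_ON_READ");
        // Run the Compactor inline after ingestion, once enough delta commits have piled up.
        props.setProperty("hoodie.compact.inline", "true");
        props.setProperty("hoodie.compact.inline.max.delta.commits", "5");
        return props;
    }

    public static void main(String[] args) {
        build().forEach((key, value) -> System.out.println(key + "=" + value));
    }
}
```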

At a high level, the Merge On Read (MOR) writer goes through the same stages as the Copy On Write (COW) writer when ingesting data.

The key difference is that updates are appended to the latest log (delta) file belonging to the latest file slice, without merging. For inserts, Hudi supports two modes (the configuration sketch after this list illustrates the distinction):

  1. Inserts to log files - This is done for datasets that have indexable log files (for example, datasets using a global index)
  2. Inserts to parquet files - This is done for datasets that do not have indexable log files, for example datasets using a bloom index embedded in the parquet files. Hudi treats writing new records the same way as inserting into Copy On Write (COW) files.
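
Which of the two modes applies follows from the configured index type. The snippet below only illustrates that relationship; the key and value names (`hoodie.index.type`, `BLOOM`, `HBASE`) follow recent Hudi releases, and the exact routing behaviour depends on the index implementation in your version.

```java
import java.util.Properties;

// Illustrative link between index type and insert routing (verify against your Hudi version).
public class MorInsertModeExamples {

    public static void main(String[] args) {
        // Bloom index: stored in parquet file footers, so log files are not indexable.
        // New records are therefore written to parquet files, as in Copy On Write.
        Properties bloomIndexed = new Properties();
        bloomIndexed.setProperty("hoodie.index.type", "BLOOM");

        // Global/external index (e.g. HBase): can locate records inside log files,
        // so inserts may be appended to log files directly.
        Properties globallyIndexed = new Properties();
        globallyIndexed.setProperty("hoodie.index.type", "HBASE");

        System.out.println("bloom indexed  -> " + bloomIndexed);
        System.out.println("global indexed -> " + globallyIndexed);
    }
}
```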

As in the case of Copy On Write (COW), the input tagged records are partitioned such that all upserts destined to a `file id` are grouped together. This upsert batch is written as one or more log blocks to log files. Hudi allows clients to control log file sizes (see [Storage Configs](../configurations)).
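
As a rough sketch of those storage configs (key names and defaults as of recent Hudi releases; confirm against the Storage Configs page), log block, log file and base parquet file sizes can be bounded like this:

```java
import java.util.Properties;

// Illustrative size limits for MOR log files and base parquet files.
// Key names follow recent Hudi releases; check the Storage Configs page for your version.
public class MorStorageSizing {

    public static Properties build() {
        Properties props = new Properties();
        // Roll over to a new log file once the current one exceeds this size (bytes).
        props.setProperty("hoodie.logfile.max.size", String.valueOf(1024L * 1024 * 1024));           // ~1 GB
        // Upper bound for a single data block appended to a log file (bytes).
        props.setProperty("hoodie.logfile.data.block.max.size", String.valueOf(256L * 1024 * 1024)); // ~256 MB
        // Target size for base parquet files produced by the writer/compactor (bytes).
        props.setProperty("hoodie.parquet.max.file.size", String.valueOf(120L * 1024 * 1024));       // ~120 MB
        return props;
    }

    public static void main(String[] args) {
        build().forEach((key, value) -> System.out.println(key + "=" + value));
    }
}
```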

The WriteClient API is the same for both Copy On Write (COW) and Merge On Read (MOR) writers. With Merge On Read (MOR), several rounds of data writes will have resulted in the accumulation of one or more log files. All these log files, along with the base parquet file (if it exists), constitute a `file slice`, which represents one complete version of the file.
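
To make the "same WriteClient API" point concrete, here is a rough sketch of the write path. Client class names and constructors have changed across Hudi releases (older releases expose `HoodieWriteClient`, newer ones `SparkRDDWriteClient`), so the commit-protocol portion is shown as commented pseudocode; `basePath`, `schemaStr`, `tableName` and `recordsRDD` are placeholders.

```java
import org.apache.hudi.config.HoodieWriteConfig;

// Sketch only: the same config builder and commit protocol are used for COW and MOR datasets.
// Adjust client class names and imports to your Hudi release.
public class MorWriteClientSketch {

    public static HoodieWriteConfig buildConfig(String basePath, String schemaStr, String tableName) {
        // The storage type itself (COW vs MOR) is chosen via table configuration, not the API.
        return HoodieWriteConfig.newBuilder()
                .withPath(basePath)
                .withSchema(schemaStr)
                .forTable(tableName)
                .build();
    }

    // Commit protocol, identical for COW and MOR writers (pseudocode, version-dependent class names):
    //   writeClient   = new <WriteClient>(engineContext, buildConfig(...));
    //   commitTime    = writeClient.startCommit();
    //   writeStatuses = writeClient.upsert(recordsRDD, commitTime);
    //   writeClient.commit(commitTime, writeStatuses);
    //
    // With MOR, each round of this loop appends log blocks to the latest file slice;
    // the base parquet file (if any) plus its log files make up one file slice.
}
```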

Kind of

  1. Storage type

Related concepts

  1. Merge On Read (MOR)

