
Contributing

To contribute content to this FAQ, see here.

General 

When is Hudi useful for me or my organization? 

If you are looking to quickly ingest data onto HDFS or cloud storage, Hudi provides tools to help. Also, if you have ETL/Hive/Spark jobs that are slow or take up a lot of resources, Hudi can potentially help by providing an incremental approach to reading and writing data. 

As an organization, Hudi can help you build an efficient data lake, solving some of the most complex, low-level storage management problems, while putting data into the hands of your data analysts, engineers and scientists much more quickly.

What are some non-goals for Hudi? 

Hudi is not designed for OLTP use-cases, where you would typically use existing NoSQL/RDBMS data stores. Hudi cannot replace your in-memory analytical database (at least not yet!). Hudi supports near-real-time ingestion on the order of a few minutes, trading off latency for efficient batching. If you truly need sub-minute processing delays, then stick with your favorite stream processing solution. 

What is incremental processing? Why do the Hudi docs/talks keep talking about it? 

Incremental processing was first introduced by Vinoth Chandar in an O'Reilly blog post, which set off most of this effort. In purely technical terms, incremental processing merely refers to writing mini-batch programs in a stream-processing style. Typical batch jobs consume all input and recompute all output every few hours. Typical stream processing jobs consume some new input and recompute new/changed output continuously, every few seconds. While recomputing all output in batch fashion can be simpler, it is wasteful and resource-expensive. Hudi brings the ability to author the same batch pipelines in streaming fashion, run every few minutes.


While we could simply call this stream processing, we call it incremental processing to distinguish it from purely stream-processing pipelines built using Apache Flink, Apache Apex or Apache Kafka Streams.

What is the difference between the copy-on-write (COW) and merge-on-read (MOR) storage types?

Copy On Write - This storage type enables clients to ingest data in columnar file formats, currently parquet. Any new data written to a Hudi dataset using the COW storage type results in new parquet files. Updating an existing set of rows results in a rewrite of the entire parquet files that collectively contain the affected rows. Hence, all writes to such datasets are limited by parquet writing performance; the larger the parquet file, the longer it takes to ingest the data.

Merge On Read - This storage type enables clients to ingest data quickly into a row-based file format such as avro. Any new data written to a Hudi dataset using the MOR storage type results in new log/delta files that internally store the data as avro-encoded bytes. A compaction process (configured as inline or asynchronous) converts the log file format into the columnar file format (parquet). Two different InputFormats expose two different views of this data: the Read Optimized view exposes columnar parquet reading performance, while the Realtime view exposes columnar and/or log reading performance. Updating an existing set of rows results in either a) a companion log/delta file for an existing base parquet file generated from a previous compaction, or b) an update written to a log/delta file if no compaction ever happened for it. Hence, all writes to such datasets are limited by avro/log file writing performance, which is much faster than parquet, although there is a higher cost to pay when reading log/delta files vs columnar (parquet) files.

More details can be found here.

How do I choose a storage type for my workload?

A key goal of Hudi is to provide upsert functionality that is orders of magnitude faster than rewriting entire tables or partitions. 

Choose copy-on-write storage if:

  • You are looking for a simple alternative that replaces your existing parquet tables, without any need for real-time data.
  • Your current job rewrites an entire table/partition to deal with updates, while only a few files actually change in each partition.
  • You are happy keeping things operationally simpler (no compaction etc.), with ingestion/write performance bound by the parquet file size and the number of such files affected/dirtied by updates.
  • Your workload is fairly well understood and does not have sudden bursts of large amounts of updates or inserts to older partitions. COW absorbs all the merging cost on the writer side, so such sudden changes can clog up your ingestion and interfere with meeting normal-mode ingest latency targets.

Choose merge-on-read storage if:

  • You want the data to be ingested and made queryable as quickly as possible.
  • Your workload can have sudden spikes/changes in pattern (e.g. bulk updates to older transactions in an upstream database causing lots of updates to old partitions on DFS). Asynchronous compaction helps amortize the write amplification caused by such scenarios, while normal ingestion keeps up with the incoming stream of changes.

Regardless of what you choose, Hudi provides:

  • Snapshot isolation and atomic writes of a batch of records
  • Incremental pulls
  • Ability to de-duplicate data

Find more here.

Is Hudi an analytical database? 

A typical database has a set of long-running storage servers that take writes and serve reads. Hudi's architecture is very different, and for good reasons: it is highly decoupled, so writes and queries/reads can be scaled independently to handle scale challenges. So, it may not always seem like a database.

Nonetheless, Hudi is designed very much like a database and provides similar functionality (upserts, change capture) and semantics (transactional writes, snapshot isolated reads).

How do I model the data stored in Hudi? 

When writing data into Hudi, you model the records like you would on a key-value store: specify a key field (unique within a single partition/across the dataset), a partition field (denotes the partition to place the key into) and preCombine/combine logic that specifies how to handle duplicates within a batch of records written. This model enables Hudi to enforce primary key constraints like you would get on a database table. See here for an example.
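
As a rough illustration, here is a hedged sketch of how those three pieces are specified when writing through the Spark datasource. The table name, field names, inputDF and basePath are placeholders, and the option keys shown follow the 0.5.x-era datasource write configs, so double-check them against the Hudi version you run.

// Hedged sketch: modeling records for a Hudi write via the Spark datasource.
// "uuid", "date" and "ts" are assumed columns of inputDF; option keys may differ
// slightly across Hudi versions.
inputDF.write
  .format("org.apache.hudi")
  .option("hoodie.table.name", "my_table")                        // dataset/table name (placeholder)
  .option("hoodie.datasource.write.operation", "upsert")          // insert/upsert/bulk_insert
  .option("hoodie.datasource.write.recordkey.field", "uuid")      // key field
  .option("hoodie.datasource.write.partitionpath.field", "date")  // partition field
  .option("hoodie.datasource.write.precombine.field", "ts")       // used to pick a winner among duplicates
  .mode("append")
  .save(basePath)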

When querying/reading data, Hudi just presents itself as a json-like hierarchical table that everyone is used to querying with Hive/Spark/Presto over Parquet/Json/Avro. 

Does Hudi support cloud storage/object stores?

Yes. Generally speaking, Hudi is able to provide its functionality on any Hadoop FileSystem implementation and thus can read and write datasets on cloud stores (Amazon S3, Microsoft Azure or Google Cloud Storage). Over time, Hudi has also incorporated specific design aspects that make building Hudi datasets on the cloud easy, such as consistency checks for S3 and zero moves/renames for data files.
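
For instance, here is a minimal hedged sketch of reading a dataset sitting on S3; the bucket and path are made up. On the write side there is also an eventual-consistency guard, typically toggled via a "hoodie.consistency.check.enabled"-style option, whose exact key you should verify for your Hudi version.

// Hedged sketch: reading a Hudi dataset from S3 works the same way as from HDFS;
// only the base path changes. Bucket/path below are placeholders, and the glob
// should match your partition depth.
val s3ROView = spark.read
  .format("org.apache.hudi")
  .load("s3a://my-bucket/hudi/my_table" + "/path/to/partitions/*")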

What versions of Hive/Spark/Hadoop are supported by Hudi? 

As of September 2019, Hudi supports Spark 2.1+, Hive 2.x and Hadoop 2.7+ (not Hadoop 3).

How does Hudi actually store data inside a dataset?

At a high level, Hudi is based on an MVCC design that writes data to versioned parquet/base files and to log files that contain changes to the base file. All files are stored under a partitioning scheme for the dataset, which closely resembles how Apache Hive tables are laid out on DFS. Please refer here for more details.
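
To make this concrete, here is a hedged sketch that simply lists the files under one partition of a dataset; basePath and the partition path are placeholders, and the file names in the comment are schematic, purely illustrative of the versioned base + log file layout.

// Hedged sketch: peeking at the files Hudi lays out under one partition of a dataset.
import org.apache.hadoop.fs.{FileSystem, Path}

val partitionPath = new Path(basePath + "/2019/09/01")            // placeholder partition
val fs = partitionPath.getFileSystem(spark.sparkContext.hadoopConfiguration)
fs.listStatus(partitionPath)
  .foreach(status => println(status.getPath.getName))
// Schematic output (illustrative, names vary by version):
//   <fileId>_<writeToken>_<commitTime>.parquet   <- versioned base file
//   .<fileId>_<baseCommitTime>.log.<version>     <- delta/log file (MOR datasets)
//   .hoodie_partition_metadata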

Using Hudi

What are some ways to write a Hudi dataset? 

Typically, you obtain a set of partial updates/inserts from your source and issue write operations against a Hudi dataset. If you are ingesting data from any of the standard sources like Kafka, or tailing DFS, the delta streamer tool is invaluable and provides an easy, self-managed solution for getting data written into Hudi. You can also write your own code to capture data from a custom source using the Spark datasource API and use the Hudi datasource to write into Hudi. 

How is a Hudi job deployed? 

The nice thing about Hudi writing is that it runs just like any other Spark job would on a YARN/Mesos or even a K8S cluster. So you can simply use the Spark UI to get visibility into write operations.

How can I now query the Hudi dataset I just wrote?

Unless Hive sync is enabled, the dataset written by Hudi using one of the methods above can simply be queried via the Spark datasource, like any other source. 

val hoodieROView = spark.read.format("org.apache.hudi").load(basePath + "/path/to/partitions/*")
val hoodieIncViewDF = spark.read.format("org.apache.hudi")
     .option(DataSourceReadOptions.VIEW_TYPE_OPT_KEY, DataSourceReadOptions.VIEW_TYPE_INCREMENTAL_OPT_VAL)
     .option(DataSourceReadOptions.BEGIN_INSTANTTIME_OPT_KEY, <beginInstantTime>)
     .load(basePath)

Limitations

Note that reading the realtime view natively via the Spark datasource is currently not supported. Please use the Hive path below.


If Hive Sync is enabled in the deltastreamer tool or the datasource, the dataset is available in Hive as a couple of tables that can now be read using HiveQL, Presto or SparkSQL. See here for more.
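
For example, here is a hedged sketch of querying such Hive-registered tables through SparkSQL. The table name hudi_trips is made up, and the exact names/suffixes registered for the read-optimized and realtime views depend on your Hudi version and sync settings, so check your metastore for the names Hive sync actually created.

// Hedged sketch: once Hive sync has registered the tables, they can be queried like
// any other Hive table. Table names below are placeholders.
spark.sql("select count(*) from hudi_trips").show()      // read optimized view
spark.sql("select count(*) from hudi_trips_rt").show()   // realtime view (MOR datasets)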

How does Hudi handle duplicate record keys in an input? 

When issuing an `upsert` operation on a dataset and the batch of records provided contains multiple entries for a given key, they are reduced to a single record using the preCombine/combine logic supplied at write time (see the data modeling question above). With the default payload, the record with the largest value in the preCombine field wins, giving latest-write-wins semantics.

Can I implement my own logic for how input records are merged with record on storage? 

<Answer WIP>


What are different ways of running compaction for a MOR dataset?

<Answer WIP>

How do I migrate my data to Hudi?

<Answer WIP>

How can I pass Hudi configurations to my Spark job?

<Answer WIP> https://lists.apache.org/thread.html/ce35776e8899d620c095354c23933eb000eec63eaa70acfe60ae0b0c@<dev.hudi.apache.org

How do I delete records in the dataset using Hudi?

<Answer WIP>

Can I register my Hudi dataset with Apache Hive metastore?

<Answer WIP>

What does the Hudi cleaner do? 

<Answer WIP>

How can I restore my dataset to a known good point in time?

<Answer WIP>

How can I partition data stored in a Hudi dataset?

<Answer WIP>

What's Hudi's schema evolution story?

<Answer WIP>

Performance 

What performance can I expect for Hudi writing?

<Answer WIP>

What ingest latency can I expect out of Hudi? 

<Answer WIP>

What performance can I expect for Hudi reading/queries? 

<Answer WIP>

How do I improve the Hudi writing performance?

<Answer WIP>

How do I avoid creating tons of small files?

HoodieWriteConfig exposes knobs that let you control file sizing and avoid this. 

DataSource Spark API users
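
A hedged sketch of the relevant sizing knobs passed as datasource options follows. The keys below follow the hoodie.parquet.* write configs and should be verified against the Hudi version you run; byte values, table/field names, inputDF and basePath are purely illustrative placeholders.

// Hedged sketch: controlling base file sizes on a copy-on-write write path.
// "hoodie.parquet.max.file.size" caps the size of base files, while files smaller than
// "hoodie.parquet.small.file.limit" are treated as small and have new inserts packed
// into them on subsequent writes.
inputDF.write
  .format("org.apache.hudi")
  .option("hoodie.table.name", "my_table")
  .option("hoodie.datasource.write.recordkey.field", "uuid")
  .option("hoodie.datasource.write.partitionpath.field", "date")
  .option("hoodie.datasource.write.precombine.field", "ts")
  .option("hoodie.parquet.max.file.size", (120 * 1024 * 1024).toString)    // target max base file size
  .option("hoodie.parquet.small.file.limit", (100 * 1024 * 1024).toString) // threshold for "small" files
  .mode("append")
  .save(basePath)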

HoodieDeltaStreamer users

HoodieWriteClient users

Contributing to FAQ 

A good and usable FAQ should be community-driven, crowd-sourcing questions and thoughts from everyone. 

You can improve the FAQ in the following ways: 

  • Comment on the text to point out inaccuracies and typos, and leave suggestions.
  • Propose new questions with answers under the comments section at the bottom of the page.
  • Lean towards making it very understandable and simple, and heavily link to parts of the documentation as needed.
  • A committer on the project will review new questions and incorporate them upon review.




