...

Parquet (http://parquet.io/) is an ecosystem-wide columnar format for Hadoop. Read Dremel made simple with Parquet for a good introduction to the format itself, while the Parquet project has an in-depth description of the format, including motivations and diagrams. At the time of this writing, Parquet supports the following engines and data description languages:

...

For the latest information on Parquet engine and data description support, please visit the Parquet-MR project's feature matrix.

Format

The Parquet project has an in-depth description of the format, including motivations and diagrams.

Tip: Parquet Motivation

We created Parquet to make the advantages of compressed, efficient columnar data representation available to any project in the Hadoop ecosystem.

Parquet is built from the ground up with complex nested data structures in mind, and uses the record shredding and assembly algorithm described in the Dremel paper. We believe this approach is superior to simple flattening of nested name spaces.

Parquet is built to support very efficient compression and encoding schemes. Multiple projects have demonstrated the performance impact of applying the right compression and encoding scheme to the data. Parquet allows compression schemes to be specified on a per-column level, and is future-proofed to allow adding more encodings as they are invented and implemented.

Parquet is built to be used by anyone. The Hadoop ecosystem is rich with data processing frameworks, and we are not interested in playing favorites. We believe that an efficient, well-implemented columnar storage substrate should be useful to all frameworks without the cost of extensive and difficult to set up dependencies.
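To make the record-shredding idea above concrete, here is a minimal pure-Python sketch (not Parquet's actual implementation) of the Dremel-style encoding for one nested path: an optional `links` group containing a repeated `forward` field, as in the example schema from the Dremel paper. The `shred_forward` helper is hypothetical and handles only this single path:

```python
def shred_forward(records):
    """Shred the optional 'links' / repeated 'forward' path into
    (value, repetition_level, definition_level) triples.

    Simplified Dremel encoding for one path:
      definition level = how many of the path's optional/repeated
                         fields are actually present (0, 1, or 2 here)
      repetition level = at which repeated ancestor the value repeats
                         (0 starts a new record, 1 continues 'forward')
    """
    out = []
    for rec in records:
        links = rec.get("links")
        if links is None:
            out.append((None, 0, 0))   # 'links' group is missing entirely
            continue
        forward = links.get("forward") or []
        if not forward:
            out.append((None, 0, 1))   # 'links' present, but no 'forward' values
            continue
        for i, value in enumerate(forward):
            # First value starts a new record (r=0); subsequent values
            # repeat at the 'forward' level (r=1). Both are fully defined (d=2).
            out.append((value, 0 if i == 0 else 1, 2))
    return out
```

Because each value carries its repetition and definition levels, the column can be stored flat yet reassembled into the original nested records, preserving missing groups and empty lists rather than flattening them away.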

...