
Hadoop-compatible Input/Output Format for Hive

This is a proposal for adding an API to Hive that allows reading from and writing to Hive using a Hadoop-compatible API. Specifically, the interfaces being implemented are Hadoop's InputFormat (for reading) and OutputFormat (for writing).

The classes will be named HiveApiInputFormat and HiveApiOutputFormat.

Hive Input

At a high level, to read from Hive using this API:

  1. Create a HiveInputDescription object
  2. Fill it with information about the table to read from
  3. Initialize HiveApiInputFormat with that information
  4. Go to town using HiveApiInputFormat with your Hadoop-compatible reading system (see the sketch after this list).
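
A minimal sketch of these four steps in Java. Only the class names HiveInputDescription and HiveApiInputFormat come from this proposal; the setter names (setDbName, setTableName, setPartitionFilter), the init helper, and the table details are illustrative assumptions, not confirmed API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class HiveReadExample {
  public static void main(String[] args) throws Exception {
    // 1. Create a HiveInputDescription object
    HiveInputDescription input = new HiveInputDescription();

    // 2. Fill it with information about the table to read from
    //    (setter names here are assumptions for illustration)
    input.setDbName("default");
    input.setTableName("my_table");
    input.setPartitionFilter("ds='2013-01-01'"); // read only matching partitions

    // 3. Initialize HiveApiInputFormat with that information
    //    (the init helper is an assumed name)
    Job job = Job.getInstance(new Configuration(), "hive-read");
    HiveApiInputFormat.init(job.getConfiguration(), input);

    // 4. Hand the input format to your Hadoop-compatible reading system
    job.setInputFormatClass(HiveApiInputFormat.class);
  }
}
```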

More detailed information:

  • The HiveInputDescription describes the database, table, and columns to select. It also has a partition-filter property that restricts reading to only the partitions matching the filter statement.
  • HiveApiInputFormat supports reading from multiple tables through the concept of profiles. Each profile stores its input description in a separate section, and HiveApiInputFormat has a member that tells it which profile to read from. When initializing the input data in HiveApiInputFormat, you can pair it with a profile; if no profile is selected, a default profile is used (see the sketch below).
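
A sketch of the profile mechanism described above. The method names (setProfileInputDesc, setMyProfileId) and the profile IDs are hypothetical; the proposal only states that each profile keeps its own input description and that a default profile applies when none is chosen:

```java
import org.apache.hadoop.conf.Configuration;

public class HiveProfilesExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Describe two different tables to read from.
    HiveInputDescription users = new HiveInputDescription();
    users.setDbName("default");
    users.setTableName("users");

    HiveInputDescription events = new HiveInputDescription();
    events.setDbName("default");
    events.setTableName("events");

    // Each profile stores its input description in a separate section
    // of the configuration (method and profile names are assumptions).
    HiveApiInputFormat.setProfileInputDesc(conf, users, "users_profile");
    HiveApiInputFormat.setProfileInputDesc(conf, events, "events_profile");

    // Tell an instance which profile to read from; if none is set,
    // the default profile is used.
    HiveApiInputFormat inputFormat = new HiveApiInputFormat();
    inputFormat.setMyProfileId("events_profile");
  }
}
```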

Hive Output

TODO
