Apache Hive

The Apache Hive™ data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Built on top of Apache Hadoop™, it provides:

  • Tools to enable easy access to data via SQL, thus enabling data warehousing tasks such as extract/transform/load (ETL), reporting, and data analysis
  • A mechanism to impose structure on a variety of data formats
  • Access to files stored either directly in Apache HDFS™ or in other data storage systems such as Apache HBase™
  • Query execution via Apache Tez™, Apache Spark™, or MapReduce

Hive provides standard SQL functionality, including many of the later SQL:2003 and SQL:2011 features for analytics. Hive's SQL can also be extended with user code via user-defined functions (UDFs), user-defined aggregates (UDAFs), and user-defined table functions (UDTFs).
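As a sketch of what this looks like in practice, the fragment below uses a standard SQL window function and then calls a user-defined function. The table `page_views`, the function name `my_lower`, and the class `com.example.hive.udf.MyLower` are all hypothetical names used only for illustration.

```sql
-- Hypothetical table for illustration.
CREATE TABLE page_views (user_id BIGINT, url STRING, view_time TIMESTAMP);

-- Standard SQL analytics: rank each user's page views by recency
-- using a SQL:2003-style window function.
SELECT user_id, url,
       ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY view_time DESC) AS rn
FROM page_views;

-- A UDF, once registered from a user-supplied Java class (hypothetical
-- class name below), is invoked like any built-in function:
CREATE FUNCTION my_lower AS 'com.example.hive.udf.MyLower';
SELECT my_lower(url) FROM page_views;
```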

There is not a single "Hive format" in which data must be stored. Hive comes with built-in connectors for CSV text files, Apache Parquet™, Apache ORC™, and other formats, and works equally well on Thrift, control-delimited, or specialized data formats. Users can extend Hive with connectors for other formats. Please see File Formats and Hive SerDe in the Developer Guide for details.
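For example, the storage format is chosen per table at creation time via the `STORED AS` clause. The table names below are hypothetical; the format keywords are standard Hive DDL.

```sql
-- Plain delimited text, e.g. CSV-style files.
CREATE TABLE logs_text (line STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  STORED AS TEXTFILE;

-- Columnar Apache ORC.
CREATE TABLE logs_orc (ts TIMESTAMP, msg STRING)
  STORED AS ORC;

-- Columnar Apache Parquet.
CREATE TABLE logs_parquet (ts TIMESTAMP, msg STRING)
  STORED AS PARQUET;
```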

Hive is not designed for online transaction processing (OLTP) workloads and does not offer real-time queries or row-level updates. It is best used for traditional data warehousing tasks: batch jobs over large sets of append-only data (such as web logs). Hive is designed to maximize scalability (scale out with more machines added dynamically to the Hadoop cluster), performance, extensibility (via the MapReduce framework and UDFs/UDAFs/UDTFs), fault tolerance, and loose coupling with its input formats.
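A typical append-only batch pattern is to land each new batch of data in its own partition, so existing rows are never modified. The tables `web_logs` and `staging_logs` below are hypothetical names for illustration.

```sql
-- Date-partitioned table for append-only web logs.
CREATE TABLE web_logs (ip STRING, url STRING)
  PARTITIONED BY (log_date STRING)
  STORED AS ORC;

-- Each batch job appends a new partition; prior partitions are untouched.
INSERT INTO TABLE web_logs PARTITION (log_date = '2024-01-01')
SELECT ip, url FROM staging_logs;
```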

...

User Documentation

...