Installing, configuring and running Hive
You can install a stable release of Hive by downloading and unpacking a tarball, or you can download the source code and build Hive using Maven (release 3.6.3 and later). Hive installation has these requirements:
Prerequisites
- Java 8
- Maven 3.6.3
- Protobuf 2.5
- Hadoop 3.3.6 (as preparation, configure it as a single-node cluster in pseudo-distributed mode)
- Tez. The default execution engine is MapReduce, but we will change it to Tez.
- Hive typically runs on Linux in production, while Mac is a common development environment. The instructions in this document apply to both Linux and Mac.
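Once the prerequisites are installed, a quick way to confirm they are all on your PATH is a short shell loop (a minimal sketch; it only checks that each tool resolves, not that the versions match the ones listed above):

```shell
# Report whether each required tool is on the PATH.
# Version numbers still need to be checked manually
# (java -version, mvn -version, protoc --version, hadoop version).
for tool in java mvn protoc hadoop; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")"
  else
    echo "$tool: NOT FOUND"
  fi
done
```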
Install the prerequisites
Java 8
Building Hive requires JDK 8 to be installed. Some notes in case you have an ARM chipset (Apple M1 or later): you will have to build Protobuf 2.5 later, and it does not compile with an ARM JDK. We will therefore install an Intel (x86_64) JDK with brew and configure Maven to use it, which lets us compile Protobuf.
JDK install on Apple ARM:
```shell
brew install homebrew/cask-versions/adoptopenjdk8 --cask
brew untap adoptopenjdk/openjdk
```
Maven:
Install Maven and configure JAVA_HOME properly.
Note for ARM: after a proper configuration, you should see something like this:
```shell
mvn -version
Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
Maven home: /Users/yourusername/programs/apache-maven-3.6.3
Java version: 1.8.0_292, vendor: AdoptOpenJDK, runtime: /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre
Default locale: en_HU, platform encoding: UTF-8
OS name: "mac os x", version: "10.16", arch: "x86_64", family: "mac"
```
As you can see, even on an ARM processor, Maven reports an Intel-based (x86_64) architecture.
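On an ARM Mac, JAVA_HOME can be pointed at the Intel JDK with macOS's built-in java_home helper (a sketch; the `-v 1.8` filter assumes the x86_64 JDK 8 installed above is the only JDK 8 on the machine):

```shell
# List all installed JDKs with their architectures:
/usr/libexec/java_home -V
# Select JDK 8 and export it so Maven picks it up:
export JAVA_HOME="$(/usr/libexec/java_home -v 1.8)"
```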
Protobuf
You have to download, compile, and install Protobuf. Protobuf 2.5.0 is not ready for ARM; on this chipset you will need some extra steps.
```shell
wget https://github.com/google/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.bz2
tar -xvf protobuf-2.5.0.tar.bz2
cd protobuf-2.5.0
./configure
```
On ARM, edit src/google/protobuf/stubs/platform_macros.h and add an ARM case to the processor-architecture-detection section, after the last #elif branch:
```c
#elif defined(__arm64__)
#define GOOGLE_PROTOBUF_ARCH_ARM 1
#define GOOGLE_PROTOBUF_ARCH_64_BIT 1
```
Now, you can compile and install protobuf:
```shell
make
make check
sudo make install
```
You can validate your install:
```shell
protoc --version
```
Hadoop
First, follow the official documentation for a single-node, pseudo-distributed configuration: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html#Pseudo-Distributed_Operation.
After that, set up HADOOP_HOME:
```shell
export HADOOP_HOME=/yourpathtohadoop/hadoop-3.3.6
```
Tez
Tez requires some additional steps. Hadoop uses a Tez tarball, but expects a different compressed directory structure than the one it is released in, so we will extract the tarball and compress it again. We will also put the extracted jars into HDFS, and then set the necessary environment variables.
Download Tez, extract it, and re-compress the tarball:
```shell
wget https://dlcdn.apache.org/tez/0.10.2/apache-tez-0.10.2-bin.tar.gz
tar -xzvf apache-tez-0.10.2-bin.tar.gz
cd apache-tez-0.10.2-bin
tar zcvf ../apache-tez-0.10.2-bin.tar.gz * && cd ..
```
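The re-compression matters because the release tarball nests everything under a top-level apache-tez-0.10.2-bin/ directory, while the repacked archive has lib/ and the jars at its root. A quick demonstration of the difference with a dummy tree (the /tmp paths and jar name are illustrative):

```shell
# Build a dummy Tez-like tree.
rm -rf /tmp/tez-pack-demo
mkdir -p /tmp/tez-pack-demo/apache-tez-0.10.2-bin/lib
touch /tmp/tez-pack-demo/apache-tez-0.10.2-bin/lib/tez-api.jar
cd /tmp/tez-pack-demo

# Packing the directory itself nests every entry one level down:
tar czf nested.tar.gz apache-tez-0.10.2-bin
# Packing from inside the directory puts lib/ at the archive root:
(cd apache-tez-0.10.2-bin && tar czf ../flat.tar.gz *)

tar tzf nested.tar.gz   # entries start with apache-tez-0.10.2-bin/
tar tzf flat.tar.gz     # entries start with lib/
```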
Replace the hadoop-hdfs-client jar with the 3.3.6 version (Tez 0.10.2 ships with 3.3.1):
```shell
rm apache-tez-0.10.2-bin/lib/hadoop-hdfs-client-3.3.1.jar
cp $HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-client-3.3.6.jar apache-tez-0.10.2-bin/lib
```
Add the necessary tez files to hdfs
```shell
$HADOOP_HOME/sbin/start-dfs.sh                                          # start hdfs
$HADOOP_HOME/bin/hadoop fs -mkdir -p /apps/tez
$HADOOP_HOME/bin/hadoop fs -put apache-tez-0.10.2-bin.tar.gz /apps/tez  # copy the tarball
$HADOOP_HOME/bin/hadoop fs -put apache-tez-0.10.2-bin /apps/tez         # copy the whole folder
$HADOOP_HOME/bin/hadoop fs -ls /apps/tez                                # verify
$HADOOP_HOME/sbin/stop-all.sh                                           # stop hdfs
```
Set up TEZ_HOME and HADOOP_CLASSPATH environment variables
```shell
export TEZ_HOME=/yourpathtotez/apache-tez-0.10.2-bin
export HADOOP_CLASSPATH=$TEZ_HOME/*:$TEZ_HOME/conf
```
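The exports above only last for the current shell session. To make them permanent, they can be appended to your shell profile (a sketch; adjust the paths, and use ~/.bashrc or another profile file if you do not use zsh):

```shell
cat >> ~/.zshrc <<'EOF'
export HADOOP_HOME=/yourpathtohadoop/hadoop-3.3.6
export TEZ_HOME=/yourpathtotez/apache-tez-0.10.2-bin
export HADOOP_CLASSPATH=$TEZ_HOME/*:$TEZ_HOME/conf
EOF
```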
Create a new config file for Tez: $TEZ_HOME/conf/tez-site.xml
```xml
<configuration>
  <property>
    <name>tez.lib.uris</name>
    <value>hdfs://localhost:9000/apps/tez/apache-tez-0.10.2-bin.tar.gz,hdfs://localhost:9000/apps/tez/apache-tez-0.10.2-bin/lib,hdfs://localhost:9000/apps/tez/apache-tez-0.10.2-bin</value>
  </property>
</configuration>
```
Extra Hadoop configuration to make everything work
Modify $HADOOP_HOME/etc/hadoop/core-site.xml
```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.proxyuser.yourusername.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.yourusername.hosts</name>
    <value>*</value>
  </property>
</configuration>
```
Modify $HADOOP_HOME/etc/hadoop/hadoop-env.sh
```shell
# JAVA_HOME
export JAVA_HOME=/yourpathtojavahome/javahome
# tez
export TEZ_CONF_DIR=/yourpathtotezconf/conf
export TEZ_JARS=/yourpathtotez/apache-tez-0.10.2-bin
export HADOOP_CLASSPATH=${TEZ_CONF_DIR}:${TEZ_JARS}/*:${TEZ_JARS}/lib/*:${HADOOP_CLASSPATH}:${JAVA_JDBC_LIBS}:${MAPREDUCE_LIBS}
```
Modify $HADOOP_HOME/etc/hadoop/mapred-site.xml
```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_CLASSPATH:$HADOOP_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
  </property>
</configuration>
```
Modify $HADOOP_HOME/etc/hadoop/yarn-site.xml
```xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ,HADOOP_MAPRED_HOME</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
  </property>
</configuration>
```
Installing Hive from a Tarball
Start by downloading the most recent stable release of Hive from one of the Apache download mirrors (see Hive Releases).
Next you need to unpack the tarball. This will result in the creation of a subdirectory named apache-hive-x.y.z-bin (where x.y.z is the release number):
```shell
$ wget https://dlcdn.apache.org/hive/hive-4.0.0-beta-1/apache-hive-4.0.0-beta-1-bin.tar.gz
$ tar -xzvf apache-hive-4.0.0-beta-1-bin.tar.gz
```
Set the environment variable HIVE_HOME to point to the installation directory:
```shell
$ cd apache-hive-4.0.0-beta-1-bin
$ export HIVE_HOME=$(pwd)
```
Finally, add $HIVE_HOME/bin to your PATH:
```shell
$ export PATH=$HIVE_HOME/bin:$PATH
```
Create a directory for external tables:
```shell
mkdir /yourpathtoexternaltables/warehouse
```
Create a new config file for Hive: $HIVE_HOME/conf/hive-site.xml
```xml
<configuration>
  <property>
    <name>hive.tez.container.size</name>
    <value>1024</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.external.dir</name>
    <value>/yourpathtowarehousedirectory/warehouse</value>
  </property>
  <property>
    <name>hive.execution.engine</name>
    <value>tez</value>
  </property>
  <property>
    <name>tez.lib.uris</name>
    <value>hdfs://localhost:9000/apps/tez/apache-tez-0.10.2-bin.tar.gz,hdfs://localhost:9000/apps/tez/apache-tez-0.10.2-bin/lib,hdfs://localhost:9000/apps/tez/apache-tez-0.10.2-bin</value>
  </property>
  <property>
    <name>tez.configuration</name>
    <value>/yourpathtotez/apache-tez-0.10.2-bin/conf/tez-site.xml</value>
  </property>
  <property>
    <name>tez.use.cluster.hadoop-libs</name>
    <value>true</value>
  </property>
</configuration>
```
Initialize the metastore schema. This creates a directory called metastore_db, which contains an embedded Derby database for the metastore.
```shell
$HIVE_HOME/bin/schematool -dbType derby -initSchema --verbose
```
Run HiveServer2
```shell
$HIVE_HOME/bin/hiveserver2
```
Run beeline:
```shell
$HIVE_HOME/bin/beeline -u 'jdbc:hive2://localhost:10000/' -n yourusername
```
As a test, create a table and insert some values:
```sql
create table test (message string);
insert into test values ('Hello, from Hive!');
```
Installing from Source Code
Configuration is the same as when installing from a tarball. The only difference is that we have to build Hive ourselves, and the compiled binaries end up in a different directory.
Hive is available via Git at https://github.com/apache/hive. You can download it by running the following command.
```shell
$ git clone git@github.com:apache/hive.git
```
If you want a specific release branch, such as branch-4.0, you can run this command:
```shell
$ git clone -b branch-4.0 --single-branch git@github.com:apache/hive.git
```
...
It will create the subdirectory packaging/target/apache-hive-<release_string>-bin/apache-hive-<release_string>-bin/ with the following contents (example: packaging/target/apache-hive-4.0.0-beta-2-SNAPSHOT-bin/apache-hive-4.0.0-beta-2-SNAPSHOT-bin). That will be your HIVE_HOME directory. Its contents are:
- bin/: directory containing all the shell scripts
- lib/: directory containing all required jar files
- conf/: directory with configuration files
- examples/: directory with sample input and query files
That directory should contain all the files necessary to run Hive. You can run it from there or copy it to a different location, if you prefer.
In order to run Hive, you must have Hadoop in your path or have defined the environment variable HADOOP_HOME with the Hadoop installation directory.
Moreover, we strongly advise users to create the HDFS directories /tmp and /user/hive/warehouse (also known as hive.metastore.warehouse.dir) and set them chmod g+w before tables are created in Hive. From now on, you can follow the steps described in the section Installing Hive from a Tarball.
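The advised directory setup can be sketched as follows (assuming HDFS is running and HADOOP_HOME is set as earlier in this document):

```shell
# Create the warehouse directories in HDFS and make them group-writable.
$HADOOP_HOME/bin/hadoop fs -mkdir -p /tmp
$HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
```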
Next Steps
You can begin using Hive as soon as it is installed, although you will probably want to configure it first.
...
it should work on your computer. There is some extra information in the following sections.
Beeline CLI
HiveServer2 has a CLI called Beeline (see Beeline – New Command Line Shell). To use Beeline, execute the following command in the Hive home directory:
...
Hive Metastore
Hive metadata is stored in a relational database; by default (and in our example) this is an embedded Derby database. Its disk storage location is determined by the Hive configuration variable javax.jdo.option.ConnectionURL and defaults to ./metastore_db (see conf/hive-default.xml). You can change it by modifying that configuration variable.
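For example, to move the metastore database out of the working directory, the connection URL can be overridden in hive-site.xml (a sketch; the path is a placeholder):

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/yourpathtometastore/metastore_db;create=true</value>
</property>
```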
Using Derby in embedded mode allows at most one user at a time. To configure Derby to run in server mode, see Hive Using Derby in Server Mode.
...
Next Step: Configuring Hive.
HCatalog and WebHCat
HCatalog
...
...
HCatalog is installed with Hive, starting with Hive release 0.11.0.
If you install Hive from the binary tarball, the `hcat` command is available in the `hcatalog/bin` directory. However, most `hcat` commands can be issued as `hive` commands, except for `hcat -g` and `hcat -p`. Note that the `hcat` command uses the `-p` flag for permissions, but `hive` uses it to specify a port number. The HCatalog CLI is documented here and the Hive CLI is documented here.
HCatalog installation is documented here.
WebHCat (Templeton)
...
...
If you install Hive from the binary tarball, the WebHCat server command `webhcat_server.sh` is in the `hcatalog/sbin` directory.
WebHCat installation is documented here.