Hive on Spark: Getting Started
Hive on Spark is currently under development in a Hive branch. See HIVE-7292 and its subtasks and linked issues.
Spark Installation
Follow the instructions to install Spark: https://spark.apache.org/docs/latest/spark-standalone.html. In particular:
- Install Spark (either download pre-built Spark, or build assembly from source).
- Install/build a compatible version. The <spark.version> property in Hive's root pom.xml defines the version of Spark that Hive was built and tested with.
- Install/build a compatible distribution. Each version of Spark has several distributions, corresponding to different versions of Hadoop.
- Once Spark is installed, find and keep note of the <spark-assembly-*.jar> location.
- Note that you must use a version of Spark that does not include the Hive jars, i.e., one that was not built with the Hive profile (see the example build command after this list).
- Start Spark cluster (both standalone and Spark on YARN are supported).
- Keep note of the <Spark Master URL>. This can be found in the Spark master WebUI.
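For example, a minimal sketch of building a Hive-free Spark assembly from source with Maven; the Hadoop profile and version shown are assumptions and should match your cluster, and -Phive is deliberately omitted:

# Build Spark without the Hive profile (profile/version values are illustrative)
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package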
Configuring Hive
- As Hive on Spark is still in development, currently only a Hive assembly built from the Hive/Spark development branch supports Spark execution. The development branch is located at https://github.com/apache/hive/tree/spark. Check out the branch and build the Hive assembly as described in https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ (a sketch of the checkout and build follows this list).
- If you download Spark, make sure you use a 1.2.x assembly: http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/spark-assembly-1.2.0-SNAPSHOT-hadoop2.3.0-cdh5.1.2.jar
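For example, a minimal sketch of checking out and building the development branch; the Maven invocation is an assumption, so follow the Hive Developer FAQ linked above for the authoritative build steps:

# Clone Hive, switch to the Spark development branch, and build (Maven flags are assumptions)
git clone https://github.com/apache/hive.git
cd hive
git checkout spark
mvn clean install -DskipTests -Phadoop-2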
There are several ways to add the Spark dependency to Hive:
- Set the property 'spark.home' to point to the Spark installation:
hive> set spark.home=/location/to/sparkHome;
- Set the spark-assembly jar on the Hive auxpath:
hive --auxpath /location/to/spark-assembly-*.jar
- Add the spark-assembly jar for the current user session:
hive> add jar /location/to/spark-assembly-*.jar;
- Link the spark-assembly jar into HIVE_HOME/lib.
Configure Hive's execution engine to use Spark:
hive> set hive.execution.engine=spark;
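To verify the switch, run a simple query against any existing table (the table name below is a placeholder); it should launch a Spark job, visible in the Spark master WebUI, rather than a MapReduce job:

hive> select count(*) from <your_table>;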
Configure Spark application properties for Hive. See: http://spark.apache.org/docs/latest/configuration.html. This can be done either by adding a file "spark-defaults.conf" with these properties to the Hive classpath, or by setting them in the Hive configuration:
hive> set spark.master=<Spark Master URL>;
hive> set spark.eventLog.enabled=true;
hive> set spark.executor.memory=512m;
hive> set spark.serializer=org.apache.spark.serializer.KryoSerializer;
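Equivalently, a minimal sketch of a spark-defaults.conf on the Hive classpath, using the same illustrative values as above:

spark.master              <Spark Master URL>
spark.eventLog.enabled    true
spark.executor.memory     512m
spark.serializer          org.apache.spark.serializer.KryoSerializer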
Common Issues
Issue | Cause | Resolution |
---|---|---|
Error: Could not find or load main class org.apache.spark.deploy.SparkSubmit | Spark dependency not correctly set. | Add Spark dependency to Hive, see Step 3 above. |
org.apache.spark.SparkException: Job aborted due to stage failure: Task 5.0:0 had a not serializable result: java.io.NotSerializableException: org.apache.hadoop.io.BytesWritable | Spark serializer not set to Kryo. | Set spark.serializer to be org.apache.spark.serializer.KryoSerializer, see Step 5 above. |
[ERROR] Terminal initialization failed; falling back to unsupported | Hive has upgraded to Jline2 but jline 0.94 exists in the Hadoop lib. | Export HADOOP_USER_CLASSPATH_FIRST=true so that Hive's Jline2 takes precedence over the old jline from the Hadoop lib. |
java.lang.SecurityException: class "javax.servlet.DispatcherType"'s | Two versions of the servlet-api are in the classpath. | Remove the duplicate servlet-api jar so that only one version remains on the classpath. |
Spark executors get killed repeatedly and Spark keeps retrying the failed stage; you may find similar information in the YARN nodemanager log: WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Container [pid=217989,containerID=container_1421717252700_0716_01_50767235] is running beyond physical memory limits. Current usage: 43.1 GB of 43 GB physical memory used; 43.9 GB of 90.3 GB virtual memory used. Killing container. | For Spark on YARN, the nodemanager kills a Spark executor if it uses more memory than the configured size of "spark.executor.memory" plus "spark.yarn.executor.memoryOverhead". | Increase "spark.yarn.executor.memoryOverhead" to make sure it covers the executor's off-heap memory usage (see the example after this table). |
Running a query fails with: FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. The Hive logs show: java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy | Happens on Mac (not officially supported). This is a general Snappy issue on Mac and is not unique to Hive on Spark, but the workaround is noted here because it is needed to start the Spark client. | Run this command before starting Hive or HiveServer2: export HADOOP_OPTS="-Dorg.xerial.snappy.tempdir=/tmp -Dorg.xerial.snappy.lib.name=libsnappyjava.jnilib $HADOOP_OPTS" |
Stack trace: ExitCodeException exitCode=1: .../launch_container.sh: line 27: $PWD:$PWD/__spark__.jar:$HADOOP_CONF_DIR.../usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure:$PWD/__app__.jar:$PWD/*: bad substitution | The key mapreduce.application.classpath in /etc/hadoop/conf/mapred-site.xml contains a variable which is invalid in bash. | Remove :/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar from the mapreduce.application.classpath value in /etc/hadoop/conf/mapred-site.xml. |
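For the executor memory issue above, a minimal sketch of the fix (the value is illustrative and in MB for Spark 1.x; size it to your workload):

hive> set spark.yarn.executor.memoryOverhead=1024;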
Recommended Configuration
# see HIVE-9153
mapreduce.input.fileinputformat.split.maxsize=750000000
hive.vectorized.execution.enabled=true
hive.cbo.enable=true
hive.optimize.reducededuplication.min.reducer=4
hive.optimize.reducededuplication=true
hive.orc.splits.include.file.footer=false
hive.merge.mapfiles=true
hive.merge.mapredfiles=false
hive.merge.smallfiles.avgsize=16000000
hive.merge.size.per.task=256000000
hive.merge.orcfile.stripe.level=true
hive.auto.convert.join=true
hive.auto.convert.join.noconditionaltask=true
hive.auto.convert.join.noconditionaltask.size=894435328
hive.optimize.bucketmapjoin.sortedmerge=false
hive.map.aggr.hash.percentmemory=0.5
hive.map.aggr=true
hive.optimize.sort.dynamic.partition=false
hive.stats.autogather=true
hive.stats.fetch.column.stats=true
hive.vectorized.execution.reduce.enabled=false
hive.vectorized.groupby.checkinterval=4096
hive.vectorized.groupby.flush.percent=0.1
hive.compute.query.using.stats=true
hive.limit.pushdown.memory.usage=0.4
hive.optimize.index.filter=true
hive.exec.reducers.bytes.per.reducer=67108864
hive.smbjoin.cache.rows=10000
hive.exec.orc.default.stripe.size=67108864
hive.fetch.task.conversion=more
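These properties can be set per session with set commands, or persisted in hive-site.xml. A minimal sketch of one entry in hive-site.xml form (the property shown is taken from the list above):

<property>
  <name>hive.vectorized.execution.enabled</name>
  <value>true</value>
</property>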