...
YARN Mode: http://spark.apache.org/docs/latest/running-on-yarn.html
Standalone Mode: https://spark.apache.org/docs/latest/spark-standalone.html
By default, Hive on Spark uses Spark on YARN mode. To install, perform the following tasks:
...
    yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
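This property belongs in yarn-site.xml on the ResourceManager. A minimal sketch of the corresponding entry, using standard Hadoop configuration XML:

    <property>
      <name>yarn.resourcemanager.scheduler.class</name>
      <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>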
Configuring Hive
There are several ways to add the Spark dependency to Hive:
- Set the property 'spark.home' to point to the Spark installation:

    set spark.home=/location/to/sparkHome;
- Define the SPARK_HOME environment variable before starting Hive CLI/HiveServer2:

    export SPARK_HOME=/usr/lib/spark
- Prior to Hive 2.2.0, link the spark-assembly jar to HIVE_HOME/lib.
- Since Hive 2.2.0, Hive on Spark runs with Spark 2.0.0 and above, which doesn't have an assembly jar.
- To run with YARN mode (either yarn-client or yarn-cluster), link the following jars to HIVE_HOME/lib (see the linking sketch after this list):
- scala-library
- spark-core
- spark-network-common
- To run with LOCAL mode (for debugging only), link the following jars in addition to those above to HIVE_HOME/lib:
- chill-java chill jackson-module-paranamer jackson-module-scala jersey-container-servlet-core
- jersey-server json4s-ast kryo-shaded minlog scala-xml spark-launcher
- spark-network-shuffle spark-unsafe xbean-asm5-shaded
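One way to create these links is to symlink the jars from the Spark distribution into Hive's lib directory. A minimal sketch for YARN mode, assuming Spark is installed under /usr/lib/spark and Hive under /usr/lib/hive (both paths, and the jar name patterns, are illustrative):

    # Symlink the Spark jars Hive needs for YARN mode into HIVE_HOME/lib.
    # Adjust paths and version patterns to match your installation.
    export SPARK_HOME=/usr/lib/spark
    export HIVE_HOME=/usr/lib/hive
    ln -s $SPARK_HOME/jars/scala-library-*.jar $HIVE_HOME/lib/
    ln -s $SPARK_HOME/jars/spark-core_*.jar $HIVE_HOME/lib/
    ln -s $SPARK_HOME/jars/spark-network-common_*.jar $HIVE_HOME/lib/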
Configure the Hive execution engine to use Spark:

    set hive.execution.engine=spark;
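Once the engine is set, subsequent queries in the session run as Spark jobs. A quick illustration (the table name is hypothetical):

    set hive.execution.engine=spark;
    -- This query now executes as a Spark job rather than MapReduce.
    select count(*) from sales;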
See the Spark section of Hive Configuration Properties for other properties for configuring Hive and the Remote Spark Driver.
Configure Spark-application configs for Hive. See: http://spark.apache.org/docs/latest/configuration.html. This can be done either by adding a file "spark-defaults.conf" with these properties to the Hive classpath, or by setting them in the Hive configuration (hive-site.xml). For instance:

    set spark.master=<Spark Master URL>;
    set spark.eventLog.enabled=true;
    set spark.eventLog.dir=<Spark event log folder (must exist)>;
    set spark.executor.memory=512m;
    set spark.serializer=org.apache.spark.serializer.KryoSerializer;
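The spark-defaults.conf alternative uses Spark's plain properties format. A minimal sketch; every value below is a placeholder to adapt to your cluster:

    # spark-defaults.conf -- all values are illustrative placeholders
    spark.master              yarn
    spark.eventLog.enabled    true
    spark.eventLog.dir        hdfs:///spark-logs
    spark.executor.memory     512m
    spark.serializer          org.apache.spark.serializer.KryoSerializer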
Configuration property details
- spark.executor.memory: Amount of memory to use per executor process.
- spark.executor.cores: Number of cores per executor.
- spark.yarn.executor.memoryOverhead: The amount of off-heap memory (in megabytes) to be allocated per executor, when running Spark on YARN. This is memory that accounts for things like VM overheads, interned strings, and other native overheads. In addition to the executor's memory, the container in which the executor is launched needs some extra memory for system processes, and this overhead provides it.
- spark.executor.instances: The number of executors assigned to each application.
- spark.driver.memory: The amount of memory assigned to the Remote Spark Context (RSC). We recommend 4GB.
- spark.yarn.driver.memoryOverhead: We recommend 400 (MB).
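To see how the overhead settings combine: YARN sizes each executor container as spark.executor.memory plus spark.yarn.executor.memoryOverhead, so the sum must fit within the scheduler's maximum allocation. An illustration with placeholder values:

    -- Illustrative values only; tune for your cluster.
    set spark.executor.memory=4g;
    set spark.yarn.executor.memoryOverhead=400;
    -- Each executor container then requests roughly 4g + 400m,
    -- which must not exceed yarn.scheduler.maximum-allocation-mb.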
...