1. Install the Hive-on-Spark client
1) Download the pre-built Spark package: sparkengine-2.3.4.tgz.
2) Extract the Spark client into /usr/hdp/3.1.0.0-78/hive as a directory named sparkengine-2.3.4 (this path is referenced below). It only needs to be deployed on the HiveServer2 node.
3) Configure conf/spark-defaults.conf and conf/spark-env.sh.
Add to conf/spark-env.sh:
export HADOOP_CONF_DIR=/usr/hdp/3.1.0.0-78/hadoop/conf
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
Add to conf/spark-defaults.conf:
spark.driver.extraJavaOptions -Dhdp.version=3.1.0.0-78
spark.yarn.am.extraJavaOptions -Dhdp.version=3.1.0.0-78
Create a conf/java-opts file:
echo "-Dhdp.version=3.1.0.0-78" >conf/java-opts
2. Configure the YARN resource scheduler
yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
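When switching to the FairScheduler, YARN reads queue definitions from the allocation file pointed to by yarn.scheduler.fair.allocation.file. A minimal sketch with a single default queue (the queue name and settings are illustrative, not from the source):
<?xml version="1.0"?>
<allocations>
  <queue name="default">
    <weight>1.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
  </queue>
</allocations>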
3. Configure Hive (only needed on the HiveServer2 node)
1) Add the Spark 2 dependency jars to /usr/hdp/3.1.0.0-78/hive/lib (run from /usr/hdp/3.1.0.0-78/hive):
sudo cp sparkengine-2.3.4/jars/scala-library*.jar lib/
sudo cp sparkengine-2.3.4/jars/spark-core*.jar lib/
sudo cp sparkengine-2.3.4/jars/spark-network-common*.jar lib/
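An optional quick check that the three jars actually landed in Hive's lib directory:
ls /usr/hdp/3.1.0.0-78/hive/lib | grep -E 'scala-library|spark-core|spark-network-common'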
2) Modify the Hive configuration
(1) In Advanced hive-env (Ambari), set SPARK_HOME:
export SPARK_HOME=${HIVE_HOME}/sparkengine-2.3.4
If SPARK_HOME is not set, Hive falls back to the HDP default SparkSubmit command to submit jobs, as this HiveServer2 log line shows:
## INFO [HiveServer2-Background-Pool: Thread-4928]: client.SparkClientImpl (...) - No spark.home provided, calling SparkSubmit directly.
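After restarting HiveServer2 you can grep its log for that message to confirm the setting took effect; the log path below is the usual HDP location and is an assumption:
grep "No spark.home provided" /var/log/hive/hiveserver2.log
## No new hits for fresh queries means the external sparkengine client is being used.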
(2) In Custom hive-site (Ambari), add the following custom properties:
## hive.execution.engine=spark ## Set this property if Spark should be the default execution engine.
spark.master=yarn
spark.driver.memory=4g
spark.executor.cores=2
spark.executor.memory=2g
spark.executor.instances=2
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs://d***:8020/hive-spark
spark.network.timeout=300
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.driver.extraJavaOptions=-Dhdp.version=3.1.0.0-78 ## Without this, Spark executors cannot start.
spark.yarn.am.extraJavaOptions=-Dhdp.version=3.1.0.0-78
spark.yarn.jars=hdfs://demo2:8020/spark2-jars/*
spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:+PrintFlagsFinal -XX:+PrintReferenceGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintAdaptiveSizePolicy -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark -XX:InitiatingHeapOccupancyPercent=35
spark.dynamicAllocation.enabled=false ## dynamic resource allocation
spark.dynamicAllocation.initialExecutors=2
spark.shuffle.service.enabled=false ### If set to true, the Spark shuffle service must also be configured in YARN (see the sketch after this list).
spark.driver.memoryOverhead=400
spark.executor.memoryOverhead=400
set hive.merge.sparkfiles=true; -- merge small output files
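For reference, if spark.shuffle.service.enabled is later turned on, the Spark external shuffle service must run inside every NodeManager. A minimal sketch of the yarn-site additions (standard Spark-on-YARN setup, not taken from the source; the jar location is an assumption):
yarn.nodemanager.aux-services=mapreduce_shuffle,spark_shuffle
yarn.nodemanager.aux-services.spark_shuffle.class=org.apache.spark.network.yarn.YarnShuffleService
## Also copy the spark-*-yarn-shuffle.jar from the sparkengine package onto each NodeManager's classpath
## (e.g. /usr/hdp/3.1.0.0-78/hadoop-yarn/lib/) and restart the NodeManagers.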
3) Upload the Spark jars themselves to HDFS:
hdfs dfs -mkdir /hive-spark ##spark.eventLog.dir
hdfs dfs -mkdir /spark2-jars ##spark.yarn.jars
hdfs dfs -put /usr/hdp/3.1.0.0-78/hive/sparkengine-2.3.4/jars/* /spark2-jars/
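An optional check that the upload succeeded and is readable:
hdfs dfs -ls /spark2-jars | head
hdfs dfs -count /spark2-jars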
4) Restart Hive (HiveServer2) for the changes to take effect.
4. Test whether Hive on Spark is working
hive --hiveconf hive.execution.engine=spark ## set the engine when starting the CLI
set hive.execution.engine=spark; ## or switch it inside an already-running session
Log in to Hive with beeline:
beeline -u "jdbc:hive2://d***2:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" --hiveconf hive.execution.engine=spark -n @@@ -p ***@123
Run a slightly more complex SQL query as a test:
select oper_type,count(distinct user_id),count(distinct item_id) from oper_test group by oper_type;
Check whether a Spark application is launched in YARN and whether the SQL executed in beeline returns results.
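If the oper_test table does not exist yet, a minimal sketch to create it and load a few rows for the test query (the column layout is inferred from the query, not given in the source):
CREATE TABLE IF NOT EXISTS oper_test (user_id STRING, item_id STRING, oper_type STRING);
INSERT INTO oper_test VALUES ('u1','i1','click'), ('u1','i2','click'), ('u2','i1','buy');
-- With the Spark engine active, the SELECT above should appear as a Hive on Spark application in the YARN UI.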