Notes on a Hive error: an INSERT statement on the Spark engine fails because YARN was not started

【Background】

I had just configured the Spark engine in Hive, and the Hive on Spark test failed with an error.

The error output is as follows:

[atguigu@hadoop102 conf]$ hive
which: no hbase in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/module/jdk1.8.0_212/bin:/opt/module/hadoop-3.3.4/bin:/opt/module/hadoop-3.3.4/sbin:/opt/module/hive-3.1.3/bin:/opt/module/kafka/bin:/opt/module/efak/bin:/home/atguigu/.local/bin:/home/atguigu/bin:/opt/module/jdk1.8.0_212/bin:/opt/module/hadoop-3.3.4/bin:/opt/module/hadoop-3.3.4/sbin:/opt/module/hive-3.1.3/bin:/opt/module/kafka/bin:/opt/module/efak/bin:/opt/module/spark/bin)
Hive Session ID = 4b43a439-6dee-4295-a467-7182adb64f04

Logging initialized using configuration in file:/opt/module/hive-3.1.3/conf/hive-log4j2.properties Async: true
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
Hive Session ID = 6dbba42a-f926-4cee-8368-646383608b57
hive (default)> create table student(id int, name string);
OK
Time taken: 0.948 seconds
hive (default)> insert into table student values(1,'abc');
Query ID = atguigu_20240420093653_68ffa538-97fa-4864-9d92-18dfc9def1c6
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 885f9da9-d447-4d55-a411-aca9c832703b)'
FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 885f9da9-d447-4d55-a411-aca9c832703b

Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 885f9da9-d447-4d55-a411-aca9c832703b)'
FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 885f9da9-d447-4d55-a411-aca9c832703b

【Cause】

A quick Baidu search suggests that this error means Hive could not create a Spark client for the Spark session, and that it is usually caused by a configuration problem. The common advice is to check whether the Spark-related settings in the Hive configuration files are correct, especially the configuration of the Spark execution engine.
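For reference, these are the Spark-related properties this kind of setup usually relies on in /opt/module/hive-3.1.3/conf/hive-site.xml (a sketch of my configuration; the HDFS port and the spark-jars path are specific to my cluster and are assumptions for anyone else's environment). In my case these were already in place, so the configuration itself was not the culprit:

<!-- hive-site.xml: use Spark as the execution engine -->
<property>
    <name>hive.execution.engine</name>
    <value>spark</value>
</property>
<!-- Spark jars uploaded to HDFS so that YARN containers can load them (path is my setup's) -->
<property>
    <name>spark.yarn.jars</name>
    <value>hdfs://hadoop102:8020/spark-jars/*</value>
</property>
<!-- give the Spark client more time to start before Hive gives up -->
<property>
    <name>hive.spark.client.connect.timeout</name>
    <value>10000ms</value>
</property>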

【Solution】

In my case, creating the Spark client failed because YARN was not running; Hive on Spark relies on YARN for resource scheduling. So start YARN: start-yarn.sh (see the sketch below).
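Roughly what that looks like, with jps and yarn used only to confirm that the daemons actually came up (start-yarn.sh should be run on the node where the ResourceManager is configured; the exact host depends on your yarn-site.xml):

start-yarn.sh      # starts the ResourceManager and the NodeManagers
jps                # ResourceManager / NodeManager should now be listed
yarn node -list    # optional: confirm the NodeManagers have registered with the RM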

Then re-run the statement: hive (default)> insert into table student values(1,'abc');
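With YARN up, the Spark client registers as a YARN application and the INSERT completes. A quick sanity check (a sketch, not the original console output):

hive (default)> select * from student;   -- should return the row 1  abc

While the Hive session keeps its Spark application alive, yarn application -list from another shell should also show it.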
