Troubleshooting a Spark History Server Startup Failure

1. Scenario

Starting the Spark history server with start-history-server.sh fails:

```
[root@manager file]# $SPARK_HOME/sbin/start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to /home/bigdata/spark/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-manager.out
[root@manager file]# more /home/bigdata/spark/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-manager.out
Spark Command: /usr/java/jdk1.8.0_112/bin/java -cp /home/bigdata/spark/conf/:/home/bigdata/spark/jars/*:/home/bigdata/hadoop/etc/hadoop/ -Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=zook_node1:2181,zook_node2:2181,zook_node3:2181 -Dspark.deploy.zookeeper.dir=/home/bigdata/zookeeper -Xmx1g org.apache.spark.deploy.history.HistoryServer
========================================
2023-09-18 16:46:29,971 INFO history.HistoryServer: Started daemon with process name: 18867@manager
2023-09-18 16:46:29,974 INFO util.SignalUtils: Registering signal handler for TERM
2023-09-18 16:46:29,975 INFO util.SignalUtils: Registering signal handler for HUP
2023-09-18 16:46:29,975 INFO util.SignalUtils: Registering signal handler for INT
2023-09-18 16:46:30,192 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-09-18 16:46:30,271 INFO spark.SecurityManager: Changing view acls to: root
2023-09-18 16:46:30,272 INFO spark.SecurityManager: Changing modify acls to: root
2023-09-18 16:46:30,273 INFO spark.SecurityManager: Changing view acls groups to:
2023-09-18 16:46:30,273 INFO spark.SecurityManager: Changing modify acls groups to:
2023-09-18 16:46:30,274 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
2023-09-18 16:46:30,360 INFO history.FsHistoryProvider: History server ui acls disabled; users with admin permissions: ; groups with admin permissions:
2023-09-18 16:46:31,011 INFO util.log: Logging initialized @1850ms to org.sparkproject.jetty.util.log.Slf4jLog
2023-09-18 16:46:31,062 INFO server.Server: jetty-9.4.44.v20210927; built: 2021-09-27T23:02:44.612Z; git: 8da83308eeca865e495e53ef315a249d63ba9332; jvm 1.8.0_112-b15
2023-09-18 16:46:31,084 INFO server.Server: Started @1924ms
2023-09-18 16:46:31,130 INFO server.AbstractConnector: Started ServerConnector@5ae76500{HTTP/1.1, (http/1.1)}{0.0.0.0:18080}
2023-09-18 16:46:31,131 INFO util.Utils: Successfully started service 'HistoryServerUI' on port 18080.
2023-09-18 16:46:31,156 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7e094740{/,null,AVAILABLE,@Spark}
2023-09-18 16:46:31,157 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@24a1c17f{/json,null,AVAILABLE,@Spark}
2023-09-18 16:46:31,158 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@73511076{/api,null,AVAILABLE,@Spark}
2023-09-18 16:46:31,167 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@514eedd8{/static,null,AVAILABLE,@Spark}
2023-09-18 16:46:31,168 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1a2e2935{/history,null,AVAILABLE,@Spark}
2023-09-18 16:46:31,170 INFO history.HistoryServer: Bound HistoryServer to 0.0.0.0, and started at http://manager:18080
Exception in thread "main" java.io.FileNotFoundException: Log directory specified does not exist: hdfs://mycluster/spark_logs
        at org.apache.spark.deploy.history.FsHistoryProvider.startPolling(FsHistoryProvider.scala:291)
        at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:236)
        at org.apache.spark.deploy.history.FsHistoryProvider.start(FsHistoryProvider.scala:413)
        at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:311)
        at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala)
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://mycluster/spark_logs
        at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1756)
        at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1749)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1764)
        at org.apache.spark.deploy.history.FsHistoryProvider.startPolling(FsHistoryProvider.scala:281)
        ... 4 more
[root@manager file]#
```

2. Log analysis

The key line is:

```
Exception in thread "main" java.io.FileNotFoundException: Log directory specified does not exist: hdfs://mycluster/spark_logs
```

The history server is configured to read event logs from the HDFS directory hdfs://mycluster/spark_logs, but that directory does not exist, so startup aborts.

3. Solution

Create the directory on HDFS in advance, and open up its permissions so applications can write event logs into it:

```
[root@manager conf]# hdfs dfs -mkdir hdfs://mycluster/spark_logs
[root@manager conf]# hdfs dfs -chmod 777 hdfs://mycluster/spark_logs
```

4. Restart and verify

```
[root@manager file]# $SPARK_HOME/sbin/stop-history-server.sh
stopping org.apache.spark.deploy.history.HistoryServer
[root@manager file]# $SPARK_HOME/sbin/start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to /home/bigdata/spark/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-manager.out
[root@manager file]# tail -f /home/bigdata/spark/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-manager.out
2023-09-18 17:10:07,780 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@56102e1c{/api,null,AVAILABLE,@Spark}
2023-09-18 17:10:07,789 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@51f49060{/static,null,AVAILABLE,@Spark}
2023-09-18 17:10:07,790 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@687a762c{/history,null,AVAILABLE,@Spark}
2023-09-18 17:10:07,791 INFO history.HistoryServer: Bound HistoryServer to 0.0.0.0, and started at http://manager:18080
2023-09-18 17:10:07,952 INFO history.FsHistoryProvider: Parsing hdfs://mycluster/spark_logs/local-1695027842090.zstd for listing data...
2023-09-18 17:10:07,952 INFO history.FsHistoryProvider: Parsing hdfs://mycluster/spark_logs/local-1695027960132.zstd for listing data...
2023-09-18 17:10:08,694 INFO history.FsHistoryProvider: Finished parsing hdfs://mycluster/spark_logs/local-1695027842090.zstd
2023-09-18 17:10:08,694 INFO history.FsHistoryProvider: Finished parsing hdfs://mycluster/spark_logs/local-1695027960132.zstd
2023-09-18 17:10:08,700 INFO history.FsHistoryProvider: Parsing hdfs://mycluster/spark_logs/local-1695027671578.zstd.inprogress for listing data...
2023-09-18 17:10:08,863 INFO history.FsHistoryProvider: Finished parsing hdfs://mycluster/spark_logs/local-1695027671578.zstd.inprogress
```

The history server now starts cleanly and begins parsing event logs from hdfs://mycluster/spark_logs.
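For context, the path hdfs://mycluster/spark_logs is not hard-coded in Spark; it comes from the cluster's configuration. A spark-defaults.conf consistent with the log output above might look like the sketch below (the exact values are assumptions inferred from the logs, not copied from this cluster; the .zstd suffixes suggest event-log compression is on):

```properties
# Read by Spark applications: write event logs to HDFS.
spark.eventLog.enabled            true
spark.eventLog.dir                hdfs://mycluster/spark_logs
spark.eventLog.compress           true

# Read by the history server: where to find those event logs.
spark.history.fs.logDirectory     hdfs://mycluster/spark_logs
```

If spark.eventLog.dir and spark.history.fs.logDirectory ever point at different paths, jobs will run fine but the history UI will stay empty, so it is worth keeping them identical.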

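As an aside, the stack trace explains why the server exits instead of just creating the directory itself: FsHistoryProvider.startPolling stats the configured directory and throws if it is missing. A minimal sketch of that fail-fast pattern, using the local filesystem as a stand-in for HDFS purely for illustration:

```python
from pathlib import Path


def check_log_dir(log_dir: str) -> None:
    """Fail fast when the event-log directory is absent, mirroring the
    existence check in FsHistoryProvider.startPolling (local-FS stand-in,
    not the actual HDFS client call)."""
    if not Path(log_dir).is_dir():
        raise FileNotFoundError(
            f"Log directory specified does not exist: {log_dir}")
```

Failing fast here is a deliberate design choice: silently creating the directory could mask a misconfigured spark.history.fs.logDirectory and leave the UI quietly empty, whereas an immediate exception points straight at the bad path.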