How to start Spark

Fix: `./start-all.sh` under Spark's `sbin` directory fails to start Spark

Running `./start-all.sh` from Spark's `sbin` directory failed with "No such file or directory" errors. Note that the errors reference a `spark-2.4.0` install path, while the actual install is `spark-2.4.5`:

```shell
[root@hadoop7 sbin]# ./start-all.sh
./start-all.sh: line 29: /root/install/spark-2.4.0-bin-hadoop2.7/sbin/spark-config.sh: No such file or directory
./start-all.sh: line 32: /root/install/spark-2.4.0-bin-hadoop2.7/sbin/start-master.sh: No such file or directory
./start-all.sh: line 35: /root/install/spark-2.4.0-bin-hadoop2.7/sbin/start-slaves.sh: No such file or directory
```

First, start Hadoop and ZooKeeper and confirm with `jps` that their daemons are up:

```shell
[root@hadoop7 ~]# jps
1357 Jps
[root@hadoop7 ~]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop7]
hadoop7: starting namenode, logging to /root/install/hadoop-2.7.7/logs/hadoop-root-namenode-hadoop7.out
localhost: starting datanode, logging to /root/install/hadoop-2.7.7/logs/hadoop-root-datanode-hadoop7.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /root/install/hadoop-2.7.7/logs/hadoop-root-secondarynamenode-hadoop7.out
starting yarn daemons
starting resourcemanager, logging to /root/install/hadoop-2.7.7/logs/yarn-root-resourcemanager-hadoop7.out
localhost: starting nodemanager, logging to /root/install/hadoop-2.7.7/logs/yarn-root-nodemanager-hadoop7.out
[root@hadoop7 ~]# jps
2850 Jps
1763 DataNode
2515 NodeManager
1564 NameNode
2268 ResourceManager
2014 SecondaryNameNode
[root@hadoop7 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /root/install/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
```

Then fix the Spark configuration: in `conf/spark-env.sh`, change the master host from `hadoop6` to `hadoop7`, and clear out the stale logs left over from the old `hadoop4`/`hadoop5`/`hadoop6` nodes:

```shell
[root@hadoop7 spark-2.4.5-bin-hadoop2.7]# cd conf/
[root@hadoop7 conf]# ls
docker.properties.template   hive-site.xml              metrics.properties.template  slaves.template               spark-env.sh
fairscheduler.xml.template   log4j.properties.template  slaves                       spark-defaults.conf.template  spark-env.sh.template
[root@hadoop7 conf]# vi spark-env.sh        # changed hadoop6 to hadoop7
[root@hadoop7 conf]# cd ..
[root@hadoop7 spark-2.4.5-bin-hadoop2.7]# cd logs/
[root@hadoop7 logs]# ls
spark-root-org.apache.spark.deploy.master.Master-1-hadoop4.out    spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop4.out
spark-root-org.apache.spark.deploy.master.Master-1-hadoop4.out.1  spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop4.out.1
spark-root-org.apache.spark.deploy.master.Master-1-hadoop4.out.2  spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop4.out.2
spark-root-org.apache.spark.deploy.master.Master-1-hadoop4.out.3  spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop4.out.3
spark-root-org.apache.spark.deploy.master.Master-1-hadoop4.out.4  spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop5.out
spark-root-org.apache.spark.deploy.master.Master-1-hadoop5.out    spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop5.out.1
spark-root-org.apache.spark.deploy.master.Master-1-hadoop5.out.1  spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop5.out.2
spark-root-org.apache.spark.deploy.master.Master-1-hadoop5.out.2  spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop5.out.3
spark-root-org.apache.spark.deploy.master.Master-1-hadoop5.out.3  spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop5.out.4
spark-root-org.apache.spark.deploy.master.Master-1-hadoop5.out.4  spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop5.out.5
spark-root-org.apache.spark.deploy.master.Master-1-hadoop5.out.5  spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop6.out
spark-root-org.apache.spark.deploy.master.Master-1-hadoop6.out    spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop6.out.1
spark-root-org.apache.spark.deploy.master.Master-1-hadoop6.out.1
[root@hadoop7 logs]# rm -rf *
[root@hadoop7 logs]# cd ..
[root@hadoop7 spark-2.4.5-bin-hadoop2.7]# cd sbin/
[root@hadoop7 sbin]# ls
slaves.sh        start-all.sh                start-mesos-shuffle-service.sh  start-thriftserver.sh     stop-mesos-dispatcher.sh       stop-slaves.sh
spark-config.sh  start-history-server.sh     start-shuffle-service.sh        stop-all.sh               stop-mesos-shuffle-service.sh  stop-thriftserver.sh
spark-daemon.sh  start-master.sh             start-slave.sh                  stop-history-server.sh    stop-shuffle-service.sh
spark-daemons.sh start-mesos-dispatcher.sh   start-slaves.sh                 stop-master.sh            stop-slave.sh
[root@hadoop7 sbin]# ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /root/install/spark-2.4.5-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-hadoop7.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /root/install/spark-2.4.5-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop7.out
```

With that, Spark starts successfully.
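Why did the first attempt fail? The errors point at a `spark-2.4.0` path even though the install directory is `spark-2.4.5`, which suggests a stale `SPARK_HOME` environment variable left over from the old install: the Spark 2.4 `sbin/start-all.sh` effectively begins with `. "${SPARK_HOME}/sbin/spark-config.sh"` and then calls `start-master.sh` and `start-slaves.sh` through the same variable. A minimal sketch reproducing the symptom, assuming the old 2.4.0 directory no longer exists:

```shell
# If SPARK_HOME points at a deleted install, every script that
# start-all.sh sources or calls fails with "No such file or directory".
SPARK_HOME=/root/install/spark-2.4.0-bin-hadoop2.7   # stale value (assumption)
config="${SPARK_HOME}/sbin/spark-config.sh"
if [ ! -e "$config" ]; then
  echo "$config: No such file or directory"
fi
```

Unsetting the stale variable (`unset SPARK_HOME`) before running `./start-all.sh` also avoids the mismatch, since the script then derives `SPARK_HOME` from its own location.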
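For reference, the `spark-env.sh` edit above boils down to repointing the standalone master at the new host. A minimal sketch of the relevant settings — the `JAVA_HOME` path is an assumption, and a real file typically sets more:

```shell
# conf/spark-env.sh (sketch) -- standalone-mode essentials
export JAVA_HOME=/root/install/jdk1.8.0     # assumed JDK location
export SPARK_MASTER_HOST=hadoop7            # was hadoop6 before the edit
export SPARK_MASTER_PORT=7077               # standalone master RPC port (default)
```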
