103 - Spark: Testing the Standalone Environment

bash
[hadoop@node1 export]$ cd /export/server/spark/bin
[hadoop@node1 bin]$ ll
total 116
-rwxr-xrwx 1 hadoop hadoop  1089 Oct  6  2021 beeline
-rwxr-xrwx 1 hadoop hadoop  1064 Oct  6  2021 beeline.cmd
-rwxr-xrwx 1 hadoop hadoop 10965 Oct  6  2021 docker-image-tool.sh
-rwxr-xrwx 1 hadoop hadoop  1935 Oct  6  2021 find-spark-home
-rwxr-xrwx 1 hadoop hadoop  2685 Oct  6  2021 find-spark-home.cmd
-rwxr-xrwx 1 hadoop hadoop  2337 Oct  6  2021 load-spark-env.cmd
-rwxr-xrwx 1 hadoop hadoop  2435 Oct  6  2021 load-spark-env.sh
-rwxr-xrwx 1 hadoop hadoop  2636 Oct  6  2021 pyspark
-rwxr-xrwx 1 hadoop hadoop  1542 Oct  6  2021 pyspark2.cmd
-rwxr-xrwx 1 hadoop hadoop  1170 Oct  6  2021 pyspark.cmd
-rwxr-xrwx 1 hadoop hadoop  1030 Oct  6  2021 run-example
-rwxr-xrwx 1 hadoop hadoop  1223 Oct  6  2021 run-example.cmd
-rwxr-xrwx 1 hadoop hadoop  3539 Oct  6  2021 spark-class
-rwxr-xrwx 1 hadoop hadoop  2812 Oct  6  2021 spark-class2.cmd
-rwxr-xrwx 1 hadoop hadoop  1180 Oct  6  2021 spark-class.cmd
-rwxr-xrwx 1 hadoop hadoop  1039 Oct  6  2021 sparkR
-rwxr-xrwx 1 hadoop hadoop  1097 Oct  6  2021 sparkR2.cmd
-rwxr-xrwx 1 hadoop hadoop  1168 Oct  6  2021 sparkR.cmd
-rwxr-xrwx 1 hadoop hadoop  3122 Oct  6  2021 spark-shell
-rwxr-xrwx 1 hadoop hadoop  1818 Oct  6  2021 spark-shell2.cmd
-rwxr-xrwx 1 hadoop hadoop  1178 Oct  6  2021 spark-shell.cmd
-rwxr-xrwx 1 hadoop hadoop  1065 Oct  6  2021 spark-sql
-rwxr-xrwx 1 hadoop hadoop  1118 Oct  6  2021 spark-sql2.cmd
-rwxr-xrwx 1 hadoop hadoop  1173 Oct  6  2021 spark-sql.cmd
-rwxr-xrwx 1 hadoop hadoop  1040 Oct  6  2021 spark-submit
-rwxr-xrwx 1 hadoop hadoop  1155 Oct  6  2021 spark-submit2.cmd
-rwxr-xrwx 1 hadoop hadoop  1180 Oct  6  2021 spark-submit.cmd

Use the master address shown in the Web UI: spark://node1:7077

[hadoop@node1 bin]$ ./pyspark --master spark://node1:7077
Python 3.8.20 (default, Oct  3 2024, 15:24:27) 
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
25/12/15 15:36:06 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 3.2.0
      /_/

Using Python version 3.8.20 (default, Oct  3 2024 15:24:27)
Spark context Web UI available at http://node1:4040
Spark context available as 'sc' (master = spark://node1:7077, app id = app-20251215153610-0000).
SparkSession available as 'spark'.
>>> sc.parallelize([1,2,3,4,5]).map(lambda x:x *10).collect()
[10, 20, 30, 40, 50]                                                            
>>> sc.parallelize([1,2,3,4,5]).map(lambda x:x *10).collect()
[10, 20, 30, 40, 50]
>>> 
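
Inside the shell you can also confirm which cluster the Driver is attached to; sc.master and sc.applicationId are standard SparkContext attributes, and here they should simply echo the startup banner above:

python
>>> sc.master
'spark://node1:7077'
>>> sc.applicationId
'app-20251215153610-0000'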

Exit the shell with Ctrl+D.

If we now open node1:4040 in a browser, it no longer loads, because the Ctrl+D above ended the application; node1:8080, the Master's Web UI, still opens normally.

This test shows that the Standalone environment is fundamentally different from the Local environment. In Local mode, a single process does the work of the Master, the Workers, and the Driver; in Standalone mode, the Master, the Workers, and the Driver are all independent processes. When we quit ./pyspark, we only terminate the Driver process; the Master and Worker processes keep running, which is why node1:8080 stays reachable while the Driver's UI on node1:4040 goes away.
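
Because the Driver is just another client process here, any script that builds a SparkSession against spark://node1:7077 becomes its own short-lived Driver. A minimal sketch of the same computation as a script (the file name multiply.py is made up for illustration):

python
# multiply.py - minimal PySpark job for the standalone cluster (illustrative sketch)
from pyspark.sql import SparkSession

if __name__ == "__main__":
    # The master URL could also be supplied via `spark-submit --master ...` instead.
    spark = (SparkSession.builder
             .master("spark://node1:7077")
             .appName("MultiplyByTen")
             .getOrCreate())
    sc = spark.sparkContext

    result = sc.parallelize([1, 2, 3, 4, 5]).map(lambda x: x * 10).collect()
    print(result)  # expected: [10, 20, 30, 40, 50]

    spark.stop()

While such a script runs, its Driver process exists alongside the Master and Workers; when it finishes, only the cluster daemons remain.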

One more test:

bash
[hadoop@node1 bin]$ ./spark-submit --master spark://node1:7077 /export/server/spark/examples/src/main/python/pi.py 100
25/12/15 16:40:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Pi is roughly 3.138920
[hadoop@node1 bin]$ 
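
The bundled pi.py estimates π by Monte Carlo sampling: it scatters random points over the square [-1, 1] x [-1, 1] and counts how many land inside the unit circle, whose area is π/4 of the square's; the argument 100 sets the number of partitions. A simplified sketch of the idea (condensed, not a verbatim copy of the shipped example):

python
import sys
from random import random
from operator import add
from pyspark.sql import SparkSession

if __name__ == "__main__":
    spark = SparkSession.builder.appName("PythonPi").getOrCreate()
    partitions = int(sys.argv[1]) if len(sys.argv) > 1 else 2
    n = 100000 * partitions  # total number of random samples

    def inside(_):
        # Draw a point uniformly from the square [-1, 1] x [-1, 1].
        x, y = random() * 2 - 1, random() * 2 - 1
        return 1 if x * x + y * y <= 1 else 0

    # The fraction of samples inside the unit circle approximates pi / 4.
    count = spark.sparkContext.parallelize(range(n), partitions).map(inside).reduce(add)
    print("Pi is roughly %f" % (4.0 * count / n))
    spark.stop()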

Note: since the job has finished, check it on the history server at node1:18080 (the Driver's UI on port 4040 disappears once the application ends).
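
The history server also exposes Spark's monitoring REST API, so the recorded run can be verified without a browser. A minimal sketch, assuming the history server is reachable at node1:18080 as above:

python
# List completed applications via the history server's REST API.
import json
from urllib.request import urlopen

with urlopen("http://node1:18080/api/v1/applications") as resp:
    for app in json.load(resp):
        print(app["id"], app["name"])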

Stopping the Spark services. Note that the stop scripts must run as the user that started the cluster (hadoop); run as root, they find no process to stop:

bash
(base) [root@node1 ~]# jps
72448 Jps
78550 HistoryServer
87066 Worker
86878 Master
(base) [root@node1 ~]# cd /export/server/spark
(base) [root@node1 spark]# sbin/stop-history-server.sh
no org.apache.spark.deploy.history.HistoryServer to stop
(base) [root@node1 spark]# su - hadoop
Last login: Mon Dec 15 17:43:32 CST 2025 on pts/0
[hadoop@node1 ~]$  cd /export/server/spark
[hadoop@node1 spark]$ sbin/stop-history-server.sh
stopping org.apache.spark.deploy.history.HistoryServer
[hadoop@node1 spark]$ sbin/stop-workers.sh
node1: stopping org.apache.spark.deploy.worker.Worker
node3: stopping org.apache.spark.deploy.worker.Worker
node2: stopping org.apache.spark.deploy.worker.Worker
[hadoop@node1 spark]$ jps
77864 Jps
86878 Master
[hadoop@node1 spark]$ sbin/stop-master.sh
stopping org.apache.spark.deploy.master.Master
[hadoop@node1 spark]$ jps
78695 Jps
[hadoop@node1 spark]$ 