Apache Spark on Docker

What is Apache Spark™?

Apache Spark™ is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, pandas API on Spark for pandas workloads, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.
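
As a small taste of these APIs, the snippet below filters a generated Dataset using a column expression. This is a minimal PySpark sketch, not part of the upstream README; it assumes a session bound to the name spark, which the interactive shells described below provide automatically:

>>> from pyspark.sql.functions import col
>>> df = spark.range(100)          # Dataset of ids 0..99
>>> df.filter(col("id") % 10 == 0).count()   # ids divisible by 10
10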

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project web page (https://spark.apache.org/documentation.html). This README file only contains basic setup instructions.

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

docker run -it spark /opt/spark/bin/spark-shell

Try the following command, which should return 1,000,000,000 (spark.range(n) builds a Dataset of n rows, so count() returns n):

scala> spark.range(1000 * 1000 * 1000).count()
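
To work with files from the host inside the shell, add a bind mount to the docker run invocation. A sketch, assuming your data sits in the current directory (the /data mount point is an arbitrary choice, not something the image requires):

docker run -it -v "$PWD":/data spark /opt/spark/bin/spark-shell

Anything under /data is then readable from Spark, for example via spark.read.text.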

Interactive Python Shell

The easiest way to start using PySpark is through the Python shell:

docker run -it spark:python3 /opt/spark/bin/pyspark

And run the following command, which should also return 1,000,000,000:

>>> spark.range(1000 * 1000 * 1000).count()
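
The same shell exposes the full DataFrame API, not only count(). A minimal sketch (the expressions here are illustrative, not from the original README) that prints each id alongside its square:

>>> df = spark.range(5)
>>> df.selectExpr("id", "id * id AS squared").show()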

Interactive R Shell

The easiest way to start using R on Spark is through the R shell:

docker run -it spark:r /opt/spark/bin/sparkR

Running Spark on Kubernetes

See the official guide: https://spark.apache.org/docs/latest/running-on-kubernetes.html
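
As a rough sketch of what the guide describes, a job is submitted with spark-submit pointed at the Kubernetes API server, naming this image for the driver and executor pods. The API-server address and the example jar path below are placeholders to adapt:

/opt/spark/bin/spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=spark \
  local:///opt/spark/examples/jars/<spark-examples jar>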

Configuration and environment variables

More details are available at https://github.com/apache/spark-docker/blob/master/OVERVIEW.md#environment-variable
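
For instance, docker's -e flag passes variables through to the container. SPARK_DRIVER_MEMORY below is a standard Spark variable read by the launch scripts rather than something specific to this image; it is used here purely as an illustration:

docker run -it -e SPARK_DRIVER_MEMORY=2g spark /opt/spark/bin/spark-shell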

License

Apache Spark, Spark, Apache, the Apache feather logo, and the Apache Spark project logo are trademarks of The Apache Software Foundation.

Licensed under the Apache License, Version 2.0.

As with all Docker images, these likely also contain other software which may be under other licenses (such as Bash, etc. from the base distribution, along with any direct or indirect dependencies of the primary software being contained).

Some additional license information which was able to be auto-detected might be found in the repo-info repository's spark/ directory.

As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.
