Running Big Data Components on Kubernetes: Running Spark

Components to Deploy

● spark-historyserver

● spark-client

Configuration File

yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: spark
data:
  spark-defaults.conf: |-
    spark.eventLog.enabled            true
    spark.eventLog.dir                hdfs://192.168.199.56:8020/eventLogs
    spark.eventLog.compress           true
    spark.serializer                  org.apache.spark.serializer.KryoSerializer
    spark.master                      yarn
    spark.driver.cores                1
    spark.driver.memory               2g
    spark.executor.cores              1
    spark.executor.memory             2g
    spark.executor.instances          1
    spark.sql.warehouse.dir           hdfs://192.168.199.56:8020/user/hive/warehouse
    spark.yarn.historyServer.address  192.168.199.58:18080
    spark.history.ui.port             18080
    spark.history.fs.logDirectory     hdfs://192.168.199.56:8020/eventLogs
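
Note that spark.eventLog.dir (where jobs write their event logs) and spark.history.fs.logDirectory (where the history server reads them) must point at the same HDFS path. A quick way to confirm the directory exists, as a sketch assuming a pod or host with the hadoop client configured against this cluster:

shell
# Both keys above point at this path; jobs write here, the history server reads here
> hadoop fs -ls hdfs://192.168.199.56:8020/eventLogs
# Create it up front if it does not exist yet (the initContainer below also handles this)
> hadoop fs -mkdir -p hdfs://192.168.199.56:8020/eventLogs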

spark-historyserver Deployment Manifest

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spark-historyserver
  labels:
    app: spark-historyserver
spec:
  selector:
    matchLabels:
      app: spark-historyserver
  replicas: 1
  template:
    metadata:
      labels:
        app: spark-historyserver
    spec:
      initContainers:
        - name: init-eventlogs
          image: spark:3.0.2
          imagePullPolicy: IfNotPresent
          command:
            - "sh"
            - "-c"
            - "hadoop fs -ls /sparklogs;if [ $? -ne 0 ];then hadoop fs -mkdir /sparklogs && echo 'Create /sparklogs';fi"
          volumeMounts:
            - name: localtime
              mountPath: /etc/localtime
            - name: hadoop-config
              mountPath: /opt/hadoop/etc/hadoop/core-site.xml
              subPath: core-site.xml
            - name: hadoop-config
              mountPath: /opt/hadoop/etc/hadoop/hdfs-site.xml
              subPath: hdfs-site.xml
      containers:
        - name: spark-historyserver
          image: spark:3.0.2
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 1000m
              memory: 2Gi
          command:
            - "sh"
            - "-c"
            - "$SPARK_HOME/sbin/start-history-server.sh && tail -f $SPARK_HOME/logs/*"
          volumeMounts:
            - name: localtime
              mountPath: /etc/localtime
            - name: hadoop-config
              mountPath: /opt/hadoop/etc/hadoop/core-site.xml
              subPath: core-site.xml
            - name: hadoop-config
              mountPath: /opt/hadoop/etc/hadoop/hdfs-site.xml
              subPath: hdfs-site.xml
            - name: hadoop-config
              mountPath: /opt/hadoop/etc/hadoop/yarn-site.xml
              subPath: yarn-site.xml
            - name: hadoop-config
              mountPath: /opt/hadoop/etc/hadoop/mapred-site.xml
              subPath: mapred-site.xml
            - name: hive-config
              mountPath: /opt/spark/conf/hive-site.xml
              subPath: hive-site.xml
            - name: spark-config
              mountPath: /opt/spark/conf/spark-defaults.conf
              subPath: spark-defaults.conf
          lifecycle:
            preStop:
              exec:
                command:
                  - "sh"
                  - "-c"
                  - "$SPARK_HOME/sbin/stop-history-server.sh"
      volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        - name: hadoop-config
          configMap:
            name: hadoop
        - name: hive-config
          configMap:
            name: hive
        - name: spark-config
          configMap:
            name: spark
      restartPolicy: Always
      hostNetwork: true
      hostAliases:
        - ip: "192.168.199.56"
          hostnames:
            - "bigdata199056"
        - ip: "192.168.199.57"
          hostnames:
            - "bigdata199057"
        - ip: "192.168.199.58"
          hostnames:
            - "bigdata199058"
      nodeSelector:
        spark-historyserver: "true"
      tolerations:
        - key: "bigdata"
          value: "true"
          operator: "Equal"
          effect: "NoSchedule"

spark-client Deployment Manifest

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spark-client
  labels:
    app: spark-client
spec:
  selector:
    matchLabels:
      app: spark-client
  replicas: 1
  template:
    metadata:
      labels:
        app: spark-client
    spec:
      containers:
        - name: spark-client
          image: spark:3.0.2
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 1000m
              memory: 2Gi
          volumeMounts:
            - name: localtime
              mountPath: /etc/localtime
            - name: hadoop-config
              mountPath: /opt/hadoop/etc/hadoop/core-site.xml
              subPath: core-site.xml
            - name: hadoop-config
              mountPath: /opt/hadoop/etc/hadoop/hdfs-site.xml
              subPath: hdfs-site.xml
            - name: hadoop-config
              mountPath: /opt/hadoop/etc/hadoop/yarn-site.xml
              subPath: yarn-site.xml
            - name: hadoop-config
              mountPath: /opt/hadoop/etc/hadoop/mapred-site.xml
              subPath: mapred-site.xml
            - name: hive-config
              mountPath: /opt/spark/conf/hive-site.xml
              subPath: hive-site.xml
            - name: spark-config
              mountPath: /opt/spark/conf/spark-defaults.conf
              subPath: spark-defaults.conf
      volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        - name: hadoop-config
          configMap:
            name: hadoop
        - name: hive-config
          configMap:
            name: hive
        - name: spark-config
          configMap:
            name: spark
      restartPolicy: Always
      hostNetwork: true
      hostAliases:
        - ip: "192.168.199.56"
          hostnames:
            - "bigdata199056"
        - ip: "192.168.199.57"
          hostnames:
            - "bigdata199057"
        - ip: "192.168.199.58"
          hostnames:
            - "bigdata199058"
      nodeSelector:
        spark: "true"
      tolerations:
        - key: "bigdata"
          value: "true"
          operator: "Equal"
          effect: "NoSchedule"

Note: spark-client is typically integrated into an application service rather than used on its own.
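
Once the pods are up (see the next step), the client can also be used for ad-hoc work. A minimal sketch, assuming kubectl access to the Deployment; hive-site.xml is mounted into the pod, so spark-sql can reach the Hive metastore:

shell
# Open a one-off SQL session from the client pod
> kubectl.exe exec -it deploy/spark-client -- /opt/spark/bin/spark-sql -e "show databases;"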

Deploy the historyserver and client

shell
> kubectl.exe apply -f .\spark\
deployment.apps/spark-client created
configmap/spark created
deployment.apps/spark-historyserver created

spark-historyserver Logs and Web UI
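
With the pods running, a quick sanity check of the history server; since hostNetwork is enabled, the web UI is served directly on the node at port 18080:

shell
# Tail the history server logs
> kubectl.exe logs -f deploy/spark-historyserver
# Fetch the web UI from the node running the pod
> curl http://192.168.199.58:18080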


Submit a Test Job via spark-client
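
A minimal smoke test, assuming the standard Spark 3.0.2 image layout for the examples jar; spark.master is already set to yarn in spark-defaults.conf, so no --master flag is needed:

shell
# Submit the bundled SparkPi example to YARN from inside the client pod
> kubectl.exe exec -it deploy/spark-client -- /opt/spark/bin/spark-submit \
    --class org.apache.spark.examples.SparkPi \
    /opt/spark/examples/jars/spark-examples_2.12-3.0.2.jar 100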

View the Spark Job in YARN
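
The job should appear in the ResourceManager web UI (port 8088 by default) or via the yarn CLI, for example:

shell
# List applications currently running on YARN
> yarn application -list -appStates RUNNING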

View the Job in the spark-historyserver UI
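
After the job finishes and its event log lands in hdfs://192.168.199.56:8020/eventLogs, it shows up in the history server UI. The UI also exposes a REST API, which is handy for scripted checks:

shell
# List completed applications known to the history server
> curl http://192.168.199.58:18080/api/v1/applications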
