Flink K8s Operator Test and Verification

basic.yaml

yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: basic-example
spec:
  image: 10.177.85.101:8000/flink/flink:1.16
  flinkVersion: v1_16
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
    upgradeMode: stateless

Submit the job:

sh
kubectl create -f basic.yaml
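
Once submitted, the operator brings up a JobManager deployment and TaskManager pods for the job. A rough way to verify the rollout, assuming the default labels applied by Flink's native Kubernetes integration:

sh
# Watch the custom resource until the job reports RUNNING
kubectl get flinkdeployment basic-example -w

# JobManager and TaskManager pods are labeled with the deployment name
kubectl get pods -l app=basic-example

# Tail the JobManager log if the job does not come up
kubectl logs -f deploy/basic-example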

To expose the Flink Dashboard, you can add a port-forward rule or look at the ingress configuration options:

sh
kubectl port-forward --address 0.0.0.0 svc/basic-example-rest 8081 -n flink-operator

The Flink Dashboard is now accessible at <host-ip>:8081.
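
As an alternative to port-forwarding, the operator can create an Ingress for the REST endpoint via spec.ingress. A minimal sketch, assuming an nginx ingress controller is installed and flink.example.com is a placeholder hostname:

yaml
spec:
  ingress:
    # flink.example.com is a placeholder; the dashboard is then served under /<namespace>/<name>/
    template: "flink.example.com/{{namespace}}/{{name}}(/|$)(.*)"
    className: "nginx"
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: "/$2"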

Delete the job:

sh
kubectl delete flinkdeployment/basic-example

2. HA and Checkpoint

basic-checkpoint-ha.yaml

yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: basic-checkpoint-ha-example
spec:
  image: 10.177.85.101:8000/flink/flink:1.16
  flinkVersion: v1_16
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
    state.savepoints.dir: file:///flink-data/savepoints
    state.checkpoints.dir: file:///flink-data/checkpoints
    high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
    high-availability.storageDir: file:///flink-data/ha
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  podTemplate:
    spec:
      containers:
        - name: flink-main-container
          volumeMounts:
          - mountPath: /flink-data
            name: flink-volume
      volumes:
      - name: flink-volume
        hostPath:
          # directory location on host
          path: /tmp/flink
          # this field is optional
          type: Directory
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
    upgradeMode: savepoint
    state: running
    savepointTriggerNonce: 0

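Note that the hostPath volume above requires /tmp/flink to already exist on the node where the pods are scheduled (type: Directory fails otherwise), so it is only suitable for single-node tests. On a multi-node cluster, checkpoints, savepoints, and HA metadata need shared storage to survive pod rescheduling; a minimal sketch, assuming a hypothetical PersistentVolumeClaim named flink-data backed by ReadWriteMany storage:

yaml
      # Replaces the hostPath entry under podTemplate.spec.volumes
      volumes:
        - name: flink-volume
          persistentVolumeClaim:
            claimName: flink-data   # hypothetical claim; must be reachable from all pods
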
Submit the job:

sh
kubectl create -f basic-checkpoint-ha.yaml
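
With upgradeMode: savepoint, a manual savepoint can be triggered by changing savepointTriggerNonce to any new value, and the Kubernetes HA services keep their metadata in ConfigMaps. A rough sketch of both checks (paths follow the flinkConfiguration above):

sh
# Trigger a manual savepoint by bumping the nonce
kubectl patch flinkdeployment basic-checkpoint-ha-example --type merge \
  -p '{"spec":{"job":{"savepointTriggerNonce": 1}}}'

# HA metadata is stored in ConfigMaps created by the Kubernetes HA services
kubectl get configmaps | grep basic-checkpoint-ha-example

# On the node running the pods, checkpoint/savepoint data lands under the host path
ls /tmp/flink/checkpoints /tmp/flink/savepoints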
