1. Find an image

If you cannot reach Docker Hub, you can search for images at https://docker.aityp.com/ instead.

Pull the Flink image onto your Docker server:
```shell
# Pull the image to your local Docker server
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/apache/flink:2.0.0-scala_2.12-java17
# Re-tag the local image for the private registry
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/apache/flink:2.0.0-scala_2.12-java17 10.31.68.12:5000/flink:2.0.0-scala_2.12-java17
# Push the tagged image to the private registry
docker push 10.31.68.12:5000/flink:2.0.0-scala_2.12-java17
```
Once this is done, the image is available under the reference below; use it wherever an image is required later:

```shell
10.31.68.12:5000/flink:2.0.0-scala_2.12-java17
```
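To confirm the push actually landed, you can query the registry directly over the Docker Registry HTTP API v2 (a quick check, assuming the registry at 10.31.68.12:5000 is reachable over plain HTTP without authentication):

```shell
# List the tags stored for the flink repository in the private registry;
# the response should include "2.0.0-scala_2.12-java17" in its "tags" array
curl -s http://10.31.68.12:5000/v2/flink/tags/list
```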
2. Install the Flink Kubernetes Operator

Install cert-manager to enable the webhook component:

```shell
kubectl create -f https://github.com/jetstack/cert-manager/releases/download/v1.8.2/cert-manager.yaml
```
If the YAML cannot be downloaded, copy it from https://gitee.com/MiraculousWarmHeart/file-share/blob/master/flink/cert-manager.yaml (it is too long to paste here), save it on your server, e.g. under /data/flink/, and run:

```shell
kubectl create -f /data/flink/cert-manager.yaml
```
Deploy the stable Flink Kubernetes Operator by running, in order:

```shell
helm repo add flink-operator-repo https://downloads.apache.org/flink/flink-kubernetes-operator-1.11.0/
helm install flink-kubernetes-operator flink-operator-repo/flink-kubernetes-operator
```
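Before moving on, it can help to wait for both cert-manager and the operator to become ready (a sketch, assuming cert-manager was installed into its default cert-manager namespace):

```shell
# Wait for all cert-manager pods to pass their readiness checks
kubectl -n cert-manager wait --for=condition=Ready pod --all --timeout=300s
# Wait for the operator Deployment to report Available
kubectl wait --for=condition=Available deployment/flink-kubernetes-operator --timeout=300s
```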
Verify the installation.

Running kubectl get pods should produce output like:

```shell
NAME                                         READY   STATUS    RESTARTS   AGE
flink-kubernetes-operator-5c8975f64b-jdgc6   2/2     Running   0          81m
```
Running helm list should produce output like:

```shell
NAME                        NAMESPACE    REVISION   UPDATED                                   STATUS     CHART                              APP VERSION
flink-kubernetes-operator   flink-prod   1          2025-05-28 15:08:00.683092663 +0800 CST   deployed   flink-kubernetes-operator-1.11.0   1.11.0
```
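You can additionally confirm that the operator's custom resource definitions were registered; the chart installs the FlinkDeployment and FlinkSessionJob CRDs:

```shell
# Both CRDs should be listed without errors
kubectl get crd flinkdeployments.flink.apache.org flinksessionjobs.flink.apache.org
```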
3. Storage volume and NFS configuration (optional; skip if this is already set up)
3.1. Create the StorageClass

kubectl create -f flink-storage.yaml (you can also create it in Kuboard)

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-flink   # must match the storageClassName used by the PV/PVC below
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```
3.2. Create the PV (PersistentVolume)

Define an NFS-backed PV with the desired capacity and access mode:

kubectl create -f flink-trans-pv.yaml (you can also create it in Kuboard)

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: flink-trans-pv   # PVs are cluster-scoped, so no namespace is needed
spec:
  capacity:
    storage: 20Gi                          # storage capacity
  accessModes:
    - ReadWriteMany                        # allows read/write from multiple nodes
  storageClassName: nfs-flink
  nfs:
    server: 10.31.68.22                    # NFS server IP
    path: "/data/flink"                    # NFS export path
  persistentVolumeReclaimPolicy: Retain    # reclaim policy (keep data)
```
3.3. Create the PVC (PersistentVolumeClaim)

Claim a PV that satisfies the request:

kubectl create -f flink-trans-pvc.yaml (you can also create it in Kuboard)

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flink-trans-pvc
  namespace: flink-prod
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi   # requested capacity (must be ≤ the PV's capacity)
  storageClassName: nfs-flink
```
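After creating both objects, it is worth checking that the PVC actually bound to the PV; a Pending status usually means the storageClassName, access mode, or requested capacity does not match:

```shell
# Both should report STATUS=Bound
kubectl get pv flink-trans-pv
kubectl -n flink-prod get pvc flink-trans-pvc
```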
4. Submit the Flink job

4.1 Upload the jar to the NFS directory

```shell
scp ekrPatentDataProcessFlink-1.0.0.jar [email protected]:/flink/jobs/
```

Alternatively, upload it directly to the NFS directory from your local machine. Either way, the jar must end up in the jobs/ subdirectory of the NFS export (10.31.68.22:/data/flink), because the job below references it as local:///data/flink/jobs/... inside the container.
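Before deploying, you can double-check that the jar is visible under the NFS export (a sketch, assuming SSH access to the NFS server 10.31.68.22):

```shell
# The file must sit under the exported /data/flink so it appears
# at /data/flink/jobs/ inside the container
ssh [email protected] 'ls -lh /data/flink/jobs/ekrPatentDataProcessFlink-1.0.0.jar'
```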
4.2 Deploy the application

```shell
kubectl create -f flink-kafka-trans.yaml
```

The content of flink-kafka-trans.yaml is as follows:
```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-kafka-trans
  namespace: flink-prod
spec:
  image: 10.31.68.12:5000/flink:2.0.0-scala_2.12-java17
  flinkVersion: v2_0
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
  serviceAccount: flink
  podTemplate:
    spec:
      hostAliases:
        - ip: "10.31.68.21"
          hostnames:
            - "k8smaster"
      containers:
        - name: flink-main-container
          volumeMounts:
            - name: nfs-jar-volume          # mount for the job jar
              mountPath: /data/flink
          resources:
            requests:
              memory: "2048Mi"
              cpu: "1000m"
            limits:
              memory: "2048Mi"
              cpu: "1000m"
      volumes:
        - name: nfs-jar-volume              # bound to the PVC
          persistentVolumeClaim:
            claimName: flink-trans-pvc      # replace with your PVC name
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    jarURI: local:///data/flink/jobs/ekrPatentDataProcessFlink-1.0.0.jar   # path inside the container
    entryClass: net.cnki.ekr.transfer.EkrFlinkTransApplication
    args: []
    parallelism: 2
    upgradeMode: stateless
```
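After applying the manifest, you can follow the operator's reconciliation through the custom resource itself; the operator populates a status.jobStatus.state field on the FlinkDeployment:

```shell
# Overall lifecycle and job state of the deployment
kubectl -n flink-prod get flinkdeployment flink-kafka-trans
# Job state only, e.g. RUNNING once the job is up
kubectl -n flink-prod get flinkdeployment flink-kafka-trans -o jsonpath='{.status.jobStatus.state}'
```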
4.3 View the logs

```shell
kubectl logs -f deploy/flink-kafka-trans
```
5. Port-forward to expose the web UI (optional; the logs can also be viewed inside the k8s container)

```shell
# Forward to a specific local port
kubectl port-forward deployment/flink-kafka-trans 28081:8081
# Or let kubectl choose and assign a local port
kubectl port-forward deployment/flink-kafka-trans :8081
# Or forward via the service
kubectl port-forward service/flink-kafka-trans 28081:8081
```
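With the port-forward in place, the Flink REST API answers on the forwarded port, which is handy for scripted checks (assuming the 28081 mapping from above):

```shell
# List the jobs known to the JobManager via Flink's REST API
curl -s http://localhost:28081/jobs
```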
6. To stop the job and delete the FlinkDeployment

```shell
kubectl delete flinkdeployment/flink-kafka-trans
```