Flink job submission examples (native Kubernetes)

(1) Start Kubernetes session

./bin/kubernetes-session.sh -Dkubernetes.cluster-id=session-first-flink-cluster
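
After the session starts, the JobManager runs as a deployment inside the cluster. A quick sanity check (assuming kubectl points at the same cluster and namespace, and Flink's default app=<cluster-id> label):

kubectl get deployment,svc,pods -l app=session-first-flink-cluster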

(2) Submit example job

./bin/flink run \
  --target kubernetes-session \
  -Dkubernetes.cluster-id=session-first-flink-cluster \
  ./examples/streaming/TopSpeedWindowing.jar
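
The session's REST endpoint also serves the Flink web UI. If the rest service is not exposed outside the cluster, one way to reach it for a quick check is a port-forward (assuming Flink's default <cluster-id>-rest service name):

kubectl port-forward svc/session-first-flink-cluster-rest 8081:8081
# then open http://localhost:8081 to watch the submitted job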

(3) Stop Kubernetes session by deleting cluster deployment

kubectl delete deployment/session-first-flink-cluster
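
Deleting the deployment removes the whole session at once; the pods, services and ConfigMaps it owns are garbage-collected through owner references. The native integration also supports a more graceful shutdown by re-attaching to the session and sending it a stop command:

echo 'stop' | ./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=session-first-flink-cluster \
  -Dexecution.attached=true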

/home/flink-1.19.0/bin/flink run-application \
  --target kubernetes-application \
  -Dkubernetes.cluster-id=my-first-application-cluster \
  -Dkubernetes.container.image.ref=flink:1.19.0-scala_2.12-java17 \
  local:///home/flink-1.19.0/examples/streaming/SessionWindowing.jar
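
In application mode a local:// path refers to a location inside the container image, not on the submitting machine, so the jar has to be shipped in the image (the stock flink:1.19.0 image typically keeps its examples under /opt/flink/examples/, so a /home/flink-1.19.0/... path implies a custom image). A minimal sketch of baking a job jar into a derived image, with hypothetical image and jar names:

cat > Dockerfile <<'EOF'
# Sketch: extend the official Flink image and add the user jar
FROM flink:1.19.0-scala_2.12-java17
RUN mkdir -p /opt/flink/usrlib
COPY SessionWindowing.jar /opt/flink/usrlib/SessionWindowing.jar
EOF
docker build -t my-registry/flink-session-windowing:1.19.0 .
# the job would then be referenced as local:///opt/flink/usrlib/SessionWindowing.jar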

========================================================================================================================================================

List running jobs on the cluster

$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster

Cancel a running job

$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
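
Cancelling the only job of an application cluster normally shuts the cluster down by itself; if the deployment lingers, it can be removed the same way as the session cluster:

kubectl delete deployment/my-first-application-cluster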

./bin/flink run-application \
  --target kubernetes-application \
  -Dkubernetes.cluster-id=my-first-application-cluster \
  -Dkubernetes.namespace=csg \
  -Dkubernetes.service-account=csg \
  -Dkubernetes.container.image=flink:1.19.0-scala_2.12-java17 \
  -Dkubernetes.rest-service.exposed.type=NodePort \
  -Dkubernetes.pod-template-file.jobmanager=/home/pod-template.yaml \
  local:///opt/flink/jars/TopSpeedWindowing.jar
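
The command above reads a JobManager pod template from /home/pod-template.yaml on the machine running the CLI. The contents below are only an illustrative sketch; the one hard requirement is that the Flink process container is named flink-main-container so Flink can merge its own settings into it:

cat > /home/pod-template.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  labels:
    team: csg                      # hypothetical extra label
spec:
  containers:
    - name: flink-main-container   # required name for the Flink container
      resources:
        requests:
          memory: "1024Mi"
          cpu: "500m"
EOF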

./bin/flink run-application \
  --target kubernetes-application \
  -Dkubernetes.cluster-id=my-first-application-cluster \
  -Dkubernetes.namespace=csg \
  -Dkubernetes.container.image=flink:1.19.0-scala_2.12-java17 \
  -Dkubernetes.user.artifacts.raw-http-enabled=true \
  -Dkubernetes.rest-service.exposed.type=NodePort \
  https://github.com/turboic/myJar/blob/main/TopSpeedWindowing.jar

./bin/flink run-application \
  --target kubernetes-application \
  -Dkubernetes.cluster-id=my-first-application-cluster \
  -Dkubernetes.namespace=csg \
  -Dkubernetes.container.image=flink:1.19.0-scala_2.12-java17 \
  -Dkubernetes.rest-service.exposed.type=NodePort \
  -Dkubernetes.pod-template-file.jobmanager=/home/pod-template.yaml \
  http://file-service:8080/file/download/TopSpeedWindowing.jar
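
Both variants above let the cluster side fetch the job jar over HTTP instead of requiring it inside the image. Two practical notes: a github.com/.../blob/... URL usually returns an HTML page, so the raw file URL is normally what the fetcher needs; and for a quick local test the jar can be served from any throwaway file server, for example:

mkdir -p /tmp/flink-artifacts && cp TopSpeedWindowing.jar /tmp/flink-artifacts/
cd /tmp/flink-artifacts && python3 -m http.server 8080
# point the submission URL at this server instead of file-service for testing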

========================================================================================================================================================

./bin/flink run-application \
  --target kubernetes-application \
  -Dkubernetes.cluster-id=first-application-cluster \
  -Dkubernetes.container.image=docker-dev/job:4.1.0-202107011136-e2e38 \
  -Dkubernetes.container.image.pull-secrets=arm-repo-secret \
  -Dkubernetes.rest-service.exposed.type=NodePort \
  -Dkubernetes.pod-template-file.jobmanager=./job-template.yaml \
  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
  -Dhigh-availability.storageDir=file:///tmp/flink \
  -Dclassloader.resolve-order=parent-first \
  -c com.ericsson.cv.reactor.job.ReactorJobApplication \
  local:///opt/flink/usrlib/job-4.1.0-202107011136-e2e38-with-dependency.jar \
  --config.path "/opt/flink/usrconf/config.properties" \
  --dispatcher.config.path "/opt/flink/usrconf/dispatcher-config.json"

Example (AWS EKS, with Kubernetes HA and S3 for recovery/savepoints):

flink run-application -p 2 -t kubernetes-application \
  -Dkubernetes.cluster-id={kubernetes_cluster_id} \
  -Dkubernetes.container.image={ACCOUNT_ID}.dkr.ecr.{AWS_REGION}.amazonaws.com/flink-demo:latest \
  -Dkubernetes.container.image.pull-policy=Always \
  -Dkubernetes.jobmanager.service-account=flink-service-account \
  -Dkubernetes.pod-template-file.jobmanager=./jobmanager-pod-template.yaml \
  -Dkubernetes.rest-service.exposed.type=LoadBalancer \
  -Dkubernetes.rest-service.annotations=service.beta.kubernetes.io/aws-load-balancer-security-groups:{EKS_EXTERNAL_SG} \
  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
  -Dhigh-availability.cluster-id={kubernetes_cluster_id} \
  -Dhigh-availability.storageDir=s3://{FLINK_S3_BUCKET}/recovery \
  -Dstate.savepoints.dir=s3://{FLINK_S3_BUCKET}/savepoints/{kubernetes_cluster_id} \
  -Dkubernetes.taskmanager.service-account=flink-service-account \
  -Dkubernetes.taskmanager.cpu=1 \
  -Dtaskmanager.memory.process.size=4096m \
  -Dtaskmanager.numberOfTaskSlots=2 \
  local:///opt/flink/usrlib/aws-kinesis-analytics-java-apps-1.0.jar \
  --inputStreamName {FLINK_INPUT_STREAM} --region {AWS_REGION} \
  --s3SinkPath s3://{FLINK_S3_BUCKET}/data --checkpoint-dir s3://{FLINK_S3_BUCKET}/recovery
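
Since state.savepoints.dir already points at S3, the running job can later be stopped with a savepoint through the same CLI; a sketch, using the job id reported by `flink list` and the same placeholders as above:

./bin/flink stop \
  --target kubernetes-application \
  -Dkubernetes.cluster-id={kubernetes_cluster_id} \
  --savepointPath s3://{FLINK_S3_BUCKET}/savepoints/{kubernetes_cluster_id} \
  <jobId>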

https://github.com/aws-samples/cost-optimized-flink-on-kubernetes
