Flink job submission examples

(1) Start Kubernetes session

./bin/kubernetes-session.sh -Dkubernetes.cluster-id=session-first-flink-cluster
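
A quick check that the session cluster came up (label value assumes Flink's default native-Kubernetes labels, where app equals the cluster-id):

kubectl get deployment session-first-flink-cluster
# the app label assumes the default labels applied by native Kubernetes deployments
kubectl get pods -l app=session-first-flink-cluster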

(2) Submit example job

./bin/flink run \
  --target kubernetes-session \
  -Dkubernetes.cluster-id=session-first-flink-cluster \
  ./examples/streaming/TopSpeedWindowing.jar
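
Jobs running in the session can then be listed with the same target and cluster-id:

./bin/flink list \
  --target kubernetes-session \
  -Dkubernetes.cluster-id=session-first-flink-cluster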

(3) Stop Kubernetes session by deleting cluster deployment

kubectl delete deployment/session-first-flink-cluster
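
Alternatively, as described in the native Kubernetes documentation, an attached session can be stopped from the client side:

echo 'stop' | ./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=session-first-flink-cluster \
  -Dexecution.attached=true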

Submit a job in application mode (the local:// scheme points to a path inside the container image):

/home/flink-1.19.0/bin/flink run-application \
  --target kubernetes-application \
  -Dkubernetes.cluster-id=my-first-application-cluster \
  -Dkubernetes.container.image.ref=flink:1.19.0-scala_2.12-java17 \
  local:///home/flink-1.19.0/examples/streaming/SessionWindowing.jar
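
The JobManager Web UI can be reached without exposing the REST service externally; a minimal sketch, assuming Flink's default <cluster-id>-rest service naming:

# service name assumes the default <cluster-id>-rest naming of native Kubernetes deployments
kubectl port-forward service/my-first-application-cluster-rest 8081:8081
# the Web UI is then available at http://localhost:8081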

========================================================================================================================================================

List running jobs on the cluster

$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster

Cancel a running job

$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
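
To keep state for a later restart, a job can also be stopped with a savepoint instead of cancelled; a sketch, where the savepoint URI is only an assumption (any location the cluster can write to works):

./bin/flink stop \
  --target kubernetes-application \
  -Dkubernetes.cluster-id=my-first-application-cluster \
  --savepointPath file:///tmp/flink-savepoints \
  <jobId>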

Application mode with a dedicated namespace, service account, and a JobManager pod template (the jar is read from /opt/flink/jars inside the pod; see the template sketch below):

./bin/flink run-application \
  --target kubernetes-application \
  -Dkubernetes.cluster-id=my-first-application-cluster \
  -Dkubernetes.namespace=csg \
  -Dkubernetes.service-account=csg \
  -Dkubernetes.container.image=flink:1.19.0-scala_2.12-java17 \
  -Dkubernetes.rest-service.exposed.type=NodePort \
  -Dkubernetes.pod-template-file.jobmanager=/home/pod-template.yaml \
  local:///opt/flink/jars/TopSpeedWindowing.jar
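
A minimal sketch of what /home/pod-template.yaml could look like. The hostPath volume and the /home/jars directory are assumptions for illustration; the flink-main-container name is the fixed name Flink looks for when merging the template with the main container:

cat > /home/pod-template.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: jobmanager-pod-template
spec:
  containers:
    - name: flink-main-container      # fixed name Flink merges with the main container
      volumeMounts:
        - name: user-jars
          mountPath: /opt/flink/jars  # matches local:///opt/flink/jars/TopSpeedWindowing.jar above
  volumes:
    - name: user-jars
      hostPath:
        path: /home/jars              # assumed host directory that holds the job jar
EOF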

Application mode where the job jar is fetched over HTTP instead of being baked into the image. First, fetching directly from a GitHub URL with raw HTTP downloads enabled:

./bin/flink run-application \
  --target kubernetes-application \
  -Dkubernetes.cluster-id=my-first-application-cluster \
  -Dkubernetes.namespace=csg \
  -Dkubernetes.container.image=flink:1.19.0-scala_2.12-java17 \
  -Dkubernetes.user.artifacts.raw-http-enabled=true \
  -Dkubernetes.rest-service.exposed.type=NodePort \
  https://github.com/turboic/myJar/blob/main/TopSpeedWindowing.jar

Second, fetching the jar from an internal HTTP file service, combined with a JobManager pod template:

./bin/flink run-application \
  --target kubernetes-application \
  -Dkubernetes.cluster-id=my-first-application-cluster \
  -Dkubernetes.namespace=csg \
  -Dkubernetes.container.image=flink:1.19.0-scala_2.12-java17 \
  -Dkubernetes.rest-service.exposed.type=NodePort \
  -Dkubernetes.pod-template-file.jobmanager=/home/pod-template.yaml \
  http://file-service:8080/file/download/TopSpeedWindowing.jar
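
With kubernetes.rest-service.exposed.type=NodePort, the REST endpoint and Web UI are exposed on a node port; the assigned port can be looked up with kubectl (service name again assumes the default <cluster-id>-rest naming):

kubectl get service my-first-application-cluster-rest -n csg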

========================================================================================================================================================

Application mode with a private registry image, an image pull secret, Kubernetes HA, and program arguments passed after the jar:

./bin/flink run-application \
  --target kubernetes-application \
  -Dkubernetes.cluster-id=first-application-cluster \
  -Dkubernetes.container.image=docker-dev/job:4.1.0-202107011136-e2e38 \
  -Dkubernetes.container.image.pull-secrets=arm-repo-secret \
  -Dkubernetes.rest-service.exposed.type=NodePort \
  -Dkubernetes.pod-template-file.jobmanager=./job-template.yaml \
  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
  -Dhigh-availability.storageDir=file:///tmp/flink \
  -Dclassloader.resolve-order=parent-first \
  -c com.ericsson.cv.reactor.job.ReactorJobApplication \
  local:///opt/flink/usrlib/job-4.1.0-202107011136-e2e38-with-dependency.jar \
  --config.path "/opt/flink/usrconf/config.properties" \
  --dispatcher.config.path "/opt/flink/usrconf/dispatcher-config.json"

Example on EKS: ECR image, LoadBalancer-exposed REST service, Kubernetes HA and savepoints on S3:

flink run-application -p 2 -t kubernetes-application \
  -Dkubernetes.cluster-id={kubernetes_cluster_id} \
  -Dkubernetes.container.image={ACCOUNT_ID}.dkr.ecr.{AWS_REGION}.amazonaws.com/flink-demo:latest \
  -Dkubernetes.container.image.pull-policy=Always \
  -Dkubernetes.jobmanager.service-account=flink-service-account \
  -Dkubernetes.pod-template-file.jobmanager=./jobmanager-pod-template.yaml \
  -Dkubernetes.rest-service.exposed.type=LoadBalancer \
  -Dkubernetes.rest-service.annotations=service.beta.kubernetes.io/aws-load-balancer-security-groups:{EKS_EXTERNAL_SG} \
  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
  -Dhigh-availability.cluster-id={kubernetes_cluster_id} \
  -Dhigh-availability.storageDir=s3://{FLINK_S3_BUCKET}/recovery \
  -Dstate.savepoints.dir=s3://{FLINK_S3_BUCKET}/savepoints/{kubernetes_cluster_id} \
  -Dkubernetes.taskmanager.service-account=flink-service-account \
  -Dkubernetes.taskmanager.cpu=1 \
  -Dtaskmanager.memory.process.size=4096m \
  -Dtaskmanager.numberOfTaskSlots=2 \
  local:///opt/flink/usrlib/aws-kinesis-analytics-java-apps-1.0.jar \
  --inputStreamName {FLINK_INPUT_STREAM} \
  --region {AWS_REGION} \
  --s3SinkPath s3://{FLINK_S3_BUCKET}/data \
  --checkpoint-dir s3://{FLINK_S3_BUCKET}/recovery

https://github.com/aws-samples/cost-optimized-flink-on-kubernetes
