Flink CDC (YARN session mode)

1. Start the YARN session:

# Start YARN session
./bin/yarn-session.sh --detached
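
The session prints the YARN application ID it runs under, which is also needed for the configuration in step 2. As a hedged sketch (assuming a standard Hadoop client on the same machine and the default session name), the ID can also be looked up afterwards:

# List running YARN applications and note the ID of the Flink session
# (the application name may differ depending on your setup)
yarn application -list | grep "Flink session cluster"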

2. Configuration file:

rest.bind-port: {{REST_PORT}}
rest.address: {{NODE_IP}}
execution.target: yarn-session
yarn.application.id: {{YARN_APPLICATION_ID}}
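
These options point job submission at the already running YARN session instead of starting a new cluster. A minimal filled-in sketch, assuming the lines are added to Flink's configuration file (conf/flink-conf.yaml, or config.yaml on newer Flink releases); the port, address and application ID below are illustrative placeholders, not real values:

# conf/flink-conf.yaml -- example values only
rest.bind-port: 8081
rest.address: 192.168.1.10
execution.target: yarn-session
yarn.application.id: application_1700000000000_0001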

3. mysql-to-doris.yaml

source:
  type: mysql
  hostname: localhost
  port: 3306
  username: root
  password: 123456
  tables: app_db.\.*
  server-id: 5400-5404
  server-time-zone: UTC

sink:
  type: doris
  fenodes: 127.0.0.1:8030
  username: root
  password: ""

pipeline:
  name: Sync MySQL Database to Doris
  parallelism: 2
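
The MySQL source reads the database's binlog, so row-based binary logging must be enabled on the MySQL side. A hedged pre-flight check, assuming a mysql client can reach the server with the credentials above:

# Verify binlog is enabled and in ROW format (required by the MySQL CDC source)
mysql -h localhost -P 3306 -u root -p123456 \
  -e "SHOW VARIABLES WHERE Variable_name IN ('log_bin', 'binlog_format', 'server_id');"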

4. Run the test:

./bin/flink-cdc.sh mysql-to-doris.yaml
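
With execution.target set to yarn-session, the job should appear inside the running session rather than as a separate application. A hedged way to confirm, assuming the Flink CLI from the same distribution and the application ID noted in step 1 (the ID below is only a placeholder):

# List jobs running inside the YARN session (application ID is illustrative)
./bin/flink list -t yarn-session -Dyarn.application.id=application_1700000000000_0001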

5. Common issues:

1. Caused by: java.lang.ClassCastException: cannot assign instance of com.starrocks.connector.flink.catalog.StarRocksCatalog to field org.apache.flink.cdc.connectors.starrocks.sink.StarRocksMetadataApplier.catalog of type com.starrocks.connector.flink.catalog.StarRocksCatalog in instance of org.apache.flink.cdc.connectors.starrocks.sink.StarRocksMetadataApplier

Solution: set the class loader resolution order to parent-first in Flink's configuration:

classloader.resolve-order: parent-first
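
classloader.resolve-order is a standard Flink configuration option. A hedged sketch of applying it, assuming the Flink installation at /BigData/run/flink used later in this post and that the YARN session is restarted so the change takes effect:

# Append the option to Flink's configuration (file may be config.yaml on newer Flink versions)
echo "classloader.resolve-order: parent-first" >> /BigData/run/flink/conf/flink-conf.yaml
# Restart the YARN session so the new class loading setting is picked up
./bin/yarn-session.sh --detached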

2.

Caused by: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: Flink CDC Event Source: mysql -> SchemaOperator -> PrePartition
    at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.initialize(ExecutionJobVertex.java:229)
    at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.initializeJobVertex(DefaultExecutionGraph.java:914)
    at org.apache.flink.runtime.executiongraph.ExecutionGraph.initializeJobVertex(ExecutionGraph.java:218)
    at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.initializeJobVertices(DefaultExecutionGraph.java:896)
    at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGraph.java:852)
    at org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:207)
    at org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:163)
    at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:365)
    at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:210)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:136)
    at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:152)
    at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:119)
    at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:371)
    at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:348)
    at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:123)
    at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95)
    at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
    ... 4 more
Caused by: java.lang.NoClassDefFoundError: org/apache/flink/cdc/runtime/typeutils/EventTypeInfo
    at java.lang.Class.getDeclaredFields0(Native Method)
    at java.lang.Class.privateGetDeclaredFields(Class.java:2583)

Solution: the NoClassDefFoundError means the Flink CDC runtime classes are not on Flink's classpath; copy the flink-cdc-dist jar into Flink's lib directory:

cp /BigData/run/flink-cdc/lib/flink-cdc-dist-3.1.0.jar /BigData/run/flink/lib/
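
After copying the jar, note that an already running YARN session was started with the old classpath. A hedged sketch of verifying the copy and restarting the session (paths follow the example above; the application ID is a placeholder):

# Confirm the dist jar is now in Flink's lib directory
ls /BigData/run/flink/lib/ | grep flink-cdc-dist
# Stop the old session and start a fresh one so the jar is shipped to YARN
yarn application -kill application_1700000000000_0001
./bin/yarn-session.sh --detached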

3.

Caused by: org.apache.flink.table.api.ValidationException: The MySQL server has a timezone offset (28800 seconds ahead of UTC) which does not match the configured timezone UTC. Specify the right server-time-zone to avoid inconsistencies for time-related fields.

Solution 1: specify the server time zone in the JDBC connection URL:

jdbc:mysql://hostname:port/database?serverTimezone=UTC

Solution 2: set the local time zone for the Flink SQL session:

SET 'table.local-time-zone' = 'Asia/Shanghai'; -- or whichever time zone you need

Solution 3: set server-time-zone in the connector options of the table DDL:

CREATE TABLE my_table (
  ...
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'localhost',
  'port' = '3306',
  ...
  'server-time-zone' = 'Asia/Shanghai' -- or whichever time zone you need
);
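
To pick the right value, it can help to check what the MySQL server itself reports; the 28800 seconds in the error above corresponds to UTC+8, i.e. Asia/Shanghai. A hedged sketch using the mysql client with the credentials from the pipeline config:

# Show MySQL's configured time zone variables and its current offset from UTC
mysql -h localhost -P 3306 -u root -p123456 \
  -e "SELECT @@global.time_zone, @@session.time_zone, TIMEDIFF(NOW(), UTC_TIMESTAMP()) AS offset_from_utc;"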
