Flink CDC in Practice

SQL Server

1. The db history topic or its content is fully or partially missing. Please check database history topic configuration and re-execute the snapshot.

Hit the following problem. After several attempts it turned out the database name's letter case has to be consistent: the value of 'database-name' in the connector options must match the casing of the database in SQL Server exactly.

Caused by: io.debezium.DebeziumException: The db history topic or its content is fully or partially missing. Please check database history topic configuration and re-execute the snapshot.
	at io.debezium.relational.HistorizedRelationalDatabaseSchema.recover(HistorizedRelationalDatabaseSchema.java:59) ~[flink-sql-connector-sqlserver-cdc-3.2.0.jar:3.2.0]
	at io.debezium.schema.HistorizedDatabaseSchema.recover(HistorizedDatabaseSchema.java:38) ~[flink-sql-connector-sqlserver-cdc-3.2.0.jar:3.2.0]
	at org.apache.flink.cdc.connectors.sqlserver.source.reader.fetch.SqlServerSourceFetchTaskContext.validateAndLoadDatabaseHistory(SqlServerSourceFetchTaskContext.java:187) ~[flink-sql-connector-sqlserver-cdc-3.2.0.jar:3.2.0]
	at org.apache.flink.cdc.connectors.sqlserver.source.reader.fetch.SqlServerSourceFetchTaskContext.configure(SqlServerSourceFetchTaskContext.java:130) ~[flink-sql-connector-sqlserver-cdc-3.2.0.jar:3.2.0]
	at org.apache.flink.cdc.connectors.base.source.reader.external.IncrementalSourceStreamFetcher.submitTask(IncrementalSourceStreamFetcher.java:84) ~[flink-sql-connector-sqlserver-cdc-3.2.0.jar:3.2.0]
	at org.apache.flink.cdc.connectors.base.source.reader.IncrementalSourceSplitReader.submitStreamSplit(IncrementalSourceSplitReader.java:261) ~[flink-sql-connector-sqlserver-cdc-3.2.0.jar:3.2.0]
	at org.apache.flink.cdc.connectors.base.source.reader.IncrementalSourceSplitReader.pollSplitRecords(IncrementalSourceSplitReader.java:153) ~[flink-sql-connector-sqlserver-cdc-3.2.0.jar:3.2.0]
	at org.apache.flink.cdc.connectors.base.source.reader.IncrementalSourceSplitReader.fetch(IncrementalSourceSplitReader.java:98) ~[flink-sql-connector-sqlserver-cdc-3.2.0.jar:3.2.0]
	at org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58) ~[flink-connector-files-1.20.0.jar:1.20.0]
	at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:165) ~[flink-connector-files-1.20.0.jar:1.20.0]
	at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:117) ~[flink-connector-files-1.20.0.jar:1.20.0]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
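Before touching the Flink side, it helps to confirm how the database and table names are actually cased in SQL Server. The check below is a small sketch using the standard system catalog views; it is not part of the original setup.

-- Check the exact casing stored in SQL Server; the 'database-name' and
-- 'table-name' options in the Flink DDL below should match these values exactly.
SELECT name FROM sys.databases;

SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id;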
The table definition that works once the database name casing matches SQL Server:
  CREATE TABLE Member_Extend (ID INT, MemberID INT,  PRIMARY KEY (ID) NOT ENFORCED
        ) WITH (
            'connector' = 'sqlserver-cdc',
            'hostname' = '192.168.1.3',
            'port' = '1433',
            'username' = 'test',
            'password' = 'test',
            'database-name' = 'CrmExtend',
            'table-name' = 'dbo.Member_Extend'
        );
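To verify that the fixed source actually emits change events, a throwaway sink can be attached in the same session. The sketch below is only an illustration: Member_Extend_print is a hypothetical sink name, and Flink's built-in print connector simply writes each change event to the TaskManager logs.

  -- Hypothetical verification sink: the built-in 'print' connector logs every
  -- change event (inserts, updates, deletes) to the TaskManager stdout.
  CREATE TABLE Member_Extend_print (ID INT, MemberID INT
        ) WITH (
            'connector' = 'print'
        );

  INSERT INTO Member_Extend_print SELECT * FROM Member_Extend;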

Safely stopping and restarting a job

The flow is: configure a savepoint directory on durable storage, stop the job with a savepoint, then point the session at that savepoint and resubmit the original SQL.

List the running jobs in the SQL CLI to find the job id:
Flink SQL> show jobs;
+----------------------------------+------------+----------+-------------------------+
|                           job id |   job name |   status |              start time |
+----------------------------------+------------+----------+-------------------------+
| ce5a0e938563cf52317c5b9055ad102f |    testjob |  RUNNING | 2024-11-15T03:38:47.919 |
+----------------------------------+------------+----------+-------------------------+
1 row in set



SET 'state.savepoints.dir' = 's3://flink/cdc-1.20/savepoints';
stop job 'ce5a0e938563cf52317c5b9055ad102f' with savepoint;

Flink SQL> stop job 'ce5a0e938563cf52317c5b9055ad102f' with savepoint;
+--------------------------------------------------------------+
|                                               savepoint path |
+--------------------------------------------------------------+
| s3://flink/cdc-1.20/savepoints/savepoint-ce5a0e-2935055bb307 |
+--------------------------------------------------------------+
1 row in set

SET 'execution.savepoint.path' = 's3://flink/cdc-1.20/savepoints/savepoint-ce5a0e-2935055bb307';
SET 'execution.savepoint.ignore-unclaimed-state' = 'true';


Re-run the original SQL:

insert into flink_user select * from `user`;
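After the job has been resubmitted, it is worth clearing the restore settings in the same SQL CLI session; otherwise every later INSERT submitted from that session would also try to start from the old savepoint. A minimal sketch using the CLI's RESET statement:

-- Clear the session-level restore options so that subsequent jobs submitted
-- from this SQL CLI session start fresh instead of from the old savepoint.
RESET 'execution.savepoint.path';
RESET 'execution.savepoint.ignore-unclaimed-state';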