Author: Billmay表妹. Original source: https://tidb.net/blog/c7445005
Background
This walkthrough uses OceanBase Binlog Server + Canal + Canal Adapter to replicate incremental data from OceanBase (OB) to TiDB. The core workflow covers deployment, configuration, service startup, and sync verification, as detailed below.
Setting Up the OceanBase Binlog Server
Prerequisites
Before deploying the Binlog Server (obbinlog), make sure the following conditions are met:
- The OceanBase cluster has obconfig_url configured. After logging in to the OceanBase cluster, run:
SQL
SHOW PARAMETERS LIKE 'obconfig_url';
- If it is not configured, install obconfigserver manually and set the parameter. For details, see: Deploying obconfigserver from the command line.
- ODP (OBProxy) is deployed and version-compatible. The Binlog service relies on ODP for connection support and requires the ODP and OceanBase database versions to be within the supported range. See: Release Notes.
- Network connectivity. Make sure the Binlog Server can reach the OceanBase instance's SQL/RPC ports and the metadata database port, and that ODP can reach binlog_service_ip.
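The connectivity requirements above can be sanity-checked before deployment. The sketch below probes each endpoint over TCP; the hosts and ports are placeholders (not values from this deployment), so substitute your actual observer SQL/RPC ports and Binlog service address:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoints -- substitute your actual OceanBase SQL port
# (default 2881), RPC port (default 2882), and Binlog service address.
checks = {
    "observer SQL": ("10.10.10.101", 2881),
    "observer RPC": ("10.10.10.101", 2882),
    "binlog service": ("10.10.10.102", 2983),
}
for name, (host, port) in checks.items():
    print(f"{name}: {'ok' if can_connect(host, port) else 'UNREACHABLE'}")
```

This only proves TCP reachability, not that the service behind the port is healthy, but it catches firewall and routing problems early.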
Step 1: Installation
- Community edition installation (using yum as an example)
Bash
# Install after adding the software repository
yum install -y obbinlog
After installation, the default path is /home/ds/oblogproxy.
Note: Enterprise edition users need to contact OceanBase technical support for the installation package. For details, see: Binlog Service Introduction.
- Manual extraction deployment (optional)
You can also download the RPM package and extract it to a target directory with rpm2cpio.
Step 2: Initialize and Start the Node
On first startup, initialize the metadata tables; subsequent nodes do not need to repeat this step.
After startup, query the node status with:
SQL
SHOW NODES;
For details, see: Node Management
How an OceanBase Tenant Subscribes to the Binlog Server
Step: Create a Binlog Task
First, confirm the tenant information:
SQL
-- View the cluster name
SHOW PARAMETERS LIKE 'cluster';
-- Get the config_url
SHOW PARAMETERS LIKE 'obconfig_url';
Then run the CREATE BINLOG command on the Binlog Server, for example:
SQL
CREATE BINLOG INSTANCE binlog1 FOR `demo`.`obmysql` CLUSTER_URL='http://1xx.xx.xx.1:8080/services?Action=ObRootServiceInfo';
Parameter description:
- ${cluster_name}: the actual cluster name
- ${tenant_name}: the tenant name
- ${config_url}: the value returned by SHOW PARAMETERS LIKE 'obconfig_url'
Reference: Creating a Binlog Instance
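Since the statement is easy to mistype, a small helper can assemble it from the three values described above. This is an illustrative sketch; the function name and its default instance name are mine, not part of the Binlog Server tooling:

```python
def build_create_binlog(cluster_name: str, tenant_name: str, config_url: str,
                        instance_name: str = "binlog1") -> str:
    """Assemble a CREATE BINLOG INSTANCE statement from the queried values."""
    return (
        f"CREATE BINLOG INSTANCE {instance_name} "
        f"FOR `{cluster_name}`.`{tenant_name}` "
        f"CLUSTER_URL='{config_url}'"
    )

# Values mirror the example statement above.
sql = build_create_binlog(
    "demo", "obmysql",
    "http://1xx.xx.xx.1:8080/services?Action=ObRootServiceInfo",
)
print(sql)
```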
How to Check Whether the OceanBase Instance Generates Binlogs Normally
Method 1: Check the Logs
Check the obbinlog runtime log, usually located at:
Plain
/home/ds/oblogproxy/log/logproxy.log
Search it for key errors or status information, for example logs indicating that clog was fetched successfully.
If a resource-shortage error appears, such as:
Plain
[error] selection_strategy.cpp(519): [ResourcesFilter] The resource threshold of node ... does not meet requirements
check whether CPU, memory, or disk usage has exceeded the thresholds.
For details, see: Troubleshooting Guide
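Grepping the log by hand works, but a short script makes the check repeatable. The helper below is hypothetical (not part of obbinlog); it scans logproxy.log text for the resource-threshold error shown above:

```python
import re

# Pattern taken from the [ResourcesFilter] error line shown above.
RESOURCE_ERR = re.compile(r"\[ResourcesFilter\] The resource threshold of node")

def find_resource_errors(log_text: str) -> list[str]:
    """Return every log line that reports the resource-threshold error."""
    return [line for line in log_text.splitlines() if RESOURCE_ERR.search(line)]

# Sample lines for illustration (the info line is a made-up placeholder).
sample = """\
[info] fetcher.cpp(101): pull clog succeed, checkpoint=1700000000
[error] selection_strategy.cpp(519): [ResourcesFilter] The resource threshold of node 10.0.0.1 does not meet requirements
"""
for hit in find_resource_errors(sample):
    print(hit)
```

In practice you would read `/home/ds/oblogproxy/log/logproxy.log` and feed its contents to the function.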
Method 2: Monitoring and Diagnostic Tools
You can use the obdiag tool for one-click diagnostics to collect cluster and Binlog-related status information.
How to Inspect the OceanBase Binlog Server Installation Directory and Its run Subdirectory
Default Installation Path
The community edition's default installation path is:
Plain
/home/ds/oblogproxy
Enter the run directory and list its files:
Bash
cd /home/ds/oblogproxy/run
ls -la
Common subdirectories and files include:
- bin/: executables, such as the logproxy main process
- conf/: configuration file directory
- log/: log files, notably logproxy.log
- run/: runtime artifacts such as PID files and socket files
- lib/: dependency libraries
You can check the currently running process:
Bash
ps -ef | grep logproxy
Additional Notes
- Unsupported scenarios: OceanBase's Binlog service is currently not applicable to primary/standby setups, incremental restore, and similar scenarios. See: Binlog Service Introduction
- Version compatibility: different versions of obbinlog support different OceanBase versions. If your version is outside the supported range, you can manually install the matching obcdc dependency. See: obbinlog V4.3.2
Summary
| Operation | Key command/path |
| --- | --- |
| Check obconfig_url | SHOW PARAMETERS LIKE 'obconfig_url'; |
| Create a Binlog instance | CREATE BINLOG INSTANCE ... FOR `cluster`.`tenant` CLUSTER_URL='...' |
| Installation directory | /home/ds/oblogproxy |
| Log path | /home/ds/oblogproxy/log/logproxy.log |
| Query nodes | SHOW NODES; |
Consider using OCP or the obd tool for visual management and automated deployment to improve O&M efficiency.
For more details, refer to the official documentation.
Installing ZooKeeper
Kafka is itself based on ZooKeeper, and the Kafka tarball can start ZooKeeper directly:
Plain
wget https://archive.apache.org/dist/kafka/3.9.0/kafka_2.13-3.9.0.tgz
tar zxvf kafka_2.13-3.9.0.tgz
cd kafka_2.13-3.9.0
bin/zookeeper-server-start.sh config/zookeeper.properties
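To confirm ZooKeeper actually came up, you can send its ruok four-letter command (note that since ZooKeeper 3.5, four-letter words must be enabled via 4lw.commands.whitelist in zookeeper.properties). A minimal probe, assuming the default client port 2181:

```python
import socket

def zk_ruok(host: str = "127.0.0.1", port: int = 2181,
            timeout: float = 3.0) -> bool:
    """Send ZooKeeper's 'ruok' command; a healthy server answers 'imok'."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"ruok")
            return s.recv(16) == b"imok"
    except OSError:
        return False

print("zookeeper ok:", zk_ruok())
```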
Installing Java
Plain
yum -y install java
java --version
Output:
Plain
#openjdk 11.0.21 2023-10-17
#OpenJDK Runtime Environment Bisheng (build 11.0.21+9)
#OpenJDK 64-Bit Server VM Bisheng (build 11.0.21+9, mixed mode, sharing)
Installing Canal
Install canal.deployer-1.1.8.tar.gz and canal.adapter-1.1.8.tar.gz:
Plain
wget https://github.com/alibaba/canal/releases/download/canal-1.1.8/canal.deployer-1.1.8.tar.gz
wget https://github.com/alibaba/canal/releases/download/canal-1.1.8/canal.adapter-1.1.8.tar.gz
Modifying the Deployer Configuration
Two configuration files need to be modified: canal.properties and instance.properties.
Configure the canal.properties file:
Plain
vi /root/canal-for-ob-1.1.8/conf/canal.properties
canal.properties configuration file:
Shell
#################################################
# common argument
#################################################
# tcp bind ip
canal.ip =
# register ip to zookeeper
canal.register.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
# canal instance user/passwd
canal.user = canal
canal.passwd =
# canal admin config
#canal.admin.manager = 127.0.0.1:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd =
# admin auto register
#canal.admin.register.auto = true
#canal.admin.register.cluster =
#canal.admin.register.name =
canal.zkServers = 127.0.0.1:2181 <--- set this to the ZooKeeper address
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ, pulsarMQ
canal.serverMode = tcp <--- set this to tcp
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
# memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
# memory store RingBuffer used memory unit size , default 1kb
canal.instance.memory.buffer.memunit = 1024
# memory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true
# detecting config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false
# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size = 1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60
# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30
# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false
canal.instance.filter.dml.insert = false
canal.instance.filter.dml.update = false
canal.instance.filter.dml.delete = false
# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB
# binlog ddl isolation
canal.instance.get.ddl.isolation = false
# parallel parser config
canal.instance.parser.parallel = true
# concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
# disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256
# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire , default 360 hour(15 days)
canal.instance.tsdb.snapshot.expire = 360
#################################################
# destinations
#################################################
canal.destinations = example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5
# set this value to 'true' means that when binlog pos not found, skip to latest.
# WARN: pls keep 'false' in production env, or if you know what you want.
canal.auto.reset.latest.pos.mode = false
canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml
canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
#canal.instance.global.spring.xml = classpath:spring/file-instance.xml
canal.instance.global.spring.xml = classpath:spring/default-instance.xml
#canal.instance.global.spring.xml = classpath:spring/ob-default-instance.xml
##################################################
# MQ Properties
##################################################
# aliyun ak/sk , support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid=
canal.mq.flatMessage = true
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local
canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8
##################################################
# Kafka
##################################################
kafka.bootstrap.servers = 127.0.0.1:9092
kafka.acks = all
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0
kafka.kerberos.enable = false
kafka.kerberos.krb5.file = ../conf/kerberos/krb5.conf
kafka.kerberos.jaas.file = ../conf/kerberos/jaas.conf
# sasl demo
# kafka.sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \\n username=\"alice\" \\npassword=\"alice-secret\";
# kafka.sasl.mechanism = SCRAM-SHA-512
# kafka.security.protocol = SASL_PLAINTEXT
##################################################
# RocketMQ
##################################################
rocketmq.producer.group = test
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
rocketmq.namesrv.addr = 127.0.0.1:9876
rocketmq.retry.times.when.send.failed = 0
rocketmq.vip.channel.enabled = false
rocketmq.tag =
##################################################
# RabbitMQ
##################################################
rabbitmq.host =
rabbitmq.virtual.host =
rabbitmq.exchange =
rabbitmq.username =
rabbitmq.password =
rabbitmq.queue =
rabbitmq.routingKey =
rabbitmq.deliveryMode =
##################################################
# Pulsar
##################################################
pulsarmq.serverUrl =
pulsarmq.roleToken =
pulsarmq.topicTenantPrefix =
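Before starting the deployer, it can be worth verifying that the two settings changed above (canal.zkServers and canal.serverMode) were actually set. The parser below is a simplified sketch of Java properties parsing, sufficient for this file's key = value lines:

```python
def parse_properties(text: str) -> dict:
    """Parse simple 'key = value' lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Drop any trailing inline annotation such as "<--- ...".
        props[key.strip()] = value.split("<---")[0].strip()
    return props

# In practice, read ../conf/canal.properties; a two-line sample here:
sample = """\
canal.zkServers = 127.0.0.1:2181 <--- set this to the ZooKeeper address
canal.serverMode = tcp <--- set this to tcp
"""
props = parse_properties(sample)
assert props["canal.zkServers"], "canal.zkServers must point at ZooKeeper"
assert props["canal.serverMode"] == "tcp", "this walkthrough uses TCP mode"
print(props)
```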
Configure the instance.properties file:
Plain
vi /root/canal-for-ob-1.1.8/conf/example/instance.properties
Set the configuration parameters; pay attention to the required ones.
Bash
#################################################
# mysql serverId , v1.0.26+ will autoGen
canal.instance.mysql.slaveId=0
# enable gtid use true/false
canal.instance.gtidon=false
# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=
# position info
canal.instance.master.address=10.10.10.101:2883 <--- the OBProxy address
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=
# multi stream for polardbx
canal.instance.multi.stream.on=false
# ssl
#canal.instance.master.sslMode=DISABLED
#canal.instance.master.tlsVersions=
#canal.instance.master.trustCertificateKeyStoreType=
#canal.instance.master.trustCertificateKeyStoreUrl=
#canal.instance.master.trustCertificateKeyStorePassword=
#canal.instance.master.clientCertificateKeyStoreType=
#canal.instance.master.clientCertificateKeyStoreUrl=
#canal.instance.master.clientCertificateKeyStorePassword=
# table meta tsdb info
canal.instance.tsdb.enable=true
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal
#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=
# username/password
canal.instance.dbUsername=root@ob_user1#ob_test1 <--- the OB user
canal.instance.dbPassword=PassworD123 <--- the OB password
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==
# table regex
canal.instance.filter.regex=.*\\..*
# table black regex
canal.instance.filter.black.regex=mysql\\.slave_.*
# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch
# mq config
canal.mq.topic=example
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,topic2:mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.enableDynamicQueuePartition=false
#canal.mq.partitionsNum=3
#canal.mq.dynamicTopicPartitionNum=test.*:4,mycanal:6
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#################################################
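The two regex settings above decide which tables are replicated: Canal matches each schema.table name against the comma-separated patterns in canal.instance.filter.regex (Java regex; the escaped \\. in the properties file ends up as a literal-dot pattern). A sketch of that matching, using Python's re as a stand-in for Java's regex engine:

```python
import re

def table_matches(filter_regex: str, schema: str, table: str) -> bool:
    """Return True if schema.table matches any comma-separated pattern."""
    name = f"{schema}.{table}"
    return any(re.fullmatch(p.strip(), name) for p in filter_regex.split(","))

# ".*\\..*" in the properties file means the regex .*\..* -- every table.
assert table_matches(r".*\..*", "db1", "t1")
# The black regex above excludes MySQL's internal slave_* tables:
assert table_matches(r"mysql\.slave_.*", "mysql", "slave_relay_log_info")
assert not table_matches(r"mysql\.slave_.*", "db1", "t1")
print("filter patterns behave as expected")
```

Tables matching the black regex are excluded even if the main regex matches them.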
Starting the Canal Server
Plain
sh /root/canal-for-ob-1.1.8/bin/startup.sh
A normal log output contains no errors; if any appear, analyze and resolve them first.
Plain
2025-12-11 17:18:50.995 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2025-12-11 17:18:51.001 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2025-12-11 17:18:51.008 [main] INFO com.alibaba.otter.canal.deployer.CanalStarter - ## start the canal server.
2025-12-11 17:18:51.089 [main] INFO com.alibaba.otter.canal.deployer.CanalController - ## start the canal server[172.17.0.1(172.17.0.1):11111]
2025-12-11 17:18:52.038 [main] INFO com.alibaba.otter.canal.deployer.CanalStarter - ## the canal server is running now ......
Modifying the Canal Adapter Configuration
Plain
vi /root/canal-for-adapter-ob-1.1.8/conf/application.yml
Canal Adapter configuration file:
YAML
server:
  port: 8081
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: non_null

canal.conf:
  mode: tcp #tcp kafka rocketMQ rabbitMQ
  flatMessage: true
  zookeeperHosts:
  syncBatchSize: 1000
  retries: -1
  timeout:
  accessKey:
  secretKey:
  consumerProperties:
    # canal tcp consumer
    canal.tcp.server.host: 127.0.0.1:11111 <--- the Canal Server address
    canal.tcp.zookeeper.hosts:
    canal.tcp.batch.size: 500
    canal.tcp.username:
    canal.tcp.password:
    # kafka consumer
    # rocketMQ consumer
    # rabbitMQ consumer
  srcDataSources:
    defaultDS:
      url: jdbc:mysql://xx.xxx.xx.203:2883/db1?useUnicode=true <-- the source (OceanBase) address
      username: root@ob_user1#ob_test1 <-- the OB username
      password: PassworD123 <-- the OB password
  canalAdapters:
  - instance: example # canal instance Name or mq topic name
    groups:
    - groupId: g1
      outerAdapters:
      - name: rdb
        key: mysql1 <--- remember this key; the rdb mapping file below must reference it
        properties:
          jdbc.driverClassName: com.mysql.jdbc.Driver
          jdbc.url: jdbc:mysql://xx.xxx.xxx.247:4000/db1?useUnicode=true <-- the target (TiDB) address
          jdbc.username: tidb_test1
          jdbc.password: PassworD123
Modify mytest_user.yml to configure the subscription/sync mapping:
Plain
vi /root/canal-for-adapter-ob-1.1.8/conf/rdb/mytest_user.yml
mytest_user.yml configuration parameters:
YAML
dataSourceKey: defaultDS
destination: example
groupId: g1
outerAdapterKey: mysql1 <--- must match the key defined in application.yml
concurrent: true
dbMapping:
  mirrorDb: true
  database: db1
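The adapter wiring only works when mytest_user.yml agrees with application.yml: outerAdapterKey must equal the outer adapter's key, destination the canal instance name, and groupId the groupId. A hypothetical cross-check (the helper and its dict shapes are mine), with the values hard-coded from the two files above:

```python
def check_adapter_wiring(app_conf: dict, mapping: dict) -> list[str]:
    """Return a list of mismatches between application.yml and an rdb mapping."""
    problems = []
    if mapping["outerAdapterKey"] != app_conf["outer_adapter_key"]:
        problems.append("outerAdapterKey does not match application.yml key")
    if mapping["destination"] != app_conf["instance"]:
        problems.append("destination does not match the canal instance name")
    if mapping["groupId"] != app_conf["group_id"]:
        problems.append("groupId does not match application.yml groupId")
    return problems

# Values transcribed from application.yml and mytest_user.yml above.
app_conf = {"instance": "example", "group_id": "g1", "outer_adapter_key": "mysql1"}
mapping = {"destination": "example", "groupId": "g1",
           "outerAdapterKey": "mysql1", "dataSourceKey": "defaultDS"}
print(check_adapter_wiring(app_conf, mapping) or "adapter wiring is consistent")
```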
Starting the Canal Adapter
Plain
sh /root/canal-for-adapter-ob-1.1.8/bin/startup.sh
The log shows no errors:
Plain
2025-12-11 15:30:28.800 [SpringApplicationShutdownHook] INFO ru.yandex.clickhouse.ClickHouseDriver - Driver registered
2025-12-11 15:30:29.885 [SpringApplicationShutdownHook] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## stop the canal client adapters
2025-12-11 15:30:29.886 [pool-9-thread-1] INFO c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - destination example is waiting for adapters' worker thread die!
2025-12-11 15:30:29.961 [pool-9-thread-1] INFO c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - destination example adapters worker thread dead!
2025-12-11 15:30:30.158 [pool-9-thread-1] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-2} closing ...
2025-12-11 15:30:30.162 [pool-9-thread-1] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-2} closed
2025-12-11 15:30:30.162 [pool-9-thread-1] INFO c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - destination example all adapters destroyed!
2025-12-11 15:30:30.162 [SpringApplicationShutdownHook] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - All canal adapters destroyed
2025-12-11 15:30:30.162 [SpringApplicationShutdownHook] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-1} closing ...
2025-12-11 15:30:30.162 [SpringApplicationShutdownHook] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-1} closed
2025-12-11 15:30:30.163 [SpringApplicationShutdownHook] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## canal client adapters are down.
2025-12-11 17:26:01.842 [main] INFO c.a.otter.canal.adapter.launcher.CanalAdapterApplication - Starting CanalAdapterApplication using Java xx.0.21 on tidbxxx.xxx.xxx.xxx.net with PID 3965171 (/root/canal-for-adapter-ob-1.1.8/lib/client-adapter.launcher-1.1.8.jar started by root in /root/canal-for-adapter-ob-1.1.8/bin)
2025-12-11 17:26:01.847 [main] INFO c.a.otter.canal.adapter.launcher.CanalAdapterApplication - No active profile set, falling back to 1 default profile: "default"
2025-12-11 17:26:02.300 [main] INFO org.springframework.cloud.context.scope.GenericScope - BeanFactory id=d4f2b56b-aacd-327d-9217-5ce4cfc37805
2025-12-11 17:26:02.480 [main] INFO o.s.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port(s): 8081 (http)
2025-12-11 17:26:02.487 [main] INFO org.apache.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-8081"]
2025-12-11 17:26:02.487 [main] INFO org.apache.catalina.core.StandardService - Starting service [Tomcat]
2025-12-11 17:26:02.487 [main] INFO org.apache.catalina.core.StandardEngine - Starting Servlet engine: [Apache Tomcat/9.0.75]
2025-12-11 17:26:02.570 [main] INFO o.a.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
2025-12-11 17:26:02.570 [main] INFO o.s.b.w.s.context.ServletWebServerApplicationContext - Root WebApplicationContext: initialization completed in 692 ms
2025-12-11 17:26:02.806 [main] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-1} inited
2025-12-11 17:26:03.104 [main] INFO org.apache.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8081"]
2025-12-11 17:26:03.115 [main] INFO o.s.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port(s): 8081 (http) with context path ''
2025-12-11 17:26:03.118 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## syncSwitch refreshed.
2025-12-11 17:26:03.118 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## start the canal client adapters.
2025-12-11 17:26:03.119 [main] INFO c.a.otter.canal.client.adapter.support.ExtensionLoader - extension classpath dir: /root/canal-for-adapter-ob-1.1.8/plugin
2025-12-11 17:26:03.166 [main] INFO c.a.otter.canal.client.adapter.rdb.config.ConfigLoader - ## Start loading rdb mapping config ...
2025-12-11 17:26:03.174 [main] INFO c.a.otter.canal.client.adapter.rdb.config.ConfigLoader - ## Rdb mapping config loaded
2025-12-11 17:26:03.198 [main] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-2} inited
2025-12-11 17:26:03.202 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - Load canal adapter: rdb succeed
2025-12-11 17:26:03.207 [main] INFO c.alibaba.otter.canal.connector.core.spi.ExtensionLoader - extension classpath dir: /root/canal-for-adapter-ob-1.1.8/plugin
2025-12-11 17:26:03.221 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - Start adapter for canal-client mq topic: example-g1 succeed
2025-12-11 17:26:03.222 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## the canal client adapters are running now ......
2025-12-11 17:26:03.222 [Thread-3] INFO c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - =============> Start to connect destination: example <=============
2025-12-11 17:26:03.228 [main] INFO c.a.otter.canal.adapter.launcher.CanalAdapterApplication - Started CanalAdapterApplication in 1.697 seconds (JVM running for 2.164)
2025-12-11 17:26:03.354 [Thread-3] INFO c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - =============> Subscribe destination: example succeed <=============
Verifying OceanBase Incremental Sync
Insert data into OB to verify incremental data synchronization:
SQL
mysql> select version();
+------------------------------+
| version() |
+------------------------------+
| 5.7.25-OceanBase_CE-v4.3.5.4 |
+------------------------------+
1 row in set (0.00 sec)
mysql> use db1;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+---------------+
| Tables_in_db1 |
+---------------+
| t1 |
+---------------+
1 row in set (0.00 sec)
mysql> desc t1;
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| id | int(11) | NO | PRI | NULL | |
| col1 | varchar(20) | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.01 sec)
mysql> select * from t1;
+----+------+
| id | col1 |
+----+------+
| 1 | ccc |
| 2 | ccc |
| 3 | ccc |
+----+------+
3 rows in set (0.00 sec)
mysql> insert into t1 (id,col1) values (4,'ddd');
Query OK, 1 row affected (0.01 sec)
mysql> select * from t1;
+----+------+
| id | col1 |
+----+------+
| 1 | ccc |
| 2 | ccc |
| 3 | ccc |
| 4 | ddd |
+----+------+
4 rows in set (0.00 sec)
The data synced to TiDB:
SQL
mysql> select version();
+--------------------+
| version() |
+--------------------+
| 8.0.11-TiDB-v7.5.5 |
+--------------------+
1 row in set (0.00 sec)
mysql> use db1;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select * from t1;
+----+------+
| id | col1 |
+----+------+
| 1 | ccc |
| 2 | ccc |
| 3 | ccc |
| 4 | ddd |
+----+------+
4 rows in set (0.00 sec)
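Row-by-row eyeballing works for four rows; for larger tables a set diff of snapshots from both ends is more reliable. The helper below is illustrative (fetch the rows with any MySQL client library); the sample rows are transcribed from the sessions above:

```python
def diff_rows(source: list[tuple], target: list[tuple]) -> dict:
    """Diff two row snapshots; empty lists on both keys mean in-sync."""
    src, tgt = set(source), set(target)
    return {"missing_in_target": sorted(src - tgt),
            "extra_in_target": sorted(tgt - src)}

# Rows from the OB and TiDB sessions shown above.
ob_rows   = [(1, "ccc"), (2, "ccc"), (3, "ccc"), (4, "ddd")]
tidb_rows = [(1, "ccc"), (2, "ccc"), (3, "ccc"), (4, "ddd")]

result = diff_rows(ob_rows, tidb_rows)
assert result == {"missing_in_target": [], "extra_in_target": []}
print("source and target are in sync")
```

For very large tables, comparing per-table checksums or chunked counts is cheaper than full snapshots, but the idea is the same.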
Notes
- Version compatibility: make sure the obbinlog, OB cluster, ODP, and Canal versions match.
- Log monitoring: regularly check logproxy.log and the Canal Server/Adapter logs, and promptly investigate resource shortages (CPU/memory/disk) or connection issues.
- O&M efficiency: consider using OCP or the obd tool for visual management and automated deployment.
Summary
As representative domestic distributed databases, TiDB and OceanBase have each become preferred tools for operations DBAs on the strength of their respective technical characteristics. In recent years, more and more OceanBase users have chosen TiDB as a downstream database, a trend that reflects the differences between the two in features, ecosystem, and fit for user needs. The core motivations for OceanBase users choosing TiDB downstream include technology-stack simplification and lower operations cost, TiDB's business friendliness and development fit, cross-region sync and stability requirements, and an active community with long-term momentum.
As enterprises pay growing attention to technical flexibility, operational efficiency, and long-term cost, TiDB, with its compatibility, scalability, and ecosystem advantages, is becoming a preferred downstream database for OceanBase users looking to broaden their technology stack and reduce lock-in risk. This trend not only reflects the diversified demands of the distributed database market, but also validates TiDB's overall competitiveness in complex scenarios.