Canal HA Setup: Consolidated Guide (Based on Version 1.1.4)

Enable MySQL Binlog
(1) Modify the MySQL configuration file
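A minimal sketch of the required settings, assuming the config file is /etc/my.cnf and server_id 1 is free in your topology (canal requires ROW-format binlog):

[mysqld]
# enable the binlog with the file-name prefix mysql-bin
log-bin=mysql-bin
# canal parses ROW-format events
binlog_format=ROW
# must be unique among all MySQL servers (and differ from canal's slaveId)
server_id=1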
(2) Restart the MySQL service and check that the configuration has taken effect
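For example, on a systemd-managed host (the service name may be mysqld or mysql depending on the distribution):

sudo systemctl restart mysqld
# expect log_bin = ON and binlog_format = ROW
mysql -uroot -p -e "SHOW VARIABLES LIKE 'log_bin'; SHOW VARIABLES LIKE 'binlog_format';"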
(3) Once the configuration is in effect, create the canal user and grant it the required privileges
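The statements below follow the canal QuickStart grants; 'canal' as the password is a placeholder you should change:

CREATE USER 'canal'@'%' IDENTIFIED BY 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;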

Install canal-admin
(1) Extract the archive

canal.admin-1.1.4.tar.gz(36.8 MB)
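For example, into /opt/apps, the same base directory this guide later uses for canal-server:

mkdir -p /opt/apps/canal-admin
tar -zxvf canal.admin-1.1.4.tar.gz -C /opt/apps/canal-admin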

(2) Initialize the metadata database
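canal-admin ships its schema script at conf/canal_manager.sql (it creates the canal_manager database). Load it into a MySQL instance and make sure conf/application.yml points at that database (the spring.datasource address, username, and password). For example:

mysql -uroot -p < /opt/apps/canal-admin/conf/canal_manager.sql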
(3) Start canal-admin
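Using the bundled script:

sh /opt/apps/canal-admin/bin/startup.sh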
(4) Check the logs
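canal-admin logs under its install directory:

tail -f /opt/apps/canal-admin/logs/admin.log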
(5) Open the canal-admin UI on port 8089 of hadoop102 (http://hadoop102:8089)

(6) Log in with the default credentials: admin / 123456

(7) Shutting down canal-admin
Don't shut it down yet; the rest of the configuration is done directly in the web UI.

Install canal-server
(1) Upload and extract the archive on every node that will run a canal-server (one copy per node)

canal.deployer-1.1.4.tar.gz(49.4 MB)
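Same pattern as canal-admin, repeated on every node:

mkdir -p /opt/apps/canal-server
tar -zxvf canal.deployer-1.1.4.tar.gz -C /opt/apps/canal-server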

(2) Edit canal_local.properties
/opt/apps/canal-server/conf/canal_local.properties
This must be changed on every node.
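A sketch of the edits, using this guide's example addresses: canal.register.ip must be the node's own IP (so it differs on every node; 192.168.6.33 here is a placeholder), and canal.admin.passwd is the template's default hash of 123456:

# this node's own IP (different on every node)
canal.register.ip = 192.168.6.33

# where canal-admin runs
canal.admin.manager = 192.168.6.32:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441
# auto-register this node in canal-admin on startup
canal.admin.register.auto = true
canal.admin.register.cluster =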
(3) Start canal on every node
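With canal-admin in charge, each server is started in local mode so that it reads canal_local.properties:

sh /opt/apps/canal-server/bin/startup.sh local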
(4) Check the admin UI: the nodes register themselves automatically. Take a screenshot of this page; you will use it for reference later.

(5) First, delete all of the servers

(6) Create a new cluster
Enter a name and the ZooKeeper cluster address

(7) Configure the unified configuration for the cluster nodes

Load the template, then modify the configuration as follows:

#################################################
######### 		common argument		#############
#################################################
# tcp bind ip
canal.ip =
# register ip to zookeeper
canal.register.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
# canal instance user/passwd
canal.user = canal
canal.passwd = E3619321C1A937C46A0D8BD1DAC39F93B27D4458

# canal admin config
# change to your own canal-admin address
canal.admin.manager = 192.168.6.32:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441

# change to your own ZooKeeper cluster address
canal.zkServers = 192.168.6.82:2181
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, RocketMQ (change to kafka here)
canal.serverMode = kafka
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
## memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
## memory store RingBuffer used memory unit size , default 1kb
canal.instance.memory.buffer.memunit = 1024 
## memory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

## detecing config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size =  1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60

# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false

# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED 
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation
canal.instance.get.ddl.isolation = false

# parallel parser config
canal.instance.parser.parallel = true
## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
## disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire , default 360 hour(15 days)
canal.instance.tsdb.snapshot.expire = 360

# aliyun ak/sk , support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =

#################################################
######### 		destinations		#############
#################################################
canal.destinations =
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = manager
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
#canal.instance.global.spring.xml = classpath:spring/file-instance.xml
# comment out file-instance.xml above and enable default-instance.xml (ZooKeeper-backed HA)
canal.instance.global.spring.xml = classpath:spring/default-instance.xml

##################################################
######### 		     MQ 		     #############
##################################################
# the Kafka brokers the data is sent to; change to your own cluster
canal.mq.servers = 192.168.6.82:9092
canal.mq.retries = 0
canal.mq.batchSize = 16384
canal.mq.maxRequestSize = 1048576
canal.mq.lingerMs = 100
canal.mq.bufferMemory = 33554432
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
canal.mq.flatMessage = true
canal.mq.compressionType = none
canal.mq.acks = all
#canal.mq.properties. =
canal.mq.producerGroup = test
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local
# aliyun mq namespace
#canal.mq.namespace =
canal.mq.topic=canal

##################################################
#########     Kafka Kerberos Info    #############
##################################################
canal.mq.kafka.kerberos.enable = false
canal.mq.kafka.kerberos.krb5FilePath = "../conf/kerberos/krb5.conf"
canal.mq.kafka.kerberos.jaasFilePath = "../conf/kerberos/jaas.conf"

(8) Add the servers you deleted earlier back, this time into the cluster

Cluster: the one you just created
Fill in the remaining fields from the screenshot you took earlier.

(9) Configure the instance
Create a new instance

Load the template, then modify the configuration as follows:

# must not clash with any MySQL server_id in the topology
canal.instance.mysql.slaveId=1234

# change to your own MySQL address
canal.instance.master.address=192.168.6.32:3306

# what to capture from MySQL; here: all tables in all databases
canal.instance.filter.regex=.*\\..*

# Kafka topic for data that matches no dynamic-topic rule
canal.mq.topic=canal

# route matched data to dynamically named topics
canal.mq.dynamicTopic=.*\\..*

For canal.instance.filter.regex, canal.mq.topic, and canal.mq.dynamicTopic, follow the official documentation; the values here are only examples.
Save the configuration.

Check ZooKeeper: you should see the cluster and the instance HA nodes you just created.
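A quick way to inspect this is the ZooKeeper CLI; /otter/canal is canal's default root path, the addresses are this guide's examples, and <instance> stands for the instance name you created:

zkCli.sh -server 192.168.6.82:2181

# inside the shell: servers registered in the cluster
ls /otter/canal/cluster
# the server currently holding the instance (HA active node)
get /otter/canal/destinations/<instance>/running

Once binlog events start flowing, the messages can also be verified on the Kafka side:

kafka-console-consumer.sh --bootstrap-server 192.168.6.82:9092 --topic canal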
