Table of Contents
- [1 Environment Setup](#1-environment-setup)
  - [1.1 Enable MySQL binlog](#11-enable-mysql-binlog)
  - [1.2 Install canal on Windows](#12-install-canal-on-windows)
  - [1.3 Install RabbitMQ on Windows 10](#13-install-rabbitmq-on-windows-10)
- [2 Spring Boot + MQ + Canal](#2-spring-boot--mq--canal)
  - [2.1 Basic implementation](#21-basic-implementation)
  - [2.2 Operation log implementation (1)](#22-operation-log-implementation-1)
  - [2.3 Operation log implementation (2)](#23-operation-log-implementation-2)
Preface
In business development we often need to react to data changes. For example, when a user deactivates their account, the system may automatically send a confirmation SMS. There are two ways to do this. The first couples the logic into the business system: when the user performs the deactivation, the code also sends the SMS; even if this goes through MQ, it still couples into the business code. The second works at the database level: by listening to the binlog, the SMS is triggered automatically, fully decoupled from the business system.
1 Environment Setup
1.1 Enable MySQL binlog
- Check whether binlog is enabled: if `log_bin` is OFF it is disabled, ON means enabled.
```sql
-- Check whether binlog is enabled
SHOW VARIABLES LIKE '%log_bin%';
```
- Enable it by editing the my.cnf file (my.ini on Windows) and restarting MySQL:

```ini
[mysqld]
log-bin=mysql-bin
binlog-format=ROW
server_id=1
```
- Prepare a MySQL account for canal to replicate with (the password must match `canal.instance.dbPassword` in the canal config below):

```sql
-- Create the canal user with password 'canal'
CREATE USER canal IDENTIFIED BY 'canal';
-- Create the canal database and grant the canal user the required privileges
CREATE DATABASE canal CHARACTER SET utf8mb4;
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
GRANT ALL PRIVILEGES ON canal.* TO 'canal'@'%';
FLUSH PRIVILEGES;
```
- Prepare a table for testing:

```sql
CREATE TABLE `user` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'user id',
  `username` varchar(50) DEFAULT NULL COMMENT 'user name',
  `password` varchar(50) DEFAULT NULL COMMENT 'password',
  `email` varchar(45) DEFAULT NULL COMMENT 'email',
  `phone` varchar(15) DEFAULT NULL COMMENT 'phone number',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8mb4;

INSERT INTO `canal`.`user` (`id`, `username`, `password`, `email`, `phone`) VALUES
(1, '姜磊', 'k0VP$l@ru', 't.wazmcs@qxfvsstyo.uy', '18145206808'),
(2, '丁洋', '8pig73*dW', 'h.wsecj@wmlp.li', '19832458514'),
(3, '邱秀兰', '5G)c@7RyV', 'c.afkrfcr@rnhewu.org.cn', '18656022523'),
(4, '孔洋', 'KjvLG*BP', 'r.tbnmdyh@pzzuo.jo', '18674498531'),
(5, '董霞', '%fqmhybp3', 'o.hnlu@hhyvqxbv.eg', '18192674843');
```
- Handy commands:

```sql
-- check the binlog status variables
show variables like 'log_%';
-- list all binlog files
show master logs;
-- show the newest binlog file name and its current end position
show master status;
-- rotate to a new binlog file immediately (same effect as a restart)
flush logs;
-- delete all binlog files
reset master;
```
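To confirm that events are actually being written, you can also inspect a binlog file directly. The file name below is an assumption; take the real one from `show master logs`:

```sql
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 10;
```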
1.2 Install canal on Windows
Step 1: edit instance.properties under conf/example so that canal can connect to MySQL and tail its binlog.
```properties
#################################################
## mysql serverId , v1.0.26+ will autoGen
# canal.instance.mysql.slaveId=0
# enable gtid use true/false
canal.instance.gtidon=false
# IP and port of the MySQL instance to watch; use the real IP, not localhost or 127.0.0.1
canal.instance.master.address=192.168.0.111:3306
# binlog file name; canal v1.1.5 detects the name and offset automatically, so these may stay empty
canal.instance.master.journal.name=
# binlog offset
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=
# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=
# table meta tsdb info
canal.instance.tsdb.enable=false
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal
#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=
# username/password: the MySQL account authorized for canal
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==
# table regex; must match the test table, canal.user
canal.instance.filter.regex=canal\\.user
# table black regex
canal.instance.filter.black.regex=
# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch
# mq config; in rabbitMQ mode this value is used as the routing key
canal.mq.topic=test.routingKey
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.partitionsNum=3
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#################################################
```
- Check the startup logs; if the instance log shows no errors, the setup succeeded.
Step 2: integrate RabbitMQ
- Edit canal.properties in the conf directory to switch canal's delivery mode to RabbitMQ.
- Configure RabbitMQ:
  - Under virtual host `/`, add an exchange named `canal.topic`.
  - Add a queue `test.queue` and bind it to `canal.topic` with routing key `test.routingKey`.
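The canal.properties changes for RabbitMQ mode look roughly like this (a sketch based on canal 1.1.5; the host and credentials are assumptions matching the local setup in this article):

```properties
# switch the transport from tcp to rabbitMQ
canal.serverMode = rabbitMQ
rabbitmq.host = 127.0.0.1
rabbitmq.virtual.host = /
rabbitmq.exchange = canal.topic
rabbitmq.username = guest
rabbitmq.password = guest
```

Note that in this mode canal takes the routing key from the instance's `canal.mq.topic` value, so that value should match the binding's routing key (`test.routingKey`).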
- Start canal and test; progress can be followed in the logs directory.
1.3 Install RabbitMQ on Windows 10
- RabbitMQ installer download: https://www.rabbitmq.com/install-windows.html
- Erlang installer download: https://www.erlang.org/downloads
- Configure the Erlang environment variables (set ERLANG_HOME and add its bin directory to PATH).
- After installation, enable the rabbitmq_management plugin: open cmd, change into the sbin directory, and run:

```shell
rabbitmq-plugins enable rabbitmq_management
```

- The web management UI is then available at http://localhost:15672.
- Other useful commands:

```shell
net start RabbitMQ
net stop RabbitMQ
rabbitmqctl status
```
2 Spring Boot + MQ + Canal
2.1 Basic implementation
- Dependencies
```xml
<properties>
    <amqp.version>2.3.4.RELEASE</amqp.version>
    <canal.version>1.1.5</canal.version>
</properties>
<!-- message queue -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId>
    <version>${amqp.version}</version>
</dependency>
<!-- canal client -->
<dependency>
    <groupId>com.alibaba.otter</groupId>
    <artifactId>canal.client</artifactId>
    <version>${canal.version}</version>
</dependency>
```
- 配置文件
yaml
spring: #springboot的配置
rabbitmq:
host: 127.0.0.1
port: 5672
username: guest
password: guest
# 消息确认配置项
# 确认消息已发送到交换机(Exchange)
publisher-confirm-type: correlated
# 确认消息已发送到队列(Queue)
publisher-returns: true
- Entity class
```java
package com.pafx.mq;

import com.alibaba.fastjson.JSONArray;
import com.alibaba.fastjson.JSONObject;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * @Description binlog message entity
 * @Author EasonShu
 * @Date 2025/1/3 16:48
 */
@Data
@AllArgsConstructor
@NoArgsConstructor
public class BinLogEntity {
    /**
     * Database name
     */
    private String database;
    /**
     * Table name
     */
    private String table;
    /**
     * Operation type (INSERT / UPDATE / DELETE / ...)
     */
    private String type;
    /**
     * Row data after the change
     */
    private JSONArray data;
    /**
     * Row data before the change (changed columns only)
     */
    private JSONArray old;
    /**
     * Primary key column names
     */
    private JSONArray pkNames;
    /**
     * Executed SQL statement, if any
     */
    private String sql;
    private Long es;
    private String gtid;
    private Long id;
    private Boolean isDdl;
    private JSONObject mysqlType;
    private JSONObject sqlType;
    private Long ts;

    public <T> List<T> getData(Class<T> clazz) {
        if (this.data == null || this.data.size() == 0) {
            return null;
        }
        return this.data.toJavaList(clazz);
    }

    public <T> List<T> getOld(Class<T> clazz) {
        if (this.old == null || this.old.size() == 0) {
            return null;
        }
        return this.old.toJavaList(clazz);
    }

    public List<String> getPkNames() {
        if (this.pkNames == null || this.pkNames.size() == 0) {
            return null;
        }
        List<String> pkNames = new ArrayList<>();
        for (Object pkName : this.pkNames) {
            pkNames.add(pkName.toString());
        }
        return pkNames;
    }

    public Map<String, String> getMysqlType() {
        if (this.mysqlType == null) {
            return null;
        }
        Map<String, String> mysqlTypeMap = new HashMap<>();
        this.mysqlType.forEach((k, v) -> mysqlTypeMap.put(k, v.toString()));
        return mysqlTypeMap;
    }

    public Map<String, Integer> getSqlType() {
        if (this.sqlType == null) {
            return null;
        }
        Map<String, Integer> sqlTypeMap = new HashMap<>();
        this.sqlType.forEach((k, v) -> sqlTypeMap.put(k, Integer.valueOf(v.toString())));
        return sqlTypeMap;
    }
}
```
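For reference, the flat-message JSON that canal publishes (and that this entity deserializes) looks roughly like this for an UPDATE on the `user` table; the field layout follows canal's FlatMessage format, and the values are illustrative:

```json
{
  "data": [{"id": "1", "username": "姜磊", "phone": "18145206808"}],
  "database": "canal",
  "es": 1735900000000,
  "id": 3,
  "isDdl": false,
  "mysqlType": {"id": "bigint(20) unsigned", "username": "varchar(50)", "phone": "varchar(15)"},
  "old": [{"phone": "13800000000"}],
  "pkNames": ["id"],
  "sql": "",
  "sqlType": {"id": -5, "username": 12, "phone": 12},
  "table": "user",
  "ts": 1735900000123,
  "type": "UPDATE"
}
```

Note that `old` carries only the columns that changed, which is what the update handler below relies on.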
- MQ listener
```java
package com.pafx.mq;

import com.alibaba.fastjson.JSON;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.Exchange;
import org.springframework.amqp.rabbit.annotation.Queue;
import org.springframework.amqp.rabbit.annotation.QueueBinding;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.stereotype.Component;

import java.nio.charset.StandardCharsets;

/**
 * @Description RabbitMQ listener for canal messages
 * @Author EasonShu
 * @Date 2025/1/3 16:37
 */
@Component
public class CanalListener {

    private static final Logger logger = LoggerFactory.getLogger(CanalListener.class);

    /**
     * RabbitMQ listener
     * @param message the raw AMQP message
     */
    @RabbitListener(bindings = @QueueBinding(
            exchange = @Exchange(value = "canal.topic"),
            value = @Queue(value = "test.queue"),
            key = "test.routingKey"))
    public void receiveMessage(@Payload Message message) {
        // Read the message body
        String content = new String(message.getBody(), StandardCharsets.UTF_8);
        // Deserialize into the binlog entity
        BinLogEntity binLog = JSON.parseObject(content, BinLogEntity.class);
        logger.info("binLog: {}", binLog);
    }
}
```
- Test: we can now see the raw row-level changes, but this alone does not yet meet the needs of real development.
2.2 Operation log implementation (1)
- Idea 1: to report that a value changed "from A to B" in a readable way, we need to know what each column means, and the column comments written at database design time already define exactly that.
- Write a column-name mapping class
```java
package com.pafx.mq;

import java.sql.*;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * @Description Column-name-to-comment mapping loader
 * @Author EasonShu
 * @Date 2025/1/3 17:45
 */
public class FieldNameMapper {

    // useInformationSchema=true is needed for Connector/J to return column comments in REMARKS
    private static final String JDBC_URL = "jdbc:mysql://127.0.0.1:3306?useInformationSchema=true";
    private static final String USERNAME = "root";
    private static final String PASSWORD = "qwx234";
    private static final String DATABASE = "canal";

    // Nested map: table name -> (column name -> column comment).
    // ConcurrentHashMap because it is written from the loader thread and read from listener threads.
    private static final Map<String, Map<String, String>> FIELD_NAME_MAP = new ConcurrentHashMap<>();
    private static final ExecutorService EXECUTOR_SERVICE = Executors.newSingleThreadExecutor();

    /**
     * Load the column mappings asynchronously.
     */
    public static void asyncLoadFieldNameMappings() {
        EXECUTOR_SERVICE.submit(() -> loadFieldNameMappings(JDBC_URL, USERNAME, PASSWORD));
    }

    /**
     * Load the column mappings for the configured database only.
     */
    private static void loadFieldNameMappings(String jdbcUrl, String username, String password) {
        try (Connection connection = DriverManager.getConnection(jdbcUrl, username, password)) {
            DatabaseMetaData metaData = connection.getMetaData();
            try (ResultSet tables = metaData.getTables(DATABASE, null, "%", new String[]{"TABLE"})) {
                while (tables.next()) {
                    String tableName = tables.getString("TABLE_NAME");
                    Map<String, String> columnMap = new ConcurrentHashMap<>();
                    // Read column metadata; fall back to the column name when no comment is set
                    try (ResultSet columns = metaData.getColumns(DATABASE, null, tableName, "%")) {
                        while (columns.next()) {
                            String columnName = columns.getString("COLUMN_NAME");
                            String remarks = columns.getString("REMARKS");
                            if (remarks == null || remarks.isEmpty()) {
                                remarks = columnName;
                            }
                            columnMap.put(columnName, remarks);
                        }
                    }
                    // Store this table's mapping in the overall map
                    FIELD_NAME_MAP.put(tableName, columnMap);
                    System.out.println("Loaded column mapping: " + tableName + " -> " + columnMap);
                }
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    /**
     * Get the column mappings.
     */
    public static Map<String, Map<String, String>> getFieldNameMap() {
        return FIELD_NAME_MAP;
    }

    static {
        try {
            Class.forName("com.mysql.cj.jdbc.Driver");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        // Kick off the asynchronous load on class initialization
        asyncLoadFieldNameMappings();
    }

    /**
     * Shut down the loader thread pool.
     */
    public static void shutdown() {
        EXECUTOR_SERVICE.shutdown();
    }
}
```
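As an alternative to walking DatabaseMetaData, the same column-comment mapping can be fetched with a single query against information_schema (a sketch, assuming the `canal` schema from this article):

```sql
SELECT TABLE_NAME, COLUMN_NAME, COLUMN_COMMENT
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'canal'
ORDER BY TABLE_NAME, ORDINAL_POSITION;
```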
- Rewrite the MQ listener

```java
package com.pafx.mq;

import com.alibaba.fastjson.JSON;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.Exchange;
import org.springframework.amqp.rabbit.annotation.Queue;
import org.springframework.amqp.rabbit.annotation.QueueBinding;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.stereotype.Component;

import java.nio.charset.StandardCharsets;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

/**
 * @Description RabbitMQ listener for canal binlog messages
 * @Author EasonShu
 * @Date 2025/1/3 16:37
 */
@Component
public class CanalListener {

    private static final Logger logger = LoggerFactory.getLogger(CanalListener.class);

    /**
     * RabbitMQ listener
     *
     * @param message the raw AMQP message
     */
    @RabbitListener(bindings = @QueueBinding(
            exchange = @Exchange(value = "canal.topic"),
            value = @Queue(value = "test.queue"),
            key = "test.routingKey"
    ))
    public void receiveMessage(@Payload Message message) {
        String content = new String(message.getBody(), StandardCharsets.UTF_8);
        BinLogEntity binLog = JSON.parseObject(content, BinLogEntity.class);
        logger.info("Received binlog message: {}", binLog);
        // Transaction begin/end messages carry no type; skip them
        if (binLog == null || binLog.getType() == null) {
            return;
        }
        // Dispatch by operation type
        switch (binLog.getType()) {
            case "INSERT":
                handleInsert(binLog);
                break;
            case "UPDATE":
                handleUpdate(binLog);
                break;
            case "DELETE":
                handleDelete(binLog);
                break;
            default:
                logger.warn("Unrecognized operation type: {}", binLog.getType());
                break;
        }
    }

    private void handleInsert(BinLogEntity binLog) {
        String tableName = binLog.getTable();
        logger.info("Table [{}] inserted rows: {}", tableName, binLog.getData(Map.class));
    }

    private void handleUpdate(BinLogEntity binLog) {
        String tableName = binLog.getTable();
        if (binLog.getData(Map.class) == null || binLog.getOld(Map.class) == null) {
            return;
        }
        Map<String, Object> newData = binLog.getData(Map.class).get(0);
        Map<String, Object> oldData = binLog.getOld(Map.class).get(0);
        // Look up this table's column-comment mapping (FieldNameMapper is all static)
        Map<String, String> fieldMap = FieldNameMapper.getFieldNameMap().getOrDefault(tableName, new HashMap<>());
        StringBuilder log = new StringBuilder("Table [")
                .append(tableName)
                .append("], changes: ");
        boolean hasChanges = false;
        for (Map.Entry<String, Object> entry : newData.entrySet()) {
            String key = entry.getKey();
            Object newValue = entry.getValue();
            Object oldValue = oldData.get(key);
            String fieldName = fieldMap.getOrDefault(key, key);
            // `old` only contains the columns that actually changed
            if (oldValue != null && !oldValue.equals(newValue)) {
                hasChanges = true;
                log.append(String.format("%s changed from '%s' to '%s'; ", fieldName, oldValue, newValue));
            }
        }
        if (hasChanges) {
            log.append("operation time: ")
               .append(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date(binLog.getTs())));
            logger.info(log.toString());
        } else {
            logger.info("Table [{}]: no column changed", tableName);
        }
    }

    private void handleDelete(BinLogEntity binLog) {
        String tableName = binLog.getTable();
        logger.info("Table [{}] deleted rows: {}", tableName, binLog.getData(Map.class));
    }
}
```
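The field-diff logic inside handleUpdate can be exercised in isolation. The sketch below reproduces it with plain maps; the class name and sample values are hypothetical, not part of the project:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FieldDiff {

    /**
     * Builds a human-readable change summary from the new row, the old values
     * (changed columns only, as canal delivers them) and a column-comment map.
     */
    public static String diff(Map<String, Object> newData,
                              Map<String, Object> oldData,
                              Map<String, String> fieldMap) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Object> e : newData.entrySet()) {
            Object oldValue = oldData.get(e.getKey());
            // Columns absent from `old` did not change, so oldValue is null for them
            if (oldValue != null && !oldValue.equals(e.getValue())) {
                String label = fieldMap.getOrDefault(e.getKey(), e.getKey());
                sb.append(String.format("%s changed from '%s' to '%s'; ",
                        label, oldValue, e.getValue()));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> newRow = new LinkedHashMap<>();
        newRow.put("username", "bob");
        newRow.put("phone", "18145206808");
        Map<String, Object> oldRow = new LinkedHashMap<>();
        oldRow.put("username", "alice"); // only the changed column appears in `old`
        Map<String, String> comments = Map.of("username", "user name");
        System.out.println(diff(newRow, oldRow, comments));
    }
}
```

Note that the null check on `oldValue` is what makes canal's sparse `old` payload safe to diff against the full `data` row.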
- Test: this works, but there is still a gap: how do we also record who performed the operation, and in which module?
2.3 Operation log implementation (2)
- Still working on this.