【MySQL】Building a MySQL Dual-Master High-Availability Cluster with Keepalived

1. Environment Information

| Hostname | IP | OS | MySQL Version | VIP (Virtual IP) |
|----------|----------------|-------------|---------------|----------------|
| hadoop01 | 192.168.10.200 | centos7_x86 | 5.7 | 192.168.10.253 |
| hadoop03 | 192.168.10.202 | centos7_x86 | 5.7 | 192.168.10.253 |

2. MySQL Cluster Setup

If MySQL is not yet deployed on the two nodes, see the deployment guide 【Mysql】mysql三种安装方式(二进制、yum、docker)-CSDN博客 (three MySQL installation methods: binary, yum, docker).
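
Before moving on, it is worth confirming that MySQL is actually up on both nodes. A minimal check, assuming a systemd-managed service named mysqld (adjust the service name to your installation):

# Run on both hadoop01 and hadoop03
mysql -V                             # client version, should report 5.7.x
systemctl status mysqld --no-pager   # service should be active (running)
ss -lntp | grep 3306                 # mysqld should be listening on port 3306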

3. Modify the Configuration Files

my.cnf on the hadoop01 node. By default, MySQL's configuration file is /etc/my.cnf.

[mysqld_safe]
pid-file=/data/mysql5.7/logs/mysqld.pid
[mysqld]
basedir=/usr/local/mysql5.7
datadir=/data/mysql5.7/data
socket=/data/mysql5.7/run/mysql.sock
log_error=/data/mysql5.7/logs/alert.log
pid_file=/data/mysql5.7/logs/mysqld.pid


## Newly added content below
# server-id must be different on the two nodes
server-id=1
# Enable binlog and set its location
log-bin=/data/mysql5.7/logs/mysql-bin
# Relay log
relay-log=/data/mysql5.7/logs/mysql-relay-bin
# Databases excluded from replication
replicate-ignore-db = mysql
replicate-ignore-db = information_schema
replicate-ignore-db = performance_schema
replicate-ignore-db = sys

my.cnf on the hadoop03 node

[mysqld_safe]
pid-file=/data/mysql5.7/logs/mysqld.pid
[mysqld]
basedir=/usr/local/mysql5.7
datadir=/data/mysql5.7/data
socket=/data/mysql5.7/run/mysql.sock
log_error=/data/mysql5.7/logs/alert.log
pid_file=/data/mysql5.7/logs/mysqld.pid

## Newly added content below
# server-id must be different on the two nodes
server-id=2
# Enable binlog
log-bin=/data/mysql5.7/logs/mysql-bin
# Relay log
relay-log=/data/mysql5.7/logs/mysql-relay-bin
# Databases excluded from replication
replicate-ignore-db = mysql
replicate-ignore-db = information_schema
replicate-ignore-db = performance_schema
replicate-ignore-db = sys

4. Synchronize Existing Data

If hadoop01 already holds MySQL data, synchronize that data before enabling master-master replication. First, on hadoop01, lock the tables with the SQL command FLUSH TABLES WITH READ LOCK; and do not exit that session, otherwise the lock is released.

Without closing that session, use mysqldump in another terminal to export the data, then import it into the other node's database.
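
A minimal sketch of that export/import flow, run from a second terminal on hadoop01 while the locking session stays open. The database name testdb, the dump path, and the root credentials are placeholders:

# On hadoop01, second terminal (keep the FLUSH TABLES WITH READ LOCK session open)
mysqldump -uroot -p --databases testdb > /tmp/testdb.sql
# Copy the dump to hadoop03 and import it there
scp /tmp/testdb.sql root@192.168.10.202:/tmp/
mysql -uroot -p < /tmp/testdb.sql        # run this on hadoop03
# Back in the first hadoop01 session, release the lock:
# mysql> UNLOCK TABLES;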

Restart both MySQL instances so that the configuration changes made above take effect.
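
A quick way to restart and confirm the new parameters took effect, again assuming a systemd-managed mysqld service:

# Run on both nodes
systemctl restart mysqld
mysql -uroot -p -e "show variables like 'server_id'; show variables like 'log_bin';"
# server_id should be 1 on hadoop01 and 2 on hadoop03, and log_bin should be ON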

5. Configure Master-Master Replication

1) Configure hadoop01 as the master and hadoop03 as its slave

# Log in to MySQL on hadoop01 and create the replication user
mysql> grant replication slave on *.* to 'cuixiaoqin'@'192.168.10.%' identified by '520_1314';
# Reload the privilege tables
mysql> flush privileges;
# Note hadoop01's master_log_file and master_log_pos
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      454 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)


# Log in to MySQL on hadoop03 and point it at hadoop01; master_log_file and master_log_pos come from hadoop01's SHOW MASTER STATUS output above
mysql> change master to master_host='192.168.10.200',master_user='cuixiaoqin',master_password='520_1314',master_log_file='mysql-bin.000001',master_log_pos=454;
# Start replication
mysql> start slave;
# Check replication status; both Slave_IO_Running and Slave_SQL_Running should be Yes
mysql> show slave status\G
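
To pull out just the fields that matter from that output, a one-liner such as the following can help (credentials are placeholders):

mysql -uroot -p -e "show slave status\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Last_Error"
# Expected:
#   Slave_IO_Running: Yes
#   Slave_SQL_Running: Yes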

2) Configure hadoop03 as the master and hadoop01 as its slave

# Log in to MySQL on hadoop03 and create the replication user
mysql> grant replication slave on *.* to 'cuixiaoqin'@'192.168.10.%' identified by '520_1314';
# Reload the privilege tables
mysql> flush privileges;
# Note hadoop03's master_log_file and master_log_pos
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      606 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)


# Log in to MySQL on hadoop01 and point it at hadoop03; master_log_file and master_log_pos come from hadoop03's SHOW MASTER STATUS output above
mysql> change master to master_host='192.168.10.202',master_user='cuixiaoqin',master_password='520_1314',master_log_file='mysql-bin.000001',master_log_pos=606;
# Start replication
mysql> start slave;
# Check replication status; both Slave_IO_Running and Slave_SQL_Running should be Yes
mysql> show slave status\G
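
With both directions configured, a simple round-trip test confirms that replication works both ways. The database names below are only examples and are dropped again at the end:

# On hadoop01
mysql -uroot -p -e "create database repl_test_01;"
# On hadoop03: repl_test_01 should appear, then create a database in the other direction
mysql -uroot -p -e "show databases like 'repl_test_01'; create database repl_test_03;"
# Back on hadoop01: repl_test_03 should now be visible; clean up afterwards
mysql -uroot -p -e "show databases like 'repl_test_03'; drop database repl_test_01; drop database repl_test_03;"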

6. Configure Keepalived for MySQL Dual-Master High Availability

Perform the same steps on both hadoop01 and hadoop03. Keepalived official site: Keepalived for Linux (https://www.keepalived.org/).

# Install build dependencies for keepalived
[root@hadoop01 ~]# yum install gcc gcc-c++ openssl-devel -y
# Source (offline) install; keepalived can also be installed with yum install
[root@hadoop01 ~]# wget -c https://keepalived.org/software/keepalived-2.1.5.tar.gz
[root@hadoop01 ~]# tar -xvf keepalived-2.1.5.tar.gz
[root@hadoop01 ~]# mv keepalived-2.1.5 keepalived
[root@hadoop01 ~]# mv keepalived /usr/local/
[root@hadoop01 ~]# cd /usr/local/keepalived
[root@hadoop01 keepalived]# ./configure --prefix=/usr/local/keepalived
[root@hadoop01 keepalived]# make && make install
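
Note that with --prefix=/usr/local/keepalived, the build may look for its configuration under the prefix (e.g. /usr/local/keepalived/etc/keepalived/keepalived.conf) rather than /etc/keepalived/. If keepalived starts but ignores the configuration written below, a hedged workaround, assuming the prefix layout above, is to link the two locations or pass the config path explicitly:

# Make the prefix config directory point at /etc/keepalived
mkdir -p /etc/keepalived
mv /usr/local/keepalived/etc/keepalived /usr/local/keepalived/etc/keepalived.bak
ln -s /etc/keepalived /usr/local/keepalived/etc/keepalived
# Or start the daemon with an explicit config file:
# keepalived -f /etc/keepalived/keepalived.conf

On CentOS 7 the make install step normally also installs a keepalived.service unit, so the systemctl commands used later should work; if the unit is missing, copy the service file generated in the source tree into /usr/lib/systemd/system/ by hand.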

# Write the MySQL health-check script used by keepalived
[root@hadoop01 ~]# mkdir -p /etc/keepalived
[root@hadoop01 ~]# vi /etc/keepalived/check_mysql.sh
#!/bin/bash
# Check whether anything is listening on port 3306
ss -lntup | grep 3306 > /dev/null 2>&1
if [[ $? -eq 0 ]]; then
    exit 0
else
    # MySQL is down: stop keepalived so the VIP fails over to the other node
    systemctl stop keepalived
fi
[root@hadoop01 ~]# chmod +x /etc/keepalived/check_mysql.sh
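
The script can be exercised by hand before wiring it into keepalived; with MySQL running it should simply exit 0 (note that if MySQL is down it will stop keepalived):

[root@hadoop01 ~]# bash /etc/keepalived/check_mysql.sh; echo $?
0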

keepalived.conf on hadoop01

[root@hadoop01 keepalived]# mkdir -p /etc/keepalived
[root@hadoop01 keepalived]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    # Addresses that receive notification mail when keepalived changes state
    notification_email {
        csdn@126.com
    }
    # Sender address for notification mail
    notification_email_from ops@wangshibo.cn
    # SMTP server used to send notification mail
    smtp_server 127.0.0.1
    # Timeout for connecting to the SMTP server
    smtp_connect_timeout 30
    # Unique identifier for this keepalived node
    router_id MYSQL-01
    # Enable script security checks
    enable_script_security
}

# Define a VRRP script named chk_mysql that keepalived runs to monitor MySQL
vrrp_script chk_mysql {
    # Path of the script to execute
    script "/etc/keepalived/check_mysql.sh"
    # Interval between script runs, in seconds
    interval 2
    # User the script runs as
    user root
}

# Define a VRRP instance; VI_1 is the instance name
vrrp_instance VI_1 {
    # Set the role to BACKUP; with both nodes set to BACKUP, priority decides which one holds the VIP
    state BACKUP
    # Network interface used for VRRP
    interface ens33
    # Virtual router ID; both nodes of this instance must use the same value, which must not clash with other VRRP instances on the segment
    virtual_router_id 51
    # Priority of this node
    priority 101
    # Interval, in seconds, at which the master advertises its state to the backups
    advert_int 1
    # Authentication settings: auth_type PASS means simple password authentication, auth_pass 1111 is the password
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP address
    virtual_ipaddress {
        192.168.10.253
    }
    # Track the script defined above; keepalived adjusts its state based on the script's exit code
    track_script {
        chk_mysql
    }
}

keepalived.conf on hadoop03

[root@hadoop03 ~]# mkdir /etc/keepalived
[root@hadoop03 ~]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        csdn@126.com
    }
    notification_email_from ops@wangshibo.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    # Unique identifier for this keepalived node
    router_id MYSQL-02
    enable_script_security
}

vrrp_script chk_mysql {
    script "/etc/keepalived/check_mysql.sh"
    interval 2
    user root
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    # Priority 100, lower than hadoop01's 101
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.253
    }
    track_script {
        chk_mysql
    }
}

Start (or restart) the keepalived service on both hadoop01 and hadoop03

systemctl start keepalived
systemctl status keepalived
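
After starting the service on both nodes, the VIP should be bound on exactly one of them, normally hadoop01 since it has the higher priority (101). A quick check:

# Run on both nodes; only the current MASTER should show the VIP
ip addr show ens33 | grep 192.168.10.253
# Inspect keepalived's state transitions if anything looks off
journalctl -u keepalived --no-pager | tail -20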

7. Verify that the Keepalived Virtual IP Fails Over Correctly

The VIP is currently bound on the hadoop01 node.

Stop MySQL on hadoop01 and check whether the virtual IP moves to hadoop03.
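
A concrete failover drill, assuming the MySQL service is managed by systemd as mysqld and the client sits in the 192.168.10.0/24 subnet (adjust names and credentials as needed):

# On hadoop01: confirm the VIP is local, then stop MySQL
ip addr show ens33 | grep 192.168.10.253
systemctl stop mysqld
# check_mysql.sh now stops keepalived on hadoop01, so the VIP should move

# On hadoop03: the VIP should appear within a few seconds
ip addr show ens33 | grep 192.168.10.253
# Connections through the VIP now land on hadoop03
mysql -ucuixiaoqin -p -h 192.168.10.253 -e "select @@hostname;"

# Restore hadoop01 afterwards
systemctl start mysqld && systemctl start keepalived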
