Linux Cloud Computing | [Stage 4] PROJECT2-DAY2

Overall project scope:

Upgrade the website runtime platform, deploy a Redis in-memory storage cluster, migrate data, deploy PXC MySQL for synchronous replication, and deploy LB and HA clusters.

1. Project Topology

PROJECT2-DAY1 review:

Weaknesses of the current service architecture:

① The data storage layer is a single point of failure (a dispatcher needs to be added)

② The web service pulls data from the origin on every request

③ The Tomcat web service should be upgraded to the LNMP runtime platform

Project requirements:

① Upgrade the website runtime platform (LNMP)

② Deploy in-memory storage servers (Redis cluster)

③ Deploy a PXC cluster

④ Solve the database load problem

⑤ Eliminate the dispatcher single point of failure

PROJECT2-DAY2:

This lab continues from PROJECT2-DAY1: start the servers, confirm the existing services are healthy, and make sure the firewall and SELinux are disabled on every host.

Case 1: Upgrade the website runtime platform

Configuration steps:

① Remove the current configuration (Tomcat and its shared storage)

② Deploy LNMP (compile Nginx from source; install php, php-fpm, and php-mysql)

③ Test the configuration (write a PHP script)

Note: the data is not stored on the Nginx server itself, so installing mariadb-server, mariadb, and mariadb-devel locally is optional.


Step 1: Remove the current web server configuration

On web33

① Stop the Tomcat web service

bash 复制代码
[root@web33 ~]# /usr/local/tomcat/bin/shutdown.sh   // stop the Tomcat service
[root@web33 ~]# vim /etc/rc.local   // comment out the autostart entry
# /usr/local/tomcat/bin/startup.sh

② Unmount the NFS shared storage

bash 复制代码
[root@web33 ~]# df -h | grep /sitedir   // check the mount point
192.168.4.30:/sitedir    3.0G   32M  3.0G    2% /usr/local/tomcat/webapps/ROOT
[root@web33 ~]# umount /usr/local/tomcat/webapps/ROOT/   // unmount the Tomcat site directory
[root@web33 ~]# vim /etc/fstab    // comment out the NFS mount entry
# 192.168.4.30:/sitedir /usr/local/tomcat/webapps/ROOT nfs defaults 0 0

Step 2: Deploy the LNMP platform

On web33

① Install the build environment and dependencies for Nginx

bash 复制代码
[root@web33 ~]# yum -y install gcc pcre-devel zlib-devel

Compile and install Nginx from source (package: lnmp_soft/nginx-1.12.2.tar.gz)

bash 复制代码
[root@web33 lnmp_soft]# tar -xf nginx-1.12.2.tar.gz
[root@web33 lnmp_soft]# cd nginx-1.12.2/
[root@web33 nginx-1.12.2]# ./configure    // generate the Makefile
[root@web33 nginx-1.12.2]# make && make install   // compile and install
[root@web33 nginx-1.12.2]# ls /usr/local/nginx/
conf  html  logs  sbin
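As an optional sanity check (assuming the default installation prefix /usr/local/nginx), the new binary can report its version and compile options and validate the default configuration file:

bash
[root@web33 nginx-1.12.2]# /usr/local/nginx/sbin/nginx -V    # print version and configure arguments
[root@web33 nginx-1.12.2]# /usr/local/nginx/sbin/nginx -t    # parse and test /usr/local/nginx/conf/nginx.conf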

Install PHP and the MySQL connector packages

bash 复制代码
[root@web33 nginx-1.12.2]# yum -y install php php-mysql php-fpm

② Mount the NFS shared storage

bash 复制代码
[root@web33 ~]# showmount -e 192.168.4.30      // list the NFS server's exports
Export list for 192.168.4.30:
/sitedir *
[root@web33 ~]# vim /etc/fstab     // mount the NFS share on the Nginx document root
192.168.4.30:/sitedir /usr/local/nginx/html nfs defaults 0 0
[root@web33 ~]# mount -a
[root@web33 ~]# mount | grep "/usr/local/nginx/html"
192.168.4.30:/sitedir on /usr/local/nginx/html type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.4.33,local_lock=none,addr=192.168.4.30)

③ Start the Nginx service

bash 复制代码
[root@web33 ~]# vim +65 /usr/local/nginx/conf/nginx.conf     // edit the config file and enable PHP (FastCGI) parsing
...
        location ~ \.php$ {
            root           html;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
        #   fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
            include        fastcgi.conf;
        }
...
[root@web33 ~]# /usr/local/nginx/sbin/nginx    // start the service
[root@web33 ~]# echo "/usr/local/nginx/sbin/nginx" >> /etc/rc.local   // start on boot
[root@web33 ~]# ss -nlptu | grep :80
tcp    LISTEN     0      128       *:80                    *:*                   users:(("nginx",pid=14279,fd=6),("nginx",pid=14278,fd=6))

④ Start the php-fpm service

bash 复制代码
[root@web33 ~]# systemctl start php-fpm.service
[root@web33 ~]# systemctl enable php-fpm.service
[root@web33 ~]# ss -nlptu | grep :9000
tcp    LISTEN     0      128    127.0.0.1:9000                  *:*                   users:(("php-fpm",pid=14312,fd=0),("php-fpm",pid=14311,fd=0),("php-fpm",pid=14310,fd=0),("php-fpm",pid=14309,fd=0),("php-fpm",pid=14308,fd=0),("php-fpm",pid=14306,fd=6))
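Before moving on to the database test in Step 3, it is worth confirming that Nginx actually hands .php requests to php-fpm. A quick, disposable check (hypothetical filename test.php, written into the NFS share exported by nfs30 and mounted at /usr/local/nginx/html):

bash
[root@nfs30 ~]# echo '<?php phpinfo(); ?>' > /sitedir/test.php                  # temporary test page on the share
[root@web33 ~]# curl -s http://192.168.4.33/test.php | grep -i 'PHP Version'    # output should include the PHP version
[root@nfs30 ~]# rm -f /sitedir/test.php                                         # clean up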

Step 3: Test the configuration

On nfs30

① Write a PHP script that connects to the mysql11 database server

bash 复制代码
[root@nfs30 ~]# vim /sitedir/linkdb2.php
<?php
$conn=mysql_connect("192.168.4.11","army","123qqq...A");
mysql_select_db("gamedb");
$sql = 'insert into user (name) values ("Jackie")';   // insert the value "Jackie"
mysql_query($sql);
mysql_close();
echo "save data ok";
?>
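One caveat: the mysql_* functions above exist only in the PHP 5 php-mysql package installed earlier. If the web server were ever moved to PHP 7 or later (where that legacy API no longer exists), a mysqli-based version would be needed instead; a minimal sketch with a hypothetical filename linkdb3.php and the same credentials:

bash
[root@nfs30 ~]# cat > /sitedir/linkdb3.php <<'EOF'
<?php
// mysqli variant of linkdb2.php, only needed on PHP 7+ where the mysql_* API is gone
$conn = mysqli_connect("192.168.4.11", "army", "123qqq...A", "gamedb");
mysqli_query($conn, 'insert into user (name) values ("Jackie")');   // store "Jackie"
mysqli_close($conn);
echo "save data ok";
?>
EOF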

On the web server, check that the PHP file shows up in the mounted web root

bash 复制代码
[root@web33 ~]# ls /usr/local/nginx/html/
linkdb2.php  linkdb.jsp  test.html

② Access the script with a browser

③ Check the data on the mysql11 master database server

bash 复制代码
[root@mysql11 ~]# mysql -uarmy -p123qqq...A
mysql> select * from gamedb.user;
+--------+
| name   |
+--------+
| jack   |
| AnJ    |
| lucy   |
| Jackie |
+--------+
5 rows in set (0.00 sec)

Case 2: Deploy the Redis in-memory storage service

Steps:

① Deploy 6 Redis servers

② Create the Redis cluster (1 management host --> 3 Redis master/slave pairs)

③ Configure the Nginx web server to support PHP + Redis

④ Test the configuration

PROJECT2-DAY2 topology:

Server IP and role plan:

|--------|-------------------|----------|
| Hostname | IP address:port | Role |
| redisA | 192.168.4.51:6379 | Redis server |
| redisB | 192.168.4.52:6379 | Redis server |
| redisC | 192.168.4.53:6379 | Redis server |
| redisD | 192.168.4.54:6379 | Redis server |
| redisE | 192.168.4.55:6379 | Redis server |
| redisF | 192.168.4.56:6379 | Redis server |
| mgm | 192.168.4.57 | Cluster management host |


Step 1: Deploy the Redis servers

On redisA, redisB, redisC, redisD, redisE, and redisF

① Build a Redis server (manual, command-line method, using redisA as the example)

bash 复制代码
[root@redisA ~]# rpm -q gcc || yum -y install gcc     // make sure the build environment is present
[root@redisA ~]# tar -zxf redis-4.0.8.tar.gz    // unpack the source
[root@redisA ~]# cd redis-4.0.8/
[root@redisA redis-4.0.8]# make install    // install (the tarball ships a ready-made Makefile)
[root@redisA redis-4.0.8]# ./utils/install_server.sh    // initialize the instance
bash 复制代码
[root@redisA redis-4.0.8]# /etc/init.d/redis_6379 stop    // stop the service
Stopping ...
Redis stopped

[root@redisA redis-4.0.8]# vim /etc/redis/6379.conf     // edit the config file and enable cluster mode
70 # bind 127.0.0.1
89 protected-mode no    // disable protected mode
815 cluster-enabled yes
823 cluster-config-file nodes-6379.conf
829 cluster-node-timeout 5000

[root@redisA redis-4.0.8]# /etc/init.d/redis_6379 start    // start the service
Starting Redis server...
[root@redisA redis-4.0.8]# ss -nlptu | grep redis-server    // check the listening ports
tcp    LISTEN     0      128       *:6379                  *:*                   users:(("redis-server",pid=14455,fd=7))
tcp    LISTEN     0      128       *:16379                 *:*                   users:(("redis-server",pid=14455,fd=10))
tcp    LISTEN     0      128      :::6379                 :::*                   users:(("redis-server",pid=14455,fd=6))
tcp    LISTEN     0      128      :::16379                :::*                   users:(("redis-server",pid=14455,fd=9))

Extension: a custom script that deploys a Redis server and enables cluster mode (note: the redis source tarball must be staged in advance)

bash 复制代码
[root@redisA ~]# cat setup_redis.sh
#!/bin/bash
# Purpose: deploy a Redis server and enable cluster mode

# check the build environment
rpm -q gcc || yum -y install gcc

# unpack the source (the tarball must be staged beforehand)
[ -e "redis-4.0.8" ] && echo "directory already exists" || tar -zxvf redis-4.0.8.tar.gz

# compile and install
cd redis-4.0.8/
make install

# initialize the instance (feed an empty line so the prompts take their defaults; the service needs a moment to start)
cd utils/
echo | source install_server.sh
sleep 5     # wait for the service to come up

# stop the redis service
/etc/init.d/redis_6379 stop

# enable cluster mode
sed -i '89s/yes/no/' /etc/redis/6379.conf   # protected-mode no
sed -i '70s/^/# /' /etc/redis/6379.conf   # comment out: bind 127.0.0.1
sed -i '815s/# //' /etc/redis/6379.conf   # cluster-enabled yes
sed -i '823s/# //' /etc/redis/6379.conf   # cluster-config-file nodes-6379.conf
sed -i '829s/# //' /etc/redis/6379.conf   # cluster-node-timeout 15000
sed -i '829s/15000/5000/' /etc/redis/6379.conf  # cluster-node-timeout 5000

# wipe the data directory
rm -rf /var/lib/redis/6379/*

# start the redis service
service redis_6379 start

② Use the script to deploy Redis and enable cluster mode on the remaining nodes (script method, using redisB as the example)

bash 复制代码
[root@redisB ~]# chmod +x setup_redis.sh
[root@redisB ~]# ./setup_redis.sh
[root@redisB ~]# ss -nlptu | grep redis-server
tcp    LISTEN     0      128       *:6379                  *:*                   users:(("redis-server",pid=4841,fd=7))
tcp    LISTEN     0      128       *:16379                 *:*                   users:(("redis-server",pid=4841,fd=10))
tcp    LISTEN     0      128      :::6379                 :::*                   users:(("redis-server",pid=4841,fd=6))
tcp    LISTEN     0      128      :::16379                :::*                   users:(("redis-server",pid=4841,fd=9))

Tip: run the script on every remaining Redis server; when it finishes, confirm that the redis-server ports are listening on each node. A small loop for pushing out and running the script is sketched below.
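A hedged helper, assuming root SSH access to the nodes and that setup_redis.sh and redis-4.0.8.tar.gz sit in the current directory:

bash
# push the script and tarball to the remaining nodes, run the deployment, then verify the ports
for ip in 192.168.4.52 192.168.4.53 192.168.4.54 192.168.4.55 192.168.4.56; do
    scp setup_redis.sh redis-4.0.8.tar.gz root@$ip:/root/
    ssh root@$ip "cd /root && chmod +x setup_redis.sh && ./setup_redis.sh"
    ssh root@$ip "ss -nlptu | grep redis-server"
done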

Step 2: Create the Redis cluster

On mgm

① Configure the management host and install the packages

bash 复制代码
[root@mgm ~]# yum -y install ruby rubygems   // install ruby and rubygems (provides the gem tool)
[root@mgm ~]# gem install redis-3.2.1.gem  // install the Ruby redis client library
Successfully installed redis-3.2.1
Parsing documentation for redis-3.2.1
Installing ri documentation for redis-3.2.1
1 gem installed
  • Note: RubyGems is Ruby's package manager; it defines a standard format for distributing Ruby programs and libraries and provides the gem tool for installing packages.
  • Note: redis-3.2.1.gem is the Ruby redis client library (the Ruby-to-Redis interface) that the cluster tooling depends on.

② Copy the redis-trib.rb cluster management script out of the source package

bash 复制代码
[root@mgm ~]# mkdir /root/bin
[root@mgm ~]# tar -zxf redis-4.0.8.tar.gz
[root@mgm ~]# cp redis-4.0.8/src/redis-trib.rb /root/bin/   // copy the cluster management script
[root@mgm ~]# chmod +x /root/bin/redis-trib.rb       // make sure the script is executable
[root@mgm ~]# redis-trib.rb help    // show the help

③ Create the cluster

bash 复制代码
[root@mgm ~]# redis-trib.rb create --replicas 1 \
> 192.168.4.51:6379 192.168.4.52:6379 192.168.4.53:6379 \
> 192.168.4.54:6379 192.168.4.55:6379 192.168.4.56:6379
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.4.51:6379
192.168.4.52:6379
192.168.4.53:6379
Adding replica 192.168.4.55:6379 to 192.168.4.51:6379
Adding replica 192.168.4.56:6379 to 192.168.4.52:6379
Adding replica 192.168.4.54:6379 to 192.168.4.53:6379
M: e8e65683bf1e972091fce5f14ec1553e8b0cb56d 192.168.4.51:6379
   slots:0-5460 (5461 slots) master
M: d86b0c0a530a2532b3e170972d34c5f42ad328fa 192.168.4.52:6379
   slots:5461-10922 (5462 slots) master
M: 333a311d39cbee56e46a3cf2b7bfb5b43f53e0e2 192.168.4.53:6379
   slots:10923-16383 (5461 slots) master
S: c0ef41c30a8303150240750a227449d97be7415e 192.168.4.54:6379
   replicates 333a311d39cbee56e46a3cf2b7bfb5b43f53e0e2
S: 4b44c1dfdaead8d54f4fd8e914accc059dd6bfc0 192.168.4.55:6379
   replicates e8e65683bf1e972091fce5f14ec1553e8b0cb56d
S: 8f506e5cf8316226d981f203cc47b3eef0fca23d 192.168.4.56:6379
   replicates d86b0c0a530a2532b3e170972d34c5f42ad328fa
Can I set the above configuration? (type 'yes' to accept): yes   // accept the proposed layout
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 192.168.4.51:6379)
M: e8e65683bf1e972091fce5f14ec1553e8b0cb56d 192.168.4.51:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 333a311d39cbee56e46a3cf2b7bfb5b43f53e0e2 192.168.4.53:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 4b44c1dfdaead8d54f4fd8e914accc059dd6bfc0 192.168.4.55:6379
   slots: (0 slots) slave
   replicates e8e65683bf1e972091fce5f14ec1553e8b0cb56d
S: c0ef41c30a8303150240750a227449d97be7415e 192.168.4.54:6379
   slots: (0 slots) slave
   replicates 333a311d39cbee56e46a3cf2b7bfb5b43f53e0e2
M: d86b0c0a530a2532b3e170972d34c5f42ad328fa 192.168.4.52:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 8f506e5cf8316226d981f203cc47b3eef0fca23d 192.168.4.56:6379
   slots: (0 slots) slave
   replicates d86b0c0a530a2532b3e170972d34c5f42ad328fa
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

④ View the cluster information

bash 复制代码
[root@mgm ~]# redis-trib.rb info 192.168.4.51:6379
192.168.4.51:6379 (e8e65683...) -> 0 keys | 5461 slots | 1 slaves.   // 5461 hash slots assigned
192.168.4.53:6379 (333a311d...) -> 0 keys | 5461 slots | 1 slaves.   // 5461 hash slots assigned
192.168.4.52:6379 (d86b0c0a...) -> 0 keys | 5462 slots | 1 slaves.   // 5462 hash slots assigned
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.

[root@mgm ~]# redis-trib.rb check 192.168.4.51:6379
>>> Performing Cluster Check (using node 192.168.4.51:6379)
M: e8e65683bf1e972091fce5f14ec1553e8b0cb56d 192.168.4.51:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 333a311d39cbee56e46a3cf2b7bfb5b43f53e0e2 192.168.4.53:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 4b44c1dfdaead8d54f4fd8e914accc059dd6bfc0 192.168.4.55:6379
   slots: (0 slots) slave
   replicates e8e65683bf1e972091fce5f14ec1553e8b0cb56d
S: c0ef41c30a8303150240750a227449d97be7415e 192.168.4.54:6379
   slots: (0 slots) slave
   replicates 333a311d39cbee56e46a3cf2b7bfb5b43f53e0e2
M: d86b0c0a530a2532b3e170972d34c5f42ad328fa 192.168.4.52:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 8f506e5cf8316226d981f203cc47b3eef0fca23d 192.168.4.56:6379
   slots: (0 slots) slave
   replicates d86b0c0a530a2532b3e170972d34c5f42ad328fa
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
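The same health information is also available from plain redis-cli on any of the nodes, without the management host (CLUSTER INFO and CLUSTER NODES are built-in commands):

bash
[root@redisA ~]# redis-cli -h 192.168.4.51 -p 6379 cluster info | head -2    # expect cluster_state:ok and 16384 slots assigned
[root@redisA ~]# redis-cli -h 192.168.4.51 -p 6379 cluster nodes             # each node's role and slot range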

⑤ Test the configuration (from a client, connect to any server in the cluster and read/write data)

bash 复制代码
[root@redisA ~]# scp /usr/local/bin/redis-cli 192.168.4.5:/usr/local/bin/
[root@localhost ~]# redis-cli -c -h 192.168.4.51 -p 6379
192.168.4.51:6379> ping
PONG

192.168.4.51:6379> SET x 100
-> Redirected to slot [16287] located at 192.168.4.53:6379  // stored on node .53
OK
192.168.4.53:6379> KEYS *
1) "x"

192.168.4.53:6379> SET y 200
OK
192.168.4.53:6379> KEYS *
1) "x"
2) "y"

192.168.4.53:6379> SET z 300
-> Redirected to slot [8157] located at 192.168.4.52:6379   // stored on node .52
OK
192.168.4.52:6379> KEYS *     // on node .52 only key z is visible
1) "z"

192.168.4.52:6379> GET x
-> Redirected to slot [16287] located at 192.168.4.53:6379  // redirected to node .53 to fetch the value
"100"

192.168.4.53:6379> GET y
"200"

192.168.4.53:6379> GET z
-> Redirected to slot [8157] located at 192.168.4.52:6379   // stored on node .52
"300"

192.168.4.52:6379> SET i 400
-> Redirected to slot [15759] located at 192.168.4.53:6379  // stored on node .53
OK

192.168.4.53:6379> SET j 500
-> Redirected to slot [3564] located at 192.168.4.51:6379   // stored on node .51
OK
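The redirects above follow from how the cluster maps keys to slots: slot = CRC16(key) mod 16384, and the master that owns the slot serves the key. The slot for any key can be checked with the built-in CLUSTER KEYSLOT command (the values match the redirects shown above):

bash
192.168.4.51:6379> CLUSTER KEYSLOT x      // 16287 falls in 10923-16383, owned by 192.168.4.53
(integer) 16287
192.168.4.51:6379> CLUSTER KEYSLOT j      // 3564 falls in 0-5460, owned by 192.168.4.51
(integer) 3564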

Step 3: Configure the Nginx web server to support PHP + Redis

On web33

① Install the PHP extension build dependencies

bash 复制代码
[root@web33 ~]# yum -y install php-devel

② Install php-redis (package: /linux-soft/4/redis-cluster-4.3.0.tgz; mind the package version)

bash 复制代码
[root@web33 ~]# tar -zxf redis-cluster-4.3.0.tgz
[root@web33 ~]# cd redis-4.3.0/      // enter the unpacked source directory
[root@web33 redis-4.3.0]# ls
bash 复制代码
[root@web33 redis-4.3.0]# phpize   // generate the configure script for the extension (uses /usr/bin/php-config from php-devel)
Configuring for:
PHP Api Version:         20100412
Zend Module Api No:      20100525
Zend Extension Api No:   220100525
[root@web33 redis-4.3.0]# ls
bash 复制代码
[root@web33 redis-4.3.0]# ./configure --with-php-config=/usr/bin/php-config
[root@web33 redis-4.3.0]# make && make install   // compile and install
...
Installing shared extensions:     /usr/lib64/php/modules/   // module installation directory
[root@web33 redis-4.3.0]# ls /usr/lib64/php/modules/    // list the installed modules

③ Edit the PHP configuration file and restart the service

bash 复制代码
[root@web33 redis-4.3.0]# vim /etc/php.ini
728 extension_dir = "/usr/lib64/php/modules/"   // module directory
730 extension = "redis.so"   // module name
...
[root@web33 redis-4.3.0]# systemctl restart php-fpm     // restart the php-fpm service
[root@web33 redis-4.3.0]# php -m | grep -i redis     // confirm the module is loaded
redis
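php -m only proves the extension loaded; the class that the test scripts below rely on can be checked directly from the CLI (class_exists is a standard PHP function, and RedisCluster is the class provided by the phpredis extension):

bash
[root@web33 ~]# php -r 'var_dump(class_exists("RedisCluster"));'    # should print bool(true)
bool(true)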

Test the configuration:

① In the shared directory of the NFS storage server, create PHP scripts that connect to the cluster

bash 复制代码
[root@nfs30 ~]# vim /sitedir/set_data.php        // script that stores data
<?php
$redis_list = ['192.168.4.51:6379','192.168.4.52:6379','192.168.4.53:6379','192.168.4.54:6379','192.168.4.55:6379','192.168.4.56:6379'];  // list of redis servers
$client = new RedisCluster(NULL,$redis_list);  // connect to the cluster
$client->set("i","tarenaA "); // store key i
$client->set("j","tarenaB ");  // store key j
$client->set("k","tarenaC ");  // store key k
?>
bash 复制代码
[root@nfs30 ~]# vim /sitedir/get_data.php    // script that fetches data
<?php
$redis_list = ['192.168.4.51:6379','192.168.4.52:6379','192.168.4.53:6379','192.168.4.54:6379','192.168.4.55:6379','192.168.4.56:6379']; // list of redis servers
$client = new RedisCluster(NULL,$redis_list);  // connect to the cluster
echo $client->get("i");  // fetch key i
echo $client->get("j");  // fetch key j
echo $client->get("k");  // fetch key k
?>
bash 复制代码
[root@nfs30 ~]# vim /sitedir/test3.php    // script that stores and then fetches data
<?php
$redis_list = ['192.168.4.51:6379','192.168.4.52:6379','192.168.4.53:6379','192.168.4.54:6379','192.168.4.55:6379','192.168.4.56:6379'];
$client = new RedisCluster(NULL,$redis_list);
$client->set("name","AnJern");  // store
echo $client->get("name");  // fetch
?>

② Run the scripts by requesting them through the website (from any host that can reach the web server)

bash 复制代码
[root@localhost ~]# curl http://192.168.4.33/set_data.php
[root@localhost ~]# curl http://192.168.4.33/get_data.php
tarenaA tarenaB tarenaC
[root@localhost ~]# curl http://192.168.4.33/test3.php

③ From the command line, connect to any Redis server and inspect the data (any host that can reach the Redis servers will do)

bash 复制代码
[root@localhost ~]# redis-cli -c -h 192.168.4.51 -p 6379
192.168.4.51:6379> KEYS *
1) "\xe2\x80\x9cname\xe2\x80\x9c"
2) "j"
192.168.4.51:6379> exit

[root@localhost ~]# redis-cli -c -h 192.168.4.52 -p 6379
192.168.4.52:6379> KEYS *
1) "z"
2) "k"
192.168.4.52:6379> exit

[root@localhost ~]# redis-cli -c -h 192.168.4.53 -p 6379
192.168.4.53:6379> KEYS *
1) "x"
2) "y"
3) "i"
192.168.4.53:6379> exit

Case 3: Data migration

Requirements:

① Configure one host as a slave of the mysql11 master database server (replicating mysql11's data)

② Configure the 1st PXC server

③ Configure the 2nd PXC server

④ Configure the 3rd PXC server

⑤ Test the configuration


PROJECT2-DAY2 topology:

Server IP and role plan:

|-----------|--------------|-----------|
| Hostname | IP address | Role |
| pxcnode66 | 192.168.4.66 | 1st database server |
| pxcnode77 | 192.168.4.77 | 2nd database server |
| pxcnode88 | 192.168.4.88 | 3rd database server |


Step 1: Configure the slave server

On pxcnode66

  • To migrate the data, pxcnode66 is configured as a slave of mysql11, which also keeps the two data sets consistent.

① Install the database server software and start the mysqld service

bash 复制代码
[root@pxcnode66 ~]# tar -xf mysql-5.7.17.tar
[root@pxcnode66 ~]# ls *.rpm     // list the rpm packages
bash 复制代码
[root@pxcnode66 ~]# yum -y install mysql-community-*
[root@pxcnode66 ~]# systemctl start mysqld
[root@pxcnode66 ~]# systemctl enable mysqld
[root@pxcnode66 ~]# ls /var/lib/mysql
bash 复制代码
[root@pxcnode66 ~]# ss -nlptu | grep :3306
tcp    LISTEN     0      80       :::3306                 :::*                   users:(("mysqld",pid=1683,fd=22))

Initialize the database password

bash 复制代码
[root@pxcnode66 ~]# grep password /var/log/mysqld.log
2021-06-23T14:52:15.889904Z 1 [Note] A temporary password is generated for root@localhost: KmtikV!jX7JL
[root@pxcnode66 ~]# mysqladmin -uroot -p'KmtikV!jX7JL' password '123qqq...A'
[root@pxcnode66 ~]# mysql -uroot -p123qqq...A    // log in with the new password
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

② Edit the configuration file (set server_id)

bash 复制代码
[root@pxcnode66 ~]# vim /etc/my.cnf
[mysqld]
server_id=66
[root@pxcnode66 ~]# systemctl restart mysqld   // restart the service

Note: before pointing the slave at the master, the data on the two hosts must be identical.

③ Make the data consistent (restore a full backup taken on mysql11)

bash 复制代码
[root@mysql11 ~]# rpm -ivh libev-4.15-1.el6.rf.x86_64.rpm   // install the dependency
[root@mysql11 ~]# yum -y install percona-xtrabackup-24-2.4.7-1.el7.x86_64.rpm  // install the online hot-backup tool
[root@mysql11 ~]# innobackupex --user root --password 123qqq...A --slave-info /allbak --no-timestamp    // back up all data and record the matching binlog name and position
[root@mysql11 ~]# ls /allbak/

Note: --slave-info records the binlog file and position that the backup corresponds to

Send the backup files to pxcnode66

bash 复制代码
[root@mysql11 ~]# scp -r /allbak/ root@192.168.4.66:/root/

Restore mysql11's database data on pxcnode66

bash 复制代码
[root@pxcnode66 ~]# rpm -ivh libev-4.15-1.el6.rf.x86_64.rpm
[root@pxcnode66 ~]# yum -y install percona-xtrabackup-24-2.4.7-1.el7.x86_64.rpm
[root@pxcnode66 ~]# systemctl stop mysqld
[root@pxcnode66 ~]# rm -rf /var/lib/mysql/*
[root@pxcnode66 ~]# innobackupex --apply-log /root/allbak/  // prepare the backup for restore
[root@pxcnode66 ~]# innobackupex --copy-back /root/allbak/  // restore the data
[root@pxcnode66 ~]# chown -R mysql:mysql /var/lib/mysql   // the copied files are owned by root, so change the owner back to mysql
[root@pxcnode66 ~]# systemctl start mysqld  // start the service

④ Point the slave at the master

bash 复制代码
[root@pxcnode66 ~]# cat /root/allbak/xtrabackup_info | grep master   // read the binlog name and position from the backup metadata
binlog_pos = filename 'master11.000006', position '154'

[root@pxcnode66 ~]# mysql -uroot -p123qqq...A     // configure replication from the master
mysql> change master to
    -> master_host='192.168.4.11',
    -> master_user='repluser',
    -> master_password='123qqq...A',
    -> master_log_file='master11.000006',
    -> master_log_pos=154;
Query OK, 0 rows affected, 2 warnings (0.01 sec)
mysql> start slave;      // start the slave threads
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G     // check the slave status
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.4.11    // the master's IP address
                  Master_User: repluser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master11.000006
          Read_Master_Log_Pos: 154
               Relay_Log_File: pxcnode66-relay-bin.000002
                Relay_Log_Pos: 319
        Relay_Master_Log_File: master11.000006
             Slave_IO_Running: Yes    // IO thread OK
            Slave_SQL_Running: Yes    // SQL thread OK
...

Check that the data has replicated

bash 复制代码
mysql> select * from gamedb.user;
+--------+
| name   |
+--------+
| jack   |
| AnJ    |
| lucy   |
| Jackie |
+--------+
5 rows in set (0.01 sec)
[root@pxcnode66 ~]# ls /var/lib/mysql/

Step 2: Configure the 1st PXC server

On pxcnode66

① Stop the mysqld service and uninstall the MySQL server packages

bash 复制代码
[root@pxcnode66 ~]# systemctl stop mysqld     // stop the service
[root@pxcnode66 ~]# rpm -qa | grep -i mysql   // list the installed MySQL packages
mysql-community-libs-5.7.17-1.el7.x86_64
mysql-community-server-5.7.17-1.el7.x86_64
mysql-community-embedded-5.7.17-1.el7.x86_64
mysql-community-embedded-devel-5.7.17-1.el7.x86_64
mysql-community-embedded-compat-5.7.17-1.el7.x86_64
perl-DBD-MySQL-4.023-6.el7.x86_64
mysql-community-common-5.7.17-1.el7.x86_64
mysql-community-client-5.7.17-1.el7.x86_64
mysql-community-devel-5.7.17-1.el7.x86_64
mysql-community-test-5.7.17-1.el7.x86_64
mysql-community-libs-compat-5.7.17-1.el7.x86_64
mysql-community-minimal-debuginfo-5.7.17-1.el7.x86_64
[root@pxcnode66 ~]# rpm -e --nodeps mysql-community-server mysql-community-embedded-compat mysql-community-common mysql-community-client mysql-community-devel mysql-community-test mysql-community-libs-compat mysql-community-minimal-debuginfo mysql-community-libs mysql-community-embedded mysql-community-embedded-devel    // remove all the MySQL packages
warning: /etc/my.cnf saved as /etc/my.cnf.rpmsave

② Install the PXC packages, edit the configuration files, and start the mysql service

bash 复制代码
[root@pxcnode66 ~]# cd pxc
[root@pxcnode66 pxc]# ls
bash 复制代码
[root@pxcnode66 pxc]# rpm -ivh qpress-1.1-14.11.x86_64.rpm    // install the dependency
[root@pxcnode66 pxc]# tar -xf Percona-XtraDB-Cluster-5.7.25-31.35-r463-el7-x86_64-bundle.tar      // unpack the PXC bundle
[root@pxcnode66 pxc]# ls

Tip: partway through the PXC installation a dependency error appears, percona-xtrabackup-24 >= 2.4.12, meaning the Percona online backup tool must first be upgraded to 2.4.12 or later:

bash 复制代码
[root@pxcnode66 pxc]# yum -y install percona-xtrabackup-24-2.4.13-1.el7.x86_64.rpm
[root@pxcnode66 pxc]# yum -y install Percona-XtraDB-Cluster-*.rpm   // install the PXC packages

Edit the database service configuration file

bash 复制代码
[root@pxcnode66 pxc]# vim /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
...
[mysqld]
server-id=66    // set server_id
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
...

Edit the cluster (wsrep) configuration file

bash 复制代码
[root@pxcnode66 pxc]# vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
wsrep_cluster_address=gcomm://192.168.4.66,192.168.4.77,192.168.4.88  // cluster member list
wsrep_node_address=192.168.4.66      // this node's IP address
wsrep_cluster_name=pxc-cluster       // cluster name (must be identical on the other two nodes)
wsrep_node_name=pxcnode66            // this node's hostname
wsrep_sst_auth="sstuser:123qqq...A"  // user and password used for full state transfer (SST)

Bootstrap the cluster service (mysql@bootstrap)

bash 复制代码
[root@pxcnode66 pxc]# systemctl start mysql@bootstrap
[root@pxcnode66 pxc]# systemctl enable mysql@bootstrap

Check the ports

bash 复制代码
[root@pxcnode66 pxc]# ss -nlptu | grep :3306   // MySQL service port
tcp    LISTEN     0      80       :::3306                 :::*                   users:(("mysqld",pid=13570,fd=32))
[root@pxcnode66 pxc]# ss -nlptu | grep :4567     // cluster communication port
tcp    LISTEN     0      128       *:4567                  *:*                   users:(("mysqld",pid=13570,fd=11))

③ Log in as the database administrator, grant the sstuser account, and check the cluster status

Although the MySQL packages were removed, the fully restored data under /var/lib/mysql/ was kept, so there is no need to initialize the password again

bash 复制代码
[root@pxcnode66 pxc]# mysql -uroot -p123qqq...A
mysql> grant all on *.* to sstuser@'localhost' identified by '123qqq...A';   // grant the SST user
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> show status like '%wsrep%';    // check the cluster status
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| wsrep_incoming_addresses         | 192.168.4.66:3306                    |
| wsrep_cluster_weight             | 1                                    |
| wsrep_cluster_state_uuid         | b034b1be-d447-11eb-9a0c-12896a6ff1b1                  |
| wsrep_cluster_status             | Primary                              |
| wsrep_connected                  | ON                                   |
| wsrep_ready                      | ON                                   |
+----------------------------------+--------------------------------------+
71 rows in set (0.00 sec)
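For a shorter check of just the node count there is the wsrep_cluster_size status variable; at this point it should report 1, and 3 once the other two nodes have joined in steps 3 and 4:

bash
[root@pxcnode66 pxc]# mysql -uroot -p123qqq...A -e "show status like 'wsrep_cluster_size'"   # expect Value = 1 now, 3 later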

The slave status shows this node is still replicating from 192.168.4.11

bash 复制代码
[root@pxcnode66 pxc]# mysql -uroot -p123qqq...A -e "show slave status\G" | grep -i "yes"
mysql: [Warning] Using a password on the command line interface can be insecure.
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

Step 3: Configure the 2nd PXC server

On 192.168.4.77

① Install the required dependencies and packages

bash 复制代码
[root@maxscale77 ~]# cd pxc/
[root@maxscale77 pxc]# rpm -ivh qpress-1.1-14.11.x86_64.rpm   // install the dependency
[root@maxscale77 pxc]# tar -xf Percona-XtraDB-Cluster-5.7.25-31.35-r463-el7-x86_64-bundle.tar     // unpack the PXC bundle
[root@maxscale77 pxc]# yum -y install libev-4.15-1.el6.rf.x86_64.rpm   // install the dependency
[root@maxscale77 pxc]# yum -y install percona-xtrabackup-24-2.4.13-1.el7.x86_64.rpm    // upgrade the online backup tool
[root@maxscale77 pxc]# yum -y install Percona-XtraDB-Cluster-*.rpm     // install the PXC packages

② Edit the configuration files

Edit the database service configuration file

bash 复制代码
[root@maxscale77 pxc]# vim /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
...
[mysqld]
server-id=77    // set server_id
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
...

Edit the cluster (wsrep) configuration file

bash 复制代码
[root@maxscale77 pxc]# vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
wsrep_cluster_address=gcomm://192.168.4.66,192.168.4.77,192.168.4.88  // cluster member list
wsrep_node_address=192.168.4.77      // this node's IP address
wsrep_cluster_name=pxc-cluster       // cluster name (must be identical on the other two nodes)
wsrep_node_name=pxcnode77            // this node's hostname
wsrep_sst_auth="sstuser:123qqq...A"  // user and password used for full state transfer (SST)

③ Start the mysql service (the regular mysql unit, not mysql@bootstrap)

bash 复制代码
[root@pxcnode77 pxc]# systemctl start mysql
[root@pxcnode77 pxc]# systemctl enable mysql
[root@pxcnode77 pxc]# ss -nlptu | grep :3306   // MySQL service port
tcp    LISTEN     0      80       :::3306                 :::*                   users:(("mysqld",pid=5268,fd=27))
[root@pxcnode77 pxc]# ss -nlptu | grep :4567   // cluster port
tcp    LISTEN     0      128       *:4567                  *:*                   users:(("mysqld",pid=5268,fd=11))

Step 4: Configure the 3rd PXC server

On 192.168.4.88

① Install the required dependencies and packages

bash 复制代码
[root@pxcnode88 ~]# cd pxc/
[root@pxcnode88 pxc]# rpm -ivh qpress-1.1-14.11.x86_64.rpm    // install the dependency
[root@pxcnode88 pxc]# tar -xf Percona-XtraDB-Cluster-5.7.25-31.35-r463-el7-x86_64-bundle.tar   // unpack the PXC bundle
[root@pxcnode88 pxc]# yum -y install libev-4.15-1.el6.rf.x86_64.rpm    // install the dependency
[root@pxcnode88 pxc]# yum -y install percona-xtrabackup-24-2.4.13-1.el7.x86_64.rpm   // upgrade the online backup tool
[root@pxcnode88 pxc]# yum -y install Percona-XtraDB-Cluster-*.rpm      // install the PXC packages

② Edit the configuration files

Edit the database service configuration file

bash 复制代码
[root@pxcnode88 pxc]# vim /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
...
[mysqld]
server-id=88    // set server_id
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
...

Edit the cluster (wsrep) configuration file

bash 复制代码
[root@pxcnode88 pxc]# vim /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
wsrep_cluster_address=gcomm://192.168.4.66,192.168.4.77,192.168.4.88   // cluster member list
wsrep_node_address=192.168.4.88      // this node's IP address
wsrep_cluster_name=pxc-cluster       // cluster name (must be identical on the other two nodes)
wsrep_node_name=pxcnode88            // this node's hostname
wsrep_sst_auth="sstuser:123qqq...A"  // user and password used for full state transfer (SST)

③ Start the mysql service

bash 复制代码
[root@pxcnode88 pxc]# systemctl start mysql
[root@pxcnode88 pxc]# systemctl enable mysql
[root@pxcnode88 pxc]# ss -nlptu | grep :3306
tcp    LISTEN     0      80       :::3306                 :::*                   users:(("mysqld",pid=3558,fd=28))
[root@pxcnode88 pxc]# ss -nlptu | grep :4567
tcp    LISTEN     0      128       *:4567                  *:*                   users:(("mysqld",pid=3558,fd=11))

Log in to any one of the database servers and check the cluster status

bash 复制代码
mysql> show status like '%wsrep%';
+----------------------------------+-------------------------------------------------------+
| Variable_name                    | Value                                                 |
+----------------------------------+-------------------------------------------------------+
| wsrep_incoming_addresses         | 192.168.4.66:3306,192.168.4.88:3306,192.168.4.77:3306 |
| wsrep_cluster_weight             | 3                                                     |
| wsrep_cluster_state_uuid         | b034b1be-d447-11eb-9a0c-12896a6ff1b1                  |
| wsrep_cluster_status             | Primary                                               |
| wsrep_connected                  | ON                                                    |
| wsrep_ready                      | ON                                                    |
+----------------------------------+-------------------------------------------------------+
71 rows in set (0.00 sec)

Step 5: Test the configuration

① Store data: from the web server, connect to each of the 3 PXC cluster nodes and insert data (on web33)

bash 复制代码
[root@web33 ~]# yum -y install mysql-community-client   // or install the MariaDB client instead; only a client is needed here
[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.66 gamedb
mysql> insert into gamedb.user values('testA');
mysql> exit
[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.77 gamedb
mysql> insert into gamedb.user values('testB');
mysql> exit
[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.88 gamedb
mysql> insert into gamedb.user values('testC');
mysql> exit

Common error: the table has no primary key

ERROR 1105 (HY000): Percona-XtraDB-Cluster prohibits use of DML command on a table (gamedb.user) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER

Fix: add a primary key to the table:

bash 复制代码
mysql> alter table gamedb.user modify name char(10) primary key;

② Query data: from the web server, connect to the PXC cluster nodes and query the data

bash 复制代码
[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.66 gamedb
mysql> select * from gamedb.user;
+--------+
| name   |
+--------+
...
| testA  |
| testB  |
| testC  |
+--------+
8 rows in set (0.00 sec)
mysql> exit

[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.77 gamedb
mysql> select * from gamedb.user;
+--------+
| name   |
+--------+
...
| testA  |
| testB  |
| testC  |
+--------+
8 rows in set (0.00 sec)
mysql> exit

[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.88 gamedb
mysql> select * from gamedb.user;
+--------+
| name   |
+--------+
...
| testA  |
| testB  |
| testC  |
+--------+
8 rows in set (0.00 sec)
mysql> exit

Case 4: Deploy the LB (load balancing) cluster

Configuration steps:

① Install haproxy

② Edit the configuration file

③ Start the service

④ Test the configuration

Prepare one host as the haproxy dispatcher and configure its IP address and hostname. It runs the haproxy service, accepts client connection requests to the database, and distributes them evenly across the 3 PXC cluster nodes.

PROJECT2-DAY2 topology:

Server IP and role plan:

|-----------|--------------|-----|
| Hostname | IP address | Role |
| haproxy99 | 192.168.4.99 | Dispatcher |


Step 1: Install haproxy

On haproxy99

bash 复制代码
[root@haproxy99 ~]# yum -y install haproxy.x86_64

Step 2: Edit the configuration file

bash 复制代码
[root@haproxy99 ~]# vim /etc/haproxy/haproxy.cfg
global   // global settings (the defaults are fine)
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
stats socket /var/lib/haproxy/stats
 
defaults     // defaults section (the defaults are fine)
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
 
listen status         // define the stats/monitoring page
    mode http         // http mode
    bind *:80         // listen on port 80
    stats enable      // enable the stats page
    stats uri /admin  // URI of the page
    stats auth admin:admin   // login user and password

listen mysql_3306 *:3306   // haproxy service name and listening port
    mode    tcp            // the mysql service requires tcp mode
    option  tcpka          // keep connections alive
    balance roundrobin     // scheduling algorithm (round robin)
    server  mysql_01 192.168.4.66:3306 check  // 1st database server
    server  mysql_02 192.168.4.77:3306 check  // 2nd database server
    server  mysql_03 192.168.4.88:3306 check  // 3rd database server
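Before starting the service, the file can be validated without actually launching haproxy (the -c flag runs a configuration check only):

bash
[root@haproxy99 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg    # -c: check only, -f: config file
Configuration file is valid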

Step 3: Start the service

bash 复制代码
[root@haproxy99 ~]# systemctl start haproxy.service
[root@haproxy99 ~]# systemctl enable haproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
[root@haproxy99 ~]# netstat -utnlp | grep :3306
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      1856/haproxy
[root@haproxy99 ~]# netstat -utnlp | grep haproxy
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      1856/haproxy
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      1856/haproxy
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1856/haproxy
udp        0      0 0.0.0.0:54518           0.0.0.0:*                           1855/haproxy
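The stats page defined in the listen status block is reachable right away; curl can fetch it with the admin:admin credentials configured above:

bash
[root@haproxy99 ~]# curl -u admin:admin http://192.168.4.99/admin    # returns the HTML statistics report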

Step 4: Test the configuration

From the web server, connect to the haproxy99 host and access the data

bash 复制代码
[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.99 -e 'select @@hostname'
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| pxcnode66  |      // 1st connection
+------------+

[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.99 -e 'select @@hostname'
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| pxcnode77  |      // 2nd connection
+------------+

[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.99 -e 'select @@hostname'
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| pxcnode88  |      // 3rd connection
+------------+

Case 5: Deploy the HA (high availability) cluster

Configuration steps:

① Prepare a standby haproxy dispatcher host

② Install the software

③ Edit the configuration files

④ Start the services

⑤ Test the configuration

PROJECT2-DAY2 topology:


Step 1: Prepare the standby dispatcher host (on haproxy98)

① Install haproxy

bash 复制代码
[root@haproxy98 ~]# yum -y install haproxy.x86_64

② Edit the configuration file (or simply copy the file from haproxy99)

bash 复制代码
[root@haproxy98 ~]# vim /etc/haproxy/haproxy.cfg
...
listen status         // define the stats/monitoring page
    mode http      // http mode
    bind *:80       // listen on port 80
    stats enable     // enable the stats page
    stats uri /admin  // URI of the page
    stats auth admin:admin   // login user and password

listen mysql_3306 *:3306   // haproxy service name and listening port
    mode    tcp         // the mysql service requires tcp mode
    option  tcpka        // keep connections alive
    balance roundrobin   // scheduling algorithm (round robin)
    server  mysql_01 192.168.4.66:3306 check  // 1st database server
    server  mysql_02 192.168.4.77:3306 check  // 2nd database server
    server  mysql_03 192.168.4.88:3306 check  // 3rd database server
...

③ Start the haproxy service

bash 复制代码
[root@haproxy98 ~]# systemctl start haproxy.service
[root@haproxy98 ~]# systemctl enable haproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
[root@haproxy98 ~]# netstat -utnlp | grep :3306
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      1758/haproxy
[root@haproxy98 ~]# netstat -utnlp | grep haproxy
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      1758/haproxy
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      1758/haproxy
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1758/haproxy
udp        0      0 0.0.0.0:53459           0.0.0.0:*                           1757/haproxy

Step 2: Deploy the keepalived high-availability service

① Install keepalived and edit its configuration file (on haproxy99)

bash 复制代码
[root@haproxy99 ~]# yum -y install keepalived.x86_64
[root@haproxy99 ~]# sed -i '36,$d' /etc/keepalived/keepalived.conf   // delete the unneeded configuration lines
[root@haproxy99 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
...
   vrrp_iptables
...
}
 
vrrp_instance VI_1 {
    state MASTER      // master role
    interface eth0
    virtual_router_id 51
    priority 100      // haproxy99 is the master, so its priority must be higher than haproxy98's
    advert_int 1
    authentication {
        auth_type PASS    // authentication method between master and backup
        auth_pass 1111    // authentication password
    }
    virtual_ipaddress {
        192.168.4.100     // the VIP address
    }
}

② Install keepalived and edit its configuration file (on haproxy98)

bash 复制代码
[root@haproxy98 ~]# yum -y install keepalived.x86_64
[root@haproxy98 ~]# sed -i '36,$d' /etc/keepalived/keepalived.conf   // delete the unneeded configuration lines
[root@haproxy98 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
...
   vrrp_iptables
...
}
 
vrrp_instance VI_1 {
    state BACKUP     // backup role
    interface eth0
    virtual_router_id 51
    priority 95         // the priority must be lower than haproxy99's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.4.100      // the VIP address
    }
}

Step 3: Start the services

① Start the keepalived service on haproxy99

bash 复制代码
[root@haproxy99 ~]# systemctl start keepalived.service   // start the service
[root@haproxy99 ~]# ip add show | grep 192.168.4.100   // the VIP is visible on the master
    inet 192.168.4.100/32 scope global eth0

② Start the keepalived service on haproxy98

bash 复制代码
[root@haproxy98 ~]# systemctl start keepalived.service   // start the service
[root@haproxy98 ~]# ip add show | grep 192.168.4.100   // the VIP is not visible on the backup

Step 4: Test the configuration

① From the web server, connect to the VIP and access the database service to verify the round-robin scheduling

bash 复制代码
[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.100 -e 'select @@hostname'
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| pxcnode66  |
+------------+
[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.100 -e 'select @@hostname'
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| pxcnode77  |
+------------+
[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.100 -e 'select @@hostname'
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| pxcnode88  |
+------------+

② Test the failover

Simulate a failure by stopping the keepalived service on haproxy99, then check the VIP address

bash 复制代码
[root@haproxy99 ~]# ip add show | grep 192.168.4.100
    inet 192.168.4.100/32 scope global eth0
[root@haproxy99 ~]# systemctl stop keepalived.service   // stop the keepalived service
[root@haproxy99 ~]# ip add show | grep 192.168.4.100   // the VIP is no longer on this host

Check the address on the standby host haproxy98

bash 复制代码
[root@haproxy98 ~]# ip add show | grep 192.168.4.100
inet 192.168.4.100/32 scope global eth0

From the web server, connect to the VIP again and access the database service

bash 复制代码
[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.100 -e 'select @@hostname'
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| pxcnode66  |
+------------+
[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.100 -e 'select @@hostname'
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| pxcnode77  |
+------------+
[root@web33 ~]# mysql -uarmy -p123qqq...A -h192.168.4.100 -e 'select @@hostname'
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| pxcnode88  |
+------------+
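When the failed dispatcher comes back, the VIP should move back to it automatically, because the MASTER instance has the higher priority and keepalived preempts by default; a quick check:

bash
[root@haproxy99 ~]# systemctl start keepalived.service   # bring the original master back
[root@haproxy99 ~]# ip add show | grep 192.168.4.100     # the VIP returns to haproxy99
    inet 192.168.4.100/32 scope global eth0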

Summary:

This chapter contains the study notes for **[Stage 4] PROJECT2-DAY2**. It gives an initial look at upgrading the website runtime platform, deploying a Redis in-memory storage cluster, migrating data, deploying PXC MySQL for synchronous replication, and deploying the LB and HA clusters.


Tip: two heads are better than one. If you do not understand something in this chapter, or need the related notes or videos, feel free to message Xiao'an. Don't be shy or avoid it; ask others for help and take the time you need until you truly understand it.
