LVS + Keepalived High-Availability, High-Concurrency Cluster
- Two Keepalived nodes serve as the Web entry point
- LVS is configured inside Keepalived
Lab environment:

| Node | IP | VIP |
|---|---|---|
| LVS1 | 192.168.221.10 | 192.168.221.100 |
| LVS2 | 192.168.221.20 | 192.168.221.100 |
| RS1 | 192.168.221.30 | - |
| RS2 | 192.168.221.40 | - |
Lab goals:
1. Use LVS DR mode (Direct Routing) + Keepalived to build a highly available load balancer.
2. The VIP floats between LVS1 and LVS2 under Keepalived's control.
3. Clients access the VIP (192.168.221.100) and requests are forwarded to RS1/RS2.
4. When LVS1 fails, LVS2 automatically takes over the VIP.
Lab steps:
(1) Configure the web service on RS1 and RS2
# Install httpd/nginx
# sudo apt install -y nginx   (Ubuntu)
yum install -y httpd          # (CentOS)
# Set a distinct home page on each node
echo "This is RS1 (192.168.221.30)" > /var/www/html/index.html   # on RS1
echo "This is RS2 (192.168.221.40)" > /var/www/html/index.html   # on RS2
# Start the service
systemctl start httpd
(2) Bind the VIP to lo on RS1 and RS2
# Run on both RS1 and RS2
ip addr add 192.168.221.100/32 dev lo
ip link set lo up
# Persistently suppress ARP replies for the VIP
echo "net.ipv4.conf.all.arp_ignore = 1" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.conf.all.arp_announce = 2" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.conf.lo.arp_ignore = 1" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.conf.lo.arp_announce = 2" | sudo tee -a /etc/sysctl.conf
sysctl -p
# Verify:
ip addr show lo
# You should see 192.168.221.100/32
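The `tee -a` lines above append unconditionally, so running the step twice duplicates entries in sysctl.conf. A minimal idempotent variant, sketched against a temp file so it can be tried safely (point SYSCTL_FILE at /etc/sysctl.conf on a real RS):

```shell
# Append each ARP setting only if it is not already present.
# SYSCTL_FILE is a temp file here for safe illustration.
SYSCTL_FILE="$(mktemp)"
add_once() {
    grep -qxF "$1" "$SYSCTL_FILE" || echo "$1" >> "$SYSCTL_FILE"
}
for kv in \
    "net.ipv4.conf.all.arp_ignore = 1" \
    "net.ipv4.conf.all.arp_announce = 2" \
    "net.ipv4.conf.lo.arp_ignore = 1" \
    "net.ipv4.conf.lo.arp_announce = 2"
do
    add_once "$kv"   # first pass: appends the line
    add_once "$kv"   # second pass: no duplicate is added
done
entries=$(wc -l < "$SYSCTL_FILE")
echo "entries: $entries"
```

Re-running the whole script leaves exactly four entries in place.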
(3) Install Keepalived and ipvsadm on LVS1 and LVS2
yum install -y keepalived ipvsadm
(4) LVS1 configuration (192.168.221.10, Master)
vim /etc/keepalived/keepalived.conf
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
! Configuration File for keepalived
global_defs {
    router_id lvs_1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.221.100
    }
}
# The virtual_server block below is optional
virtual_server 192.168.221.100 80 {
    delay_loop 6
    lb_algo rr                 # round-robin scheduling
    lb_kind DR                 # LVS-DR mode
    persistence_timeout 50
    protocol TCP
    real_server 192.168.221.30 80 {    # RS1's IP
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.221.40 80 {    # RS2's IP
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
(5) LVS2 configuration (192.168.221.20, Backup)
vim /etc/keepalived/keepalived.conf
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
! Configuration File for keepalived
global_defs {
    router_id lvs_2            # unique identifier, must differ from the master's
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.221.100
    }
}
# The virtual_server block below is optional
virtual_server 192.168.221.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 192.168.221.30 80 {    # RS1's IP
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.221.40 80 {    # RS2's IP
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
(6) Start LVS + Keepalived
systemctl enable --now keepalived
# Check the LVS rules:
ipvsadm -Ln
# You should see:
TCP 192.168.221.100:80 rr
-> 192.168.221.30:80
-> 192.168.221.40:80
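Instead of eyeballing the rules, the `-> ` lines of `ipvsadm -Ln` can be parsed with awk. A sketch using a captured sample of the output (the exact column layout can vary between ipvsadm versions; on a live director, set `sample=$(ipvsadm -Ln)`):

```shell
# Extract real-server address:port pairs from sample `ipvsadm -Ln` output
sample='TCP  192.168.221.100:80 rr persistent 50
  -> 192.168.221.30:80            Route   1      0          0
  -> 192.168.221.40:80            Route   1      0          0'
real_servers=$(printf '%s\n' "$sample" | awk '$1 == "->" {print $2}')
echo "$real_servers"
rs_count=$(printf '%s\n' "$real_servers" | wc -l)
echo "count: $rs_count"
```

Both RS entries should be listed, giving a count of 2.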
(7) Test high availability and load balancing
Open a browser and visit http://192.168.221.100.
Test failover
Stop Keepalived on LVS1:
systemctl stop keepalived
- The VIP automatically floats to LVS2 (verify with `ip addr show`).
- Clients still reach `192.168.221.100`, and load balancing keeps working.
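Failover can also be confirmed from the Keepalived logs (`journalctl -u keepalived`). The sketch below greps a captured excerpt for the VRRP state transition; the exact message wording is an assumption and may differ between keepalived versions:

```shell
# Sample journal excerpt (format assumed); on LVS2 a successful takeover
# should log a transition to MASTER state.
log='Jan 01 00:00:01 lvs2 Keepalived_vrrp[1234]: (VI_1) Entering BACKUP STATE
Jan 01 00:01:05 lvs2 Keepalived_vrrp[1234]: (VI_1) Entering MASTER STATE'
takeovers=$(printf '%s\n' "$log" | grep -c 'Entering MASTER STATE')
echo "takeovers seen: $takeovers"
```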
Building an Nginx + Keepalived High-Availability Cluster
1. Environment plan (four servers)

| Role | Host / IP | Software | Notes |
|---|---|---|---|
| Distribution layer (master) | 192.168.221.10 | Nginx + Keepalived | Holds the VIP, reverse-proxies to the backends |
| Distribution layer (backup) | 192.168.221.20 | Nginx + Keepalived | Takes over the VIP when the master fails |
| Backend Web1 | 192.168.221.30 | Nginx (serves web content) | Real business node 1 |
| Backend Web2 | 192.168.221.40 | Nginx (serves web content) | Real business node 2 |
| Virtual IP (VIP) | 192.168.221.100 | - | Single external entry point |
2. Detailed steps
## Base environment setup (all four VMs)
# Disable the firewall (or open ports 80/443)
systemctl stop firewalld && systemctl disable firewalld
# Disable SELinux (temporarily + permanently)
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Install dependencies (e.g. epel-release)
yum install -y epel-release
# Install Nginx build dependencies
yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel
# nginx depends on the pcre library
wget http://192.168.57.200/Software/pcre-8.45.tar.bz2
tar -xvf pcre-8.45.tar.bz2
cd pcre-8.45/
./configure
make && make install
# Verify the installation
pcre-config --version
# Download and build nginx
cd ..
wget http://192.168.57.200/Software/nginx-1.21.4.tar.gz
tar -xvf nginx-1.21.4.tar.gz
cd nginx-1.21.4
./configure --with-http_stub_status_module --with-http_ssl_module --with-pcre=../pcre-8.45
make && make install
# After a successful install, start nginx directly
/usr/local/nginx/sbin/nginx
# Symlink so nginx can be run without typing the full path
ln -s /usr/local/nginx/sbin/nginx /usr/sbin/nginx
# 1. Install Keepalived (on 192.168.221.10 and .20)
yum install -y keepalived
# Write an Nginx health-check script to detect whether Nginx is alive
vim /etc/keepalived/check_nginx.sh
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#!/bin/bash
if ! pgrep -x "nginx" > /dev/null; then
    exit 1   # Nginx is not running; a non-zero exit signals failure
fi
exit 0       # Nginx is running normally
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
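The exit-code contract of check_nginx.sh (0 = healthy, non-zero = failed) is what Keepalived's `vrrp_script` acts on. A quick sketch that exercises that logic offline by shadowing `pgrep` with a shell function (a stub, not the real command):

```shell
# Same logic as check_nginx.sh, wrapped in a function so it can be tested
check_nginx() {
    if ! pgrep -x "nginx" > /dev/null; then
        return 1   # nginx not running -> failure
    fi
    return 0       # nginx running
}
pgrep() { return 0; }   # stub: pretend nginx is alive
check_nginx; up_rc=$?
pgrep() { return 1; }   # stub: pretend nginx is gone
check_nginx; down_rc=$?
echo "alive rc=$up_rc, dead rc=$down_rc"
```

With a healthy nginx the check returns 0 and Keepalived leaves the priority alone; on failure it returns 1 and the `weight -50` penalty kicks in.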
# Make it executable
chmod +x /etc/keepalived/check_nginx.sh
# 2. Master node (192.168.221.10) Keepalived configuration
# Edit /etc/keepalived/keepalived.conf:
vim /etc/keepalived/keepalived.conf
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
! Configuration File for keepalived
global_defs {
    router_id nginx_master     # unique identifier; the backup must differ
}
# Define the Nginx health-check script
vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2    # check interval (seconds)
    weight -50    # priority reduction on check failure
    fall 2        # 2 consecutive failures count as down
    rise 1        # 1 success counts as recovered
}
vrrp_instance VI_1 {
    state MASTER               # master role
    interface ens33            # NIC name (confirm with `ip addr`)
    virtual_router_id 51       # virtual router ID (must match on master and backup)
    priority 100               # priority (master must be higher than backup)
    advert_int 1               # heartbeat interval (seconds)
    # Authentication (must match on master and backup)
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Attach the health-check script
    track_script {
        chk_nginx
    }
    # Virtual IP (VIP)
    virtual_ipaddress {
        192.168.221.100/24 dev ens33
    }
}
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
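Why `weight -50` pairs with priorities 100/90: while the check is failing, Keepalived subtracts 50 from the master's priority, and the resulting 50 falls below the backup's 90, so the backup wins the VRRP election. The arithmetic:

```shell
master_prio=100
backup_prio=90
weight=-50   # applied while chk_nginx is failing
effective=$((master_prio + weight))
if [ "$effective" -lt "$backup_prio" ]; then
    echo "failover: master drops to $effective, below backup's $backup_prio"
fi
```

A weight of, say, -5 would leave the master at 95 and no failover would occur, so the penalty must exceed the priority gap.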
# 3. Backup node (192.168.221.20) Keepalived configuration
vim /etc/keepalived/keepalived.conf
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
! Configuration File for keepalived
global_defs {
    router_id nginx_backup     # unique identifier, different from the master
}
# Define the Nginx health-check script (same as on the master)
vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -50
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP               # backup role
    interface ens33            # NIC name (same as master)
    virtual_router_id 51       # virtual router ID (same as master)
    priority 90                # priority (lower than the master)
    advert_int 1               # heartbeat interval (same as master)
    # Authentication (same as master)
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Attach the health-check script
    track_script {
        chk_nginx
    }
    # Virtual IP (same as master)
    virtual_ipaddress {
        192.168.221.100/24 dev ens33
    }
}
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# 4. Distribution-layer Nginx reverse-proxy configuration (master and backup)
vim /usr/local/nginx/conf/nginx.conf
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Add inside the http block (around line 34 of the default config)
upstream web_cluster {
    server 192.168.221.30:80 weight=1;   # Web1 node
    server 192.168.221.40:80 weight=1;   # Web2 node
}
# Modify the existing server block (around line 41)
server {
    listen 80;
    #server_name localhost;
    #charset koi8-r;
    #access_log logs/host.access.log main;
    location / {
        # root html;
        # index index.html index.htm;
        proxy_pass http://web_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
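With `weight=1` on both upstream servers, nginx's default round-robin sends consecutive requests alternately to Web1 and Web2. Conceptually:

```shell
# Simulate default round-robin over the two equally weighted upstreams
servers="192.168.221.30 192.168.221.40"
i=0
out=""
for req in 1 2 3 4; do
    i=$(( (i % 2) + 1 ))           # rotate 1, 2, 1, 2, ...
    target=$(echo "$servers" | cut -d' ' -f"$i")
    out="$out$req->$target "
done
echo "$out"
```

Raising one server's `weight` would shift more of the rotation toward that node.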
# Restart Nginx to apply the changes
nginx -s stop
nginx
# 5. Service startup and testing
## Configure the web service (on 192.168.221.30 and .40)
# Web1 node: create a home page that identifies the node
echo "<h1>Web1 - 192.168.221.30</h1>" > /usr/local/nginx/html/index.html
# Web2 node: create a home page that identifies the node
echo "<h1>Web2 - 192.168.221.40</h1>" > /usr/local/nginx/html/index.html
# Restart Nginx to apply
nginx -s stop
nginx
6. Testing
# Start Keepalived and enable it at boot
systemctl start keepalived && systemctl enable keepalived
Verify high availability (stop one node and test the other)
Step 1: check the VIP binding
On the master, run `ip addr` and confirm that `192.168.221.100` is bound to the NIC (e.g. `ens33`); the backup should not hold it yet.
Step 2: access the VIP to test load balancing
Browse to `http://192.168.221.100` and refresh; the page should alternate between "Web1" and "Web2" (unless `ip_hash` is configured).
Step 3: failover tests
- Simulate an Nginx failure on the master: run `nginx -s stop` on the master, wait about 5 seconds, then run `ip addr` on the backup and confirm that `192.168.221.100` has floated over.
- Simulate a master outage: power off the master; the backup should take over the VIP automatically, and `http://192.168.221.100` should still respond in the browser.
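Repeated requests to the VIP can be tallied to confirm that both backends answer. The pipeline below runs on simulated responses so it can be shown offline; on a real client, replace the `responses=` assignment with a curl loop such as `responses=$(for i in $(seq 1 10); do curl -s http://192.168.221.100/; done)`:

```shell
# Tally simulated backend responses by node name
responses='<h1>Web1 - 192.168.221.30</h1>
<h1>Web2 - 192.168.221.40</h1>
<h1>Web1 - 192.168.221.30</h1>
<h1>Web2 - 192.168.221.40</h1>'
printf '%s\n' "$responses" | grep -o 'Web[12]' | sort | uniq -c
web1_hits=$(printf '%s\n' "$responses" | grep -c 'Web1')
web2_hits=$(printf '%s\n' "$responses" | grep -c 'Web2')
echo "Web1: $web1_hits, Web2: $web2_hits"
```

A roughly even split confirms round-robin; all hits on one node suggests the other backend is down or missing from the upstream.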