I. Environment Planning and Preparation
1. Server Plan (5 physical/cloud servers)
| Server Role | IP Address | Hardware | Main Components |
| --- | --- | --- | --- |
| Load layer | 192.168.1.10 | 4 cores / 8 GB RAM / 100 GB disk | Nginx, Keepalived |
| App layer 1 | 192.168.1.11 | 4 cores / 8 GB RAM / 100 GB disk | Spring Cloud microservices, Filebeat |
| App layer 2 | 192.168.1.12 | 4 cores / 8 GB RAM / 100 GB disk | Spring Cloud microservices, Filebeat |
| Data layer | 192.168.1.13 | 8 cores / 16 GB RAM / 500 GB disk | MySQL (master/slave), Redis cluster |
| Log layer | 192.168.1.14 | 4 cores / 8 GB RAM / 500 GB disk | Elasticsearch, Logstash, Kibana |
2. Base Environment Initialization (run on all servers)
(1) System configuration
bash
# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Configure time synchronization
yum install -y ntpdate
ntpdate ntp.aliyun.com
echo "*/30 * * * * root ntpdate ntp.aliyun.com" >> /etc/crontab
# Install basic tools
yum install -y wget vim net-tools gcc gcc-c++ make
(2) Configure hosts resolution
bash
cat >> /etc/hosts << EOF
192.168.1.10 nginx-server
192.168.1.11 app-server1
192.168.1.12 app-server2
192.168.1.13 data-server
192.168.1.14 log-server
EOF
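A quick loop (a hypothetical helper, not part of the original setup) verifies that every role name resolves; `getent` consults /etc/hosts, so it reflects the entries just added:

```shell
# Check each role hostname; count how many were checked
checked=0
for h in nginx-server app-server1 app-server2 data-server log-server; do
  if getent hosts "$h" >/dev/null 2>&1; then
    echo "$h resolves"
  else
    echo "$h does not resolve yet"
  fi
  checked=$((checked + 1))
done
echo "checked $checked hostnames"
```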
(3) Install Docker and Docker Compose
bash
# Install Docker
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl start docker
systemctl enable docker
# Install Docker Compose
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
II. Data Layer Deployment (192.168.1.13)
1. MySQL Master-Slave Deployment
(1) Create the MySQL configuration files
bash
mkdir -p /data/mysql/master /data/mysql/slave
Master configuration (/data/mysql/master/my.cnf):
ini
[mysqld]
server-id=1
log-bin=mysql-bin
binlog_format=ROW
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Slave configuration (/data/mysql/slave/my.cnf):
ini
[mysqld]
server-id=2
log-bin=mysql-bin
binlog_format=ROW
relay-log=relay-log
log-slave-updates=1
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
(2) Start the MySQL containers
bash
# Start the master
docker run -d --name mysql-master -p 3306:3306 \
-v /data/mysql/master/my.cnf:/etc/my.cnf \
-v /data/mysql/master/data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=Root@123456 \
--restart=always mysql:5.7
# Start the slave
docker run -d --name mysql-slave -p 3307:3306 \
-v /data/mysql/slave/my.cnf:/etc/my.cnf \
-v /data/mysql/slave/data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=Root@123456 \
--restart=always mysql:5.7
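Docker Compose was installed during initialization but not used above; the same two containers can be described declaratively in a Compose file. A minimal sketch (the file path /data/mysql/docker-compose.yml is an assumption):

```yaml
version: "3"
services:
  mysql-master:
    image: mysql:5.7
    container_name: mysql-master
    ports:
      - "3306:3306"
    volumes:
      - /data/mysql/master/my.cnf:/etc/my.cnf
      - /data/mysql/master/data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: Root@123456
    restart: always
  mysql-slave:
    image: mysql:5.7
    container_name: mysql-slave
    ports:
      - "3307:3306"
    volumes:
      - /data/mysql/slave/my.cnf:/etc/my.cnf
      - /data/mysql/slave/data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: Root@123456
    restart: always
```

Both containers then start with `docker-compose -f /data/mysql/docker-compose.yml up -d`.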
(3) Configure master-slave replication
bash
# Enter the master container
docker exec -it mysql-master mysql -uroot -pRoot@123456
# On the master, create the replication user
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'Repl@123456';
FLUSH PRIVILEGES;
SHOW MASTER STATUS; # note the File and Position values
# Enter the slave container
docker exec -it mysql-slave mysql -uroot -pRoot@123456
# Configure replication on the slave
CHANGE MASTER TO
MASTER_HOST='192.168.1.13',
MASTER_PORT=3306,
MASTER_USER='repl',
MASTER_PASSWORD='Repl@123456',
MASTER_LOG_FILE='mysql-bin.000001', # replace with the master's File value
MASTER_LOG_POS=154; # replace with the master's Position value
START SLAVE;
SHOW SLAVE STATUS\G # confirm Slave_IO_Running and Slave_SQL_Running are both Yes
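Copying the File/Position pair by hand is error-prone; a small helper can assemble the CHANGE MASTER statement from the master's status output. This is a hedged sketch: the sample `master_status` line stands in for the real SHOW MASTER STATUS query shown above.

```shell
# Sample of the first two columns (File, Position) of SHOW MASTER STATUS output;
# in practice: docker exec mysql-master mysql -uroot -pRoot@123456 -N -e 'SHOW MASTER STATUS'
master_status="mysql-bin.000001 154"
log_file=$(echo "$master_status" | awk '{print $1}')
log_pos=$(echo "$master_status" | awk '{print $2}')
stmt="CHANGE MASTER TO MASTER_HOST='192.168.1.13', MASTER_PORT=3306, MASTER_USER='repl', MASTER_PASSWORD='Repl@123456', MASTER_LOG_FILE='$log_file', MASTER_LOG_POS=$log_pos;"
echo "$stmt"
```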
2. Redis Cluster Deployment
bash
# Create the Redis config directories
mkdir -p /data/redis/{7001,7002,7003}
# Create the config file (7001 shown; 7002 and 7003 are identical except for the port)
cat > /data/redis/7001/redis.conf << EOF
port 7001
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
daemonize no
bind 0.0.0.0
requirepass Redis@123
masterauth Redis@123
EOF
# Start the 3 Redis containers
docker run -d --name redis-7001 -p 7001:7001 \
-v /data/redis/7001/redis.conf:/etc/redis/redis.conf \
-v /data/redis/7001/data:/data \
--restart=always redis:5.0 redis-server /etc/redis/redis.conf
docker run -d --name redis-7002 -p 7002:7002 \
-v /data/redis/7002/redis.conf:/etc/redis/redis.conf \
-v /data/redis/7002/data:/data \
--restart=always redis:5.0 redis-server /etc/redis/redis.conf
docker run -d --name redis-7003 -p 7003:7003 \
-v /data/redis/7003/redis.conf:/etc/redis/redis.conf \
-v /data/redis/7003/data:/data \
--restart=always redis:5.0 redis-server /etc/redis/redis.conf
# Create the Redis cluster (--cluster create is a redis-cli option, available since Redis 5.0;
# note: with bridge networking the cluster bus ports 17001-17003 are not published, so for
# clients on other hosts run the containers with --net host or publish the bus ports as well)
docker exec -it redis-7001 redis-cli -a Redis@123 --cluster create \
192.168.1.13:7001 192.168.1.13:7002 192.168.1.13:7003 \
--cluster-replicas 0
III. Log Layer Deployment (192.168.1.14)
1. Deploy Elasticsearch
bash
# Create the ES config and data directories
mkdir -p /data/elasticsearch/{config,data,logs}
chmod 777 /data/elasticsearch/{data,logs}
# Create the config file (the JVM heap is set via the ES_JAVA_OPTS environment
# variable at container start, not in this file)
cat > /data/elasticsearch/config/elasticsearch.yml << EOF
cluster.name: es-cluster
node.name: es-node1
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
network.host: 0.0.0.0
discovery.type: single-node
bootstrap.memory_lock: true
EOF
# Start the ES container (the memlock ulimit is required because bootstrap.memory_lock is enabled)
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 \
--ulimit memlock=-1:-1 \
-v /data/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /data/elasticsearch/data:/usr/share/elasticsearch/data \
-v /data/elasticsearch/logs:/usr/share/elasticsearch/logs \
-e "discovery.type=single-node" \
-e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
--restart=always elasticsearch:7.14.0
2. Deploy Logstash
bash
# Create the Logstash config directory
mkdir -p /data/logstash/config
# Create the config file
cat > /data/logstash/config/logstash.conf << EOF
input {
beats {
port => 5044
}
}
filter {
if [log_type] == "spring" {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:log_time} %{LOGLEVEL:log_level} %{DATA:class} - %{DATA:content}" }
}
}
}
output {
elasticsearch {
hosts => ["192.168.1.14:9200"]
index => "app-log-%{+YYYY.MM.dd}"
}
stdout { codec => rubydebug }
}
EOF
# Start the Logstash container
docker run -d --name logstash -p 5044:5044 \
-v /data/logstash/config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
--link elasticsearch:elasticsearch \
--restart=always logstash:7.14.0
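Before shipping real traffic through Filebeat, the log format can be sanity-checked locally. The ERE below is a hand-rolled equivalent of the grok pattern's shape (ISO8601 timestamp, log level, class, " - ", message), not the grok pattern itself:

```shell
# A sample line in the format the Spring services are expected to emit
line="2023-10-07 10:00:00 INFO com.example.demo.IndexController - Test log message"
# Shape-equivalent of: %{TIMESTAMP_ISO8601} %{LOGLEVEL} %{DATA} - %{DATA}
pattern='^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2} (TRACE|DEBUG|INFO|WARN|ERROR) [^ ]+ - .+$'
if echo "$line" | grep -Eq "$pattern"; then
  result=match
else
  result=no-match
fi
echo "sample line: $result"
```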
3. Deploy Kibana
bash
# Create the Kibana config directory
mkdir -p /data/kibana/config
# Create the config file
cat > /data/kibana/config/kibana.yml << EOF
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.1.14:9200"]
kibana.index: ".kibana"
EOF
# Start the Kibana container
docker run -d --name kibana -p 5601:5601 \
-v /data/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml \
--link elasticsearch:elasticsearch \
--restart=always kibana:7.14.0
IV. App Layer Deployment (192.168.1.11 and 192.168.1.12)
1. Deploy Filebeat (Log Collection)
bash
# Create the Filebeat config directory
mkdir -p /data/filebeat/config
# Create the config file (fields_under_root puts log_type at the event root,
# which is what the Logstash filter's [log_type] conditional expects)
cat > /data/filebeat/config/filebeat.yml << EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/logs/*.log
  tags: ["spring-log"]
  fields:
    log_type: spring
  fields_under_root: true
output.logstash:
  hosts: ["192.168.1.14:5044"]
processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
EOF
# Start the Filebeat container
docker run -d --name filebeat \
-v /data/filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml \
-v /data/logs:/data/logs \
--restart=always elastic/filebeat:7.14.0
2. Deploy the Spring Cloud Microservices
bash
# Create the application and log directories
mkdir -p /data/apps /data/logs
# Upload the Spring Cloud application jars to /data/apps (the gateway service is the example here)
# Assume gateway.jar has already been uploaded to /data/apps
# Create the startup script
cat > /data/apps/start.sh << EOF
#!/bin/bash
nohup java -jar /data/apps/gateway.jar --spring.profiles.active=prod \
--spring.datasource.url=jdbc:mysql://192.168.1.13:3306/cloud_db \
--spring.redis.cluster.nodes=192.168.1.13:7001,192.168.1.13:7002,192.168.1.13:7003 \
> /data/logs/gateway.log 2>&1 &
EOF
# Start the application
chmod +x /data/apps/start.sh
/data/apps/start.sh
# Enable start on boot
echo "/data/apps/start.sh" >> /etc/rc.local
chmod +x /etc/rc.local
Note: run the same steps on 192.168.1.12 to deploy the other microservices (user-service, order-service, etc.)
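A matching stop helper is handy during upgrades; a sketch (hypothetical, and it only knows about the gateway.jar example used above):

```shell
# Find the running gateway.jar process (the [g] trick keeps grep from matching itself)
pid=$(ps -ef | grep '[g]ateway.jar' | awk '{print $2}' | head -n 1)
if [ -n "$pid" ]; then
  kill "$pid"
  status="stopped pid $pid"
else
  status="gateway not running"
fi
echo "$status"
```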
V. Load Layer Deployment (192.168.1.10)
1. Deploy Nginx
bash
# Create the Nginx config directories
mkdir -p /data/nginx/{conf,conf.d,logs,html}
# Create the main config file
cat > /data/nginx/conf/nginx.conf << EOF
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '\$remote_addr - \$remote_user [\$time_local] "\$request" '
'\$status \$body_bytes_sent "\$http_referer" '
'"\$http_user_agent" "\$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
include /etc/nginx/conf.d/*.conf;
}
EOF
# Create the application proxy config
cat > /data/nginx/conf.d/cloud.conf << EOF
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://app_servers;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
}
}
upstream app_servers {
server 192.168.1.11:8080;
server 192.168.1.12:8080;
}
EOF
# Start the Nginx container
docker run -d --name nginx -p 80:80 \
-v /data/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
-v /data/nginx/conf.d:/etc/nginx/conf.d \
-v /data/nginx/logs:/var/log/nginx \
-v /data/nginx/html:/usr/share/nginx/html \
--restart=always nginx:1.21
2. Deploy Keepalived (High Availability)
bash
# Install Keepalived
yum install -y keepalived
# Create the config file
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
interval 2
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.100/24
}
track_script {
check_nginx
}
}
EOF
# Create the Nginx health-check script
cat > /etc/keepalived/check_nginx.sh << EOF
#!/bin/bash
if [ \$(ps -ef | grep nginx | grep -v grep | wc -l) -eq 0 ]; then
docker restart nginx # try restarting the Nginx container first
sleep 3
if [ \$(ps -ef | grep nginx | grep -v grep | wc -l) -eq 0 ]; then
systemctl stop keepalived # still down, so stop Keepalived to trigger a failover
fi
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
# Start Keepalived
systemctl start keepalived
systemctl enable keepalived
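The keepalived.conf above configures the MASTER node only, while the high-availability test relies on a standby Nginx node taking over the VIP. On that standby the same file is reused with the state and priority lowered; a sketch (the standby host and its interface name are assumptions):

```
! /etc/keepalived/keepalived.conf on the standby Nginx node
global_defs {
    router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP              # MASTER on the primary
    interface eth0
    virtual_router_id 51      # must match the master
    priority 90               # lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        check_nginx
    }
}
```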
VI. System Verification and Testing
- Access check: open the application through the VIP (192.168.1.100) and confirm it responds normally
- Load-balancing test:
bash
# Hit the VIP repeatedly and observe requests being spread across both app servers
for i in {1..10}; do curl http://192.168.1.100; done
- High-availability test:
bash
# Stop the Nginx container on the active node
docker stop nginx
# Check whether the VIP has floated away (requires a standby node running Keepalived)
ip addr show eth0
- Log-collection test:
bash
# Generate a test log entry
echo "2023-10-07 10:00:00 INFO com.example.demo.IndexController - Test log message" >> /data/logs/gateway.log
# Then confirm in Kibana (http://192.168.1.14:5601) that the entry was collected
- Database replication test:
bash
# Insert a row on the master (assumes database testdb and table t1 already exist)
docker exec -it mysql-master mysql -uroot -pRoot@123456 -e "insert into testdb.t1 values(1,'test')"
# Query the slave to verify the row replicated
docker exec -it mysql-slave mysql -uroot -pRoot@123456 -e "select * from testdb.t1"
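For the high-availability test, a small helper (hypothetical; the VIP value comes from the Keepalived config above) makes the VIP check scriptable on either node:

```shell
# Report whether the Keepalived VIP is currently bound to this node
VIP=192.168.1.100
if ip -4 addr show 2>/dev/null | grep -q "inet $VIP/"; then
  vip_state="present"
else
  vip_state="absent"
fi
echo "VIP $VIP is $vip_state on this node"
```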
VII. Monitoring and Operations Recommendations
- Monitoring: deploy Prometheus + Grafana on log-server to watch the health of every component
- Backup strategy:
  - MySQL: daily full backup plus binlog-based incremental backups
  - Important config files: back up regularly to object storage
- Security hardening:
  - Configure firewall rules on every server and open only the necessary ports
  - Use strong passwords for MySQL and Redis and restrict access by source IP
  - Apply OS and component security patches regularly
- Log management:
  - Configure Elasticsearch index lifecycle management to purge expired log indices automatically
  - Set up alerts on key log events (e.g. ERROR-level entries)
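The MySQL backup recommendation above can be sketched as a small script plus a cron entry. This is a sketch under stated assumptions: the paths, the 7-day retention, and the 02:00 schedule are all choices, not part of the original plan.

```shell
# Write a daily full-backup script for the master container
backup_root=/data/backup
mkdir -p "$backup_root" 2>/dev/null || { backup_root=$PWD/backup; mkdir -p "$backup_root"; }
cat > "$backup_root/mysql_backup.sh" << 'EOF'
#!/bin/bash
backup_dir=/data/backup/mysql
mkdir -p "$backup_dir"
# full logical dump with binlog coordinates recorded for point-in-time recovery
docker exec mysql-master mysqldump -uroot -pRoot@123456 --all-databases \
  --single-transaction --master-data=2 > "$backup_dir/full-$(date +%F).sql"
# prune dumps older than 7 days
find "$backup_dir" -name 'full-*.sql' -mtime +7 -delete
EOF
chmod +x "$backup_root/mysql_backup.sh"
bash -n "$backup_root/mysql_backup.sh" && echo "backup script syntax ok"
# schedule it daily at 02:00 (uncomment on the real server):
# echo "0 2 * * * root $backup_root/mysql_backup.sh" >> /etc/crontab
```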