I. Cluster Base Environment
1. Prepare the Virtual Machines
| IP Address | Hostname | CPU | Memory | Disk | Role |
|---|---|---|---|---|---|
| 192.168.8.100 | node1 | 2 core | 4G | 20G+ | ES node |
| 192.168.8.101 | node2 | 2 core | 4G | 20G+ | ES node |
| 192.168.8.102 | node3 | 2 core | 4G | 20G+ | ES node |
ISO image:
bash
CentOS-7-x86_64-DVD-2009
OS version:
bash
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
2. Configure Cluster Passwordless Login and the Sync Script
On node1:
bash
[root@localhost ~]# cat >> /etc/hosts << EOF
192.168.8.100 node1
192.168.8.101 node2
192.168.8.102 node3
EOF
[root@localhost ~]# cat /etc/hosts
192.168.8.100 node1
192.168.8.101 node2
192.168.8.102 node3
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# bash
On node2:
bash
[root@localhost ~]# cat >> /etc/hosts << EOF
192.168.8.100 node1
192.168.8.101 node2
192.168.8.102 node3
EOF
[root@localhost ~]# cat /etc/hosts
192.168.8.100 node1
192.168.8.101 node2
192.168.8.102 node3
[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# bash
On node3:
bash
[root@localhost ~]# cat >> /etc/hosts << EOF
192.168.8.100 node1
192.168.8.101 node2
192.168.8.102 node3
EOF
[root@localhost ~]# cat /etc/hosts
192.168.8.100 node1
192.168.8.101 node2
192.168.8.102 node3
[root@localhost ~]# hostnamectl set-hostname node3
[root@localhost ~]# bash
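Before generating keys, it can be worth confirming that all three hostnames resolve on each node. A minimal check (shown here from node1) against the /etc/hosts entries added above:
bash
# Each hostname should resolve to the expected 192.168.8.x address
for h in node1 node2 node3; do
  getent hosts $h
done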
Generate an SSH key pair on node1:
bash
[root@node1 ~]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa -q
[root@node1 ~]# ll ~/.ssh/
total 8
-rw-------. 1 root root 1675 Dec 7 21:21 id_rsa
-rw-r--r--. 1 root root 392 Dec 7 21:21 id_rsa.pub
From node1, configure passwordless login to all cluster nodes:
bash
[root@node1 ~]# for ((host_id=1;host_id<=3;host_id++));do ssh-copy-id node${host_id};done
Connection test:
bash
[root@node1 ~]# ssh root@node1
Last login: Sun Dec 7 19:46:35 2025 from 192.168.8.1
[root@node1 ~]# exit
logout
Connection to node1 closed.
[root@node1 ~]# ssh root@node2
Last login: Sun Dec 7 19:46:42 2025 from 192.168.8.1
[root@node2 ~]# exit
logout
Connection to node2 closed.
[root@node1 ~]# ssh root@node3
Last login: Sun Dec 7 20:02:29 2025 from 192.168.8.1
[root@node3 ~]# exit
logout
Connection to node3 closed.
[root@node1 ~]#
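To verify the trust non-interactively (useful before relying on it in scripts), you can force ssh to fail rather than prompt for a password; a quick sketch:
bash
# BatchMode=yes makes ssh error out instead of asking for a password
for host_id in 1 2 3; do
  ssh -o BatchMode=yes root@node${host_id} hostname
done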
3. Switch the Yum Repositories
Run on all nodes (node1, node2, node3):
bash
[root@node1 ~]# rm -rf /etc/yum.repos.d/*
[root@node1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@node1 ~]# yum makecache
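Because passwordless SSH is already in place, the same repository change can also be pushed to node2 and node3 from node1 instead of repeating it by hand; a sketch using the same Aliyun mirror:
bash
for host_id in 2 3; do
  ssh node${host_id} "rm -rf /etc/yum.repos.d/* && \
    curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo && \
    yum makecache"
done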
4. Write the Sync Script
Install the rsync data synchronization tool on all nodes (node1, node2, node3):
bash
[root@node1 ~]# yum -y install rsync
Write the sync script on node1:
bash
[root@node1 ~]# vi /usr/local/sbin/data_rsync.sh
[root@node1 ~]# cat /usr/local/sbin/data_rsync.sh
#!/bin/bash
# Author: Jason Yin
if [ $# -ne 1 ];then
  echo "Usage: $0 /path/to/file (absolute path)"
  exit 1
fi
# Make sure the file or directory exists
if [ ! -e "$1" ];then
  echo "[ $1 ] dir or file not found!"
  exit 1
fi
# Parent directory
fullpath=`dirname "$1"`
# Base name of the file or directory
basename=`basename "$1"`
# Change into the parent directory
cd "$fullpath"
for ((host_id=2;host_id<=3;host_id++))
do
  # Switch terminal output to green
  tput setaf 2
  echo ===== rsyncing node${host_id}: $basename =====
  # Restore the default terminal color
  tput setaf 7
  # Sync the data to the other two nodes
  rsync -az "$basename" `whoami`@node${host_id}:"$fullpath"
  if [ $? -eq 0 ];then
    echo "Command executed successfully!"
  fi
done
[root@node1 ~]# chmod +x /usr/local/sbin/data_rsync.sh
[root@node1 ~]# ll /usr/local/sbin/data_rsync.sh
-rwxr-xr-x. 1 root root 715 Dec 7 23:11 /usr/local/sbin/data_rsync.sh
[root@node1 ~]# mkdir /tmp/test
[root@node1 ~]# echo 111 > /tmp/test/1.txt
[root@node1 ~]# cat /tmp/test/1.txt
111
[root@node1 ~]# data_rsync.sh /tmp/test/
===== rsyncing node2: test =====
Command executed successfully!
===== rsyncing node3: test =====
Command executed successfully!
[root@node1 ~]#
Verify the sync on node2 and node3:
bash
[root@node2 ~]# cat /tmp/test/1.txt
111
[root@node2 ~]#
[root@node3 ~]# cat /tmp/test/1.txt
111
[root@node3 ~]#
5. Cluster Time Synchronization
Install chrony on all nodes (node1, node2, node3):
bash
[root@node1 ~]# yum -y install ntpdate chrony
[root@node1 ~]# vim /etc/chrony.conf
[root@node1 ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp.aliyun.com iburst
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp3.aliyun.com iburst
server ntp4.aliyun.com iburst
server ntp5.aliyun.com iburst
...
[root@node1 ~]# systemctl enable --now chronyd
[root@node1 ~]# systemctl restart chronyd
[root@node1 ~]# systemctl status chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2025-12-07 15:35:30 CST; 10s ago
Docs: man:chronyd(8)
man:chrony.conf(5)
Process: 2354 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
Process: 2350 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 2352 (chronyd)
CGroup: /system.slice/chronyd.service
└─2352 /usr/sbin/chronyd
Dec 07 15:35:30 node1 systemd[1]: Starting NTP client/server...
Dec 07 15:35:30 node1 chronyd[2352]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTE...DEBUG)
Dec 07 15:35:30 node1 chronyd[2352]: Frequency 0.000 +/- 1000000.000 ppm read from /var/lib/chrony/drift
Dec 07 15:35:30 node1 systemd[1]: Started NTP client/server.
Dec 07 15:35:35 node1 chronyd[2352]: Selected source 203.107.6.88
Dec 07 15:35:36 node1 chronyd[2352]: Selected source 223.4.249.80
Hint: Some lines were ellipsized, use -l to show in full.
[root@node1 ~]#
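To confirm that each node has actually locked on to an upstream time source, check the chrony source list and the system clock state on every node:
bash
# The line starting with '^*' is the source chronyd is currently synchronized to
chronyc sources -v
# Look for "NTP synchronized: yes"
timedatectl status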
6. Disable the Firewall and SELinux
Disable the firewall and SELinux on all nodes (node1, node2, node3):
bash
[root@node1 ~]# systemctl disable --now firewalld && systemctl is-enabled firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
disabled
[root@node1 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
Dec 07 22:34:42 node1 systemd[1]: Starting firewalld - dynamic firewall daemon...
Dec 07 22:34:43 node1 systemd[1]: Started firewalld - dynamic firewall daemon.
Dec 07 22:34:43 node1 firewalld[709]: WARNING: AllowZoneDrifting is enabled. This is considered an insecure confi...t now.
Dec 07 22:42:51 node1 systemd[1]: Stopping firewalld - dynamic firewall daemon...
Dec 07 22:42:52 node1 systemd[1]: Stopped firewalld - dynamic firewall daemon.
Hint: Some lines were ellipsized, use -l to show in full.
[root@node1 ~]# sed -ri 's#(SELINUX=)enforcing#\1disabled#' /etc/selinux/config
[root@node1 ~]# grep ^SELINUX= /etc/selinux/config
SELINUX=disabled
[root@node1 ~]# setenforce 0
[root@node1 ~]# getenforce
Permissive
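The same two changes also need to be applied on node2 and node3. One way to do that from node1 over SSH, as a sketch (setenforce reports an error on a node where SELinux is already disabled, which can be ignored):
bash
for host_id in 2 3; do
  ssh node${host_id} "systemctl disable --now firewalld; \
    sed -ri 's#(SELINUX=)enforcing#\1disabled#' /etc/selinux/config; \
    setenforce 0"
done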
II. Elasticsearch Single-Node Deployment
1. Download the Specified ES Version
Reference: https://www.elastic.co/cn/downloads/elasticsearch
bash
[root@node1 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.3-linux-x86_64.tar.gz
[root@node1 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.3-x86_64.rpm
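Only the RPM is used below; the tarball is optional. If you want to verify the download, Elastic publishes a SHA-512 checksum next to each artifact (assuming the usual .sha512 naming):
bash
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.3-x86_64.rpm.sha512
sha512sum -c elasticsearch-7.17.3-x86_64.rpm.sha512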
2. Deploy a Single Elasticsearch Node
bash
[root@node1 ~]# ll
total 566556
-rw-------. 1 root root 1272 Dec 7 2025 anaconda-ks.cfg
-rw-r--r-- 1 root root 311777007 Apr 20 2022 elasticsearch-7.17.3-linux-x86_64.tar.gz
-rw-r--r-- 1 root root 239023208 Dec 7 17:15 elasticsearch-7.17.3-x86_64.rpm
[root@node1 ~]# yum -y localinstall elasticsearch-7.17.3-x86_64.rpm
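With only 4G of RAM per VM, it is worth pinning the JVM heap explicitly rather than relying on auto-sizing. For an RPM install of ES 7.17, extra JVM flags can be dropped into /etc/elasticsearch/jvm.options.d/; a minimal sketch capping the heap at 2g (the file name heap.options is arbitrary):
bash
cat > /etc/elasticsearch/jvm.options.d/heap.options << EOF
-Xms2g
-Xmx2g
EOF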
3. Modify the Configuration File
bash
[root@node1 ~]# egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml
cluster.name: ELK
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["192.168.8.100"]
[root@node1 ~]# systemctl restart elasticsearch
[root@node1 ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:22 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 [::]:9200 [::]:*
LISTEN 0 128 [::]:9300 [::]:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 100 [::1]:25 [::]:*
bash
Parameter notes:
cluster.name:
The cluster name. Defaults to "elasticsearch" if not set; the log file prefix is also the cluster name.
node.name:
The node name. It can be anything, but the current hostname is recommended; it must be unique within the cluster.
path.data:
Data path.
path.logs:
Log path.
network.host:
IP address the ES service listens on.
discovery.seed_hosts:
Host list for service discovery. For a single-node deployment this can simply match "network.host".
bash
[root@node1 ~]# curl 192.168.8.100:9200
{
"name" : "node-1",
"cluster_name" : "ELK",
"cluster_uuid" : "APeGMCCyQVGfwMNTmuugGg",
"version" : {
"number" : "7.17.3",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "5ad023604c8d7416c9eb6c0eadb62b14e766caff",
"build_date" : "2022-04-19T08:11:19.070913226Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
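A couple of extra sanity checks against the standard _cat endpoints can also be run at this point:
bash
curl "192.168.8.100:9200/_cat/health?v"
curl "192.168.8.100:9200/_cat/nodes?v"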
III. Elasticsearch Distributed Cluster Deployment
1. Modify the Configuration Files
On node1:
bash
[root@node1 ~]# egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml
cluster.name: ELK
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["192.168.8.100", "192.168.8.101","192.168.8.102"]
cluster.initial_master_nodes: ["192.168.8.100", "192.168.8.101","192.168.8.102"]
[root@node1 ~]#
Tip:
"node.name" must differ on every node; using the corresponding hostname is recommended.
On node2:
bash
[root@node2 ~]# egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml
cluster.name: ELK
node.name: node-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["192.168.8.100","192.168.8.101","192.168.8.102" ]
cluster.initial_master_nodes: ["192.168.8.100", "192.168.8.101","192.168.8.102"]
On node3:
bash
[root@node3 ~]# egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml
cluster.name: ELK
node.name: node-3
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["192.168.8.100","192.168.8.101","192.168.8.102"]
cluster.initial_master_nodes: ["192.168.8.100", "192.168.8.101","192.168.8.102"]
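Instead of editing the file by hand on every node, node1's configuration can also be pushed out with the data_rsync.sh script written earlier and node.name fixed up remotely. A sketch, assuming the elasticsearch RPM is already installed on node2 and node3:
bash
# Push node1's config to node2 and node3
data_rsync.sh /etc/elasticsearch/elasticsearch.yml
# node.name must be unique, so adjust it on the other nodes
ssh node2 "sed -i 's#^node.name:.*#node.name: node-2#' /etc/elasticsearch/elasticsearch.yml"
ssh node3 "sed -i 's#^node.name:.*#node.name: node-3#' /etc/elasticsearch/elasticsearch.yml"
# Restart all three nodes and confirm they joined the same cluster
for host_id in 1 2 3; do ssh node${host_id} "systemctl restart elasticsearch"; done
curl "192.168.8.100:9200/_cat/nodes?v"
curl "192.168.8.100:9200/_cluster/health?pretty"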