Filebeat collects nginx logs and ships them to Elasticsearch, and the results are ultimately displayed in Kibana.
The project requirement is to filter out the 10 IPs with the most site visits within a day or within a 15-minute window, and analyze whether those IPs represent an attack.
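As a preview of what this pipeline enables, here is a minimal sketch of that query against Elasticsearch. It assumes the index is named nginx-access (created later in this guide) and that the client IP is parsed into the ECS field source.ip; check the actual field name in Kibana's Discover before relying on it.

```bash
# Sketch only: top 10 client IPs over the last 15 minutes
# (assumes index nginx-access and client IP field source.ip)
curl -s -H 'Content-Type: application/json' \
  'http://192.168.244.129:9200/nginx-access/_search?pretty' -d '{
  "size": 0,
  "query": { "range": { "@timestamp": { "gte": "now-15m" } } },
  "aggs": {
    "top_ips": { "terms": { "field": "source.ip", "size": 10 } }
  }
}'
```

For the daily view, change now-15m to now-1d.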
EFK architecture overview
EFK consists of three open-source components: Elasticsearch, Filebeat, and Kibana. Each plays an important role in log management and data analysis; together they complement each other seamlessly and efficiently cover a wide range of use cases, making EFK one of today's mainstream log analysis solutions.
- Elasticsearch: stores and searches the logs. It is a distributed, highly scalable, near-real-time search and analytics engine built on Lucene, exposing full-text search and analytics through a RESTful web interface. Elasticsearch can efficiently store and index many types of data and supports fast search and real-time analysis.
- Filebeat: collects the logs. Developed by Elastic and designed specifically for log shipping, it is lightweight and easy to deploy. Filebeat reads data directly from log files on the server, performs basic aggregation and light processing, and then forwards it to Elasticsearch or other middleware.
- Kibana: presents the log data. It is an open-source data analysis and visualization platform offering rich chart types such as bar, line, and pie charts to help users understand data graphically. It also provides powerful data exploration features, letting users query and filter data with Elasticsearch's query language.
Memory | IP | Software |
---|---|---|
64G | 192.168.244.128 | nginx / filebeat |
128G | 192.168.244.129 | elasticsearch / kibana |
Basic environment preparation
Both machines need the following configuration:
systemctl disable --now firewalld
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
setenforce 0
yum install -y epel-release
yum makecache
yum -y install net-tools vim lrzsz wget yum-utils telnet unzip tar iptables-services
systemctl restart iptables
systemctl enable iptables
vim /etc/security/limits.conf
* soft stack 102400
* soft nproc 655360
* hard nproc 655360
* soft nofile 655360
* hard nofile 655360
* - as unlimited
* - fsize unlimited
* - memlock unlimited
# Disable the swap partition
free -h
swapoff -a && sed -i '/swap/s/^.*$/#&/' /etc/fstab
vim /etc/sysctl.conf
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 30
net.ipv4.tcp_keepalive_probes=2
net.ipv4.tcp_keepalive_intvl=2
fs.file-max = 655350
vm.max_map_count = 262144
sysctl -p
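As a quick sanity check (not part of the original steps), confirm that the two settings Elasticsearch depends on most have taken effect:

```bash
# vm.max_map_count must be at least 262144 for Elasticsearch to start
sysctl vm.max_map_count
sysctl fs.file-max
```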
vi /etc/profile
ulimit -u 512880
ulimit -SHn 655350
ulimit -d unlimited
ulimit -m unlimited
ulimit -s 819200
ulimit -t unlimited
ulimit -v unlimited
source /etc/profile
ulimit -a
The following steps are performed on 192.168.244.129
Install Elasticsearch
1. Create the user and download the package
Create the es user and grant ownership
useradd -u 9200 esuser
mkdir -p /data/elasticsearch/{data,logs,temp}
Download the package:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.17.2-linux-x86_64.tar.gz
Extract and install
tar -xf elasticsearch-8.17.2-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local/
mv elasticsearch-8.17.2/ elasticsearch
chown -R esuser:esuser elasticsearch /data/elasticsearch/
cd elasticsearch/config/
vim jvm.options
-Xms10g
-Xmx10g
vim elasticsearch.yml
cluster.name: efk-devops # cluster name
node.name: efk-es # node name
path.data: /data/elasticsearch/data # data storage location
path.logs: /data/elasticsearch/logs # log storage location
network.host: 0.0.0.0 # bind address; 0.0.0.0 accepts connections from any IP
# Allow cross-origin (CORS) access
http.port: 9200 # HTTP access port
transport.profiles.default.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: "*"
#http.cors.allow-methods: "GET"
cluster.initial_master_nodes: ["efk-es"]
action.destructive_requires_name: false
discovery.seed_hosts: ["192.168.244.129:9300"] # cluster members
# bootstrap.memory_lock: false
# bootstrap.system_call_filter: false
# To disable security authentication, use the following settings
xpack.security.enabled: false
xpack.security.transport.ssl.enabled: false
# Disable the GeoIP database downloader
ingest.geoip.downloader.enabled: false
xpack.monitoring.collection.enabled: true
# Percentage of the Java heap used for the query cache; the default is 10%
indices.queries.cache.size: 20%
# The request cache is managed per node; its default maximum size is 1% of the heap
indices.requests.cache.size: 2%
# Index write buffer that holds newly indexed documents; the default size is 10% of the heap
indices.memory.index_buffer_size: 10%
# Queue size of the write thread pool; tune gradually based on workload and cluster configuration
thread_pool.write.queue_size: 200
2. Install the IK analyzer plugin
# Install it automatically with the command below
/usr/local/elasticsearch/bin/elasticsearch-plugin install https://get.infini.cloud/elasticsearch/analysis-ik/8.17.2
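To confirm the plugin actually landed, the plugin CLI can list what is installed:

```bash
# Should print analysis-ik
/usr/local/elasticsearch/bin/elasticsearch-plugin list
```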
3. Start Elasticsearch
# Re-apply ownership
chown -R esuser:esuser /usr/local/elasticsearch
# Start Elasticsearch as esuser
runuser -l esuser -c "/usr/local/elasticsearch/bin/elasticsearch -d"
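Once the process is up (it may take a minute), a quick check that the node responds and that the IK analyzer is usable; this sketch assumes security is disabled as configured above, so no credentials are needed:

```bash
# Cluster health: green or yellow is fine for a single-node setup
curl -s 'http://192.168.244.129:9200/_cluster/health?pretty'

# Exercise the IK analyzer installed earlier
curl -s -H 'Content-Type: application/json' \
  'http://192.168.244.129:9200/_analyze?pretty' -d '{
  "analyzer": "ik_max_word",
  "text": "中华人民共和国"
}'
```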
Install Kibana
Create the kibana user and grant ownership
useradd -u 5601 kibana
mkdir /data/kibana
mkdir /var/log/kibana/
chown -R kibana:kibana /data/kibana
chown -R kibana:kibana /var/log/kibana/
# Download and extract Kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.17.2-linux-x86_64.tar.gz
tar -xf kibana-8.17.2-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local
mv kibana-8.17.2 kibana
chown -R kibana:kibana /usr/local/kibana/
cd kibana
# Edit the configuration file
vim config/kibana.yml
logging.appenders.default:
  type: file
  fileName: /var/log/kibana/kibana.log
  layout:
    type: json
path.data: /data/kibana/data
server.port: 5601
server.host: "0.0.0.0"
## Either the local host IP or 0.0.0.0 works; the host IP is preferred
server.name: "efk-kibana-devops"
## The name can be anything you like
### Elasticsearch cluster settings
elasticsearch.hosts: ["http://192.168.244.129:9200"]
pid.file: /usr/local/kibana/kibana.pid
elasticsearch.requestTimeout: 99999
i18n.locale: "zh-CN"
Start Kibana
nohup /usr/local/kibana/bin/kibana --allow-root >> /var/log/kibana/nohup.out 2>&1 &
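Kibana takes a short while to come up; its status API is a convenient check:

```bash
# Returns JSON describing Kibana's overall status once it is ready
curl -s 'http://192.168.244.129:5601/api/status' | head -c 300
```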
The following steps are performed on 192.168.244.128
Install Filebeat
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.17.2-linux-x86_64.tar.gz
or
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.17.2-linux-x86_64.tar.gz
tar -xf filebeat-8.17.2-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local/
mv filebeat-8.17.2-linux-x86_64/ filebeat
cd filebeat
Create your own configuration file; there is no need to use the stock filebeat.yml.
vim b_nginx.yml
# Module: nginx
# Docs: https://www.elastic.co/guide/en/beats/filebeat/8.17/filebeat-module-nginx.html
# What makes this file different from the ones below is the index it creates: the index name has no date suffix
filebeat.modules:
  - module: nginx
    # Access logs
    access:
      # Set to true to enable collection of access logs
      enabled: true
      # Set custom paths for the log files. If left empty,
      # Filebeat will choose the paths depending on your OS.
      # Location of the log files to collect
      var.paths: ["/var/log/nginx/access.log"]
    # Error logs
    # Whether to collect error logs; default is false (not collected)
    error:
      enabled: false
      # Set custom paths for the log files. If left empty,
      # Filebeat will choose the paths depending on your OS.
      var.paths: ["/var/log/nginx/error.log"]
    # Ingress-nginx controller logs. This is disabled by default. It could be used in Kubernetes environments to parse ingress-nginx logs
    ingress_controller:
      enabled: false
output.elasticsearch:
  # Ship the collected logs to Elasticsearch on 192.168.244.129
  hosts: ["http://192.168.244.129:9200"]
  # Index name
  index: "nginx-access"
setup.template.name: "nginx-access-template" # Define a name for the template
setup.template.pattern: "nginx-access" # Pattern that matches your index name
# Enable Filebeat's own log files
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0640
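Before starting Filebeat it is worth validating the configuration and the connection to Elasticsearch; the same commands work for the other configuration files below:

```bash
# Validate the config file and test the Elasticsearch output
./filebeat test config -c b_nginx.yml
./filebeat test output -c b_nginx.yml
```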
vim c_nginx.yml
# Module: nginx
# Docs: https://www.elastic.co/guide/en/beats/filebeat/8.17/filebeat-module-nginx.html
# What makes this file different from the others is the index it creates: the index name is split by date
filebeat.modules:
  - module: nginx
    # Access logs
    access:
      enabled: true
      # Set custom paths for the log files. If left empty,
      # Filebeat will choose the paths depending on your OS.
      var.paths: ["/var/log/nginx/access.log"]
    # Error logs
    error:
      enabled: false
      # Set custom paths for the log files. If left empty,
      # Filebeat will choose the paths depending on your OS.
      var.paths: ["/var/log/nginx/error.log"]
    # Ingress-nginx controller logs. This is disabled by default. It could be used in Kubernetes environments to parse ingress-nginx logs
    ingress_controller:
      enabled: false
output.elasticsearch:
  hosts: ["http://192.168.244.129:9200"]
  index: "nginx-access-%{+YYYY.MM.dd}"
setup.template.name: "nginx-access-template" # Define a name for the template
setup.template.pattern: "nginx-access-*" # Pattern that matches your index name
vim d_nginx.yml
# Filebeat Configuration
# Define the NGINX module
# No index name is specified, so the default is used; the default index name is filebeat-<version>, e.g. filebeat-8.17.2
filebeat.modules:
  - module: nginx
    access:
      enabled: true
      var.paths: ["/var/log/nginx/access.log"]
    error:
      enabled: false
# Output to Elasticsearch
output.elasticsearch:
  hosts: ["http://192.168.244.129:9200"]
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0640
Start Filebeat
./filebeat -c b_nginx.yml
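The command above runs Filebeat in the foreground. To keep it running after the shell session ends, one simple option is a background run (a sketch only; a systemd unit would be the more robust choice):

```bash
# Run Filebeat in the background; its own logs still go to /var/log/filebeat per the config
mkdir -p /var/log/filebeat
cd /usr/local/filebeat
nohup ./filebeat -c b_nginx.yml >> /var/log/filebeat/nohup.out 2>&1 &
```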
nginx configuration
Configure the nginx log format
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    # ... the rest of the http block stays unchanged
}
nginx -t
nginx -s reload
The final log output should look like this:
192.168.244.1 - - [06/Mar/2025:15:04:03 +0800] "GET /index.html HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:135.0) Gecko/20100101 Firefox/135.0" "-"
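At this point the index should be receiving documents; a quick way to confirm from any host that can reach Elasticsearch:

```bash
# Lists the nginx-access index (or the dated/default variants, depending on which config was used)
curl -s 'http://192.168.244.129:9200/_cat/indices/nginx-access*?v'
curl -s 'http://192.168.244.129:9200/_cat/indices/filebeat*?v'
```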
Create visualizations
Create charts from the collected logs
1. Create a data view
Click Discover

Select the index name you created; here I am using nginx-access
Set both the name and the index pattern to nginx-access
Click Save in the upper-right corner
2. Create a visualization
Open the Kibana page ---> click Visualize Library

Once inside ---> click Create visualization ---> click Lens

Prepare to create a bar chart

Click the horizontal axis and configure it as follows

Click the vertical axis

Click Save in the upper-right corner.
Finally, view the dashboard
Click Kibana ---> Dashboards
Final result
