Installing Ansible
Omitted here; see my earlier post "Ansible 环境搭建" (Ansible environment setup).
Creating the Ansible Playbook
On the control node (192.168.92.19), run:
cd /root/ansible-playbook
# Create the role directory structure
mkdir -p roles/elk/{tasks,handlers,files/{elasticsearch,kibana,logstash/conf.d}}
# Create the task files (empty for now; written later)
touch roles/elk/tasks/{main,elasticsearch,kibana,logstash}.yml
# Create the handlers file
touch roles/elk/handlers/main.yml
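Before filling in the tasks, it is worth sanity-checking that the brace expansions above produced the layout you expect. A minimal sketch that rebuilds the same skeleton in a throwaway directory and verifies it (the temp directory is illustrative; the real playbook lives in /root/ansible-playbook):

```shell
# Recreate the role skeleton in a scratch directory and verify every path.
workdir=$(mktemp -d)
cd "$workdir"

mkdir -p roles/elk/{tasks,handlers,files/{elasticsearch,kibana,logstash/conf.d}}
touch roles/elk/tasks/{main,elasticsearch,kibana,logstash}.yml
touch roles/elk/handlers/main.yml

# Fail fast if any expected file is missing.
for f in roles/elk/tasks/main.yml roles/elk/tasks/elasticsearch.yml \
         roles/elk/tasks/kibana.yml roles/elk/tasks/logstash.yml \
         roles/elk/handlers/main.yml; do
  test -f "$f" || { echo "missing: $f"; exit 1; }
done
test -d roles/elk/files/logstash/conf.d || { echo "missing conf.d"; exit 1; }
echo "skeleton OK"
```

If the expansion went wrong (e.g. the shell is not bash), the loop names the missing file instead of letting a later task fail obscurely.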
The project's directory tree:
[root@ansible ansible-playbook]# tree
.
├── roles
│   ├── docker_install
│   │   ├── files
│   │   │   └── docker-compose-Linux-x86_64
│   │   ├── handlers
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   ├── elk
│   │   ├── files
│   │   │   ├── elasticsearch
│   │   │   │   ├── elasticsearch.yml
│   │   │   │   └── jvm.options
│   │   │   ├── kibana
│   │   │   │   └── kibana.yml
│   │   │   └── logstash
│   │   │       ├── conf.d
│   │   │       │   └── test.conf
│   │   │       └── logstash.yml
│   │   ├── handlers
│   │   │   └── main.yml
│   │   └── tasks
│   │       ├── elasticsearch.yml
│   │       ├── kibana.yml
│   │       ├── logstash.yml
│   │       └── main.yml
│   ├── mysql
│   │   ├── files
│   │   │   └── init.sql
│   │   └── tasks
│   │       └── main.yml
│   ├── nginx
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── nginx.conf.j2
│   ├── prometheus
│   │   ├── files
│   │   │   └── docker-prometheus
│   │   │       ├── alertmanager
│   │   │       │   └── config.yml
│   │   │       ├── docker-compose.yml
│   │   │       ├── grafana
│   │   │       │   ├── config.monitoring
│   │   │       │   └── provisioning
│   │   │       └── prometheus
│   │   │           ├── alert.yml
│   │   │           ├── prometheus.yml
│   │   │           └── prometheus.yml.back
│   │   └── tasks
│   │       └── main.yml
│   ├── redis
│   │   └── tasks
│   │       └── main.yml
│   ├── springboot
│   │   ├── files
│   │   │   └── Springbootdemo-0.0.1-SNAPSHOT.jar
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── Dockerfile.j2
│   ├── system_init
│   │   └── tasks
│   │       └── main.yml
│   └── vue
│       ├── files
│       │   └── dist
│       │       ├── assets
│       │       │   ├── 1-DxVeaUtM.jpg
│       │       │   ├── 2-U6Qjex4J.jpg
│       │       │   ├── chichi03-DW8jof7n.jpg
│       │       │   ├── index-CYqzScuv.js
│       │       │   └── index-Cp89o39-.css
│       │       ├── favicon.ico
│       │       └── index.html
│       └── tasks
│           └── main.yml
├── site.yml
├── site2.yml
└── site3.yml
40 directories, 40 files
Automated Elasticsearch Deployment
Copy /etc/elasticsearch/elasticsearch.yml and /etc/elasticsearch/jvm.options from the machine you deployed by hand (e.g. 192.168.92.14) into that directory:
scp root@192.168.92.14:/etc/elasticsearch/elasticsearch.yml /root/ansible-playbook/roles/elk/files/elasticsearch/
scp root@192.168.92.14:/etc/elasticsearch/jvm.options /root/ansible-playbook/roles/elk/files/elasticsearch/
For background, see my post "Elasticsearch 7.17.10 双节点集群部署实战(基于 Rocky Linux 9.6)" (a two-node cluster on Rocky Linux 9.6). Here we keep things simple and deploy a single-node Elasticsearch, so the elasticsearch.yml differs slightly from the two-node version in that post. The single-node elasticsearch.yml is as follows:
[root@es01 elasticsearch]# cat elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#        Before you set out to tweak and tune the configuration, make sure you
#        understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: es01
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/es/data
#
# Path to log files:
#
path.logs: /data/es/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["es01"]
discovery.type: single-node
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["es01"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Security ----------------------------------
#
# *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don't have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features.
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html
http.cors.enabled: true
http.cors.allow-origin: "*"
/etc/elasticsearch/jvm.options needs no changes.
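For reference, the heap settings live in that jvm.options file. We keep the packaged defaults in this deployment, but if you ever do need to cap the heap, the relevant lines look like this (values illustrative, not from this setup):

```
## JVM heap: keep -Xms and -Xmx equal, and at most ~50% of the host's RAM.
-Xms1g
-Xmx1g
```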
Writing roles/elk/tasks/elasticsearch.yml
Put the following in /root/ansible-playbook/roles/elk/tasks/elasticsearch.yml:
---
# 1. Install Java (Elasticsearch dependency)
- name: 安装 Java 1.8
  dnf:
    name: java-1.8.0-openjdk
    state: present

# 2. Download the Elasticsearch RPM
- name: 下载 Elasticsearch RPM
  get_url:
    url: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.10-x86_64.rpm
    dest: /tmp/elasticsearch.rpm

# 3. Import the Elasticsearch GPG key
- name: 导入 Elasticsearch GPG 密钥
  rpm_key:
    state: present
    key: https://artifacts.elastic.co/GPG-KEY-elasticsearch

# 4. Install Elasticsearch
- name: 安装 Elasticsearch
  yum:
    name: /tmp/elasticsearch.rpm
    state: present

# 5. Create the data directory
- name: 创建数据目录
  file:
    path: /data/es/data
    state: directory
    owner: elasticsearch
    group: elasticsearch
    mode: '0755'

# 6. Create the log directory
- name: 创建日志目录
  file:
    path: /data/es/logs
    state: directory
    owner: elasticsearch
    group: elasticsearch
    mode: '0755'

# 7. Copy elasticsearch.yml (single-node configuration)
- name: 复制 elasticsearch.yml
  copy:
    src: elasticsearch/elasticsearch.yml
    dest: /etc/elasticsearch/elasticsearch.yml
    owner: root
    group: elasticsearch
    mode: '0644'
  notify: restart elasticsearch

# 8. Start and enable elasticsearch
- name: 启动 elasticsearch 服务
  systemd:
    name: elasticsearch
    state: started
    enabled: yes
Adding the elasticsearch handler (roles/elk/handlers/main.yml)
Add the following to roles/elk/handlers/main.yml (create the file if it does not exist):
- name: restart elasticsearch
  systemd:
    name: elasticsearch
    state: restarted
Automated Logstash Deployment
Here we add another managed node alongside the control node 192.168.92.19, since this experiment needs two managed nodes: Elasticsearch and Kibana go on 192.168.92.20 (restored to its initial state from a snapshot), and Logstash goes on 192.168.92.21.
On 192.168.92.19, edit the inventory (vi /etc/ansible/hosts):
[webservers]
192.168.92.20
[elk_logstash]
192.168.92.21
# Set up SSH keys and host key trust for the connections
[root@ansible ansible-playbook]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:y+6gfqu03xxaA8npPIzUMCRJGcyJtv1TXAUmh8Pr0FA root@ansible
The key's randomart image is:
+---[RSA 3072]----+
| =+=. oE.+o. |
|..*o . ++. |
|. o o + + |
| . . * B |
| o X S |
| . B + . |
| o B * |
| . o.O o |
| .=++o= |
+----[SHA256]-----+
[root@ansible ansible-playbook]# ssh-copy-id root@192.168.92.21
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.92.21 (192.168.92.21)' can't be established.
ED25519 key fingerprint is SHA256:TtJq+VxSoGlLTkPvOJuAU+TJ+MZsYo6TuFHufmCOTeI.
This host key is known by the following other names/addresses:
~/.ssh/known_hosts:1: 192.168.92.20
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.92.21's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@192.168.92.21'"
and check to make sure that only the key(s) you wanted were added.
[root@ansible ansible-playbook]# ssh-copy-id root@192.168.92.20
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.92.20's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@192.168.92.20'"
and check to make sure that only the key(s) you wanted were added.
[root@ansible ansible-playbook]# ansible all -m ping
192.168.92.21 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
192.168.92.20 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
The Ansible control node can now reach both managed nodes.
Create the Logstash subtask file roles/elk/tasks/logstash.yml:
---
# 1. Install Java (Logstash dependency)
- name: 安装 Java 1.8 开发环境
  dnf:
    name: java-1.8.0-openjdk-devel
    state: present

# 2. Download the Logstash RPM
- name: 下载 Logstash RPM
  get_url:
    url: https://artifacts.elastic.co/downloads/logstash/logstash-7.17.10-x86_64.rpm
    dest: /tmp/logstash.rpm
    validate_certs: yes

# 3. Import the Elasticsearch GPG key (Logstash uses the same key)
- name: 导入 Elasticsearch GPG 密钥
  rpm_key:
    state: present
    key: https://artifacts.elastic.co/GPG-KEY-elasticsearch

# 4. Install Logstash
- name: 安装 Logstash
  yum:
    name: /tmp/logstash.rpm
    state: present

# 5. Add the Logstash binaries to PATH (via profile.d)
- name: 创建 profile.d 脚本
  copy:
    content: |
      export PATH=$PATH:/usr/share/logstash/bin
    dest: /etc/profile.d/logstash.sh
    mode: '0644'
    owner: root
    group: root

# 6. Ensure the config directory exists (present by default, but enforce ownership and mode)
- name: 确保配置目录存在
  file:
    path: /etc/logstash/conf.d
    state: directory
    owner: root
    group: logstash
    mode: '0755'

# 7. Copy the main Logstash configuration file
- name: 复制 logstash.yml
  copy:
    src: logstash/logstash.yml
    dest: /etc/logstash/logstash.yml
    owner: root
    group: logstash
    mode: '0644'
  notify: restart logstash

# 8. Copy the pipeline configs (example: a stdin-to-stdout test)
- name: 复制管道配置
  copy:
    src: logstash/conf.d/
    dest: /etc/logstash/conf.d/
    owner: root
    group: logstash
    mode: '0644'
  notify: restart logstash

# 9. Start and enable the Logstash service
- name: 启动 Logstash 服务
  systemd:
    name: logstash
    state: started
    enabled: yes
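The profile.d script in step 5 only takes effect for new login shells, which source every /etc/profile.d/*.sh. The mechanism can be checked locally with a small sketch (the temp directory stands in for /usr/share/logstash/bin; paths are illustrative):

```shell
# Reproduce the profile.d snippet locally and confirm it extends PATH.
demo_bin=$(mktemp -d)    # stand-in for /usr/share/logstash/bin
script=$(mktemp)         # stand-in for /etc/profile.d/logstash.sh

# Same shape as the content the playbook writes:
printf 'export PATH=$PATH:%s\n' "$demo_bin" > "$script"

# Login shells source profile.d scripts the same way:
. "$script"

case ":$PATH:" in
  *":$demo_bin:"*) echo "PATH extended" ;;
  *) echo "PATH unchanged"; exit 1 ;;
esac
```

On the real node, `logstash --version` works only after re-login (or after sourcing /etc/profile.d/logstash.sh by hand).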
Logstash main configuration file roles/elk/files/logstash/logstash.yml
Create this file; the content below is enough (the defaults usually work as-is):
# Pipeline configuration directory
path.config: /etc/logstash/conf.d
# Data directory
path.data: /var/lib/logstash
# Log directory
path.logs: /var/log/logstash
Pipeline config example roles/elk/files/logstash/conf.d/test.conf
For testing, create a simple pipeline that reads from stdin and writes to stdout (similar to the test in my earlier post). In production, Logstash usually receives Beats input and writes to Elasticsearch. The test pipeline, handy for verifying the deployment:
input {
  stdin { }
}

output {
  stdout { codec => rubydebug }
}
Save this file as test.conf under roles/elk/files/logstash/conf.d/.
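For comparison, a production-style pipeline along the lines mentioned above would listen for Beats and ship to Elasticsearch. A sketch of such a conf.d file (the port 5044 and the index pattern are illustrative; our Elasticsearch here is at 192.168.92.20):

```
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.92.20:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```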
Adding the Logstash handler (roles/elk/handlers/main.yml)
Append the following to roles/elk/handlers/main.yml:
- name: restart logstash
  systemd:
    name: logstash
    state: restarted
Automated Kibana Deployment
Create roles/elk/tasks/kibana.yml:
---
# 1. Download the Kibana RPM
- name: 下载 Kibana RPM
  get_url:
    url: https://artifacts.elastic.co/downloads/kibana/kibana-7.17.10-x86_64.rpm
    dest: /tmp/kibana.rpm
    validate_certs: yes

# 2. Import the Elasticsearch GPG key (Kibana uses the same key)
- name: 导入 Elasticsearch GPG 密钥
  rpm_key:
    state: present
    key: https://artifacts.elastic.co/GPG-KEY-elasticsearch

# 3. Install Kibana
- name: 安装 Kibana
  yum:
    name: /tmp/kibana.rpm
    state: present

# 4. Copy the Kibana configuration file
- name: 复制 kibana.yml
  copy:
    src: kibana/kibana.yml
    dest: /etc/kibana/kibana.yml
    owner: root
    group: kibana
    mode: '0644'
  notify: restart kibana

# 5. Start and enable the Kibana service
- name: 启动 Kibana 服务
  systemd:
    name: kibana
    state: started
    enabled: yes
Create the Kibana configuration file roles/elk/files/kibana/kibana.yml:
# Kibana server port
server.port: 5601
server.host: "192.168.92.20"
# Elasticsearch address (same host as Kibana, so localhost)
elasticsearch.hosts: ["http://localhost:9200"]
# PID file path
pid.file: /run/kibana/kibana.pid
# UI language
i18n.locale: "zh-CN"
Adding the kibana handler (roles/elk/handlers/main.yml)
Append the following to roles/elk/handlers/main.yml:
- name: restart kibana
  systemd:
    name: kibana
    state: restarted
Extending roles/elk/tasks/main.yml
Modify main.yml so it runs different subtasks depending on the host group; the same role can then serve different hosts.
---
# Install the appropriate component based on the host's group
- name: 安装 Elasticsearch(仅在 webservers 组执行)
  include_tasks: elasticsearch.yml
  when: "'webservers' in group_names"

- name: 安装 Kibana(仅在 webservers 组执行)
  include_tasks: kibana.yml
  when: "'webservers' in group_names"

- name: 安装 Logstash(仅在 elk_logstash 组执行)
  include_tasks: logstash.yml
  when: "'elk_logstash' in group_names"
Running the Ansible Playbook
Creating the main playbook site3.yml
The plays in site3.yml run strictly in the order they are defined, as does the roles list within each play.
Create site3.yml in the ansible-playbook/ directory:
---
# 1. Basic system initialization on all target hosts
- hosts: all
  roles:
    - system_init

# 2. Deploy Elasticsearch + Kibana on the webservers group (192.168.92.20)
- hosts: webservers
  roles:
    - elk

# 3. Deploy Logstash on the elk_logstash group (192.168.92.21)
- hosts: elk_logstash
  roles:
    - elk
cd /root/ansible-playbook
ansible-playbook site3.yml
[root@ansible ansible-playbook]# ansible-playbook site3.yml
PLAY [all] *************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************
ok: [192.168.92.20]
ok: [192.168.92.21]
TASK [system_init : 停止 firewalld 服务] *******************************************************************************************
ok: [192.168.92.21]
ok: [192.168.92.20]
TASK [system_init : 禁用 SELinux(临时)] ******************************************************************************************
ok: [192.168.92.21]
ok: [192.168.92.20]
TASK [system_init : 永久禁用 SELinux(修改配置文件)] ******************************************************************************
ok: [192.168.92.21]
ok: [192.168.92.20]
TASK [system_init : 备份原 yum 源并替换为阿里云源] *********************************************************************************
changed: [192.168.92.21]
changed: [192.168.92.20]
TASK [system_init : 安装基础工具包] ************************************************************************************************
ok: [192.168.92.20]
ok: [192.168.92.21]
PLAY [webservers] ******************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************
ok: [192.168.92.20]
TASK [elk : 安装 Elasticsearch(仅在 webservers 组执行)] **************************************************************************
included: /root/ansible-playbook/roles/elk/tasks/elasticsearch.yml for 192.168.92.20
TASK [elk : 安装 Java 1.8] *********************************************************************************************************
ok: [192.168.92.20]
TASK [elk : 下载 Elasticsearch RPM] ************************************************************************************************
ok: [192.168.92.20]
TASK [elk : 导入 Elasticsearch GPG 密钥] *******************************************************************************************
changed: [192.168.92.20]
TASK [elk : 安装 Elasticsearch] ****************************************************************************************************
changed: [192.168.92.20]
TASK [elk : 创建数据目录] **********************************************************************************************************
changed: [192.168.92.20]
TASK [elk : 创建日志目录] **********************************************************************************************************
changed: [192.168.92.20]
TASK [elk : 复制 elasticsearch.yml] ************************************************************************************************
changed: [192.168.92.20]
TASK [elk : 启动 elasticsearch 服务] ***********************************************************************************************
changed: [192.168.92.20]
TASK [elk : 安装 Kibana(仅在 webservers 组执行)] *********************************************************************************
included: /root/ansible-playbook/roles/elk/tasks/kibana.yml for 192.168.92.20
TASK [elk : 下载 Kibana RPM] *******************************************************************************************************
changed: [192.168.92.20]
TASK [elk : 导入 Elasticsearch GPG 密钥] *******************************************************************************************
ok: [192.168.92.20]
TASK [elk : 安装 Kibana] ***********************************************************************************************************
changed: [192.168.92.20]
TASK [elk : 复制 kibana.yml] *******************************************************************************************************
changed: [192.168.92.20]
TASK [elk : 启动 Kibana 服务] ******************************************************************************************************
changed: [192.168.92.20]
TASK [elk : 安装 Logstash(仅在 elk_logstash 组执行)] *****************************************************************************
skipping: [192.168.92.20]
RUNNING HANDLER [elk : restart elasticsearch] **************************************************************************************
changed: [192.168.92.20]
RUNNING HANDLER [elk : restart kibana] *********************************************************************************************
changed: [192.168.92.20]
PLAY [elk_logstash] ****************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************
ok: [192.168.92.21]
TASK [elk : 安装 Elasticsearch(仅在 webservers 组执行)] **************************************************************************
skipping: [192.168.92.21]
TASK [elk : 安装 Kibana(仅在 webservers 组执行)] *********************************************************************************
skipping: [192.168.92.21]
TASK [elk : 安装 Logstash(仅在 elk_logstash 组执行)] *****************************************************************************
included: /root/ansible-playbook/roles/elk/tasks/logstash.yml for 192.168.92.21
TASK [elk : 安装 Java 1.8 开发环境] ************************************************************************************************
changed: [192.168.92.21]
TASK [elk : 下载 Logstash RPM] *****************************************************************************************************
changed: [192.168.92.21]
TASK [elk : 导入 Elasticsearch GPG 密钥] *******************************************************************************************
changed: [192.168.92.21]
TASK [elk : 安装 Logstash] *********************************************************************************************************
changed: [192.168.92.21]
TASK [elk : 创建 profile.d 脚本] ***************************************************************************************************
changed: [192.168.92.21]
TASK [elk : 确保配置目录存在] ******************************************************************************************************
changed: [192.168.92.21]
TASK [elk : 复制 logstash.yml] *****************************************************************************************************
changed: [192.168.92.21]
TASK [elk : 复制管道配置] **********************************************************************************************************
changed: [192.168.92.21]
TASK [elk : 启动 Logstash 服务] ****************************************************************************************************
changed: [192.168.92.21]
RUNNING HANDLER [elk : restart logstash] *******************************************************************************************
changed: [192.168.92.21]
PLAY RECAP *************************************************************************************************************************
192.168.92.20 : ok=24 changed=13 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
192.168.92.21 : ok=18 changed=11 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
[root@ansible ansible-playbook]#
Verifying the Deployment
Verifying Elasticsearch
# "build_snapshot": false means this is an official release build rather than a development snapshot, which is the standard for production.
[root@ansible ansible-playbook]# curl http://192.168.92.20:9200
{
  "name" : "es01",
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "fpdZ4SPJSiyRsQ8yJPT3Vg",
  "version" : {
    "number" : "7.17.10",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "fecd68e3150eda0c307ab9a9d7557f5d5fd71349",
    "build_date" : "2023-04-23T05:33:18.138275597Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
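Beyond eyeballing the JSON, the check can be scripted. A sketch that extracts the version field; the heredoc-style variable below holds the response captured above, and in practice you would substitute the output of `curl -s http://192.168.92.20:9200`:

```shell
# Parse the Elasticsearch banner JSON and assert the expected version.
# Sample data is the (abridged) response captured above; in practice use:
#   response=$(curl -s http://192.168.92.20:9200)
response='{"name":"es01","cluster_name":"es-cluster","version":{"number":"7.17.10","build_snapshot":false},"tagline":"You Know, for Search"}'

number=$(printf '%s' "$response" | python3 -c \
  'import json, sys; print(json.load(sys.stdin)["version"]["number"])')
echo "version: $number"
test "$number" = "7.17.10" && echo "version OK"
```

The same pattern works for `/_cluster/health` if you want a green/yellow status gate in a CI check.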
Elasticsearch is up and running.
Verifying Kibana
(Screenshot: the Kibana web UI)
The Kibana interface comes up, so Kibana is working.
Verifying Logstash
[root@ansible ansible-playbook]# ssh root@192.168.92.21 "systemctl status logstash"
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; enabled; preset: disabled)
Active: active (running) since Wed 2026-04-01 18:44:37 CST; 1min 10s ago
Main PID: 11248 (java)
Tasks: 29 (limit: 10844)
Memory: 537.4M
CPU: 50.335s
CGroup: /system.slice/logstash.service
└─11248 /usr/share/logstash/jdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djdk.io.File.enableADS=true -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -Djruby.regexp.interruptible=true -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -Dlog4j2.isThreadContextMapInheritable=true -cp /usr/share/logstash/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/share/logstash/logstash-core/lib/jars/checker-compat-qual-2.0.0.jar:/usr/share/logstash/logstash-core/lib/jars/commons-codec-1.14.jar:/usr/share/logstash/logstash-core/lib/jars/commons-compiler-3.1.0.jar:/usr/share/logstash/logstash-core/lib/jars/commons-logging-1.2.jar:/usr/share/logstash/logstash-core/lib/jars/error_prone_annotations-2.1.3.jar:/usr/share/logstash/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/guava-24.1.1-jre.jar:/usr/share/logstash/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-annotations-2.9.10.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-core-2.9.10.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-databind-2.9.10.8.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.10.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-yaml-2.9.10.jar:/usr/share/logstash/logstash-core/lib/jars/janino-3.1.0.jar:/usr/share/logstash/logstash-core/lib/jars/javassist-3.26.0-GA.jar:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.20.1.jar:/usr/share/logstash/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-1.2-api-2.17.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-api-2.17.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-core-2.17.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-jcl-2.17.1.jar:/usr/share/logstash/logstash-core/li
b/jars/log4j-slf4j-impl-2.17.1.jar:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/reflections-0.9.11.jar:/usr/share/logstash/logstash-core/lib/jars/slf4j-api-1.7.30.jar:/usr/share/logstash/logstash-core/lib/jars/snakeyaml-1.33.jar org.logstash.Logstash --path.settings /etc/logstash
Apr 01 18:45:34 web21h logstash[11248]: [2026-04-01T18:45:34,901][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Apr 01 18:45:37 web21h logstash[11248]: [2026-04-01T18:45:37,083][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
Apr 01 18:45:38 web21h logstash[11248]: [2026-04-01T18:45:38,357][INFO ][org.reflections.Reflections] Reflections took 113 ms to scan 1 urls, producing 119 keys and 419 values
Apr 01 18:45:39 web21h logstash[11248]: [2026-04-01T18:45:39,603][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/test.conf"], :thread=>"#<Thread:0x350d4d7c run>"}
Apr 01 18:45:40 web21h logstash[11248]: [2026-04-01T18:45:40,432][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.82}
Apr 01 18:45:40 web21h logstash[11248]: WARNING: An illegal reflective access operation has occurred
Apr 01 18:45:40 web21h logstash[11248]: WARNING: Illegal reflective access by com.jrubystdinchannel.StdinChannelLibrary$Reader (file:/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jruby-stdin-channel-0.2.0-java/lib/jruby_stdin_channel/jruby_stdin_channel.jar) to field java.io.FilterInputStream.in
Apr 01 18:45:40 web21h logstash[11248]: WARNING: Please consider reporting this to the maintainers of com.jrubystdinchannel.StdinChannelLibrary$Reader
Apr 01 18:45:40 web21h logstash[11248]: WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
Apr 01 18:45:40 web21h logstash[11248]: WARNING: All illegal access operations will be denied in a future release
Logstash is up and running.
That completes the experiment!