ELK

Introduction to ELK

Operations teams need precise control over system and application logs in order to analyze the state of their systems and services. Those logs are scattered across many servers, and the traditional approach of logging in to each server in turn to read them is both tedious and inefficient. We therefore need a centralized log management tool that gathers the logs from all servers into one place, where they can be analyzed and visualized.

Preparation

1. Change the hostnames

[root@node1 ~]# hostnamectl hostname vm1.example.com
[root@node1 ~]# bash

[root@node1 ~]# hostnamectl hostname vm2.example.com 
[root@node1 ~]# bash
[root@vm2 ~]# 

[root@node1 ~]# hostnamectl hostname v3.example.com
[root@node1 ~]# bash
[root@v3 ~]# 

2. Configure /etc/hosts

[root@vm1 ~]# vim /etc/hosts 
[root@vm1 ~]# cat /etc/hosts 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.30	vm1.example.com	kibana
192.168.100.80	vm2.example.com	elasticsearch
192.168.100.90	vm3.example.com	logstash
[root@vm1 ~]# scp /etc/hosts root@192.168.100.80:/etc/hosts
The authenticity of host '192.168.100.80 (192.168.100.80)' can't be established.
ED25519 key fingerprint is SHA256:Ci2qzv2Hvt2jld5Q8LBu35qRbAnKzC3EaGZRV6Htsw0.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.100.80' (ED25519) to the list of known hosts.
root@192.168.100.80's password: 
hosts                                                    100%  281   249.2KB/s   00:00    
[root@vm1 ~]# scp /etc/hosts root@192.168.100.90:/etc/hosts
The authenticity of host '192.168.100.90 (192.168.100.90)' can't be established.
ED25519 key fingerprint is SHA256:Ci2qzv2Hvt2jld5Q8LBu35qRbAnKzC3EaGZRV6Htsw0.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: 192.168.100.80
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.100.90' (ED25519) to the list of known hosts.
root@192.168.100.90's password: 
hosts                                                    100%  281   681.0KB/s   00:00    
[root@vm1 ~]# 

3. Check that the firewall and SELinux are disabled

[root@vm1 ~]# systemctl status firewalld
○ firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; preset: enabled)
     Active: inactive (dead)
       Docs: man:firewalld(1)   
[root@vm1 ~]# getenforce 
Disabled
[root@vm1 ~]# 

[root@vm2 ~]# yum -y install lrzsz tar net-tools wget

4. Time synchronization

[root@vm1 ~]# yum -y install chrony
[root@vm1 ~]# systemctl restart chronyd
[root@vm1 ~]# systemctl enable chronyd
[root@vm1 ~]# timedatectl 
               Local time: Mon 2024-08-19 16:02:19 CST
           Universal time: Mon 2024-08-19 08:02:19 UTC
                 RTC time: Mon 2024-08-19 08:02:19
                Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
[root@vm1 ~]# hwclock -w

Elasticsearch deployment

Introduction

Elasticsearch (ES) is an open-source distributed search engine and also a distributed document database. It therefore provides storage for large volumes of data together with fast search and analytics.

1. Install the Java package

[root@vm1 ~]# yum -y install java-1.8.0*
[root@vm2 ~]# yum -y install java-1.8.0*
[root@vm3 ~]# yum -y install java-1.8.0*

[root@vm1 ~]# java -version
openjdk version "1.8.0_422"
OpenJDK Runtime Environment (build 1.8.0_422-b05)
OpenJDK 64-Bit Server VM (build 25.422-b05, mixed mode)

2. Install the RPM package and edit the configuration file

[root@vm2 ~]# ls
anaconda-ks.cfg  -e  elasticsearch-6.5.2.rpm  -i.bak
[root@vm2 ~]# rpm -ivh elasticsearch-6.5.2.rpm 

[root@vm2 ~]# 
[root@vm2 ~]# cd /etc/elasticsearch/
[root@vm2 elasticsearch]# ls
elasticsearch.keystore  jvm.options        role_mapping.yml  users
elasticsearch.yml       log4j2.properties  roles.yml         users_roles
[root@vm2 elasticsearch]# vim elasticsearch.yml 
cluster.name: elk-cluster 
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0 
http.port: 9200
[root@vm2 elasticsearch]# systemctl restart elasticsearch
[root@vm2 elasticsearch]# systemctl enable elasticsearch
Created symlink /etc/systemd/system/multi-user.target.wants/elasticsearch.service → /usr/lib/systemd/system/elasticsearch.service.
[root@vm2 elasticsearch]# ss -anlt
State    Recv-Q   Send-Q       Local Address:Port       Peer Address:Port   Process   
LISTEN   0        128                0.0.0.0:22              0.0.0.0:*                
LISTEN   0        4096                     *:9300                  *:*                
LISTEN   0        128                   [::]:22                 [::]:*                
LISTEN   0        4096                     *:9200                  *:*   
[root@vm2 elasticsearch]# curl http://192.168.100.80:9200/_cluster/health?pretty
{
  "cluster_name" : "elk-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
[root@vm2 elasticsearch]# 

Elasticsearch cluster deployment

vm1:
[root@vm1 ~]# ls
anaconda-ks.cfg  -e  elasticsearch-6.5.2.rpm  -i.bak
[root@vm1 ~]# rpm -ivh elasticsearch-6.5.2.rpm 
warning: elasticsearch-6.5.2.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
   1:elasticsearch-0:6.5.2-1          ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch
/usr/lib/tmpfiles.d/elasticsearch.conf:1: Line references path below legacy directory /var/run/, updating /var/run/elasticsearch → /run/elasticsearch; please update the tmpfiles.d/ drop-in file accordingly.
------------------------------------------------------------------------
[root@vm1 ~]# vim /etc/elasticsearch/elasticsearch.yml 
------------------------------------------------------------------------
cluster.name: elk-cluster
node.name: 192.168.100.30                   # local IP or hostname
node.master: false                          # not a master-eligible node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.100.30", "192.168.100.80"]   # IPs of all cluster nodes
------------------------------------------------------------------------
[root@vm1 ~]# systemctl restart elasticsearch
[root@vm1 ~]# systemctl enable elasticsearch
Created symlink /etc/systemd/system/multi-user.target.wants/elasticsearch.service → /usr/lib/systemd/system/elasticsearch.service.
[root@vm1 ~]# 

vm2:
[root@vm2 elasticsearch]# vim elasticsearch.yml 
-------------------------------------------------------------------
cluster.name: elk-cluster
node.name: 192.168.100.80                   # local IP or hostname
node.master: true                           # master-eligible node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.100.30", "192.168.100.80"]   # IPs of all cluster nodes
-----------------------------------------------------------------
[root@vm2 elasticsearch]# systemctl restart elasticsearch

Basic Elasticsearch API operations

1. RESTful API format

RESTful API format: curl -X<verb> '<protocol>://<host>:<port>/<path>?<query_string>' -d '<body>'

Parameter      Description
verb           HTTP method, e.g. GET, POST, PUT, HEAD, DELETE
host           Hostname of any node in the ES cluster
port           Port of the ES HTTP service, 9200 by default
path           Index path
query_string   Optional query parameters, e.g. ?pretty returns pretty-printed JSON
-d             Flag that carries a JSON-formatted request body
body           The JSON request body you supply
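
For example, the cluster health check used earlier in this article maps onto this template as follows (no request body is needed, so -d is omitted):

curl -X GET 'http://192.168.100.80:9200/_cluster/health?pretty'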

2. View node information

[root@vm2 elasticsearch]# curl http://192.168.100.80:9200/_cat/nodes?v
ip             heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.100.30           13          95   0    0.00    0.00     0.00 di        -      192.168.100.30
192.168.100.80           11          96   0    0.00    0.00     0.00 mdi       *      192.168.100.80

3. View index information and create an index

[root@vm2 elasticsearch]# curl http://192.168.100.80:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size  // no indices yet
[root@vm2 elasticsearch]# curl -X PUT http://192.168.100.80:9200/nginx_access_log 
{"acknowledged":true,"shards_acknowledged":true,"index":"nginx_access_log"}[root@vm2 elasticsearch]# curl -X PUT http:/
[root@vm2 elasticsearch]# curl http://192.168.100.80:9200/_cat/indices?v
health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   nginx_access_log PGrIVaIERO2IizDOKL9b9A   5   1          0            0      2.2kb          1.1kb

4. Delete an index
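
No delete command was captured for this step; a minimal sketch, assuming the nginx_access_log index created above is the one to remove:

curl -X DELETE http://192.168.100.80:9200/nginx_access_log
# expected response: {"acknowledged":true}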

5. Import data

[root@vm2 ~]# ls
accounts.json  anaconda-ks.cfg  -e  elasticsearch-6.5.2.rpm  -i.bak
[root@vm2 ~]#  curl -H "Content-Type: application/json" -XPOST "192.168.100.80:9200/bank/_doc/_bulk?pretty&refresh" --data-binary "@accounts.json"
[root@vm2 ~]# curl "192.168.100.80:9200/_cat/indices?v"
health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   nginx_access_log PGrIVaIERO2IizDOKL9b9A   5   1          0            0      2.5kb          1.2kb
green  open   bank             RZH-6IBNSOmQpduyCHSRKA   5   1       1000            0    965.6kb        482.5kb
 

6. Query data in the bank index (using a query string)

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?q=*&sort=account_number:asc&pretty"
By default 10 results are returned
_search is the API used to perform queries
q=* matches all documents in the index
sort=account_number:asc sorts the results by account_number in ascending order
pretty pretty-prints the output

7. Query data in the bank index (using a JSON request body)

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search" -H 'content-Type:application/json' -d'
> {
> "query": { "match_all": {} },
> "sort": [ 
> { "account_number": "asc"}
> ]
> }
> '
{"error":{"root_cause":[{"type":"json_parse_exception","reason":"Unexpected character ('"' (code 8220 / 0x201c)): was expecting double-quote to start field name\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@6738f56b; line: 3, column: 4]"}],"type":"json_parse_exception","reason":"Unexpected character ('"' (code 8220 / 0x201c)): was expecting double-quote to start field name\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@6738f56b; line: 3, column: 4]"},"status":500}[root@vm2 ~]# 

8. match_all query

Matches all documents. This is the default query.

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H "content-Type:application/json" d'
> {
> "query": { "match_all": {} }
> }
> '
# query tells ES what to search for
# match_all is the type of query we are running
# match_all simply searches across all documents in the specified index

9. from / size query

Besides the query parameter, other parameters can be passed to influence the result set, such as sort (shown above) and size (used next).

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H 'content-Type:application/json' -d'
{
"query":{ "match_all": {} },
"size":1
}
'

10. Specify the starting position and the number of results

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H 'content-Type:application/json' -d'
> {
> "query": { "match_all": {} }
> "from": 0
> "size": 2
> }
> '

11. Selecting returned fields

Return only a subset of the fields in _source.

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H 'content-Type:application/json' -d'
{
"query": { "match_all": {} },
> "_source": ["account_number","balance"]
> }
> '

12. match query

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H 'content-Type:application/json' -d'
{
"query": { "match": {"account_number": 20} }
> }
> '

13. Basic search query against a specific field or set of fields

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H >'content-Type:application/json' -d'
>{
>"query": { "match": {"account_number": "mill"} }
>}
>'

14. bool query

With bool must, every listed condition has to match at the same time.

Find all accounts whose address contains both "mill" and "lane":

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H 'content-Type:application/json' -d'
> {
> "query": {
> "bool": {
> "must": [ 
> { "match": {"address": "mill"} },
> { "match": {"address": "lane"} }
> ]
> }
> }
> }
> '
 

15. range query

Matches numbers or dates within a specified interval.

Operators: gt (greater than), gte (greater than or equal to), lt (less than), lte (less than or equal to).

[root@vm2 ~]# curl -X GET "10.1.1.12:9200/bank/_search?pretty" -H 'Content-Type:
application/json' -d'
>{
>"query": {
>"bool": {
>"must": { "match_all": {} },
>"filter": {
>"range": {
>"balance": {
>"gte": 20000,
>"lte": 30000
>}
>}
>}
>}
>}
>}
>'

elasticsearch-head

elasticsearch-head is a tool for cluster management, data visualization, CRUD operations, and building queries visually. Since ES 5 the installation method differs greatly from ES 2: in ES 2 you could simply run plugin install xxxx from the bin directory, but in ES 5 and later you must first install Node.js and then start Head through Node.js.

Install Node.js

[root@vm1 ~]# ls
anaconda-ks.cfg  -e  elasticsearch-6.5.2.rpm  -i.bak  node-v10.24.1-linux-x64.tar.xz
[root@vm1 ~]# tar xf node-v10.24.1-linux-x64.tar.xz -C /usr/local/
[root@vm1 ~]# ls /usr/local/
bin  etc  games  include  lib  lib64  libexec  node-v10.24.1-linux-x64  sbin  share  src
[root@vm1 ~]# mv /usr/local/node-v10.24.1-linux-x64/  /usr/local/nodejs
[root@vm1 ~]# ls /usr/local/
bin  etc  games  include  lib  lib64  libexec  nodejs  sbin  share  src
[root@vm1 ~]# ln -s /usr/local/nodejs/bin/npm /bin/npm
[root@vm1 ~]# ln -s /usr/local/nodejs/bin/node /bin/node
[root@vm1 ~]# 

Install elasticsearch-head

[root@vm2 bin]# yum -y install unzip
[root@vm2 ~]# ls
accounts.json    -e                       elasticsearch-head-master.zip  node-v10.24.1-linux-x64.tar.xz
anaconda-ks.cfg  elasticsearch-6.5.2.rpm  -i.bak
[root@vm2 ~]# unzip elasticsearch-head-master.zip
[root@vm2 ~]# cd elasticsearch-head-master/
[root@vm2 elasticsearch-head-master]# npm install -g grunt-cli --registry=http://registry.npm.taobao.org
##  --registry=http://registry.npm.taobao.org  add this mirror only if the connection to the default npm registry is poor; otherwise it can be omitted

added 56 packages in 5s

5 packages are looking for funding
  run `npm fund` for details
npm notice 
npm notice New major version of npm available! 8.19.4 -> 10.8.2
npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.8.2
npm notice Run npm install -g npm@10.8.2 to update!
npm notice 
[root@vm2 elasticsearch-head-master]# npm install --registry=http://registry.npm.taobao.org
Fixing the reported error:
[root@vm2 elasticsearch-head-master]# npm install phantomjs-prebuilt@2.1.16 --ignore-scripts
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE   package: 'karma@1.3.0',
npm WARN EBADENGINE   required: { node: '0.10 || 0.12 || 4 || 5 || 6' },
npm WARN EBADENGINE   current: { node: 'v16.20.2', npm: '8.19.4' }
npm WARN EBADENGINE }
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE   package: 'http2@3.3.7',
npm WARN EBADENGINE   required: { node: '>=0.12.0 <9.0.0' },
npm WARN EBADENGINE   current: { node: 'v16.20.2', npm: '8.19.4' }
npm WARN EBADENGINE }
npm WARN deprecated inflight@1.0.6: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
npm WARN deprecated source-map-url@0.4.1: See https://github.com/lydell/source-map-url#deprecated
npm WARN deprecated rimraf@2.7.1: Rimraf versions prior to v4 are no longer supported
npm WARN deprecated rimraf@2.7.1: Rimraf versions prior to v4 are no longer supported
npm WARN deprecated rimraf@2.7.1: Rimraf versions prior to v4 are no longer supported
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
npm WARN deprecated har-validator@5.1.5: this library is no longer supported
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated json3@3.3.2: Please use the native JSON object instead of JSON 3
npm WARN deprecated rimraf@2.2.8: Rimraf versions prior to v4 are no longer supported
npm WARN deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported
npm WARN deprecated glob@5.0.15: Glob versions prior to v9 are no longer supported
npm WARN deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported
npm WARN deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported
npm WARN deprecated source-map-resolve@0.5.3: See https://github.com/lydell/source-map-resolve#deprecated
npm WARN deprecated chokidar@1.7.0: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
npm WARN deprecated glob@7.1.7: Glob versions prior to v9 are no longer supported
npm WARN deprecated glob@7.0.6: Glob versions prior to v9 are no longer supported
npm WARN deprecated uuid@3.4.0: Please upgrade  to version 7 or higher.  Older versions may use Math.random() in certain circumstances, which is known to be problematic.  See https://v8.dev/blog/math-random for details.
npm WARN deprecated phantomjs-prebuilt@2.1.16: this package is now deprecated
npm WARN deprecated request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
npm WARN deprecated http2@3.3.7: Use the built-in module in node 9.0.0 or newer, instead
npm WARN deprecated json3@3.2.6: Please use the native JSON object instead of JSON 3
npm WARN deprecated coffee-script@1.10.0: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
npm WARN deprecated log4js@0.6.38: 0.x is no longer supported. Please upgrade to 6.x or higher.
npm WARN deprecated core-js@2.6.12: core-js@<3.23.3 is no longer maintained and not recommended for usage due to the number of issues. Because of the V8 engine whims, feature detection in old core-js versions could cause a slowdown up to 100x even if nothing is polyfilled. Some versions have web compatibility issues. Please, upgrade your dependencies to the actual version of core-js.

added 528 packages, and audited 529 packages in 33s

22 packages are looking for funding
  run `npm fund` for details

45 vulnerabilities (3 low, 7 moderate, 27 high, 8 critical)

To address issues that do not require attention, run:
  npm audit fix

To address all issues possible (including breaking changes), run:
  npm audit fix --force

Some issues need review, and may require choosing
a different dependency.

Run `npm audit` for details.
[root@vm2 elasticsearch-head-master]# 
[root@vm2 elasticsearch-head-master]# npm install  --registry=http://registry.npm.taobao.org
[root@vm2 elasticsearch-head-master]# nohup npm run start &
[root@vm2 elasticsearch-head-master]# ss -anlt
State       Recv-Q       Send-Q             Local Address:Port             Peer Address:Port      Process      
LISTEN      0            511                      0.0.0.0:9100                  0.0.0.0:*                      
LISTEN      0            128                      0.0.0.0:22                    0.0.0.0:*                      
LISTEN      0            4096                           *:9300                        *:*                      
LISTEN      0            4096                           *:9200                        *:*                      
LISTEN      0            128                         [::]:22                       [::]:*                      
[root@vm2 elasticsearch-head-master]# 

Modify the ES cluster configuration file and restart the service

[root@vm1 ~]# vim /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"
# add these two lines
[root@vm1 ~]# systemctl restart elasticsearch
[root@vm2 ~]# systemctl restart elasticsearch
[root@vm1 ~]# ss -anlt
State       Recv-Q       Send-Q             Local Address:Port             Peer Address:Port      Process      
LISTEN      0            128                      0.0.0.0:22                    0.0.0.0:*                      
LISTEN      0            4096                           *:9300                        *:*                      
LISTEN      0            4096                           *:9200                        *:*                      
LISTEN      0            128                         [::]:22                       [::]:*                      
[root@vm1 ~]# 
[root@vm2 ~]# ss -anlt
State       Recv-Q       Send-Q             Local Address:Port             Peer Address:Port      Process      
LISTEN      0            511                      0.0.0.0:9100                  0.0.0.0:*                      
LISTEN      0            128                      0.0.0.0:22                    0.0.0.0:*                      
LISTEN      0            4096                           *:9300                        *:*                      
LISTEN      0            4096                           *:9200                        *:*                      
LISTEN      0            128                         [::]:22                       [::]:*                      
[root@vm2 ~]# 
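
Before connecting the head UI (http://192.168.100.80:9100) to the cluster, you can optionally confirm that the CORS settings took effect; a quick sketch (the Origin value here is only an example):

curl -s -i -H "Origin: http://192.168.100.80:9100" http://192.168.100.80:9200/ | grep -i access-control
# with http.cors.allow-origin: "*", an Access-Control-Allow-Origin header should appear in the response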

Logstash deployment

Deployment

[root@v3 ~]# ls 
anaconda-ks.cfg  -e  -i.bak  logstash-6.5.2.rpm
[root@v3 ~]# rpm -ivh logstash-6.5.2.rpm 
[root@v3 ~]# cd /etc/logstash/
[root@v3 logstash]# ls
conf.d       log4j2.properties     logstash.yml   startup.options
jvm.options  logstash-sample.conf  pipelines.yml
[root@v3 logstash]# vim logstash.yml
-------------------------------------------------------------
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d/ 
http.host: "0.0.0.0" 
path.logs: /var/log/logstash
-------------------------------------------------------------

Verification method 1:

[root@v3 logstash]# cd /usr/share/logstash/bin/
[root@v3 bin]# ./logstash -e 'input {stdin {}} output {stdout {}}'

When startup finishes, the pipeline waits for input on stdin and echoes anything you type back as a structured event.
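
A rough sketch of that interaction (the exact field values will differ): type a line such as "hello logstash" and it comes back in rubydebug format:

hello logstash
{
      "@version" => "1",
    "@timestamp" => 2024-08-20T06:38:00.000Z,
          "host" => "v3.example.com",
       "message" => "hello logstash"
}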

Verification method 2:

[root@v3 ~]# vim /etc/logstash/conf.d/test.conf
[root@v3 ~]# cat /etc/logstash/conf.d/test.conf 
input {
	stdin {
	}
}

filter {
}

output {
	stdout {
		codec => rubydebug
	}
}
[root@v3 ~]# 

[root@v3 ~]# cd /usr/share/logstash/bin/
[root@v3 bin]# ./logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/test.conf -t
--path.settings specifies the directory that holds the main Logstash configuration (logstash.yml)
-f specifies the pipeline configuration file
-t only tests whether the configuration file is valid
-r is very handy: it reloads the configuration dynamically, so after startup you can change the config file without restarting
codec => rubydebug is optional; it is the default output codec anyway

The output looks like this:

[root@v3 bin]# ./logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/test.conf -t
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2024-08-20T14:35:14,083][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
[2024-08-20T14:35:14,106][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
[2024-08-20T14:35:14,542][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[2024-08-20T14:35:16,347][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@v3 bin]# 
[root@v3 bin]# ./logstash --path.settings /etc/logstash -r -f /etc/logstash/conf.d/test.conf 
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2024-08-20T14:38:00,603][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2024-08-20T14:38:00,615][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.2"}
[2024-08-20T14:38:00,645][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"8843a144-df1e-45d7-a38b-c67a4758c30e", :path=>"/var/lib/logstash/uuid"}
[2024-08-20T14:38:02,829][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2024-08-20T14:38:03,016][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0xce3fefd sleep>"}
The stdin plugin is now waiting for input:
[2024-08-20T14:38:03,059][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

Log collection

Collecting the messages log

[root@v3 bin]# vim /etc/logstash/conf.d/test.conf 
[root@v3 bin]# cat /etc/logstash/conf.d/test.conf 
input {
	file {
		path => "/var/log/messages"
		start_position => "beginning"
	}
}
output {
	elasticsearch {
		hosts => ["192.168.100.80:9200"]
		index => "test-%{+YYYY.MM.dd}"
	}
}
[root@v3 bin]# ps -ef | grep java   # find the old Logstash (java) process and stop it
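
After stopping the old process, a sketch of restarting Logstash with this pipeline and confirming that the data reached Elasticsearch (the index name carries the current date):

./logstash --path.settings /etc/logstash -r -f /etc/logstash/conf.d/test.conf &
curl "192.168.100.80:9200/_cat/indices?v"    # a test-YYYY.MM.dd index should now be listed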

Collecting from multiple log sources

[root@v3 bin]# vim /etc/logstash/conf.d/test.conf 
[root@v3 bin]# cat /etc/logstash/conf.d/test.conf 
input {
	file {
		path => "/var/log/messages"
		start_position => "beginning"
		type => "messages"
	}
	file {
		path => "/var/log/dnf.log"
		start_position => "beginning"
		type => "dnf"
	}
}

filter{

}

output{
	if [type] == "messages" {
		elasticsearch {
			hosts => ["192.168.100.30:9200","192.168.100.80:9200"]
			index => "messages-%{+YYYY-MM-dd}"
			}
		}
	if [type] == "dnf" {
		elasticsearch {
			hosts => ["192.168.100.30:9200","192.168.100.80:9200"]
			index => "yum-%{+YYYY-MM-dd}"
			}
		}
}

[root@v3 bin]# ./logstash --path.settings /etc/logstash -r -f /etc/logstash/conf.d/test.conf &
[root@v3 bin]# ss -anlt
State        Recv-Q       Send-Q             Local Address:Port             Peer Address:Port       Process       
LISTEN       0            128                      0.0.0.0:22                    0.0.0.0:*                        
LISTEN       0            50                             *:9600                        *:*                        
LISTEN       0            128                         [::]:22                       [::]:*   
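
Once both pipelines have shipped some events, querying one of the new indices directly confirms that documents are arriving; a sketch (the index names carry the current date):

curl "192.168.100.30:9200/messages-*/_search?pretty&size=1"
# returns one document from the messages index; its "type" field is set to "messages" by the input above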

Kibana deployment

Deployment

[root@vm1 ~]# ls
04-ELK2.pdf      -e                       -i.bak                   node-v10.24.1-linux-x64.tar.xz
anaconda-ks.cfg  elasticsearch-6.5.2.rpm  kibana-6.5.2-x86_64.rpm
[root@vm1 ~]# rpm -ivh kibana-6.5.2-x86_64.rpm 
warning: kibana-6.5.2-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:kibana-6.5.2-1                   ################################# [100%]
[root@vm1 ~]# 
[root@vm1 ~]# cd /etc/kibana/
[root@vm1 kibana]# ls
kibana.yml
[root@vm1 kibana]# vim kibana.yml 
---------------------------------------------------------------
server.port: 5601                                  # port
server.host: "0.0.0.0"                             # listen on all addresses so the UI is reachable from other hosts
elasticsearch.url: "http://192.168.100.30:9200"    # URL of the ES cluster
logging.dest: /var/log/kibana.log                  # log to a file to make troubleshooting and debugging easier
---------------------------------------------------------------
[root@vm1 kibana]# cd /var/log/
[root@vm1 log]# ls
anaconda  cron             dnf.rpm.log    hawkey.log-20240819  messages           secure            sssd
audit     cron-20240819    elasticsearch  lastlog              messages-20240819  secure-20240819   tallylog
btmp      dnf.librepo.log  firewalld      maillog              private            spooler           wtmp
chrony    dnf.log          hawkey.log     maillog-20240819     README             spooler-20240819
[root@vm1 log]# touch kibana.log
[root@vm1 log]# chown kibana.kibana kibana.log 
[root@vm1 log]# systemctl restart kibana
[root@vm1 log]# systemctl enable kibana
[root@vm1 log]# 
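
Once Kibana has started, port 5601 should be listening and the web UI reachable in a browser at http://192.168.100.30:5601; a quick check in the same spirit as the earlier ss commands:

ss -anlt | grep 5601
tail /var/log/kibana.log    # startup messages land here because of the logging.dest setting above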

Chinese localization

[root@vm1 ~]# unzip kibana-6.5.4_hanization-master.zip -d /usr/local/
[root@vm1 ~]# cd /usr/local/kibana-6.5.4_hanization-master
Note two things here: 1. Python must be installed; 2. the RPM version of Kibana is installed under /usr/share/kibana/.
[root@vm1 kibana-6.5.4_hanization-master]# python main.py  /usr/share/kibana/

After the localization is applied, Kibana must be restarted:
[root@vm1 Kibana_Hanization-master]# systemctl stop kibana
[root@vm1 Kibana_Hanization-master]# systemctl start kibana