Setting up ELKF (Elasticsearch, Logstash, Kibana, Filebeat) and collecting nginx logs

1. Elasticsearch

1.1. Create a data directory under the Elasticsearch installation directory
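For example, using the installation path assumed throughout this guide:

mkdir -p /home/wwq/elk/elasticsearch-8.13.4/data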

1.2. Edit elasticsearch.yml and add the following

path.data: /home/wwq/elk/elasticsearch-8.13.4/data
path.logs: /home/wwq/elk/elasticsearch-8.13.4/logs

1.3. Edit jvm.options and add the following

-Xms2g
-Xmx2g
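Elasticsearch also reads JVM flags from any *.options file under config/jvm.options.d/, which avoids editing the shipped jvm.options; a minimal sketch, run from the installation directory:

cat > config/jvm.options.d/heap.options <<EOF
-Xms2g
-Xmx2g
EOF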

1.4. Start

bin/elasticsearch       # start in the foreground
bin/elasticsearch -d    # start in the background (daemon)
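Once the node is up, the HTTPS API can be probed with curl, using the CA certificate that 8.x generates automatically (path relative to the installation directory) and the elastic password from the startup log:

curl --cacert config/certs/http_ca.crt -u elastic https://localhost:9200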

1.5. Find the initial password in the startup log (printed on first startup only)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Elasticsearch security features have been automatically configured!
✅ Authentication is enabled and cluster connections are encrypted.

ℹ️  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  qP3wO0GZ+pdomIp_ShHL

ℹ️  HTTP CA certificate SHA-256 fingerprint:
  98aef4ebc491b232a4c1cbf1cbfe7b73e1e4ebb8567caa174097f5c69f2b41fd

ℹ️  Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjEzLjQiLCJhZHIiOlsiMTk5Ljk5LjAuMTo5MjAwIl0sImZnciI6Ijk4YWVmNGViYzQ5MWIyMzJhNGMxY2JmMWNiZmU3YjczZTFlNGViYjg1NjdjYWExNzQwOTdmNWM2OWYyYjQxZmQiLCJrZXkiOiJmcF9ES3BBQlR6c3lRM0RMSU4teDoxclFRLXZraFREYUdYZmNiN2pQbXBBIn0=

ℹ️  Configure other nodes to join this cluster:
• On this node:
  ⁃ Create an enrollment token with `bin/elasticsearch-create-enrollment-token -s node`.
  ⁃ Uncomment the transport.host setting at the end of config/elasticsearch.yml.
  ⁃ Restart Elasticsearch.
• On other nodes:
  ⁃ Start Elasticsearch with `bin/elasticsearch --enrollment-token <token>`, using the enrollment token that you generated.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

1.6. Change the password

# Reset the password of the elastic user interactively
bin/elasticsearch-reset-password --username elastic -i

1.7. Startup script (SysV init)

#!/bin/bash
#chkconfig: 345 63 37
#description: elasticsearch
#processname: elasticsearch-8.13.4

# Path to the Elasticsearch installation directory
export ES_HOME=/home/wwq/elk/elasticsearch-8.13.4
# Elasticsearch refuses to run as root, so start it via su as the
# regular user that owns the installation (assumed here to be wwq)
ES_USER=wwq

case $1 in
start)
    su $ES_USER<<!
cd $ES_HOME
./bin/elasticsearch -d -p pid
exit
!
    echo "elasticsearch is started"
    ;;
stop)
    pid=`cat $ES_HOME/pid`
    kill $pid    # SIGTERM lets the node shut down cleanly; avoid kill -9
    echo "elasticsearch is stopped"
    ;;
restart)
    pid=`cat $ES_HOME/pid`
    kill $pid
    echo "elasticsearch is stopped"
    sleep 1
    su $ES_USER<<!
cd $ES_HOME
./bin/elasticsearch -d -p pid
exit
!
    echo "elasticsearch is started"
    ;;
*)
    echo "start|stop|restart"
    ;;
esac
exit 0
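A sketch of wiring the script into SysV init, assuming it is saved as /etc/init.d/elasticsearch on a chkconfig-based system such as CentOS 7 (which matches the chkconfig header above):

chmod +x /etc/init.d/elasticsearch
chkconfig --add elasticsearch
service elasticsearch start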

2. Kibana

2.1. Edit kibana.yml

server.port: 5601        # port Kibana listens on; open this in a browser
server.host: "0.0.0.0"   # listen address; 0.0.0.0 binds every local interface
i18n.locale: "zh-CN"     # UI language (Simplified Chinese)

2.2. Start

# start in the foreground (from the Kibana bin directory)
./kibana
# start in the background
nohup ./kibana &

2.3. Open Kibana in a browser and complete the guided setup (paste the enrollment token when prompted)

http://127.0.0.1:5601
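If the enrollment token from the Elasticsearch startup log has already expired (it is only valid for 30 minutes), a fresh one can be generated on the Elasticsearch node:

bin/elasticsearch-create-enrollment-token -s kibana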

2.4. Enable start on boot

vim /lib/systemd/system/kibana.service

[Unit]
Description=kibana
After=network.target

[Service]
Type=simple
User=wwq
ExecStart=/home/wwq/elk/kibana-8.13.4/bin/kibana
PrivateTmp=true

[Install]
WantedBy=multi-user.target
systemctl enable kibana   # enable start on boot
systemctl start kibana    # start
systemctl stop kibana     # stop
systemctl restart kibana  # restart
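systemd only notices a new or edited unit file after a reload, so run this once before enabling the service; the same applies to the logstash and filebeat units created later:

systemctl daemon-reload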

3. Logstash

3.1. Retrieve the Elasticsearch HTTP keystore password

[tomcat@wwq bin]$ ./elasticsearch-keystore show xpack.security.http.ssl.keystore.secure_password
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402.b06-1.el7_9.x86_64/jre; using bundled JDK
04nkv5WkSLuB854KnE-Kxg
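The same tool can list every secure setting stored in the keystore, which helps confirm the exact entry name before reading it:

./elasticsearch-keystore list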

3.2. Configure Logstash

vim /home/wwq/elk/logstash-8.13.4/config/logstash-sample.conf

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["https://192.168.1.200:9200"]
    # index name: <fields.logcategory set by Filebeat>-<beat version>-<date>
    index => "%{[fields][logcategory]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "******"
    password => "******"
    # Trust the certificate in Elasticsearch's own HTTP keystore; the
    # truststore password is the value read from the ES keystore in step 3.1.
    # (Newer releases of the output plugin prefer ssl_verification_mode /
    # ssl_truststore_path / ssl_truststore_password, but these names still work.)
    ssl_certificate_verification => true
    truststore => "/home/wwq/elk/elasticsearch-8.13.4/config/certs/http.p12"
    truststore_password => "04nkv5WkSLuB854KnE-Kxg"
  }
}
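Before handing the pipeline to systemd, the configuration syntax can be verified once from the Logstash directory:

bin/logstash -f config/logstash-sample.conf --config.test_and_exit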

3.3. Start on boot

vim /lib/systemd/system/logstash.service

[Unit]
Description=logstash

[Service]
User=wwq
ExecStart=/home/wwq/elk/logstash-8.13.4/bin/logstash -f /home/wwq/elk/logstash-8.13.4/config/logstash-sample.conf
Restart=always

[Install]
WantedBy=multi-user.target

systemctl enable logstash   # enable start on boot
systemctl start logstash    # start
systemctl stop logstash     # stop
systemctl restart logstash  # restart

4. Filebeat

4.1. JSON log format for nginx

# escape=json (nginx 1.11.8+) keeps the output valid JSON when values contain quotes
log_format log_json escape=json '{"@timestamp":"$time_iso8601",'
                '"server_addr":"$server_addr",'
                '"server_name":"$server_name",'
                '"server_port":"$server_port",'
                '"server_protocol":"$server_protocol",'
                '"client_ip":"$remote_addr",'
                '"client_user":"$remote_user",'
                '"status":"$status",'
                '"request_method": "$request_method",'
                '"request_length":"$request_length",'
                '"request_time":"$request_time",'
                '"request_url":"$request_uri",'
                '"request_line":"$request",'
                '"send_client_size":"$bytes_sent",'
                '"send_client_body_size":"$body_bytes_sent",'
                '"proxy_protocol_addr":"$proxy_protocol_addr",'
                '"proxy_add_x_forward":"$proxy_add_x_forwarded_for",'
                '"proxy_port":"$proxy_port",'
                '"proxy_host":"$proxy_host",'
                '"upstream_host":"$upstream_addr",'
                '"upstream_status":"$upstream_status",'
                '"upstream_cache_status":"$upstream_cache_status",'
                '"upstream_connect_time":"$upstream_connect_time",'
                '"upstream_response_time":"$upstream_response_time",'
                '"upstream_header_time":"$upstream_header_time",'
                '"upstream_cookie_name":"$upstream_cookie_name",'
                '"upstream_response_length":"$upstream_response_length",'
                '"upstream_bytes_received":"$upstream_bytes_received",'
                '"upstream_bytes_sent":"$upstream_bytes_sent",'
                '"http_host":"$host",'
                '"http_cookie":"$http_cooke",'
                '"http_user_agent":"$http_user_agent",'
                '"http_origin":"$http_origin",'
                '"http_upgrade":"$http_upgrade",'
                '"http_referer":"$http_referer",'
                '"http_x_forward":"$http_x_forwarded_for",'
                '"http_x_forwarded_proto":"$http_x_forwarded_proto",'
                '"https":"$https",'
                '"http_scheme":"$scheme",'
                '"invalid_referer":"$invalid_referer",'
                '"gzip_ratio":"$gzip_ratio",'
                '"realpath_root":"$realpath_root",'
                '"document_root":"$document_root",'
                '"is_args":"$is_args",'
                '"args":"$args",'
                '"connection_requests":"$connection_requests",'
                '"connection_number":"$connection",'
                '"ssl_protocol":"$ssl_protocol",'
                '"ssl_cipher":"$ssl_cipher"}';
 
    access_log  logs/access_json.log log_json; 
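After adding the format, test the configuration and reload nginx so it starts writing the JSON access log (binary path assumed from the nginx install referenced in section 4.2):

/home/wwq/nginx-1.25.5/sbin/nginx -t
/home/wwq/nginx-1.25.5/sbin/nginx -s reload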

4.2. Configure filebeat.yml

vim /home/wwq/elk/filebeat-8.13.4/filebeat.yml

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:


# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
# NOTE: declaring filebeat.inputs a second time replaces the disabled example
# input above; a production config should define the key only once, so delete
# or merge one of the two blocks when adapting this file.
filebeat.inputs:
- type: log
  enabled: true
  paths:
    # nginx JSON access log produced by the log_format in section 4.1
    - /home/wwq/nginx-1.25.5/logs/access_json.log
  # custom field consumed by the Logstash index pattern in section 3.2
  fields:
    logcategory: nginx
  # decode each line as JSON and lift the keys to the top level of the event
  # (the log input type is deprecated in 8.x in favor of filestream, but works)
  json:
    keys_under_root: true
    overwrite_keys: true
    message_key: "message"
    add_error_key: true

output.logstash:
  hosts: ["192.168.1.200:5044"]
#
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
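Filebeat can check both the configuration file and the connection to the Logstash output before a real start:

./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml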

4.3. Start

./filebeat -e -c filebeat.yml   # -e sends logs to stderr instead of a file

4.4. Start on boot

vim /lib/systemd/system/filebeat.service

[Unit]
Description=filebeat
Wants=network-online.target
After=network-online.target

[Service]
User=wwq
ExecStart=/home/wwq/elk/filebeat-8.13.4/filebeat -e -c /home/wwq/elk/filebeat-8.13.4/filebeat.yml
Restart=always

[Install]
WantedBy=multi-user.target

systemctl enable filebeat   # enable start on boot
systemctl start filebeat    # start
systemctl stop filebeat     # stop
systemctl restart filebeat  # restart
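Once nginx, Filebeat, Logstash, and Elasticsearch are all running, the daily index produced by the pipeline in section 3.2 (pattern nginx-<version>-<date>) should appear; a quick check, with the CA path and host assumed from earlier steps:

curl --cacert /home/wwq/elk/elasticsearch-8.13.4/config/certs/http_ca.crt -u elastic "https://192.168.1.200:9200/_cat/indices/nginx-*?v"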

5. Accessing and configuring Kibana in the browser