【ELK】A Complete Guide to Building a Distributed Logging Platform

>ELK is an open-source platform for log collection, storage, search, and visual analysis, built from Elasticsearch, Logstash, and Kibana.

Contents

1. Environment Preparation

1.1 Create Directories

1.2 Create Configuration Files

2. System Integration

2.1 Filebeat

2.2 Project Integration

2.3 Viewing Logs


1. Environment Preparation

1.1 Create Directories

ELK architecture diagram

Create the directory structure:

bash
mkdir -p /opt/elk/{elasticsearch/{data,logs,plugins,config},logstash/{config,pipeline},kibana/config,filebeat/{config,data}}

Set permissions (run inside /opt/elk):

cd /opt/elk

chmod -R 777 elasticsearch
chmod -R 777 logstash
chmod -R 777 kibana
chmod -R 777 filebeat
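
If you would rather not open everything up with 777, an alternative is to hand the data directories to uid/gid 1000, which the official 7.x Elasticsearch, Logstash, and Kibana images run as (Filebeat runs as root in the Compose file below, so it is less picky):

bash
# Alternative to 777: give ownership to the container user (uid/gid 1000)
chown -R 1000:1000 elasticsearch logstash kibana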

1.2 Create Configuration Files

Logstash configuration:

vim logstash/config/logstash.yml

yaml
http.host: "0.0.0.0"
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["http://elasticsearch:9200"]

xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "mH0awV4RrkN2"

# Log level
log.level: info

vim logstash/pipeline/logstash.conf

conf
input {
  beats {
    port => 5045
    ssl => false
  }
  
  tcp {
    port => 5044
    codec => json
  }
}

filter {
  # RuoYi application logs (highest priority)
  if [app_name] {
    mutate {
      add_field => { "[@metadata][target_index]" => "ruoyi-logs-%{+YYYY.MM.dd}" }
    }
  }
  # System logs
  else if [fields][log_type] == "system" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      target => "@timestamp"
    }
    mutate {
      add_field => { "[@metadata][target_index]" => "system-log-%{+YYYY.MM.dd}" }
    }
  }
  # Docker container logs
  else if [container] {
    # Try to parse the message as JSON
    if [message] =~ /^\{.*\}$/ {
      json {
        source => "message"
        skip_on_invalid_json => true
      }
    }
    mutate {
      add_field => { "[@metadata][target_index]" => "docker-log-%{+YYYY.MM.dd}" }
    }
  }
  # Anything else (uncategorized)
  else {
    mutate {
      add_field => { "[@metadata][target_index]" => "logstash-%{+YYYY.MM.dd}" }
    }
  }
  
  # Remove fields we don't need
  mutate {
    remove_field => ["agent", "ecs", "input"]
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "mH0awV4RrkN2"
    index => "%{[@metadata][target_index]}"
  }
  
  # Debug output (disable in production)
  # stdout {
  #   codec => rubydebug
  # }
}
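
Once the whole stack is up (see the Compose file below), you can sanity-check the TCP input and the app_name routing with a single hand-written event; this assumes netcat is installed on the host:

bash
# One JSON line to the TCP input; it should land in a ruoyi-logs-YYYY.MM.dd index.
# Depending on your netcat variant you may need -q1 or -N so it exits after sending.
echo '{"app_name":"ruoyi-system","message":"hello from tcp"}' | nc 127.0.0.1 5044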

Kibana configuration:

vim kibana/config/kibana.yml

yaml
server.name: kibana
server.host: "0.0.0.0"
server.port: 5601

elasticsearch.hosts: ["http://elasticsearch:9200"]

# Chinese UI
i18n.locale: "zh-CN"

# Monitoring
monitoring.ui.container.elasticsearch.enabled: true

Filebeat configuration:

I also add inputs for system logs and Docker container logs here as an extension point for readers: adapt them as needed so that ELK ingests not just your application's logs but also Nginx access logs, MySQL slow-query logs, and so on.

vim filebeat/config/filebeat.yml

yaml
filebeat.inputs:
  # Collect system logs
  - type: log
    enabled: true
    paths:
      - /var/log/messages
      - /var/log/syslog
    tags: ["system"]
    fields:
      log_type: system

  # Collect Docker container logs
  - type: container
    enabled: true
    paths:
      - '/var/lib/docker/containers/*/*.log'
    processors:
      - add_docker_metadata:
          host: "unix:///var/run/docker.sock"

# Output to Logstash
output.logstash:
  hosts: ["logstash:5045"]

# Or output directly to ES instead (pick one of the two)
#output.elasticsearch:
#  hosts: ["elasticsearch:9200"]
#  username: "elastic"
#  password: "mH0awV4RrkN2"
#  index: "filebeat-%{+yyyy.MM.dd}"

# Kibana settings
setup.kibana:
  host: "kibana:5601"
  username: "elastic"
  password: "mH0awV4RrkN2"

# Log level
logging.level: info
logging.to_files: true
logging.files:
  path: /usr/share/filebeat/logs
  name: filebeat
  keepfiles: 7
  permissions: 0644

# Enable monitoring
monitoring.enabled: true
monitoring.elasticsearch:
  hosts: ["elasticsearch:9200"]
  username: "elastic"
  password: "mH0awV4RrkN2"

Docker Compose configuration:

vim docker-compose.yml

yaml
version: '3.8'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.1
    container_name: elasticsearch
    environment:
      - node.name=es-node-1
      - cluster.name=elk-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512M -Xmx1g"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      # === Security / authentication ===
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false  # Disable SSL on the HTTP layer
      - xpack.security.transport.ssl.enabled=false  # Disable SSL for internal transport
      # Initial password for the elastic user (important!)
      - ELASTIC_PASSWORD=mH0awV4RrkN2
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs
      - ./elasticsearch/plugins:/usr/share/elasticsearch/plugins
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk-network
    restart: unless-stopped
    healthcheck:
      # The health check must authenticate
      test: ["CMD-SHELL", "curl -u elastic:mH0awV4RrkN2 -f http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

  logstash:
    image: docker.elastic.co/logstash/logstash:7.12.1
    container_name: logstash
    environment:
      - "LS_JAVA_OPTS=-Xms512m -Xmx512m"
      # Use the elastic superuser (can be switched to logstash_system later)
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=mH0awV4RrkN2
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5044:5044"  # TCP输入
      - "5045:5045"  # Beats输入
      - "9600:9600"  # Logstash API
    networks:
      - elk-network
    depends_on:
      elasticsearch:
        condition: service_healthy
    restart: unless-stopped

  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.1
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - I18N_LOCALE=zh-CN
      # Use the elastic superuser (can be switched to kibana_system later)
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=mH0awV4RrkN2
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk-network
    depends_on:
      elasticsearch:
        condition: service_healthy
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:5601/api/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

  filebeat:
    image: docker.elastic.co/beats/filebeat:7.12.1
    container_name: filebeat
    user: root
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - ./filebeat/data:/usr/share/filebeat/data
      # Mount host log directories (adjust to your needs)
      - /var/log:/var/log:ro
      # Needed if you collect Docker container logs
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    command: filebeat -e -strict.perms=false
    networks:
      - elk-network
    depends_on:
      - elasticsearch
      - logstash
    restart: unless-stopped

networks:
  elk-network:
    driver: bridge

volumes:
  elasticsearch-data:
    driver: local
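
Before starting anything, it is worth letting Compose validate the file:

bash
docker-compose config -q    # prints errors and exits non-zero if the YAML is invalid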

Start all the services:

docker-compose up -d

Open Kibana in a browser:

http://127.0.0.1:5601
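
If the page does not come up, a few quick checks from the host (using the credentials configured above):

bash
docker-compose ps                                                             # all four containers should be Up
curl -u elastic:mH0awV4RrkN2 'http://127.0.0.1:9200/_cluster/health?pretty'  # ES cluster status
curl -s 'http://127.0.0.1:9600/?pretty'                                       # Logstash node API
curl -u elastic:mH0awV4RrkN2 -I 'http://127.0.0.1:5601/api/status'            # Kibana status endpoint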

2. System Integration

2.1 Filebeat

The standard flow is for Filebeat to ship logs to Logstash, which then writes them into ES, but I did not want to deal with the extra hop, so I skip Filebeat here and send application logs to Logstash directly.

The normal deployment looks like this: install Filebeat on every application server, configure the log paths to collect, point the output at the centralized ELK server, and start the Filebeat service, as sketched below.
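
A minimal sketch of that standalone deployment, assuming Filebeat 7.12.1 on 64-bit Linux and that the central Logstash is reachable at elk-server:5045 (hostname is illustrative):

bash
# Download and unpack Filebeat (same version as the stack above)
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.12.1-linux-x86_64.tar.gz
tar -xzf filebeat-7.12.1-linux-x86_64.tar.gz && cd filebeat-7.12.1-linux-x86_64

# Edit filebeat.yml: add your log paths under filebeat.inputs and point the output at Logstash:
#   output.logstash:
#     hosts: ["elk-server:5045"]

# Run in the foreground first to verify, then set it up as a systemd service
./filebeat -e -c filebeat.yml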

2.2 Project Integration

Add the dependency:

xml
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.2</version>
</dependency>

Local log file configuration: create logback-elk.xml under the resources directory.

xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <!-- Log file location -->
    <property name="log.path" value="logs/ruoyi-system" />
    <!-- Log output pattern -->
    <property name="log.pattern" value="%d{HH:mm:ss.SSS} [%thread] %-5level %logger{20} - [%method,%line] - %msg%n" />

    <!-- Application name -->
    <property name="APP_NAME" value="ruoyi-system" />
    <!-- Logstash server address -->
    <property name="LOGSTASH_HOST" value="127.0.0.1:5044" />

    <!-- Console output -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${log.pattern}</pattern>
        </encoder>
    </appender>

    <!-- System log output -->
    <appender name="file_info" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/info.log</file>
        <!-- Rolling policy: create log files based on time -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Log file name pattern -->
            <fileNamePattern>${log.path}/info.%d{yyyy-MM-dd}.log</fileNamePattern>
            <!-- Keep at most 60 days of history -->
            <maxHistory>60</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${log.pattern}</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <!-- Level to filter on -->
            <level>INFO</level>
            <!-- On match: ACCEPT (log the event) -->
            <onMatch>ACCEPT</onMatch>
            <!-- On mismatch: DENY (drop the event) -->
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <appender name="file_error" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/error.log</file>
        <!-- Rolling policy: create log files based on time -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Log file name pattern -->
            <fileNamePattern>${log.path}/error.%d{yyyy-MM-dd}.log</fileNamePattern>
            <!-- Keep at most 60 days of history -->
            <maxHistory>60</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${log.pattern}</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <!-- Level to filter on -->
            <level>ERROR</level>
            <!-- On match: ACCEPT (log the event) -->
            <onMatch>ACCEPT</onMatch>
            <!-- On mismatch: DENY (drop the event) -->
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- Logstash output - send JSON logs over TCP -->
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- Logstash server address and port -->
        <destination>${LOGSTASH_HOST}</destination>
        <!-- Connection timeout (ms) -->
        <connectionTimeout>5000</connectionTimeout>
        <!-- Reconnection delay (ms) -->
        <reconnectionDelay>1000</reconnectionDelay>
        <!-- Write buffer size (bytes) -->
        <writeBufferSize>16384</writeBufferSize>
        <!-- Write timeout (ms) -->
        <writeTimeout>5000</writeTimeout>
        
        <!-- JSON encoder -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <!-- Custom fields -->
            <customFields>{"app_name":"${APP_NAME}"}</customFields>
            <!-- Include MDC data -->
            <includeMdc>true</includeMdc>
            <!-- Include the context name -->
            <includeContext>true</includeContext>
            <!-- Include caller data (note: enabling this hurts performance) -->
            <includeCallerData>false</includeCallerData>
        </encoder>
        
        <!-- Keep-alive -->
        <keepAliveDuration>5 minutes</keepAliveDuration>
    </appender>

    <!-- Async Logstash appender -->
    <appender name="async_logstash" class="ch.qos.logback.classic.AsyncAppender">
        <!-- Do not drop logs; by default, once the queue is 80% full, TRACE/DEBUG/INFO events are discarded -->
        <discardingThreshold>0</discardingThreshold>
        <!-- Queue size -->
        <queueSize>512</queueSize>
        <!-- Attach the downstream appender (at most one) -->
        <appender-ref ref="logstash"/>
    </appender>

    <!-- Log level for the system's own modules -->
    <logger name="com.ruoyi" level="info" />
    <!-- Log level for Spring -->
    <logger name="org.springframework" level="warn" />

    <root level="info">
        <appender-ref ref="console" />
        <appender-ref ref="file_info" />
        <appender-ref ref="file_error" />
        <!-- Add the async Logstash output -->
        <appender-ref ref="async_logstash" />
    </root>
</configuration>

bootstrap.yml configuration
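
A minimal sketch of the relevant setting, assuming the application reads the standard Spring Boot logging.config property and that logback-elk.xml is on the classpath (the module path below is illustrative):

vim ruoyi-system/src/main/resources/bootstrap.yml

yaml
# Point Spring Boot's logging system at the Logback configuration created above
logging:
  config: classpath:logback-elk.xml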

2.3 Viewing Logs

Open the Kibana address, log in with the username and password we configured, and go to Index Patterns.

Click "Create index pattern".

Enter ruoyi-logs-* and click Next.

Select the time field (typically @timestamp):

With that, our index pattern is created.
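
If ruoyi-logs-* does not match anything, confirm on the ES side that documents have actually arrived before blaming Kibana:

bash
curl -u elastic:mH0awV4RrkN2 'http://127.0.0.1:9200/_cat/indices/ruoyi-logs-*?v'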

Next, open the Discover page from the left-hand menu.

Postscript: this guide does not bring in a Kafka cluster, because I did not want to take it that far; if you know how to add an MQ or Kafka layer, please share it in the comments.
