Deploying ELK with Docker for Log Tracing

I. Prerequisites

Environment requirements:
OS: Linux (CentOS/Ubuntu/Debian all work; this guide uses Ubuntu)
Docker: 20.10+ (make sure the Docker service is running)
Memory: at least 2 GB (allocating 1 GB to the Elasticsearch JVM is recommended)

Check Docker status:
# Make sure Docker is running
sudo systemctl status docker
# If it is not, start and enable it
sudo systemctl start docker && sudo systemctl enable docker
Pull the ELK images

docker pull elasticsearch:8.19.2
docker pull kibana:8.19.2
docker pull logstash:8.19.2

Installing Elasticsearch 8.19.2

Step 1: Create a dedicated network (for ELK stack communication)

Create a dedicated Docker network so Elasticsearch, Kibana, and Logstash can reach each other by container name:

# Create a bridge network named "elk" (Kibana/Logstash will join it later)
docker network create elk
# Verify it was created
docker network ls | grep elk

Step 2: Plan the directory layout (avoid scattered files)

Create the host directories up front for mounting Elasticsearch config, data, and certificates (persistent storage: data survives container removal):

# Create the root directory (adjust the path as needed; this guide uses /opt/elk)
sudo mkdir -p /opt/elk/elasticsearch/{config,certs,data}
# Recursively change ownership to 1000:1000 (matches the elasticsearch user UID/GID inside the container)
sudo chown -R 1000:1000 /opt/elk/elasticsearch/
# Set directory permissions (owner full access, others read/traverse)
sudo chmod -R 755 /opt/elk/elasticsearch/
# Verify the layout
tree /opt/elk/elasticsearch/

Expected layout:
/opt/elk/elasticsearch/
├── certs       # SSL certificates
├── config      # Configuration files (elasticsearch.yml, keystore, etc.)
└── data        # Data (persisted)

Step 3: Generate SSL certificates (required in 8.x)

Elasticsearch 8.x enforces TLS on inter-node transport, so generate a CA root certificate plus an instance certificate up front (this avoids the usual "missing certificate / wrong password" failures):
3.1 Generate a self-signed CA (root certificate)

# Run a throwaway container to generate the CA certificate.
# Choose your own password; this guide uses ElkCert@2025 -- remember it,
# the instance-certificate step needs it.
docker run -it --rm \
  -v /opt/elk/elasticsearch/certs:/usr/share/elasticsearch/certs \
  elasticsearch:8.19.2 \
  /usr/share/elasticsearch/bin/elasticsearch-certutil ca \
  --out /usr/share/elasticsearch/certs/elastic-ca.p12 \
  --pass "ElkCert@2025"

3.2 Generate the instance certificate (for the Elasticsearch node)
Sign the instance certificate with the CA root certificate above so it carries both the node identity and the CA chain:

# --ca / --ca-pass reference the CA and its password; --pass sets the instance
# certificate password (keeping it identical to the CA password simplifies management)
docker run -it --rm \
  -v /opt/elk/elasticsearch/certs:/usr/share/elasticsearch/certs \
  elasticsearch:8.19.2 \
  /usr/share/elasticsearch/bin/elasticsearch-certutil cert \
  --ca /usr/share/elasticsearch/certs/elastic-ca.p12 \
  --ca-pass "ElkCert@2025" \
  --out /usr/share/elasticsearch/certs/elasticsearch.p12 \
  --pass "ElkCert@2025"

3.3 Fix certificate permissions
Make sure the elasticsearch user inside the container can read the certificates (avoids "permission denied" errors):

sudo chown -R 1000:1000 /opt/elk/elasticsearch/certs/
sudo chmod 644 /opt/elk/elasticsearch/certs/*  # certificate files read-only
# Verify the certificates were generated
ls -l /opt/elk/elasticsearch/certs/

Expected output:
-rw-r--r-- 1 1000 1000 2688 Oct  6 17:43 elastic-ca.p12
-rw-r--r-- 1 1000 1000 3612 Oct  6 17:44 elasticsearch.p12

Step 4: Configure elasticsearch.yml (core configuration)

Create and edit the main Elasticsearch configuration file; using absolute paths avoids path-resolution problems:
4.1 Create the config file

sudo nano /opt/elk/elasticsearch/config/elasticsearch.yml
4.2 Write the full configuration (copy the content below)
# ======================== Basic cluster settings ========================
cluster.name: es-app-cluster  # Cluster name (custom)
node.name: node-01            # Node name (custom for a single-node setup)
network.host: 0.0.0.0         # Listen on all interfaces (allow external access)
discovery.type: single-node   # Single-node mode (no cluster discovery needed)
bootstrap.memory_lock: true   # Lock JVM memory (avoid swapping, better performance)

# ======================== Security settings ========================
xpack.security.enabled: true  # Enable username/password authentication (required)

# ======================== SSL transport encryption (required in 8.x) ========================
xpack.security.transport.ssl.enabled: true  # Encrypt inter-node transport
xpack.security.transport.ssl.verification_mode: certificate  # Verify certificate validity only (sufficient for a single node)
xpack.security.transport.ssl.client_authentication: required  # Require client authentication
# Absolute certificate paths (inside the container, avoiding relative-path issues)
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elasticsearch.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elasticsearch.p12
4.3 Fix config file permissions
sudo chown 1000:1000 /opt/elk/elasticsearch/config/elasticsearch.yml
sudo chmod 644 /opt/elk/elasticsearch/config/elasticsearch.yml

Step 5: Configure the keystore (store sensitive passwords)

Elasticsearch 8.x no longer accepts these secure settings as plaintext in the config file; store the certificate passwords in the keystore instead (keeping them out of elasticsearch.yml):

5.1 Create the keystore and add the passwords
docker run -it --rm \
  -v /opt/elk/elasticsearch/config:/usr/share/elasticsearch/config \
  elasticsearch:8.19.2 \
  /bin/bash -c "
    # Auto-confirm keystore creation (echo 'y' answers the prompt)
    echo 'y' | /usr/share/elasticsearch/bin/elasticsearch-keystore create;
    # Add the certificate passwords (same as the instance certificate password: ElkCert@2025)
    echo 'ElkCert@2025' | /usr/share/elasticsearch/bin/elasticsearch-keystore add -x xpack.security.transport.ssl.keystore.secure_password;
    echo 'ElkCert@2025' | /usr/share/elasticsearch/bin/elasticsearch-keystore add -x xpack.security.transport.ssl.truststore.secure_password;
    # Fix keystore permissions (read/write for the elasticsearch user only)
    chown 1000:1000 /usr/share/elasticsearch/config/elasticsearch.keystore;
    chmod 600 /usr/share/elasticsearch/config/elasticsearch.keystore;
  "

5.2 Verify the keystore

ls -l /opt/elk/elasticsearch/config/elasticsearch.keystore
Expected output:
-rw------- 1 1000 1000 2560 Oct  6 18:00 elasticsearch.keystore

Step 6: Copy the default core config files (avoid missing files)

Elasticsearch needs jvm.options (JVM settings) and log4j2.properties (logging settings) at startup; copy the defaults from the official image:
6.1 Copy jvm.options

# Create a stopped temporary container (no need to start Elasticsearch itself)
docker create --name es-tmp elasticsearch:8.19.2
# Copy the default file to the host
docker cp es-tmp:/usr/share/elasticsearch/config/jvm.options /opt/elk/elasticsearch/config/
# Remove the temporary container
docker rm es-tmp

6.2 Copy log4j2.properties

docker create --name es-tmp elasticsearch:8.19.2
docker cp es-tmp:/usr/share/elasticsearch/config/log4j2.properties /opt/elk/elasticsearch/config/
docker rm es-tmp

6.3 Fix default file permissions

sudo chown -R 1000:1000 /opt/elk/elasticsearch/config/
sudo chmod 644 /opt/elk/elasticsearch/config/{jvm.options,log4j2.properties}
# Verify all config files are present
ls -l /opt/elk/elasticsearch/config/
Expected config files:
elasticsearch.keystore  elasticsearch.yml  jvm.options  log4j2.properties

Step 7: Start the Elasticsearch container

Run the full start command, mounting every required directory (this avoids "wrong mount path" errors):

# Stop and remove any old container
docker stop elasticsearch 2>/dev/null; docker rm elasticsearch 2>/dev/null

# Start Elasticsearch 8.19.2
# Notes:
#   --network elk            join the elk network created earlier
#   9200 / 9300              HTTP port (external access) / inter-node transport port
#   --ulimit memlock=-1:-1   required for bootstrap.memory_lock=true to take effect
#   ELASTIC_PASSWORD         superuser password (the username is fixed as elastic) -- remember it
#   ES_JAVA_OPTS             JVM heap (roughly 50% of available memory is a common guideline)
docker run -d \
  --name elasticsearch \
  --network elk \
  --publish 9200:9200 \
  --publish 9300:9300 \
  --ulimit memlock=-1:-1 \
  --env cluster.name=es-app-cluster \
  --env node.name=node-01 \
  --env discovery.type=single-node \
  --env network.host=0.0.0.0 \
  --env xpack.security.enabled=true \
  --env ELASTIC_USERNAME=elastic \
  --env ELASTIC_PASSWORD=165656656 \
  --env ES_JAVA_OPTS="-Xms512m -Xmx1g" \
  --env bootstrap.memory_lock=true \
  -v /opt/elk/elasticsearch/config:/usr/share/elasticsearch/config \
  -v /opt/elk/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /opt/elk/elasticsearch/certs:/usr/share/elasticsearch/config/certs \
  elasticsearch:8.19.2

Step 8: Verify the Elasticsearch service

8.1 Check the container status

docker ps | grep elasticsearch

If STATUS shows Up, the container started successfully; if it shows Exited, inspect the logs (docker logs elasticsearch) to troubleshoot.

8.2 Test HTTP access

# Query Elasticsearch with the superuser password (HTTP SSL is not enabled, so plain http)
curl -u elastic:165656656 http://localhost:9200
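The same check can be scripted. Below is a minimal Python sketch (the helper names are illustrative; the host, user, and password are the ones used in this guide) that builds the same Basic-auth header `curl -u` sends and fetches the cluster info:

```python
import base64
import json
import urllib.request

def basic_auth_header(user: str, password: str) -> str:
    """Build the HTTP Basic Authorization header value, as curl -u does."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def es_info(host: str = "http://localhost:9200",
            user: str = "elastic", password: str = "165656656") -> dict:
    """Fetch the Elasticsearch root endpoint (cluster name, version, ...)."""
    req = urllib.request.Request(
        host, headers={"Authorization": basic_auth_header(user, password)})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Example (requires the container from Step 7 to be running):
#   print(es_info()["version"]["number"])
```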

Installing Kibana 8.19.2

Prerequisites
Docker 20.10+ installed
Elasticsearch 8.19.2 deployed and running
The elk network created
Memory: at least 1.5 GB (2 GB recommended)

Step 1: Create the Kibana directory layout

# Create the Kibana config and data directories
sudo mkdir -p /opt/elk/kibana/{config,data}

# Recursively change ownership to 1000:1000
sudo chown -R 1000:1000 /opt/elk/kibana/

# Set directory permissions
sudo chmod -R 755 /opt/elk/kibana/

# Verify the layout
tree /opt/elk/kibana/

Step 2: Create a service account token in Elasticsearch

# Enter the Elasticsearch container
docker exec -it elasticsearch bash

# Create the Kibana service account token
bin/elasticsearch-service-tokens create elastic/kibana kibana-token

# Example output (save this token):
# AAEAAWVsYXN0aWMva2liYW5hL2tpYmFuYS10b2tlbjpqMU1ZUUM1blFMU3NGOERhQ2xNaXpR

# Exit the container
exit

Step 3: Create the Kibana configuration file

# Create the Kibana configuration file
sudo tee /opt/elk/kibana/config/kibana.yml > /dev/null << 'EOF'
server.name: kibana
server.host: "0.0.0.0"
server.port: 5601

elasticsearch.hosts: ["http://elasticsearch:9200"]
elasticsearch.serviceAccountToken: "AAEAAWVsYXN0aWMva2liYW5hL2tpYmFuYS10b2tlbjpqMU1ZUUM1blFMU3NGOERhQ2xNaXpR"

# Security is always enabled in Kibana 8.x (the old xpack.security.enabled key is no longer accepted)
xpack.encryptedSavedObjects.encryptionKey: "afafsdfdsgegergergweqwrerbhgjntyhtyhtyewergergergegerg"

monitoring.ui.container.elasticsearch.enabled: true

# Memory tuning: pass --max-old-space-size via the NODE_OPTIONS environment
# variable or config/node.options, not via kibana.yml
EOF

# Fix config file permissions
sudo chown 1000:1000 /opt/elk/kibana/config/kibana.yml
sudo chmod 644 /opt/elk/kibana/config/kibana.yml

Step 4: Start the Kibana container

# Stop and remove any old container
docker stop kibana 2>/dev/null; docker rm kibana 2>/dev/null

# Start Kibana 8.19.2
docker run -d \
  --name kibana \
  --network elk \
  --publish 5601:5601 \
  --memory 1.5g \
  --memory-swap 2g \
  --cpus 1.5 \
  -v /opt/elk/kibana/config:/usr/share/kibana/config \
  -v /opt/elk/kibana/data:/usr/share/kibana/data \
  kibana:8.19.2

Step 5: Verify the Kibana service

5.1 Check the container status

docker ps | grep kibana

5.2 Watch the startup logs

docker logs -f kibana

Wait for the startup-complete messages (this usually takes 1-3 minutes):

[info][listening] Server running at http://0.0.0.0:5601
[info][server][Kibana][http] http server running at http://0.0.0.0:5601

5.3 Check service health

# Test once Kibana is fully up
curl http://localhost:5601/api/status
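If you want to gate a deployment script on Kibana's health, the JSON from /api/status can be parsed. A Python sketch follows; the `status.overall.level` field path is an assumption based on the 8.x status payload, so verify it against your own instance:

```python
import json
import urllib.request

def overall_level(status_json: str) -> str:
    """Extract the overall status level from a Kibana /api/status payload."""
    doc = json.loads(status_json)
    return doc["status"]["overall"]["level"]  # assumed field path; check your payload

def kibana_status(base_url: str = "http://localhost:5601") -> str:
    """Query /api/status and return the overall level (e.g. 'available')."""
    with urllib.request.urlopen(f"{base_url}/api/status", timeout=10) as resp:
        return overall_level(resp.read().decode())

# Abridged example payload:
sample = '{"status": {"overall": {"level": "available"}}}'
```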

Installing Logstash 8.19.2

1. Preparation

Create the directory layout

# Create all required directories
sudo mkdir -p /opt/elk/logstash/{config,pipeline,data,logs,certs}

# Set correct permissions (Logstash runs as user 1000:1000)
sudo chown -R 1000:1000 /opt/elk/logstash/
sudo chmod -R 755 /opt/elk/logstash/

# Verify the layout
tree /opt/elk/logstash/

2. Create the configuration files

Create the main Logstash configuration file

sudo tee /opt/elk/logstash/config/logstash.yml > /dev/null <<EOF
api.http.host: "0.0.0.0"
api.http.port: 9600
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "165656656"
xpack.monitoring.enabled: true
path.data: /usr/share/logstash/data
path.logs: /usr/share/logstash/logs
log.level: info
config.reload.automatic: true
config.reload.interval: 3s
EOF
Create the pipeline configuration file
sudo tee /opt/elk/logstash/pipeline/logstash.conf > /dev/null <<EOF
input {
  tcp {
    port => 5044
    codec => json_lines
  }
}

filter {
  # Token authentication check
  if [token] != "AAEAAWVsYXN0aWMva2liYW5hL2tpYmFuYS10b2tlbjpqMU1ZUUM1blFMU3NGOERhQ2xNaXpR" {
    drop { }
  }

  # Derive the index name from the service name
  if [service] {
    mutate {
      add_field => {
        "index_name" => "logs-%{service}-%{+YYYY}"
      }
    }
  } else {
    mutate {
      add_field => {
        "index_name" => "logs-unknown-%{+YYYY}"
      }
    }
  }

  # Parse the timestamp
  date {
    match => [ "timestamp", "ISO8601" ]
    target => "@timestamp"
  }

  # Common fields
  mutate {
    #add_field => {
    #  "environment" => "production"
    #  "processed_by" => "logstash"
    #  "log_type" => "application"
    #}

    # Strip sensitive fields
    remove_field => [ "token", "host" ]
  }

  # Data validation
  if ![message] or [message] == "" {
    drop { }
  }
}

output {
  # Write to Elasticsearch using the dynamic index name
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "%{index_name}"
    user => "elastic"
    password => "165656656"
    ssl_certificate_verification => false
    action => "create"  # key point: use the create action rather than index
  }

  # Console output (for debugging)
  stdout {
    codec => rubydebug
  }
}
EOF
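The pipeline above expects newline-delimited JSON on TCP port 5044, carrying `token`, `service`, `message`, and `timestamp` fields. A minimal Python sender sketch (the helper names are illustrative; the token and port are the ones configured above):

```python
import json
import socket
from datetime import datetime, timezone

# Must match the token checked in the pipeline's filter block
TOKEN = "AAEAAWVsYXN0aWMva2liYW5hL2tpYmFuYS10b2tlbjpqMU1ZUUM1blFMU3NGOERhQ2xNaXpR"

def build_event(service: str, message: str) -> bytes:
    """Frame one event the way the json_lines codec expects: JSON + newline."""
    event = {
        "token": TOKEN,        # events with a non-matching token are dropped
        "service": service,    # drives the logs-<service>-<year> index name
        "message": message,    # events with an empty message are dropped
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return (json.dumps(event) + "\n").encode()

def send_event(payload: bytes, host: str = "localhost", port: int = 5044) -> None:
    """Ship one framed event to the Logstash tcp input."""
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(payload)

# Example (requires the Logstash container to be running):
#   send_event(build_event("demo-service", "hello from python"))
```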

3. Run the Logstash container

docker run -d \
  --name logstash \
  --network elk \
  --user 1000:1000 \
  --publish 5044:5044 \
  --publish 9600:9600 \
  --env LS_JAVA_OPTS="-Xms512m -Xmx512m" \
  --env XPACK_MONITORING_ENABLED="true" \
  --memory 1g \
  --memory-swap 1.5g \
  --cpus 1 \
  -v /opt/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml \
  -v /opt/elk/logstash/pipeline/:/usr/share/logstash/pipeline/ \
  -v /opt/elk/logstash/data/:/usr/share/logstash/data/ \
  -v /opt/elk/logstash/logs/:/usr/share/logstash/logs/ \
  logstash:8.19.2

4. Troubleshooting

Common fixes

# If the container fails to start, check the logs
docker logs -f logstash

# Check whether the ports are listening
netstat -tlnp | grep 5044
netstat -tlnp | grep 9600

# Inspect inside the container
docker exec -it logstash bash
ls -la /usr/share/logstash/data/
ls -la /usr/share/logstash/logs/

# Restart the service
docker restart logstash

# Full redeploy
docker stop logstash
docker rm logstash
# then re-run the docker run command above
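The netstat checks above run on the Docker host; the same ports can also be probed from any machine with a small TCP connect test (a generic sketch, not Logstash-specific):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the Logstash TCP input and monitoring API ports
#   print(port_open("localhost", 5044), port_open("localhost", 9600))
```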

Integrating with Spring Boot to trace and view service logs:

Note: JDK 17 is used here.

1. Add dependencies

<dependencies>
    <dependency>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
       <groupId>org.projectlombok</groupId>
       <artifactId>lombok</artifactId>
       <version>1.18.24</version> <!-- adjust as needed; a recent stable release is recommended -->
       <optional>true</optional>
    </dependency>
    <dependency>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-starter-test</artifactId>
       <scope>test</scope>
    </dependency>
    <!-- Logstash Logback编码器 -->
    <dependency>
       <groupId>net.logstash.logback</groupId>
       <artifactId>logstash-logback-encoder</artifactId>
       <version>7.4</version>
    </dependency>
</dependencies>

2. Add the service configuration files

application.yml

spring:
  application:
    name: springboot-elk-demo
  profiles:
    active: dev  # use dev in development, prod in production

app:
  logging:
    token: "AAEAAWVsYXN0aWMva2liYW5hL2tpYmFuYS10b2tlbjpqMU1ZUUM1blFMU3NGOERhQ2xNaXpR"

logging:
  config: classpath:logback-spring.xml
  level:
    org.elk.demo01: DEBUG

logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <springProperty scope="context" name="appName" source="spring.application.name" defaultValue="user-service"/>
    <springProperty scope="context" name="appToken" source="app.logging.token" defaultValue="AAEAAWVsYXN0aWMva2liYW5hL2tpYmFuYS10b2tlbjpqMU1ZUUM1blFMU3NGOERhQ2xNaXpR"/>

    <!-- Console output -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Logstash TCP output -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>IP:5044</destination>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <logLevel/>
                <loggerName/>
                <pattern>
                    <pattern>
                        {
                        "service": "${appName}",
                        "token": "${appToken}",
                        "traceId": "%mdc{traceId}",
                        "spanId": "%mdc{spanId}",
                        "environment": "production"
                        }
                    </pattern>
                </pattern>
                <threadName/>
                <message/>
                <mdc/>
                <stackTrace/>
                <context/>
            </providers>
        </encoder>
        <keepAliveDuration>5 minutes</keepAliveDuration>
        <connectionStrategy>
            <roundRobin>
                <connectionTTL>5 minutes</connectionTTL>
            </roundRobin>
        </connectionStrategy>
    </appender>

    <!-- Async appender for better throughput -->
    <appender name="ASYNC_LOGSTASH" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="LOGSTASH" />
        <queueSize>1024</queueSize>
        <discardingThreshold>0</discardingThreshold>
        <includeCallerData>true</includeCallerData>
    </appender>

    <!-- Root logger: write to both console and Logstash -->
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="ASYNC_LOGSTASH"/>
    </root>

    <!-- Service-specific package log level -->
    <logger name="org.elk.demo01" level="DEBUG" additivity="false">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="ASYNC_LOGSTASH"/>
    </logger>
</configuration>

3. Add a test class

LogController
package org.elk.demo01.controller;

import lombok.extern.slf4j.Slf4j;
import org.slf4j.MDC;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.UUID;

@RestController
@Slf4j
public class LogController {

    @GetMapping("/test")
    public String testLog() {
        log.info("INFO-level test log");
        log.debug("DEBUG-level test log");
        log.error("ERROR-level test log");
        logWithCustomFields();
        return "log test complete";
    }

    // Add custom fields via MDC
    public void logWithCustomFields() {
        MDC.put("userId", "12345");
        MDC.put("requestId", UUID.randomUUID().toString());
        log.info("business operation log");

        MDC.clear();
    }

    @GetMapping("/error-test")
    public String errorTest() {
        try {
            int result = 10 / 0;
        } catch (Exception e) {
            log.error("Arithmetic error occurred", e);
        }
        return "error log test";
    }
}

4. View the generated logs

http://localhost:5601/

Spring Boot 3.x global distributed tracing:

Option 1: Micrometer Tracing

Environment: JDK 17, Spring Boot 3.3.11, Spring Cloud 2023.0.5

1. Dependency configuration

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.3.11</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>org.example</groupId>
    <artifactId>elk-demo-02</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>elk-demo-02</name>
    <description>elk-demo-02</description>
    <url/>
    <licenses>
        <license/>
    </licenses>
    <developers>
        <developer/>
    </developers>
    <scm>
        <connection/>
        <developerConnection/>
        <tag/>
        <url/>
    </scm>
    <properties>
        <java.version>17</java.version>
    </properties>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>2023.0.5</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
            <dependency>
                <groupId>com.alibaba.cloud</groupId>
                <artifactId>spring-cloud-alibaba-dependencies</artifactId>
                <version>2023.0.1.0</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <!-- Unit test dependency -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-validation</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-bootstrap</artifactId>
        </dependency>
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
        </dependency>
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
        </dependency>

        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>7.4</version>
        </dependency>

        <!-- Micrometer Tracing core (no Zipkin) -->
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-tracing</artifactId>
        </dependency>

        <!-- Brave Tracer -->
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-tracing-bridge-brave</artifactId>
        </dependency>

        <!-- Actuator for Observation -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>

        <!-- Micrometer integration for Feign -->
        <dependency>
            <groupId>io.github.openfeign</groupId>
            <artifactId>feign-micrometer</artifactId>
        </dependency>


        <!-- OpenFeign -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-openfeign</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-loadbalancer</artifactId>
        </dependency>

    </dependencies>

    <!-- Added under build/plugins in pom.xml -->
    <build>
        <plugins>
            <!-- Lombok compile plugin: processes Lombok annotations -->
            <plugin>
                <groupId>org.projectlombok</groupId>
                <artifactId>lombok-maven-plugin</artifactId>
                <version>1.18.20.0</version> <!-- keep in step with the lombok dependency version -->
                <executions>
                    <execution>
                        <phase>compile</phase> <!-- run during the compile phase -->
                        <goals>
                            <goal>delombok</goal> <!-- core goal: expand Lombok annotations into plain source -->
                        </goals>
                        <configuration>
                            <!-- Output directory for the generated sources (defaults beside the originals; customizable) -->
                            <sourceDirectory>src/main/java</sourceDirectory>
                            <outputDirectory>target/generated-sources/delombok</outputDirectory>
                            <addOutputDirectory>true</addOutputDirectory> <!-- add the generated directory to the compile path -->
                        </configuration>
                    </execution>
                </executions>
            </plugin>

            <!-- Other plugins (e.g. maven-compiler-plugin) -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>17</source> <!-- match the project JDK version -->
                    <target>17</target>
                    <!-- Make sure the Lombok-generated sources are included in compilation -->
                    <generatedSourcesDirectory>target/generated-sources/delombok</generatedSourcesDirectory>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

2. Application configuration

application.yml

spring:
  application:
    name: springboot-elk-demo02
  profiles:
    active: dev  # use dev in development, prod in production
# ELK logging is enabled below; to run without ELK, point logging.config at a plain logback-spring.xml
app:
  logging:
    token: "AAEAAWVsYXN0aWMva2liYW5hL2tpYmFuYS10b2tlbjpqMU1ZUUM1blFMU3NGOERhQ2xNaXpR"
# Logging configuration file
logging:
  pattern:
    # Include the trace ID and span ID in log lines
    level: "%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]"
  level:
    io.micrometer.tracing: DEBUG
    org.springframework.cloud: INFO
  config: classpath:logback-elk-spring.xml

management:
  tracing:
    sampling:
      probability: 1.0  # sampling rate; something like 0.1 is more typical in production
  endpoints:
    web:
      exposure:
        include: health,metrics,info
  # no Zipkin properties are configured

# Custom tracing properties
tracing:
  enabled: true
  log-level: INFO

server:
  port: 8082

logback-elk-spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <!-- Pull in Spring Boot's default logging base configuration -->
    <include resource="org/springframework/boot/logging/logback/base.xml" />

    <springProperty scope="context" name="appName" source="spring.application.name" defaultValue="user-service"/>
    <springProperty scope="context" name="appToken" source="app.logging.token" defaultValue="AAEAAWVsYXN0aWMva2liYW5hL2tpYmFuYS10b2tlbjpqMU1ZUUM1blFMU3NGOERhQ2xNaXpR"/>

    <!-- Logstash TCP output -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>ip:5044</destination>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <logLevel/>
                <loggerName/>
                <pattern>
                    <pattern>
                        {
                        "service": "${appName}",
                        "token": "${appToken}",
                        "traceId": "%mdc{traceId}",
                        "spanId": "%mdc{spanId}",
                        "environment": "production"
                        }
                    </pattern>
                </pattern>
                <threadName/>
                <message/>
                <mdc/>
                <stackTrace/>
                <context/>
            </providers>
        </encoder>
        <keepAliveDuration>5 minutes</keepAliveDuration>
        <connectionStrategy>
            <roundRobin>
                <connectionTTL>5 minutes</connectionTTL>
            </roundRobin>
        </connectionStrategy>
    </appender>

    <!-- Async appender for better throughput -->
    <appender name="ASYNC_LOGSTASH" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="LOGSTASH" />
        <queueSize>1024</queueSize>
        <discardingThreshold>0</discardingThreshold>
        <includeCallerData>true</includeCallerData>
    </appender>

    <!-- Values pulled from Spring config (defined in application.yml/properties) -->
    <springProperty scope="context" name="LOG_PATH" source="logging.path" defaultValue="./logs"/>
    <springProperty scope="context" name="APP_NAME" source="spring.application.name" defaultValue="myapp"/>

    <!-- ************************** -->
    <!-- # 1. Log output patterns -->
    <!-- ************************** -->
    <!-- Colored console pattern (base.xml usually defines one; overridden here) -->
    <property name="CONSOLE_PATTERN" value="%clr(%d{HH:mm:ss.SSS}){faint} %clr(%-5level) %clr(${APP_NAME}){magenta} %clr([%thread]){blue} %clr(%logger{36}){cyan} %msg%n"/>

    <!-- File log pattern -->
    <property name="FILE_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"/>

    <!-- ************************** -->
    <!-- # 2. Console output -->
    <!-- ************************** -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${CONSOLE_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <!-- ************************** -->
    <!-- # 3. File output (rolled by level and time) -->
    <!-- ************************** -->
    <!-- 3.1 INFO-level log -->
    <appender name="FILE-INFO" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}/${APP_NAME}-info.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!-- Archived log file path and pattern: split by day and index -->
            <fileNamePattern>${LOG_PATH}/${APP_NAME}-info.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <!-- Maximum size of a single log file -->
            <maxFileSize>100MB</maxFileSize>
            <!-- Days of history to keep -->
            <maxHistory>30</maxHistory>
            <!-- Total size cap across all log files -->
            <totalSizeCap>1GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>${FILE_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <!-- Filter: accept INFO-level entries only -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- 3.2 ERROR-level log -->
    <appender name="FILE-ERROR" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}/${APP_NAME}-error.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_PATH}/${APP_NAME}-error.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <maxFileSize>100MB</maxFileSize>
            <maxHistory>30</maxHistory>
            <totalSizeCap>1GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>${FILE_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <!-- Filter: accept ERROR-level entries only -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- ************************** -->
    <!-- # 4. Per-package/class log levels -->
    <!-- ************************** -->
    <!-- Set levels for specific packages, e.g. SQL logging at DEBUG -->
    <logger name="org.apache.ibatis" level="DEBUG" additivity="false">
        <appender-ref ref="CONSOLE"/>
    </logger>
    <logger name="java.sql.Connection" level="DEBUG" additivity="false"/>
    <logger name="java.sql.Statement" level="DEBUG" additivity="false"/>
    <logger name="java.sql.PreparedStatement" level="DEBUG" additivity="false"/>

    <!-- ************************** -->
    <!-- # 5. Root logger -->
    <!-- ************************** -->
    <root level="INFO">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="FILE-INFO" />
        <appender-ref ref="FILE-ERROR" />
    </root>

    <!-- Service-specific package log level -->
    <logger name="org.example.demo02" level="DEBUG" additivity="false">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="ASYNC_LOGSTASH"/>
    </logger>
</configuration>

3. Tracing configuration class

import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.openfeign.EnableFeignClients;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
@EnableConfigurationProperties(TracingProperties.class)
@EnableFeignClients
public class TracingConfig {

    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }


    // Custom Brave configuration
    @Bean
    public brave.Tracing braveTracing(TracingProperties properties) {
        return brave.Tracing.newBuilder()
                .localServiceName(properties.getServiceName())
                .sampler(brave.sampler.Sampler.create(properties.getSamplingProbability()))
                .build();
    }
}
import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;

// Tracing configuration properties
@ConfigurationProperties(prefix = "tracing")
@Data
public class TracingProperties {
    private boolean enabled = true;
    private String serviceName = "default-service";
    private float samplingProbability = 1.0f;
    private String logLevel = "INFO";
}

4. Business code example

The configuration from steps 1-3 must be applied in both services (demo02 and demo03).

demo02 code:

AsynService
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

@Service
@Slf4j
public class AsynService {
    public void test() {
        log.info("async execution started");
    }
}
TestServiceClient
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;

import java.util.List;

@FeignClient(name = "springboot-elk-demo03", path = "/demo03")
public interface TestServiceClient {
    @GetMapping("/test1")
    List<Long> demoTest1(@RequestParam Long userId);

    @PostMapping("/test2")
    List<Long> demoTest2();
}
Controller
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

@RefreshScope
@RestController
@Slf4j
public class Controller {

    @Autowired
    private TestServiceClient testServiceClient;

    @Autowired
    private AsynService asynService;

    @GetMapping("/test2")
    public List<Long> index2() {
        log.info("calling demoTest1 on demo03");
        List<Long> list1 = testServiceClient.demoTest1(1L);
        asynService.test();
        log.info("calling demoTest2 on demo03");
        List<Long> list2 = testServiceClient.demoTest2();
        list1.addAll(list2);
        return list1;
    }
}
demo03 code:

Controller
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.*;

import java.util.Arrays;
import java.util.List;

@RefreshScope
@RestController
@Slf4j
public class Controller {

    @GetMapping("demo03/test1")
    List<Long> demoTest1(@RequestParam Long userId){
        log.info("userId:{}",userId);
        return Arrays.asList(userId);
    }

    @PostMapping("demo03/test2")
    List<Long> demoTest2() {
        try {
            log.info("test2");
            int a = 1 / 0;
        } catch (Exception e) {
            log.error("test2 error", e);
        }
        return Arrays.asList(11L, 12L, 13L);
    }
}

5. Call the test endpoint

http://localhost:8082/test2

Output: [1,11,12,13]

View the logs in Elasticsearch:

The results show logs from both demo02 and demo03: the trace ID is generated in demo02 and propagated to demo03. Once a request fans out across microservices, a single trace ID lets you follow that request's logs through every service it touches.
