Since the previous post did not go through the microservice code in much detail, this one uses an open-source microservice project to show how to wire up distributed tracing.
The project is open source; it has been forked to GitHub and Gitee and then pulled to the local machine.
Backend code:
https://gitee.com/jelex/mall-swarm.git (dev branch)
Admin frontend code:
https://gitee.com/jelex/mall-admin-web.git (dev branch)
Note: because some of the configuration is not suitable for publishing, anyone interested can search online for the mall-swarm project...
Before starting the backend services, prepare the following (a quick reachability sketch follows the list):
- Run Docker locally:
  - start the my-nacos container
  - start the ES01 container (not actually used here; the ES on the server is used instead)
  - start the kibana-tencent container (Kibana in Docker on the local Mac, using the ES on the server as its storage)
- Run MySQL locally
- Run Redis
- Run the ES service on the server
- Run Logstash locally (see the earlier post on installing Logstash on a local Mac)
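Before launching anything, it can save time to confirm that each dependency is actually reachable. The sketch below is a hypothetical pre-flight check, not part of mall-swarm; the hosts and ports are assumptions based on common defaults and on the ports used later in this post, so adjust them to your own environment.

```java
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical pre-flight check: confirms each dependency's port is reachable. */
public class DependencyCheck {

    public static void main(String[] args) {
        // Hosts/ports are assumptions (common defaults plus the ports used in this post).
        Map<String, String> deps = new LinkedHashMap<>();
        deps.put("Nacos", "localhost:8848");
        deps.put("MySQL", "localhost:3306");
        deps.put("Redis", "localhost:6379");
        deps.put("Elasticsearch", "your-es-host:80");        // the server-side ES used in the Logstash output
        deps.put("Logstash (debug input)", "localhost:4560"); // first TCP input in the Logstash config

        deps.forEach((name, address) -> {
            String[] parts = address.split(":");
            try (Socket socket = new Socket()) {
                // 2-second connect timeout is enough to tell "up" from "down"
                socket.connect(new InetSocketAddress(parts[0], Integer.parseInt(parts[1])), 2000);
                System.out.println(name + " is reachable at " + address);
            } catch (Exception e) {
                System.out.println(name + " is NOT reachable at " + address + " (" + e.getMessage() + ")");
            }
        });
    }
}
```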
The Logstash configuration is as follows:
```conf
input {
tcp {
mode => "server"
host => "0.0.0.0"
port => 4560
codec => json_lines
type => "debug"
}
tcp {
mode => "server"
host => "0.0.0.0"
port => 4561
codec => json_lines
type => "error"
}
tcp {
mode => "server"
host => "0.0.0.0"
port => 4562
codec => json_lines
type => "business"
}
tcp {
mode => "server"
host => "0.0.0.0"
port => 4563
codec => json_lines
type => "record"
}
}
filter{
if [type] == "record" {
mutate {
remove_field => "port"
remove_field => "host"
remove_field => "@version"
}
json {
source => "message"
remove_field => ["message"]
}
}
}
output {
elasticsearch {
hosts => "101.43.xxx.xx:80"
index => "mall-%{type}-%{+YYYY.MM.dd}"
user => "logstash_writer"
password => "logstash_writer"
}
}
```
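Each `tcp` input above expects newline-delimited JSON (the `json_lines` codec), which is exactly what the `LogstashTcpSocketAppender` in the logback configuration (see the appendix) sends, and the `type` field decides which `mall-%{type}-...` index an event ends up in. If you want to verify the plumbing before wiring up the services, one hand-crafted event is enough; the sketch below is only a test helper and assumes Logstash is listening on localhost:4560.

```java
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

/** Sends one hand-written json_lines event to the Logstash "debug" TCP input. */
public class LogstashSmokeTest {

    public static void main(String[] args) throws Exception {
        // One JSON object per line is all the json_lines codec requires.
        String event = "{\"project\":\"mall-swarm\",\"level\":\"DEBUG\","
                + "\"message\":\"hello from LogstashSmokeTest\"}";

        try (Socket socket = new Socket("localhost", 4560);
             PrintWriter out = new PrintWriter(
                     new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8), true)) {
            out.println(event); // the trailing newline terminates the event
        }
        // If everything is wired up, the event lands in the mall-debug-yyyy.MM.dd index.
    }
}
```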
Run Logstash:
```bash
cd Documents/work/logstash-7.17.0/bin
jelex@jelexxudeMacBook-Pro bin % ./logstash -f ../config/logstash-mall-swarm.conf &
[1] 29577
```
Start the backend services:
Start the frontend project (the admin console):
First set the Node version:
```bash
jelex@jelexxudeMacBook-Pro ~ % nvm current
v12.14.0
jelex@jelexxudeMacBook-Pro ~ % node -v
v12.14.0
npm install
...
```
Run it:
```bash
jelex@jelexxudeMacBook-Pro mall-admin-web % nvm use 12
Now using node v12.14.0 (npm v6.13.4)
jelex@jelexxudeMacBook-Pro mall-admin-web % npm run dev
```
Access the site and click through a few features to test it.
Check the backend console logs:
Check the response headers (the TraceId header is added by the filter shown at the end of this post):
Check Kibana:
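Besides checking Kibana by hand, you can confirm that the logs for a given request reached Elasticsearch by searching for the TraceId value returned in the response header. The sketch below uses the low-level Elasticsearch REST client; the host, the traceId value, and the `traceId.keyword` field name are placeholders and assumptions to be adapted to your own environment and mapping.

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

/** Looks up log documents for one traceId in the indices written by Logstash. */
public class TraceIdLookup {

    public static void main(String[] args) throws Exception {
        // Placeholder host: the same Elasticsearch instance Logstash writes to.
        // If your cluster requires authentication (like the logstash_writer user above),
        // configure credentials on the builder before calling build().
        try (RestClient client = RestClient.builder(
                new HttpHost("your-es-host", 80, "http")).build()) {

            // Search every mall-* index for documents carrying this traceId.
            Request request = new Request("GET", "/mall-*/_search");
            request.setJsonEntity(
                    "{\"query\":{\"term\":{\"traceId.keyword\":\"<traceId-from-response-header>\"}}}");

            Response response = client.performRequest(request);
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```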
-------------Appendix-----logback-spring.xml--------------
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
<!--Include the default logging configuration-->
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
<!--Use the default console appender-->
<include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
<!--Application name-->
<springProperty scope="context" name="APP_NAME" source="spring.application.name" defaultValue="mall-swarm"/>
<!--Log file path-->
<property name="LOG_FILE_PATH" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/logs}"/>
<property name="FILE_LOG_PATTERN" value="${FILE_LOG_PATTERN:-%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } --- [%t] %-40.40logger{39} : %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}"/>
<!--Logstash host-->
<springProperty name="LOG_STASH_HOST" scope="context" source="logstash.host" defaultValue="localhost"/>
<!--DEBUG logs written to a file-->
<appender name="FILE_DEBUG"
class="ch.qos.logback.core.rolling.RollingFileAppender">
<!--Output logs at DEBUG level and above-->
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>DEBUG</level>
</filter>
<encoder>
<!--Use the default file log pattern-->
<pattern>${FILE_LOG_PATTERN}</pattern>
<charset>UTF-8</charset>
</encoder>
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<!--File naming pattern-->
<fileNamePattern>${LOG_FILE_PATH}/debug/${APP_NAME}-%d{yyyy-MM-dd}-%i.log</fileNamePattern>
<!--Roll over to a new file once this size is exceeded, default 10MB-->
<maxFileSize>${LOG_FILE_MAX_SIZE:-10MB}</maxFileSize>
<!--Number of days to keep log files, default 30-->
<maxHistory>${LOG_FILE_MAX_HISTORY:-30}</maxHistory>
</rollingPolicy>
</appender>
<!--ERROR logs written to a file-->
<appender name="FILE_ERROR"
class="ch.qos.logback.core.rolling.RollingFileAppender">
<!--Only output ERROR-level logs-->
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>ERROR</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
<encoder>
<!--Use the default file log pattern-->
<pattern>${FILE_LOG_PATTERN}</pattern>
<charset>UTF-8</charset>
</encoder>
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<!--File naming pattern-->
<fileNamePattern>${LOG_FILE_PATH}/error/${APP_NAME}-%d{yyyy-MM-dd}-%i.log</fileNamePattern>
<!--Roll over to a new file once this size is exceeded, default 10MB-->
<maxFileSize>${LOG_FILE_MAX_SIZE:-10MB}</maxFileSize>
<!--Number of days to keep log files, default 30-->
<maxHistory>${LOG_FILE_MAX_HISTORY:-30}</maxHistory>
</rollingPolicy>
</appender>
<!--DEBUG logs sent to Logstash-->
<appender name="LOG_STASH_DEBUG" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>DEBUG</level>
</filter>
<destination>${LOG_STASH_HOST}:4560</destination>
<encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<timestamp>
<timeZone>Asia/Shanghai</timeZone>
</timestamp>
<!--Custom log output format-->
<pattern>
<pattern>
{
"project": "mall-swarm",
"traceId":"%X{traceId}",
"spanId":"%X{spanId}",
"level": "%level",
"line": "%line",
"service": "${APP_NAME:-}",
"pid": "${PID:-}",
"thread": "%thread",
"class": "%logger",
"message": "%message",
"stack_trace": "%exception{20}"
}
</pattern>
</pattern>
</providers>
</encoder>
<!--When there are multiple Logstash instances, use a round-robin connection strategy-->
<connectionStrategy>
<roundRobin>
<connectionTTL>5 minutes</connectionTTL>
</roundRobin>
</connectionStrategy>
</appender>
<!--ERROR logs sent to Logstash-->
<appender name="LOG_STASH_ERROR" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>ERROR</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
<destination>${LOG_STASH_HOST}:4561</destination>
<encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<timestamp>
<timeZone>Asia/Shanghai</timeZone>
</timestamp>
<!--Custom log output format-->
<pattern>
<pattern>
{
"project": "mall-swarm",
"traceId":"%X{traceId}",
"spanId":"%X{spanId}",
"level": "%level",
"line": "%line",
"service": "${APP_NAME:-}",
"pid": "${PID:-}",
"thread": "%thread",
"class": "%logger",
"message": "%message",
"stack_trace": "%exception{20}"
}
</pattern>
</pattern>
</providers>
</encoder>
<!--When there are multiple Logstash instances, use a round-robin connection strategy-->
<connectionStrategy>
<roundRobin>
<connectionTTL>5 minutes</connectionTTL>
</roundRobin>
</connectionStrategy>
</appender>
<!--Business logs sent to Logstash-->
<appender name="LOG_STASH_BUSINESS" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<destination>${LOG_STASH_HOST}:4562</destination>
<encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<timestamp>
<timeZone>Asia/Shanghai</timeZone>
</timestamp>
<!--Custom log output format-->
<pattern>
<pattern>
{
"project": "mall-swarm",
"traceId":"%X{traceId}",
"spanId":"%X{spanId}",
"level": "%level",
"line": "%line",
"service": "${APP_NAME:-}",
"pid": "${PID:-}",
"thread": "%thread",
"class": "%logger",
"message": "%message",
"stack_trace": "%exception{20}"
}
</pattern>
</pattern>
</providers>
</encoder>
<!--When there are multiple Logstash instances, use a round-robin connection strategy-->
<connectionStrategy>
<roundRobin>
<connectionTTL>5 minutes</connectionTTL>
</roundRobin>
</connectionStrategy>
</appender>
<!--API access records sent to Logstash-->
<appender name="LOG_STASH_RECORD" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<destination>${LOG_STASH_HOST}:4563</destination>
<encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<timestamp>
<timeZone>Asia/Shanghai</timeZone>
</timestamp>
<!--Custom log output format-->
<pattern>
<pattern>
{
"project": "mall-swarm",
"traceId":"%X{traceId}",
"spanId":"%X{spanId}",
"level": "%level",
"line": "%line",
"service": "${APP_NAME:-}",
"class": "%logger",
"message": "%message"
}
</pattern>
</pattern>
</providers>
</encoder>
<!--When there are multiple Logstash instances, use a round-robin connection strategy-->
<connectionStrategy>
<roundRobin>
<connectionTTL>5 minutes</connectionTTL>
</roundRobin>
</connectionStrategy>
</appender>
<!--Control the log output of frameworks-->
<logger name="org.slf4j" level="INFO"/>
<logger name="springfox" level="INFO"/>
<logger name="io.swagger" level="INFO"/>
<logger name="org.springframework" level="INFO"/>
<logger name="org.hibernate.validator" level="INFO"/>
<logger name="com.alibaba.nacos.client.naming" level="INFO"/>
<root level="DEBUG">
<appender-ref ref="CONSOLE"/>
<appender-ref ref="FILE_DEBUG"/>
<appender-ref ref="FILE_ERROR"/>
<appender-ref ref="LOG_STASH_DEBUG"/>
<appender-ref ref="LOG_STASH_ERROR"/>
</root>
<logger name="com.macro.mall.common.log.WebLogAspect" level="DEBUG">
<appender-ref ref="LOG_STASH_RECORD"/>
</logger>
<logger name="com.macro.mall" level="DEBUG">
<appender-ref ref="LOG_STASH_BUSINESS"/>
</logger>
</configuration>
```
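Two routing details in this file are worth spelling out: everything under com.macro.mall additionally goes to the business pipeline on port 4562, and logs written by com.macro.mall.common.log.WebLogAspect go to the record pipeline on port 4563, where the Logstash `json` filter parses the message field. That only works if the aspect logs each access record as a single JSON string. The sketch below illustrates that idea only; it is not the actual WebLogAspect from mall-swarm, and the recorded fields and pointcut are assumptions.

```java
package com.macro.mall.common.log; // package + class name must match the <logger> above

import com.fasterxml.jackson.databind.ObjectMapper;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative access-record aspect: logs one JSON string per controller call. */
@Aspect
@Component
public class WebLogAspect {

    private static final Logger LOGGER = LoggerFactory.getLogger(WebLogAspect.class);
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Around("execution(public * com.macro.mall..*Controller.*(..))")
    public Object recordAccess(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.currentTimeMillis();
        Object result = joinPoint.proceed();

        // Assumed fields; the json filter in Logstash flattens them into the event.
        Map<String, Object> record = new LinkedHashMap<>();
        record.put("method", joinPoint.getSignature().toShortString());
        record.put("spendTime", System.currentTimeMillis() - start);

        // Because the message itself is JSON, Logstash's json filter can parse it.
        LOGGER.info(MAPPER.writeValueAsString(record));
        return result;
    }
}
```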
Example from the official Sleuth reference documentation:
TraceFilterConfig
```java
package org.jeecg.config;
import brave.Span;
import brave.Tracer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import javax.servlet.Filter;
import javax.servlet.http.HttpServletResponse;
/**
* @author: sleuth
* @Date: 2023/9/11 11:04
* @desc:
**/
@Configuration
public class TraceFilterConfig {
private static final String TRACE_ID = "TraceId";
/**
* a servlet Filter for non-reactive applications
* @param tracer
* @return
*/
@Bean
Filter traceIdInResponseFilter(Tracer tracer) {
return (request, response, chain) -> {
Span currentSpan = tracer.currentSpan();
if (currentSpan != null) {
HttpServletResponse resp = (HttpServletResponse) response;
resp.addHeader(TRACE_ID, currentSpan.context().traceIdString());
}
chain.doFilter(request, response);
};
}
}
```
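The Sleuth reference also shows a reactive variant of this idea; for a WebFlux service the same header is added from a WebFilter instead of a servlet Filter. The sketch below is adapted rather than copied from the reference (class and bean names are mine), so treat the details as assumptions.

```java
package org.jeecg.config;

import brave.Span;
import brave.Tracer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.server.WebFilter;

/** Reactive counterpart: adds the trace id to the response headers of a WebFlux app. */
@Configuration
public class ReactiveTraceFilterConfig {

    private static final String TRACE_ID = "TraceId";

    @Bean
    WebFilter reactiveTraceIdInResponseFilter(Tracer tracer) {
        return (exchange, chain) -> {
            Span currentSpan = tracer.currentSpan();
            if (currentSpan != null) {
                // Same idea as the servlet filter above, but on the reactive response.
                exchange.getResponse().getHeaders().add(TRACE_ID, currentSpan.context().traceIdString());
            }
            return chain.filter(exchange);
        };
    }
}
```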