Our project uses Logstash to parse its logs. Because several patterns need to be matched against the same message, I searched the web for examples, but they all used an old syntax and failed with errors. Only after reading the official documentation did I discover that recent Logstash versions only support the new syntax for multiple grok patterns.
The broken (old) syntax:
if "batch-trans" in [tags] {
  grok {
    match => [
      "message","\[(?<logDate>[\d{4}(\-|\/|.)\d{1,2}\1\d{1,2}\s+d{1,2}:d{1,2}:d{1,2}]*)\]\s+\[(?<mainJobId>(?:[+-]?(?:[0-9]+)))\-(?<subJobId>(?:[+-]?(?:[0-9]+)))\-(?<shardingId>(?:[+-]?(?:[0-9]+)))\]\s+\[(?<traceId>[^\]]*)\]\s+\[(?<jobName>[^\]]*)\]\s+\[(?<threadId>[^\]]*)\]\s+\[(?<zoneId>[^\]]*)\]\s+\[(?<traceType>[^\]]*)\]\s+\[(?<cost>[^\]]*)\]\s+\[(?<splitZoneId>[^\]]*)\]\s+\[(?<url>[^\]]*)\]\s+\[(?<subJobId>[^\]]*)\](?<msg>.*)",
      "message","\[(?<logDate>[\d{4}(\-|\/|.)\d{1,2}\1\d{1,2}\s+d{1,2}:d{1,2}:d{1,2}]*)\]\s+\[(?<mainJobId>(?:[+-]?(?:[0-9]+)))\-(?<subJobId>(?:[+-]?(?:[0-9]+)))\]\s+\[(?<traceId>[^\]]*)\]\s+\[(?<jobName>[^\]]*)\]\s+\[(?<threadId>[^\]]*)\]\s+\[(?<zoneId>[^\]]*)\]\s+\[(?<traceType>[^\]]*)\]\s+\[(?<cost>[^\]]*)\]\s+\[(?<splitZoneId>[^\]]*)\]\s+\[(?<url>[^\]]*)\]\s+\[(?<subJobId>[^\]]*)\](?<msg>.*)"
    ]
  }
}
The correct (new) syntax:
filter {
  if "accounting-log" in [tags] {
    grok {
      match => {
        "message" => [
          "^\[(?<log-time>[\s\S]*)\]\s+%{LOGLEVEL:log-level}\s\[%{DATA:trace-id}\]\s\[%{DATA:thread-name}\s*\]\s\[%{DATA:logger}\s*: %{NUMBER:line-no}\] \[%{DATA:zone-id}\]\sJob-Sharding-Params: jobId=%{NUMBER:job-id}, transCode=*%{NUMBER:trans-code}, shardingId=*%{NUMBER:sharing-id}, shardingTable=*%{DATA:sharding-table}, JobParameters=\{%{GREEDYDATA:job-parameters}\}",
          "^\[(?<log-time>[\s\S]*)\]\s+%{LOGLEVEL:log-level}\s\[%{DATA:trace-id}\]\s\[%{DATA:thread-name}\s*\]\s\[%{DATA:logger}\s*:\s*%{NUMBER:line-no}\]\s\[%{DATA:zone-id}\]\s%{GREEDYDATA:msg}"
        ]
      }
    }
  }
}
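Before deploying, the pipeline can be syntax-checked without processing any events. A minimal sketch, assuming the filter above is saved as pipeline.conf (a hypothetical path):

# Validate the configuration file and exit immediately; no events are read.
bin/logstash -f pipeline.conf --config.test_and_exit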
Note that the order of the patterns matters. Grok tries them in the order listed and stops at the first successful match (break_on_match defaults to true), so if the two patterns above were swapped, the more specific Job-Sharding-Params rule would never fire; the generic catch-all would claim every line first. Always list specific patterns before generic fallbacks, as in the sketch below.
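A stripped-down illustration of the ordering rule, using hypothetical fields rather than the production patterns above:

grok {
  match => {
    "message" => [
      # specific pattern first: only fires on lines that carry a cost field
      "^\[%{DATA:trace-id}\] cost=%{NUMBER:cost} %{GREEDYDATA:msg}",
      # generic fallback last: would otherwise swallow every line
      "^\[%{DATA:trace-id}\] %{GREEDYDATA:msg}"
    ]
  }
}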
To make this easy to copy and adapt, here is the logback configuration that the grok patterns above correspond to:
logback:
<property name="NORMAL_FILE_LOG_PATTERN"
value="[%d{yyyy-MM-dd HH:mm:ss.SSS}] %5p [%0.16X{traceId}] [%-12.12t] [%-40.40logger{39}:%3L] [%0.2X{zoneId}] %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}" />
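For reference, a hypothetical log line produced by this logback pattern (all values fabricated, padding approximate), which the fallback grok expression below should match:

[2023-08-01 12:00:00.123]  INFO [a1b2c3d4e5f6a7b8] [main        ] [c.e.a.job.AccountingJobRunner          :128] [01] job finished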
grok:
"^\[(?<log-time>[\s\S]*)\]\s+%{LOGLEVEL:log-level}\s\[%{DATA:trace-id}\]\s\[%{DATA:thread-name}\s*\]\s\[%{DATA:logger}\s*:\s*%{NUMBER:line-no}\]\s\[%{DATA:zone-id}\]\s%{GREEDYDATA:msg}"
Additionally, to fold a multi-line entry (e.g. an exception stack trace) into a single log event, configure the input as follows:
input {
  file {
    path => "/logs/accounting-service.log"
    type => "system"
    tags => ["accounting-log"]
    codec => multiline {
      # a new event must start with a line of the form "[...]" followed by a space
      pattern => "^(\[.+\] )"
      negate => true
      what => "previous"
      # essential: if no new line arrives within 2 seconds, flush the current event;
      # otherwise the last entry would only be shipped once the next entry shows up
      auto_flush_interval => 2
    }
    start_position => "beginning"
  }
}
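To see the codec at work, consider a hypothetical stack trace in the log file. The two continuation lines do not begin with a bracketed timestamp, so negate => true with what => "previous" appends them to the preceding line, and all three raw lines reach the filter stage as a single event:

[2023-08-01 12:00:01.456] ERROR [a1b2c3d4e5f6a7b8] [main        ] [c.e.a.job.AccountingJobRunner          :131] [01] job failed
java.lang.NullPointerException: shardingTable is null
	at com.example.accounting.job.AccountingJobRunner.run(AccountingJobRunner.java:131)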
Reference, the official documentation (search for "multiple patterns"):
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html