1. Application Scenarios
Commonly used for log collection and data backflow scenarios.
1.1 Log Types
Non-containerized logs, i.e. business logs from Python/Go/Java components, can be rotated freely: by time, size, file count, total capacity, and so on.
Containerized logs (stdout/stderr): a single line is capped at 16k; a line over that limit is automatically split across entries. Rotation supports only size and file count; time-based rotation requires logrotate.
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
1.2 Log Rotation with logrotate
Install with apk add --no-cache logrotate and pair it with a running crond service.
/usr/local/kong/logs/*.log {
    size 1k
    missingok
    rotate 7
    copytruncate
    notifempty
    dateext
    dateyesterday
    create root root
}
1.3 Filebeat-Logstash-RabbitMQ
Filebeat
Input (type=log)
    scan_frequency: 10s   — interval for scanning for new files; can be made less frequent if real-time pickup is not required
    close_inactive: 5m    — idle time after which the file handle is closed
Output (type=logstash)
    bulk_max_size: defaults to 2048; 1024 is recommended
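The notes above map onto a filebeat.yml along these lines — a minimal sketch, in which the log path and the Logstash host are illustrative placeholders, not values from the original notes:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /usr/local/kong/logs/*.log   # illustrative path
    scan_frequency: 10s    # interval for discovering new files; raise if real-time pickup is not required
    close_inactive: 5m     # close the file handle after 5 minutes without new data

output.logstash:
  hosts: ["logstash-host:5044"]      # illustrative host
  bulk_max_size: 1024                # default is 2048; the notes suggest 1024
```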
Logstash
input {
  beats {
  }
}

filter {
  grok {
    match => { "message" => '\[dataflow-logger\]\s+response_data:(?<json_str>\{.*\}) while logging request' }
    remove_field => ["message"]
  }
  if "_grokparsefailure" in [tags] {
    drop { }
  }
  drop { percentage => 90 }
  mutate {
    remove_field => ["@version","tags","@timestamp","log","input","host","agent","ecs"]
  }
}

output {
  rabbitmq {
    id => "my-plugin"
    exchange => "logstash-topic-exchange"
    exchange_type => "topic"
    key => "logstash-topic-routing-key"
    # the default port is 5672
    host => "ip"
    user => "guest"
    password => "guest"
    vhost => "/"
    durable => true
    persistent => true
    codec => json
  }
}
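The grok pattern above can be sanity-checked outside Logstash with an equivalent Python regex. Note that grok (Oniguruma) writes named groups as (?&lt;name&gt;...) while Python uses (?P&lt;name&gt;...); the sample message below is illustrative, not taken from real traffic:

```python
import re

# Same pattern as the grok match, with Python named-group syntax
pattern = re.compile(
    r'\[dataflow-logger\]\s+response_data:(?P<json_str>\{.*\}) while logging request'
)

# Illustrative sample of a matching log line
sample = '[dataflow-logger] response_data:{"code": 0, "data": {"id": 1}} while logging request'
m = pattern.search(sample)
print(m.group('json_str'))  # the captured JSON payload
```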
1.4 FluentBit-Kafka
fluent-bit.conf
[INPUT]
    Name              tail
    Path              /var/lib/docker/containers/*/*.log
    Parser            docker
    Tag               docker.*
    Docker_Mode       On
    Docker_Mode_Flush 5
    Mem_Buf_Limit     50MB
    Skip_Long_Lines   Off
    DB                /fluent-bit/tail.db
    DB.Sync           Normal

# https://docs.fluentbit.io/manual/4.0/data-pipeline/filters/grep
[FILTER]
    Name    grep
    Match   docker.*
    Regex   log dataflow-logger

# https://docs.fluentbit.io/manual/4.0/data-pipeline/filters/parser
[FILTER]
    Name          parser
    Match         docker.*
    Key_Name      log
    Parser        extract_logger
    Reserve_Data  false
    Preserve_Key  false

[FILTER]
    Name      throttle
    Match     docker.*
    # allow at most 6000 records per minute
    Rate      6000
    # window size, in seconds
    Window    60
    Interval  1s

[OUTPUT]
    Name     kafka
    Match    docker.*
    Brokers  172.29.232.69:9092
    Topics   dataflow-logs
parsers.conf
[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L%z
    Time_Keep    On

[PARSER]
    Name    extract_logger
    Format  regex
    Regex   \[dataflow-logger\]\s+response_data:(?<json_str>\{.*\})\s*$
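The two parsers chain together: a Docker json-file line is first decoded as JSON (the docker parser), then the extract_logger regex pulls the payload out of the log field. A minimal Python sketch of that two-stage flow, where the raw line is an illustrative sample in Docker's json-file format and the regex is rewritten with Python's (?P&lt;name&gt;...) group syntax:

```python
import json
import re

# extract_logger pattern, translated to Python named-group syntax
extract_logger = re.compile(r'\[dataflow-logger\]\s+response_data:(?P<json_str>\{.*\})\s*$')

# One line as written by Docker's json-file driver (illustrative sample)
raw = ('{"log":"[dataflow-logger] response_data:{\\"code\\":0}\\n",'
       '"stream":"stdout","time":"2024-01-01T00:00:00.000000000Z"}')

record = json.loads(raw)                   # stage 1: the docker parser (Format json)
m = extract_logger.search(record["log"])   # stage 2: the extract_logger regex
print(m.group("json_str"))
```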
1.5 Comparison Summary
Filebeat-Logstash-RabbitMQ natively supports complex sampling, rate limiting, and filtering, but its performance is weaker. FluentBit-Kafka offers higher performance from log collection through message delivery and has strong native support for Docker container logs; its drawback is limited capacity for complex business processing.