Log Collection Solutions

1. Application Scenarios

Commonly used for log collection and data backflow scenarios.

1.1 Log Types

Non-containerized logs, i.e. business logs written by Python/Go/Java components, can be rotated freely: by time, size, number of history files, total capacity, and so on.

Containerized logs (stdout/stderr): a single line is capped at 16 KB; anything longer is automatically split across multiple records. The json-file driver rotates only by size and file count; time-based rotation requires logrotate.

```
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```
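Because of the 16 KB cap, one logical line can span several json-file records; only the final chunk's `log` field ends with a newline. A minimal Python sketch of rejoining split lines (the sample records are illustrative, not real driver output):

```python
import json

def rejoin_json_file_logs(lines):
    """Rejoin container log lines that the json-file driver split at 16 KB.

    Each input line is one JSON record like
    {"log": "...", "stream": "stdout", "time": "..."}; a chunk that is not
    the end of a logical line has no trailing "\n" in its "log" field.
    """
    buffer = ""
    for raw in lines:
        record = json.loads(raw)
        buffer += record["log"]
        if buffer.endswith("\n"):
            yield buffer.rstrip("\n")
            buffer = ""
    if buffer:  # flush a trailing partial line, if any
        yield buffer

# Illustrative records: one long line split into two chunks, then a short line.
records = [
    '{"log": "part-one ", "stream": "stdout", "time": "2024-01-01T00:00:00Z"}',
    '{"log": "part-two\\n", "stream": "stdout", "time": "2024-01-01T00:00:00Z"}',
    '{"log": "short\\n", "stream": "stdout", "time": "2024-01-01T00:00:01Z"}',
]
print(list(rejoin_json_file_logs(records)))  # ['part-one part-two', 'short']
```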

1.2 Log Rotation with logrotate

Install with apk add --no-cache logrotate and run the crond service alongside it to trigger rotation.

```
/usr/local/kong/logs/*.log {
    size 1k
    missingok
    rotate 7
    copytruncate
    notifempty
    dateext
    dateyesterday
    create root root
}
```
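On an Alpine-based image, the setup can be wired up roughly like this (file names and the foreground-crond arrangement are illustrative; Alpine's logrotate package ships a daily job under /etc/periodic/daily that crond runs):

```dockerfile
FROM alpine:3.19
RUN apk add --no-cache logrotate
# Drop the rotation policy where logrotate's default config includes it
COPY kong-logs.conf /etc/logrotate.d/kong-logs
# Run crond in the foreground (in practice, start it from an entrypoint
# script alongside the main process or under a supervisor)
CMD ["crond", "-f", "-l", "8"]
```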

1.3 Filebeat-Logstash-RabbitMQ

Filebeat

Input (type: log)

- scan_frequency: interval for discovering new files, 10s; if near-real-time pickup is not required, scan less often
- close_inactive: close the file handle after 5m of inactivity

Output (type: logstash)

- bulk_max_size: default 2048; 1024 is recommended here
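These parameters map onto a filebeat.yml roughly as follows (the log path and Logstash host are placeholders; a sketch, not a complete config):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /usr/local/kong/logs/*.log   # placeholder path
    scan_frequency: 10s   # raise this if near-real-time pickup is not needed
    close_inactive: 5m    # close the file handle after 5m without new data

output.logstash:
  hosts: ["logstash-host:5044"]   # placeholder host
  bulk_max_size: 1024             # default 2048; 1024 recommended here
```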

Logstash

```
input {
  beats {
    port => 5044    # standard Beats listener port
  }
}

filter {
  grok {
    match => { "message" => '\[dataflow-logger\]\s+response_data:(?<json_str>\{.*\}) while logging request' }
    remove_field => ["message"]
  }
  # Drop events that did not match the pattern
  if "_grokparsefailure" in [tags] {
    drop {}
  }
  # Sampling: drop 90% of the remaining events
  drop { percentage => 90 }
  mutate {
    remove_field => ["@version","tags","@timestamp","log","input","host","agent","ecs"]
  }
}

output {
  rabbitmq {
    id => "my-plugin"
    exchange => "logstash-topic-exchange"
    exchange_type => "topic"
    key => "logstash-topic-routing-key"
    host => "ip"    # the port defaults to 5672
    user => "guest"
    password => "guest"
    vhost => "/"
    durable => true
    persistent => true
    codec => json
  }
}
```
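The grok pattern's core is an ordinary regex; a quick Python sanity check of the extraction (Oniguruma's `(?<name>...)` becomes `(?P<name>...)` in Python's `re`; the sample log line is made up for illustration):

```python
import re

# Python equivalent of the grok capture used in the Logstash filter
pattern = re.compile(
    r"\[dataflow-logger\]\s+response_data:(?P<json_str>\{.*\}) while logging request"
)

line = ('[dataflow-logger] response_data:{"status": 200, "path": "/v1/items"}'
        ' while logging request')
match = pattern.search(line)
print(match.group("json_str"))   # {"status": 200, "path": "/v1/items"}
```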

1.4 FluentBit-Kafka

fluent-bit.conf

```
[INPUT]
    Name              tail
    Path              /var/lib/docker/containers/*/*.log
    Parser            docker
    Tag               docker.*
    Docker_Mode       On
    Docker_Mode_Flush 5
    Mem_Buf_Limit     50MB
    Skip_Long_Lines   Off
    DB                /fluent-bit/tail.db
    DB.Sync           Normal

# https://docs.fluentbit.io/manual/4.0/data-pipeline/filters/grep
[FILTER]
    Name    grep
    Match   docker.*
    Regex   log dataflow-logger

# https://docs.fluentbit.io/manual/4.0/data-pipeline/filters/parser
[FILTER]
    Name         parser
    Match        docker.*
    Key_Name     log
    Parser       extract_logger
    Reserve_Data false
    Preserve_Key false

[FILTER]
    Name     throttle
    Match    docker.*
    Rate     6000    # allow at most 6000 records per minute
    Window   60      # in seconds
    Interval 1s

[OUTPUT]
    Name    kafka
    Match   docker.*
    Brokers 172.29.232.69:9092
    Topics  dataflow-logs
```
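The throttle filter's effect can be approximated with a sliding window: keep timestamps of accepted records and drop anything beyond Rate within the last Window seconds. A rough Python sketch of that idea (not the plugin's actual pane-based algorithm; the demo uses a small rate for readability):

```python
from collections import deque

class SlidingWindowThrottle:
    """Approximate a throttle filter: accept at most `rate` records per
    `window` seconds; records beyond that are dropped."""

    def __init__(self, rate, window):
        self.rate = rate
        self.window = window
        self.accepted = deque()   # timestamps of accepted records

    def allow(self, now):
        # Evict timestamps that fell out of the window
        while self.accepted and now - self.accepted[0] >= self.window:
            self.accepted.popleft()
        if len(self.accepted) < self.rate:
            self.accepted.append(now)
            return True
        return False

# Rate 3 per 60s window for the demo (the config uses 6000 per 60s)
t = SlidingWindowThrottle(rate=3, window=60)
decisions = [t.allow(ts) for ts in (0, 1, 2, 3, 61)]
print(decisions)   # [True, True, True, False, True]
```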

parsers.conf

```
[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L%z
    Time_Keep   On

[PARSER]
    Name   extract_logger
    Format regex
    Regex  \[dataflow-logger\]\s+response_data:(?<json_str>\{.*\})\s*$
```

1.5 Comparison Summary

Filebeat-Logstash-RabbitMQ natively supports complex sampling, rate limiting, and filtering, but its throughput is lower. FluentBit-Kafka delivers higher performance from log collection through message delivery and has first-class support for Docker container logs; its drawback is weaker support for complex business-level processing.
