Log Collection Solutions

1. Application Scenarios

Commonly used for log collection and data backflow scenarios.

1.1 Log Types

Non-containerized logs are the business logs of Python/Go/Java components. Rotation can be configured freely inside the application itself: by time, size, backup count, total capacity, and so on.
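For a Python component, for example, size-based rotation can be handled in-process with the standard library; a minimal sketch (the file path, size limit, and backup count here are illustrative, and `TimedRotatingFileHandler` covers the time-based case the same way):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# Illustrative path; a real service would log under its own log directory.
log_path = os.path.join(tempfile.gettempdir(), "app.log")

handler = RotatingFileHandler(
    log_path,
    maxBytes=10 * 1024 * 1024,  # rotate once the file reaches 10 MB
    backupCount=3,              # keep at most 3 rotated files, capping total capacity
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("service started")
```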

Containerized logs (stdout/stderr): with Docker's json-file driver, a single log line is capped at 16 KB; anything longer is automatically split across multiple entries. The driver rotates only by size and file count; time-based rotation requires logrotate.

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```
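The 16 KB split can be undone on the consumer side: in the json-file format, each fragment is its own JSON entry, and only the final fragment of a long line has a "log" field ending in a newline. A minimal Python sketch of the re-join (the function name is ours):

```python
import json

def merge_docker_log_lines(lines):
    """Merge json-file entries that Docker split at the 16 KB line limit.

    Each input line is one JSON object; fragments of a split line all
    lack a trailing newline in "log" except the last one.
    """
    buf = []
    for raw in lines:
        entry = json.loads(raw)
        buf.append(entry["log"])
        if entry["log"].endswith("\n"):
            yield "".join(buf)
            buf = []
    if buf:  # trailing partial line that never got its newline
        yield "".join(buf)
```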

1.2 Log Rotation with logrotate

Install with `apk add --no-cache logrotate` and run the crond service alongside it to trigger rotation on schedule.

```conf
/usr/local/kong/logs/*.log {
    size 1k
    missingok
    rotate 7
    copytruncate
    notifempty
    dateext
    dateyesterday
    create root root
}
```

1.3 Filebeat-Logstash-RabbitMQ

Filebeat

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /usr/local/kong/logs/*.log  # example path
    # interval for scanning for new files; default 10s,
    # can be relaxed when strict real-time delivery is not required
    scan_frequency: 10s
    # close the file handle after this period of inactivity
    close_inactive: 5m

output.logstash:
  # events per bulk request; default 2048, 1024 is recommended here
  bulk_max_size: 1024
```

Logstash

```conf
input {
  beats {
    port => 5044   # conventional Beats port (assumed; not given in the original)
  }
}

filter {
  grok {
    match => { "message" => '\[dataflow-logger\]\s+response_data:(?<json_str>\{.*\}) while logging request' }
    remove_field => ["message"]
  }
  # drop events the grok pattern did not match
  if "_grokparsefailure" in [tags] {
    drop {}
  }
  # sample: randomly drop 90% of the remaining events
  drop { percentage => 90 }
  mutate {
    remove_field => ["@version","tags","@timestamp","log","input","host","agent","ecs"]
  }
}

output {
  rabbitmq {
    id => "my-plugin"
    exchange => "logstash-topic-exchange"
    exchange_type => "topic"
    key => "logstash-topic-routing-key"
    host => "ip"          # the default RabbitMQ port, 5672, is assumed
    user => "guest"
    password => "guest"
    vhost => "/"
    durable => true
    persistent => true
    codec => json
  }
}
```
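The grok pattern above is an ordinary regular expression, so it can be sanity-checked outside Logstash. A Python sketch (the function name is ours) mirroring the match-or-drop behavior:

```python
import json
import re

# The same pattern the grok filter uses, as a Python regex.
PATTERN = re.compile(
    r'\[dataflow-logger\]\s+response_data:(?P<json_str>\{.*\}) while logging request'
)

def extract_response(message):
    """Return the parsed response_data payload, or None when the pattern
    does not match (the _grokparsefailure -> drop branch)."""
    m = PATTERN.search(message)
    if m is None:
        return None
    return json.loads(m.group("json_str"))
```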

1.4 Fluent Bit-Kafka

fluent-bit.conf

```conf
[INPUT]
    Name              tail
    Path              /var/lib/docker/containers/*/*.log
    Parser            docker
    Tag               docker.*
    Docker_Mode       On
    Docker_Mode_Flush 5
    Mem_Buf_Limit     50MB
    Skip_Long_Lines   Off
    DB                /fluent-bit/tail.db
    DB.Sync           Normal

# https://docs.fluentbit.io/manual/4.0/data-pipeline/filters/grep
[FILTER]
    Name    grep
    Match   docker.*
    Regex   log dataflow-logger

# https://docs.fluentbit.io/manual/4.0/data-pipeline/filters/parser
[FILTER]
    Name          parser
    Match         docker.*
    Key_Name      log
    Parser        extract_logger
    Reserve_Data  false
    Preserve_Key  false

# allow at most 6000 records per 60-second window
[FILTER]
    Name      throttle
    Match     docker.*
    Rate      6000
    Window    60
    Interval  1s

[OUTPUT]
    Name     kafka
    Match    docker.*
    Brokers  172.29.232.69:9092
    Topics   dataflow-logs
```
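The throttle filter is a windowed rate limiter. The idea can be sketched in Python (a sliding-window variant for illustration, not the filter's exact internal algorithm):

```python
from collections import deque

class Throttle:
    """Sketch in the spirit of Fluent Bit's throttle filter:
    admit at most `rate` records per `window` seconds."""

    def __init__(self, rate, window):
        self.rate = rate
        self.window = window
        self.admitted = deque()  # timestamps of admitted records

    def allow(self, now):
        # evict timestamps that have aged out of the window
        while self.admitted and now - self.admitted[0] >= self.window:
            self.admitted.popleft()
        if len(self.admitted) < self.rate:
            self.admitted.append(now)
            return True
        return False  # over the rate: the record would be dropped
```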

parsers.conf

```conf
[PARSER]
    Name         docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L%z
    Time_Keep    On

[PARSER]
    Name    extract_logger
    Format  regex
    Regex   \[dataflow-logger\]\s+response_data:(?<json_str>\{.*\})\s*$
```

1.5 Comparison Summary

Filebeat-Logstash-RabbitMQ natively supports complex sampling, rate limiting, and filtering, but its throughput is comparatively low. Fluent Bit-Kafka delivers higher performance all the way from log collection to message publishing and has first-class support for Docker container logs; its drawback is that complex event processing is harder to express.
