Integrating Flume with Kafka

As mentioned earlier, Flume can collect data and write it directly to HDFS, so why introduce Kafka as a middle layer? Because real-world scenarios typically need both real-time and offline computation: with Kafka in the middle, real-time jobs can consume the stream directly, while Flume drains the same topics into HDFS for offline analysis.
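
Putting the two agents in this section together, the end-to-end flow looks roughly like this (a sketch only; the host names, topic, and paths match the configs below):

shell
# test.log --exec source--> Flume agent --Kafka sink--> topic test_r2p5
#                                                         |--> real-time consumers (e.g. streaming jobs)
#                                                         `--> Flume agent --HDFS sink--> /kafkaout/%Y-%m-%d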

Kafka to HDFS Configuration

shell
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# source
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.batchSize = 10
a1.sources.r1.batchDurationMillis = 2000
a1.sources.r1.kafka.bootstrap.servers = hadoop01:9092,hadoop02:9092,hadoop03:9092
a1.sources.r1.kafka.topics = test_r2p5
a1.sources.r1.kafka.consumer.group.id = flume-group1

# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 1000


# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://192.168.52.100:9000/kafkaout/%Y-%m-%d
a1.sinks.k1.hdfs.filePrefix = access
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.rollInterval = 3600
a1.sinks.k1.hdfs.rollSize = 134217728

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
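
Once this agent is running, one way to confirm that the Kafka source is actually consuming is to describe the flume-group1 consumer group (the group id set above) from the Kafka installation directory:

shell
[root@hadoop01 kafka_2.12-2.4.0]# bin/kafka-consumer-groups.sh --bootstrap-server hadoop01:9092 --describe --group flume-group1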

File to Kafka Configuration

shell
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/log/test.log

# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 1000

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = test_r2p5
a1.sinks.k1.kafka.bootstrap.servers = hadoop01:9092,hadoop02:9092,hadoop03:9092
a1.sinks.k1.kafka.flumeBatchSize = 10
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
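
Before wiring up the HDFS side, this agent can be sanity-checked on its own by reading the same topic with a console consumer:

shell
[root@hadoop01 kafka_2.12-2.4.0]# bin/kafka-console-consumer.sh --bootstrap-server hadoop01:9092 --topic test_r2p5 --from-beginning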

Create the Topic

shell
[root@hadoop01 kafka_2.12-2.4.0]# bin/kafka-topics.sh --create --zookeeper hadoop01:2181 --partitions 5 --replication-factor 2 --topic test_r2p5
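
To confirm the topic really has 5 partitions with a replication factor of 2, describe it:

shell
[root@hadoop01 kafka_2.12-2.4.0]# bin/kafka-topics.sh --describe --zookeeper hadoop01:2181 --topic test_r2p5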

Start Flume

shell
[root@hadoop04 apache-flume-1.11.0-bin]# bin/flume-ng agent --name a1 --conf conf-kafka-hdfs --conf-file conf-kafka-hdfs/kafka-to-hdfs.conf -Dflume.root.logger=INFO,console
[root@hadoop04 apache-flume-1.11.0-bin]# bin/flume-ng agent --name a1 --conf conf-file-kafka --conf-file conf-file-kafka/file-to-kafka.conf -Dflume.root.logger=INFO,console
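
With -Dflume.root.logger=INFO,console each agent ties up a terminal; for a longer-running test the agents could be backgrounded instead (a sketch; the log file path is an arbitrary choice):

shell
[root@hadoop04 apache-flume-1.11.0-bin]# nohup bin/flume-ng agent --name a1 --conf conf-kafka-hdfs --conf-file conf-kafka-hdfs/kafka-to-hdfs.conf > /tmp/kafka-to-hdfs.log 2>&1 &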

Create the test.log File

shell
[root@hadoop04 log]# echo hello world >> /home/log/test.log
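
One line is enough for a smoke test, but since the exec source tails the file continuously, you can also append a batch of lines (an illustrative loop) to see the sink's flumeBatchSize = 10 batching take effect:

shell
[root@hadoop04 log]# for i in $(seq 1 20); do echo "hello world $i" >> /home/log/test.log; done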

Verify

shell
[root@hadoop01 kafka_2.12-2.4.0]# hdfs dfs -cat /kafkaout/2024-03-13/access.1710307375351.tmp
hello world
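
The .tmp suffix means the HDFS sink is still writing to this file; it is renamed once a roll condition fires (here rollInterval = 3600 seconds or rollSize = 128 MB, with count-based rolling disabled). Rather than guessing the timestamped file name, list the day's directory first:

shell
[root@hadoop01 kafka_2.12-2.4.0]# hdfs dfs -ls /kafkaout/2024-03-13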
