Flume Integration with Kafka

As mentioned earlier, Flume can collect data and write it directly to HDFS, so why introduce Kafka as a middle layer? Because in real-world scenarios we usually need both real-time and offline computation: data written to Kafka can be consumed immediately by real-time jobs, while a second Flume agent delivers the same stream to HDFS for offline processing.
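
The two agents configured below form this pipeline (a rough sketch):

log file --> Flume (exec source) --> Kafka topic test_r2p5 --> real-time consumers
                                                           \-> Flume (Kafka source) --> HDFS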

Kafka to HDFS Configuration

shell
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe the source
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.batchSize = 10
a1.sources.r1.batchDurationMillis = 2000
a1.sources.r1.kafka.bootstrap.servers = hadoop01:9092,hadoop02:9092,hadoop03:9092
a1.sources.r1.kafka.topics = test_r2p5
a1.sources.r1.kafka.consumer.group.id = flume-group1

# Describe the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 1000


# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://192.168.52.100:9000/kafkaout/%Y-%m-%d
a1.sinks.k1.hdfs.filePrefix = access
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.rollInterval = 3600
a1.sinks.k1.hdfs.rollSize = 134217728

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
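
Once this agent is running, you can confirm that events are landing in HDFS. A quick check, assuming the %Y-%m-%d escape resolves to today's date:

shell

[root@hadoop01 kafka_2.12-2.4.0]# hdfs dfs -ls /kafkaout/$(date +%Y-%m-%d)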

File to Kafka Configuration

shell
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/log/test.log

# Describe the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 1000

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = test_r2p5
a1.sinks.k1.kafka.bootstrap.servers = hadoop01:9092,hadoop02:9092,hadoop03:9092
a1.sinks.k1.kafka.flumeBatchSize = 10
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
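
Before starting the HDFS side, you can sanity-check this agent on its own with Kafka's standard console consumer, run from the Kafka installation directory:

shell

[root@hadoop01 kafka_2.12-2.4.0]# bin/kafka-console-consumer.sh --bootstrap-server hadoop01:9092 --topic test_r2p5 --from-beginning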

Create the Topic

shell
[root@hadoop01 kafka_2.12-2.4.0]# bin/kafka-topics.sh --create --zookeeper hadoop01:2181 --partitions 5 --replication-factor 2 --topic test_r2p5
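
The matching describe command confirms the partition and replica layout:

shell

[root@hadoop01 kafka_2.12-2.4.0]# bin/kafka-topics.sh --describe --zookeeper hadoop01:2181 --topic test_r2p5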

Start Flume

shell
[root@hadoop04 apache-flume-1.11.0-bin]# bin/flume-ng agent --name a1 --conf conf-kafka-hdfs --conf-file conf-kafka-hdfs/kafka-to-hdfs.conf -Dflume.root.logger=INFO,console
[root@hadoop04 apache-flume-1.11.0-bin]# bin/flume-ng agent --name a1 --conf conf-file-kafka --conf-file conf-file-kafka/file-to-kafka.conf -Dflume.root.logger=INFO,console
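
For anything longer than a quick test, each agent would normally run in the background. A minimal sketch, assuming a logs directory exists under the Flume home:

shell

[root@hadoop04 apache-flume-1.11.0-bin]# nohup bin/flume-ng agent --name a1 --conf conf-kafka-hdfs --conf-file conf-kafka-hdfs/kafka-to-hdfs.conf > logs/kafka-to-hdfs.log 2>&1 &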

Create the test.log file

shell
[root@hadoop04 log]# echo hello world >> /home/log/test.log
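
To push more than one event through the pipeline, you can append lines in a loop; 20 lines comfortably exceeds the batch size of 10 configured on both the Kafka source and the Kafka sink:

shell

[root@hadoop04 log]# for i in $(seq 1 20); do echo "hello world $i" >> /home/log/test.log; done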

Verify

shell
[root@hadoop01 kafka_2.12-2.4.0]# hdfs dfs -cat /kafkaout/2024-03-13/access.1710307375351.tmp
hello world
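
Note the .tmp suffix: the HDFS sink keeps the current output file open until a roll condition is met (here rollInterval = 3600 seconds or rollSize = 128 MB) and only then renames it to drop the suffix. You can watch the output directory with:

shell

[root@hadoop01 kafka_2.12-2.4.0]# hdfs dfs -ls /kafkaout/2024-03-13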
