Kafka Integration with Flume

1. Flume as a producer integrated with Kafka

Here Kafka serves as the Flume sink: Flume plays the producer role and writes the events it collects into a Kafka topic.
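
Before starting anything, the target topic can be created explicitly so the broker does not auto-create it with default settings. A minimal sketch using the standard kafka-topics.sh tool; the partition and replication counts here are illustrative assumptions, not values from the original setup:

kafka-topics.sh --create \
  --bootstrap-server node1:9092,node2:9092 \
  --topic consumers \
  --partitions 3 \
  --replication-factor 2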

1.1 Flume configuration file

vim $kafka/jobs/flume-kafka.conf

# agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = TAILDIR
# Records the last read position of each tailed file so the agent can resume; this location can stay as-is
a1.sources.r1.positionFile = /export/server/flume/job/data/tail_dir.json
a1.sources.r1.filegroups = f1 f2
a1.sources.r1.filegroups.f1 = /export/server/flume/job/data/.*file.*
a1.sources.r1.filegroups.f2 = /export/server/flume/job/data/.*log.*

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = consumers
a1.sinks.k1.kafka.bootstrap.servers = node1:9092,node2:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy


# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
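
The TAILDIR source tails every file under /export/server/flume/job/data whose name matches the f1/f2 patterns, and the position file lets it pick up where it left off after a restart. The directory and a couple of matching files can be prepared up front; the exact file names below are illustrative:

mkdir -p /export/server/flume/job/data
touch /export/server/flume/job/data/file1.txt /export/server/flume/job/data/file2.txt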

1.2 Start the Flume agent

flume-ng agent -n a1 -c conf/ -f /export/server/kafka/jobs/flume-kafka.conf
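
For anything longer than a quick test, the agent can run in the background with its output captured to a file; a sketch, where the log path is an arbitrary choice:

nohup flume-ng agent -n a1 -c conf/ \
  -f /export/server/kafka/jobs/flume-kafka.conf \
  > /tmp/flume-kafka.log 2>&1 &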

1.3 Start a Kafka console consumer

kafka-console-consumer.sh --bootstrap-server node1:9092,node2:9092 --topic consumers --from-beginning

1.4 Produce data

Append data to the monitored files:

[ljr@node1 data]$ echo hello >> file2.txt

[ljr@node1 data]$ echo ============== >> file2.txt
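
To generate a steadier stream for testing, a simple loop can append numbered lines; purely illustrative:

for i in $(seq 1 20); do
  echo "test message $i" >> /export/server/flume/job/data/file2.txt
  sleep 1
done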

Check the Kafka consumer:

The appended lines show up in the consumer, confirming that the Flume-as-producer integration with Kafka works.

2. Flume as a consumer integrated with Kafka

Here Kafka serves as the Flume source and plays the producer role: Flume consumes events from a Kafka topic.

2.1 Flume configuration file

vim $kafka/jobs/kafka-flume1.conf

# agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
# Must not exceed the channel's transactionCapacity (100, set below)
a1.sources.r1.batchSize = 50
a1.sources.r1.batchDurationMillis = 200
a1.sources.r1.kafka.bootstrap.servers = node1:9092,node2:9092
a1.sources.r1.kafka.topics = consumers
a1.sources.r1.kafka.consumer.group.id = custom.g.id

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
# transactionCapacity must not be smaller than the source batchSize (50, set above)
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
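
One caveat with the logger sink: by default it truncates event bodies to 16 bytes in the log. If full messages should be visible, the logger sink's maxBytesToLog property can be raised; 256 here is an arbitrary value:

a1.sinks.k1.maxBytesToLog = 256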

2.2 Start the Flume agent

flume-ng agent -n a1 -c conf/ -f /export/server/kafka/jobs/kafka-flume1.conf
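
Because the sink is a logger, its output only appears on screen if Flume's root logger writes to the console; the standard way to enable that is the flume.root.logger system property:

flume-ng agent -n a1 -c conf/ \
  -f /export/server/kafka/jobs/kafka-flume1.conf \
  -Dflume.root.logger=INFO,console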

2.3 Start a Kafka console producer and send data

kafka-console-producer.sh --bootstrap-server node1:9092,node2:9092 --topic consumers
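
Whether the Flume source is actually consuming can also be checked from the Kafka side by describing its consumer group (the group.id configured in 2.1); the lag column should stay near zero while the agent runs:

kafka-consumer-groups.sh --bootstrap-server node1:9092,node2:9092 \
  --describe --group custom.g.id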

Check the Flume console:

The produced messages appear in the Flume log, confirming that the Flume-as-consumer integration with Kafka works.
