1. Failover Sink Processor
The failover sink processor lets you attach several sinks at once; data is dispatched to the sink with the highest priority, with automatic failover to the next sink when it fails.
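The priority-based selection can be sketched in Python (the sink tuples and the `alive` flag are illustrative, not Flume's actual implementation):

```python
# Sketch of failover sink selection: among the sinks that are still
# alive, the one with the highest configured priority handles the data.
def pick_sink(sinks):
    """sinks: list of (name, priority, alive) tuples."""
    live = [s for s in sinks if s[2]]
    if not live:
        raise RuntimeError("all sinks are down")
    return max(live, key=lambda s: s[1])[0]

print(pick_sink([("k1", 5, True), ("k2", 10, True)]))   # k2 wins (priority 10)
print(pick_sink([("k1", 5, True), ("k2", 10, False)]))  # k2 down -> failover to k1
```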
Modify the agent on the first server:
```bash
a1.sources=r1
a1.sinks=k1 k2
a1.channels=c1
a1.sources.r1.type=netcat
a1.sources.r1.bind=worker-1
a1.sources.r1.port=44444
a1.channels.c1.type=memory
a1.channels.c1.capacity=100000
a1.channels.c1.transactionCapacity=100
a1.sinks.k1.type=avro
a1.sinks.k1.hostname = worker-1
a1.sinks.k1.port = 55555
a1.sinks.k2.type=avro
a1.sinks.k2.hostname = worker-2
a1.sinks.k2.port = 55555
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
a1.sinkgroups.g1.processor.maxpenalty = 10000
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
a1.sinks.k2.channel=c1
```
The second and third agents are configured as follows:
```bash
a1.sources=r1
a1.sinks=k1
a1.channels=c1
a1.sources.r1.type=avro
# each receiver binds its own address here
a1.sources.r1.bind=11.147.251.96
a1.sources.r1.port=55555
a1.channels.c1.type=memory
a1.channels.c1.capacity=100000
a1.channels.c1.transactionCapacity=100
a1.sinks.k1.type=logger
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
```
Start the three agents, from the last one back to the first:
```bash
flume-ng agent -n a1 -c /usr/local/flume/conf/ -f ./avro.agent -Dflume.root.logger=INFO,console
```
Test by sending data to the first agent. Because the third node's sink has the higher priority, the third server prints the data to its console.
If the third agent then goes down, the data is sent instead to the second server, which has the slightly lower priority.
2. Load balancing Sink Processor
The load-balancing sink processor provides the ability to balance traffic across multiple sinks. Two modes are supported: round_robin and random. round_robin spreads the load over the sinks in rotation, while random distributes data to the sinks at random.
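The two selection modes can be sketched in Python (illustrative only, not Flume's code):

```python
import random

# round_robin: rotate through the sinks in order, one pick at a time.
def round_robin(sinks):
    i = 0
    while True:
        yield sinks[i % len(sinks)]
        i += 1

# random: pick any of the sinks for each selection.
def random_pick(sinks):
    return random.choice(sinks)

rr = round_robin(["k1", "k2"])
print([next(rr) for _ in range(4)])  # ['k1', 'k2', 'k1', 'k2']
print(random_pick(["k1", "k2"]))     # k1 or k2
```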
Modify the agent on the first server:
```bash
a1.sources=r1
a1.sinks=k1 k2
a1.channels=c1
a1.sources.r1.type=netcat
a1.sources.r1.bind=worker-1
a1.sources.r1.port=44444
a1.channels.c1.type=memory
a1.channels.c1.capacity=100000
a1.channels.c1.transactionCapacity=100
a1.sinks.k1.type=avro
a1.sinks.k1.hostname = worker-1
a1.sinks.k1.port = 55555
a1.sinks.k2.type=avro
a1.sinks.k2.hostname = worker-2
a1.sinks.k2.port = 55555
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.selector = random
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
a1.sinks.k2.channel=c1
```
The second and third agents are configured as follows:
```bash
a1.sources=r1
a1.sinks=k1
a1.channels=c1
a1.sources.r1.type=avro
a1.sources.r1.bind=11.147.251.96
a1.sources.r1.port=55555
a1.channels.c1.type=memory
a1.channels.c1.capacity=100000
a1.channels.c1.transactionCapacity=100
a1.sinks.k1.type=logger
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
```
Start the three agents, from the last one back to the first:
```bash
flume-ng agent -n a1 -c /usr/local/flume/conf/ -f ./avro.agent -Dflume.root.logger=INFO,console
```
Test by sending data to the first agent; the second and third servers receive the data at random.
Round-robin distribution across the two sinks is also supported. Note that the rotation here is a rotation over sinks, not over individual events.
Modify the agent on the first server:
```bash
a1.sources=r1
a1.sinks=k1 k2
a1.channels=c1
a1.sources.r1.type=netcat
a1.sources.r1.bind=worker-1
a1.sources.r1.port=44444
a1.channels.c1.type=memory
a1.channels.c1.capacity=100000
a1.channels.c1.transactionCapacity=100
a1.sinks.k1.type=avro
a1.sinks.k1.hostname = worker-1
a1.sinks.k1.port = 55555
a1.sinks.k2.type=avro
a1.sinks.k2.hostname = worker-2
a1.sinks.k2.port = 55555
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = round_robin
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
a1.sinks.k2.channel=c1
```
3. Multiplexing Channel Selector
With the multiplexing channel selector, the source uses the event headers to decide which channel each event is sent to.
For example, suppose one log file mixes the logs of several systems. Based on a field in each log line, say type=1 for system A logs (sink to HDFS) and type=2 for system B logs (sink to Kafka), Flume's multiplexing can use the event header to decide which channel each event travels through.
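The routing rule amounts to a lookup keyed on the header value, which the selector.mapping.* and selector.default properties express; a sketch (illustrative only):

```python
# Multiplexing selection: the `type` header picks the channel list,
# with a default for unmapped values (mirrors selector.mapping.* and
# selector.default in the agent configuration).
MAPPING = {"1": ["c1"], "2": ["c2"]}
DEFAULT = ["c2"]

def select_channels(headers):
    return MAPPING.get(headers.get("type"), DEFAULT)

print(select_channels({"type": "1"}))  # ['c1']
print(select_channels({"type": "3"}))  # ['c2'] (falls back to default)
```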
```bash
a1.sources=r1
a1.sinks=k1 k2
a1.channels=c1 c2
a1.sources.r1.type=http
a1.sources.r1.bind=worker-1
a1.sources.r1.port=44444
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = type
a1.sources.r1.selector.mapping.1 = c1
a1.sources.r1.selector.mapping.2 = c2
a1.sources.r1.selector.default = c2
a1.channels.c1.type=memory
a1.channels.c1.capacity=100000
a1.channels.c1.transactionCapacity=100
a1.channels.c2.type=memory
a1.channels.c2.capacity=100000
a1.channels.c2.transactionCapacity=100
a1.sinks.k1.type=avro
a1.sinks.k1.hostname = worker-1
a1.sinks.k1.port = 55555
a1.sinks.k2.type=avro
a1.sinks.k2.hostname = worker-2
a1.sinks.k2.port = 55555
a1.sources.r1.channels=c1 c2
a1.sinks.k1.channel=c1
a1.sinks.k2.channel=c2
```
Test: send events over HTTP carrying a type header, and check which server events with type=1, type=2, and type=3 arrive at. An unmapped value such as type=3 falls through to the default channel c2 and is received by the second server.
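Flume's HTTP source (with its default JSONHandler) accepts a JSON array of events, each with headers and body, so a test payload carrying the type header can be built like this (the host and port come from the configuration above):

```python
import json

# One event whose `type` header routes it through the multiplexing selector.
payload = json.dumps([{"headers": {"type": "1"}, "body": "a line from system A"}])
print(payload)
# Send it to the HTTP source, e.g.:
#   curl -X POST -H 'Content-Type: application/json' -d "$PAYLOAD" http://worker-1:44444
```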
4. Interceptors
Interceptors can intercept the events Flume collects and apply simple modifications or filtering to them. Several interceptors can be configured together to combine behaviors; they are applied in the order in which they are configured.
| Common interceptors | Description |
|---|---|
| Timestamp Interceptor | Adds a timestamp to the event headers |
| Static Interceptor | Adds a custom key/value pair to the event headers |
| Host Interceptor | Adds the host name or IP address to the event headers |
| Search and Replace Interceptor | Matches and replaces text in intercepted events |
| Regex Filtering Interceptor | Filters intercepted events by regular expression |
5. Timestamp Interceptor
This interceptor inserts into the event headers the time, in milliseconds, at which it processed the event.
```bash
# name the agent's components
a1.sources=r1
a1.sinks=k1
a1.channels=c1
# define the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 192.168.142.160
a1.sources.r1.port = 22222
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = timestamp
# define the channel
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000000
a1.channels.c1.transactionCapacity=100
# define the sink
a1.sinks.k1.type=logger
# bind source and sink to the channel
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
```
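The effect on an event can be sketched as follows (the dict-based event shape is an illustration, not Flume's Event class):

```python
import time

# Timestamp interceptor sketch: add a `timestamp` header holding the
# processing time in epoch milliseconds.
def intercept(event):
    event["headers"]["timestamp"] = str(int(time.time() * 1000))
    return event

event = intercept({"headers": {}, "body": "hello"})
print(event["headers"]["timestamp"])  # e.g. '1700000000000'
```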
6. Host Interceptor
This interceptor inserts the host name or IP address of the host on which the agent runs: a header with key host whose value is the host name or IP address.
```bash
# name the agent's components
a1.sources=r1
a1.sinks=k1
a1.channels=c1
# define the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 192.168.142.160
a1.sources.r1.port = 22222
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = host
# define the channel
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000000
a1.channels.c1.transactionCapacity=100
# define the sink
a1.sinks.k1.type=logger
# bind source and sink to the channel
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
```
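Its effect can be sketched the same way (the dict-based event is illustrative):

```python
import socket

# Host interceptor sketch: add a `host` header with the local host name
# (Flume can also be configured to use the IP address instead).
def intercept(event):
    event["headers"]["host"] = socket.gethostname()
    return event

event = intercept({"headers": {}, "body": "hello"})
print(event["headers"]["host"])
```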
7. Static Interceptor
The static interceptor lets the user append a fixed header with a static value to all events.
The current implementation does not allow specifying more than one header at a time. Instead, configure multiple static interceptors, each defining one static header.
```bash
# Static Interceptor
# name the agent's components
a1.sources=r1
a1.sinks=k1
a1.channels=c1
# define the source
a1.sources.r1.type=netcat
a1.sources.r1.bind=worker-1
a1.sources.r1.port=55555
a1.sources.r1.interceptors=i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = name
a1.sources.r1.interceptors.i1.value = zhangsan
# define the channel
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000000
a1.channels.c1.transactionCapacity=100
# define the sink
a1.sinks.k1.type=logger
# bind source and sink to the channel
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
```
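Since each static interceptor contributes exactly one header, setting several headers means chaining interceptors, which can be sketched like this (illustrative):

```python
# Each static interceptor adds one fixed key/value header; a chain of
# them, applied in configuration order, builds up multiple headers.
def static_interceptor(key, value):
    def intercept(event):
        event["headers"][key] = value
        return event
    return intercept

chain = [static_interceptor("name", "zhangsan"), static_interceptor("env", "test")]
event = {"headers": {}, "body": "hello"}
for interceptor in chain:
    event = interceptor(event)
print(event["headers"])  # {'name': 'zhangsan', 'env': 'test'}
```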
8. Search and Replace Interceptor
This interceptor provides simple string-based search-and-replace functionality backed by Java regular expressions.
```bash
# Search and Replace Interceptor
# name the agent's components
a1.sources=r1
a1.sinks=k1
a1.channels=c1
# define the source
a1.sources.r1.type=netcat
a1.sources.r1.bind=worker-1
a1.sources.r1.port=55555
a1.sources.r1.interceptors=i1
a1.sources.r1.interceptors.i1.type = search_replace
a1.sources.r1.interceptors.i1.searchPattern = [a-z]
a1.sources.r1.interceptors.i1.replaceString = *
# define the channel
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000000
a1.channels.c1.transactionCapacity=100
# define the sink
a1.sinks.k1.type=logger
# bind source and sink to the channel
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
```
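With this searchPattern/replaceString pair, the body transformation behaves like a regex substitution; in Python terms:

```python
import re

# Every lowercase letter in the event body is replaced with '*',
# mirroring searchPattern=[a-z] and replaceString=* above.
def intercept(body):
    return re.sub(r"[a-z]", "*", body)

print(intercept("Hello Flume 123"))  # H**** F**** 123
```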
9. Regex Filtering Interceptor
This interceptor selectively filters events by interpreting the event body as text and matching it against a configured regular expression. The regex can be used to either include or exclude matching events.
```bash
# Regex Filtering Interceptor
# name the agent's components
a1.sources=r1
a1.sinks=k1
a1.channels=c1
# define the source
a1.sources.r1.type=netcat
a1.sources.r1.bind=worker-1
a1.sources.r1.port=55555
a1.sources.r1.interceptors=i1
a1.sources.r1.interceptors.i1.type=regex_filter
a1.sources.r1.interceptors.i1.regex=^jp.*
a1.sources.r1.interceptors.i1.excludeEvents=true
# define the channel
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000000
a1.channels.c1.transactionCapacity=100
# define the sink
a1.sinks.k1.type=logger
# bind source and sink to the channel
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
```
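With excludeEvents=true, any event whose body matches ^jp.* is dropped; the logic can be sketched as (illustrative):

```python
import re

# Regex filtering sketch: with exclude_events=True, matching events are
# dropped; with exclude_events=False, only matching events are kept.
def keep(body, regex=r"^jp.*", exclude_events=True):
    matched = re.match(regex, body) is not None
    return not matched if exclude_events else matched

print([b for b in ["jp error log", "normal log"] if keep(b)])  # ['normal log']
```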
10. Regex Extractor Interceptor
This interceptor extracts match groups using the specified regular expression and appends the groups as headers on the event.
```bash
# name the agent's components
a1.sources=r1
a1.sinks=k1
a1.channels=c1
# define the source
a1.sources.r1.type=netcat
a1.sources.r1.bind=worker-1
a1.sources.r1.port=55555
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = regex_extractor
a1.sources.r1.interceptors.i1.regex = (^[a-zA-Z]*)\\s([0-9]*$)
a1.sources.r1.interceptors.i1.serializers = s1 s2
# header key names for the two match groups
a1.sources.r1.interceptors.i1.serializers.s1.name = word
a1.sources.r1.interceptors.i1.serializers.s2.name = digital
# define the channel
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000000
a1.channels.c1.transactionCapacity=100
# define the sink
a1.sinks.k1.type=logger
# bind source and sink to the channel
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
```
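With the regex above, a body like hello 123 yields two match groups, which land in the headers under the serializer names word and digital; a sketch (dict-based event is illustrative):

```python
import re

# Regex extractor sketch: the two match groups become the `word` and
# `digital` headers named by serializers.s1/.s2 in the configuration.
pattern = re.compile(r"(^[a-zA-Z]*)\s([0-9]*$)")

def intercept(event):
    m = pattern.match(event["body"])
    if m:
        event["headers"]["word"] = m.group(1)
        event["headers"]["digital"] = m.group(2)
    return event

event = intercept({"headers": {}, "body": "hello 123"})
print(event["headers"])  # {'word': 'hello', 'digital': '123'}
```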
Test: send a line such as hello 123; the logger sink prints the event with headers word=hello and digital=123.