Flume Installation and Deployment

Installation and Deployment

Installation package link: https://pan.baidu.com/s/1m0d5O3Q2eH14BpWsGGfbLw?pwd=6666

(1) Upload apache-flume-1.10.1-bin.tar.gz to the /opt/software directory on the Linux machine

(2) Extract apache-flume-1.10.1-bin.tar.gz into the /opt/moudle/ directory

tar -zxf /opt/software/apache-flume-1.10.1-bin.tar.gz -C /opt/moudle/

(3) Rename apache-flume-1.10.1-bin to flume

mv apache-flume-1.10.1-bin/ flume
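After the rename, it is worth confirming that the binaries are in place before touching any configuration. A minimal sanity check, assuming the /opt/moudle/flume layout used in the steps above (the `version` subcommand is a standard flume-ng option):

```shell
# Quick sanity check: confirm the rename worked and print the Flume version.
# The path assumes the /opt/moudle/flume layout used in the steps above.
if [ -x /opt/moudle/flume/bin/flume-ng ]; then
  /opt/moudle/flume/bin/flume-ng version
else
  echo "flume-ng not found at /opt/moudle/flume/bin"
fi
```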

(4) Edit the log4j2.xml configuration file in the conf directory to set the log file path

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<Configuration status="ERROR">
  <Properties>
    <Property name="LOG_DIR">/opt/moudle/flume/log</Property>
  </Properties>
  <Appenders>
    <Console name="Console" target="SYSTEM_ERR">
      <PatternLayout pattern="%d (%t) [%p - %l] %m%n" />
    </Console>
    <RollingFile name="LogFile" fileName="${LOG_DIR}/flume.log" filePattern="${LOG_DIR}/archive/flume.log.%d{yyyyMMdd}-%i">
      <PatternLayout pattern="%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %equals{%x}{[]}{} - %m%n" />
      <Policies>
        <!-- Roll every night at midnight or when the file reaches 100MB -->
        <SizeBasedTriggeringPolicy size="100 MB"/>
        <CronTriggeringPolicy schedule="0 0 0 * * ?"/>
      </Policies>
      <DefaultRolloverStrategy min="1" max="20">
        <Delete basePath="${LOG_DIR}/archive">
          <!-- Nested conditions: the inner condition is only evaluated on files for which the outer conditions are true. -->
          <IfFileName glob="flume.log.*">
            <!-- Only allow 1 GB of files to accumulate -->
            <IfAccumulatedFileSize exceeds="1 GB"/>
          </IfFileName>
        </Delete>
      </DefaultRolloverStrategy>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Logger name="org.apache.flume.lifecycle" level="info"/>
    <Logger name="org.jboss" level="WARN"/>
    <Logger name="org.apache.avro.ipc.netty.NettyTransceiver" level="WARN"/>
    <Logger name="org.apache.hadoop" level="INFO"/>
    <Logger name="org.apache.hadoop.hive" level="ERROR"/>
    <!-- Also send logs to the console appender, so output is easy to follow while learning -->
    <Root level="INFO">
      <AppenderRef ref="LogFile" />
      <AppenderRef ref="Console" />
    </Root>
  </Loggers>
</Configuration>
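With logging configured, a minimal agent can be used to smoke-test the installation. The sketch below writes a one-source, one-sink config using Flume's standard netcat source and logger sink; the agent name a1, the file /tmp/netcat-logger.conf, and port 44444 are illustrative choices, not part of the original guide:

```shell
# Write a minimal smoke-test agent config: netcat source -> memory channel -> logger sink.
# The agent name (a1), config path, and port are illustrative assumptions.
cat > /tmp/netcat-logger.conf <<'EOF'
a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

a1.sinks.k1.type = logger

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
EOF

# Then start the agent (run from /opt/moudle/flume):
#   bin/flume-ng agent -n a1 -c conf -f /tmp/netcat-logger.conf
```

Once the agent is running, anything typed into `nc localhost 44444` should appear both in the console and in /opt/moudle/flume/log/flume.log, confirming the log4j2 setup above.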

(5) Distribute flume to the other nodes (current directory: /opt/moudle/)

xsync flume/
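Note that xsync is a custom cluster-sync wrapper script common in Hadoop tutorials, not a stock Linux command. If it is not present, the same distribution can be sketched with plain rsync; the hostnames below are illustrative assumptions, not from this guide:

```shell
# Hypothetical fallback when the xsync wrapper is unavailable:
# push /opt/moudle/flume/ to each worker node with rsync.
# Hostnames are illustrative; replace with your own nodes.
for host in hadoop103 hadoop104; do
  # drop the leading 'echo' to actually perform the sync
  echo rsync -av /opt/moudle/flume/ "$host":/opt/moudle/flume/
done
```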