Deploying Spark Big Data Components with Docker: Configuring log4j Logging

In the previous post, "Deploying Spark Big Data Components with Docker", logs went only to the console. If you also need logs written to a file, further configuration is required.

Configure logging to both the console and a file

1. Stop the Spark cluster (note that -v also removes the volumes declared in the compose file; drop the flag if you need to keep that data)

docker-compose down -v

2. Start from the bundled log4j configuration template

cp -f log4j2.properties.template log4j2.properties
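The template ships in Spark's conf directory. A fuller sketch, assuming Spark is installed at /opt/spark inside the image (the path is an assumption; whether you edit the file on the host and mount it in, or edit it inside the container, depends on your setup):

# path is an assumption -- adapt to where Spark lives in your image
cd /opt/spark/conf
cp -f log4j2.properties.template log4j2.properties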

Edit log4j2.properties and make the changes below. Be aware, though, that with this approach the log never rotates: output is appended to spark.log indefinitely.

# Set everything to be logged to the console and file
......
rootLogger.appenderRef.file.ref = file

# File appender
appender.file.type = File
appender.file.name = file
appender.file.fileName = spark.log
appender.file.layout.type = PatternLayout
appender.file.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex

3. Enable log rotation

Change

rootLogger.appenderRef.file.ref = file

to

rootLogger.appenderRef.rolling.ref = rolling

Then remove the File appender settings from step 2 and add the following in their place:

# RollingFile appender
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = logs/spark.log
appender.rolling.filePattern = logs/spark-%d{yyyy-MM-dd}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 30
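This rotates the log daily into spark-YYYY-MM-DD.log files. If you also want to cap the size of each file, log4j2 supports adding a size-based policy alongside the time-based one; a minimal sketch (the 100MB value is illustrative, and filePattern then needs a %i counter so same-day rollovers get distinct names):

appender.rolling.filePattern = logs/spark-%d{yyyy-MM-dd}-%i.log
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB

One caveat: DefaultRolloverStrategy's max bounds the %i counter, so with a purely date-based filePattern it does not prune old daily files by itself; if unbounded growth is a concern, log4j2's Delete action on rollover is the usual remedy.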

Alternatively, you can write out the complete configuration in one go:

cat >log4j2.properties <<'EOF'
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# Set everything to be logged to the console and rolling file
rootLogger.level = info
rootLogger.appenderRef.stdout.ref = console
rootLogger.appenderRef.rolling.ref = rolling

# Console appender
appender.console.type = Console
appender.console.name = console
appender.console.target = SYSTEM_ERR
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex

# RollingFile appender
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = logs/spark.log
appender.rolling.filePattern = logs/spark-%d{yyyy-MM-dd}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 30

# Set the default spark-shell/spark-sql log level to WARN. When running the
# spark-shell/spark-sql, the log level for these classes is used to overwrite
# the root logger's log level, so that the user can have different defaults
# for the shell and regular Spark apps.
logger.repl.name = org.apache.spark.repl.Main
logger.repl.level = warn

logger.thriftserver.name = org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver
logger.thriftserver.level = warn

# Settings to quiet third party logs that are too verbose
logger.jetty1.name = org.sparkproject.jetty
logger.jetty1.level = warn
logger.jetty2.name = org.sparkproject.jetty.util.component.AbstractLifeCycle
logger.jetty2.level = error
logger.replexprTyper.name = org.apache.spark.repl.SparkIMain$exprTyper
logger.replexprTyper.level = info
logger.replSparkILoopInterpreter.name = org.apache.spark.repl.SparkILoop$SparkILoopInterpreter
logger.replSparkILoopInterpreter.level = info
logger.parquet1.name = org.apache.parquet
logger.parquet1.level = error
logger.parquet2.name = parquet
logger.parquet2.level = error

# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
logger.RetryingHMSHandler.name = org.apache.hadoop.hive.metastore.RetryingHMSHandler
logger.RetryingHMSHandler.level = fatal
logger.FunctionRegistry.name = org.apache.hadoop.hive.ql.exec.FunctionRegistry
logger.FunctionRegistry.level = error

# For deploying Spark ThriftServer
# SPARK-34128: Suppress undesirable TTransportException warnings involved in THRIFT-4805
appender.console.filter.1.type = RegexFilter
appender.console.filter.1.regex = .*Thrift error occurred during processing of message.*
appender.console.filter.1.onMatch = deny
appender.console.filter.1.onMismatch = neutral
EOF
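For the change to take effect, the containers must see this file in Spark's conf directory; how you wire that in depends on the compose file from the previous post. Two common options (the service name spark-master and the path /opt/spark/conf are assumptions):

# Option A: mount it via the service's volumes in docker-compose.yml, e.g.
#   - ./log4j2.properties:/opt/spark/conf/log4j2.properties
# Option B: copy it into an existing container:
docker cp log4j2.properties spark-master:/opt/spark/conf/log4j2.properties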

Verify that it works

1. Start the Spark cluster
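Presumably with the same compose file as before:

docker-compose up -d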

2. Check the log file
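For example (the container name and log location are assumptions; the relative fileName above resolves against the Spark process's working directory):

docker exec spark-master ls -l /opt/spark/logs/
docker exec spark-master tail -f /opt/spark/logs/spark.log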
