Hadoop log4j logging notes


The default hadoop/conf/log4j.properties:
# Define some default values that can be overridden by system properties
hadoop.root.logger=INFO,console
hadoop.log.dir=.
hadoop.log.file=hadoop.log

#
# Job Summary Appender
#
# Use following logger to send summary to separate file defined by
# hadoop.mapreduce.jobsummary.log.file rolled daily:
# hadoop.mapreduce.jobsummary.logger=INFO,JSA
#
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log

# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter

# Logging Threshold
log4j.threshhold=ALL

#
# Daily Rolling File Appender
#

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}

# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd

# 30-day backup
#log4j.appender.DRFA.MaxBackupIndex=30
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout

# Pattern format: Date LogLevel LoggerName LogMessage
#log4j.appender.DRFA.layout.ConversionPattern=%l %m%n
# Debugging Pattern format (log-file format)
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n

#
# console
# Add "console" to rootlogger above if you want to use this
#

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%l: %m%n

#
# TaskLog Appender
#

# Default values
hadoop.tasklog.taskid=null
hadoop.tasklog.iscleanup=false
hadoop.tasklog.noKeepSplits=4
hadoop.tasklog.totalLogFileSize=100
hadoop.tasklog.purgeLogSplits=true
hadoop.tasklog.logsRetainHours=12

log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}

log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
log4j.appender.TLA.layout.ConversionPattern=%l  %p %c: %m%n

#
# Security audit appender
#
hadoop.security.log.file=SecurityAuth.audit
log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}

log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAS.layout.ConversionPattern=%l %p %c: %m%n
# new logger
log4j.logger.SecurityLogger=OFF,console
log4j.logger.SecurityLogger.additivity=false

#
# Rolling File Appender
#

#log4j.appender.RFA=org.apache.log4j.RollingFileAppender
#log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}

# Logfile size and 30-day backups
#log4j.appender.RFA.MaxFileSize=1MB
#log4j.appender.RFA.MaxBackupIndex=30

#log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n

#
# FSNamesystem Audit logging
# All audit events are logged at INFO level
#
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG

# Custom Logging levels

hadoop.metrics.log.level=DEBUG
#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG
# presumably sets the log level for classes under this package
log4j.logger.org.apache.hadoop.metrics2=${hadoop.metrics.log.level}

# Jets3t library
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR

#
# Null Appender
# Trap security logger on the hadoop client side
#
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender

#
# Event Counter Appender
# Sends counts of logging messages at different severity levels to Hadoop Metrics.
#
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter

#
# Job Summary Appender
#
log4j.appender.JSA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.appender.JSA.layout.ConversionPattern=%l %p %c{2}: %m%n
log4j.appender.JSA.DatePattern=.yyyy-MM-dd
log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false
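
How this file takes effect in code: Hadoop obtains its loggers through Apache commons-logging backed by log4j, and any logger without an explicit log4j.logger.<name> entry inherits the root logger's level and appenders, i.e. whatever hadoop.root.logger selects. A minimal sketch (the class name Example is made up for illustration):

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class Example {
    // Logger named after the class; with no matching log4j.logger entry it
    // inherits level and appenders from log4j.rootLogger above.
    private static final Log LOG = LogFactory.getLog(Example.class);

    public void doWork() {
        LOG.info("goes to the appenders selected by hadoop.root.logger (console, DRFA, ...)");
        LOG.debug("emitted only when the effective level is DEBUG or lower");
    }
}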

Setting the Hadoop log level

In the hadoop/bin/hadoop-daemon.sh file, set the following (the first value is the log level, the second the appender defined in log4j.properties):

export HADOOP_ROOT_LOGGER="DEBUG,DRFA"

 

Custom logging

Goal: write the needed information into a separate, self-specified log file.
Requirement: this is only a first attempt. In DFSClient, write some of the information to a designated log file; when the client reads data from HDFS, record the blockID of each block that is read.
Steps:
1. Modify the hadoop/conf/log4j.properties file. Append the following at the end of the file:
# Name the logger MyDFSClient; this name is used inside DFSClient to obtain the logger instance, and its output goes to the custom appender OUT
log4j.logger.MyDFSClient=DEBUG,OUT
# Make OUT an appender that writes to a file
log4j.appender.OUT=org.apache.log4j.FileAppender
# Path of the file
log4j.appender.OUT.File=${hadoop.log.dir}/DFSClient.log
# Layout of the file
log4j.appender.OUT.layout=org.apache.log4j.PatternLayout
# Output format
log4j.appender.OUT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Do not let these messages also propagate to the parent (root) logger's appenders
log4j.additivity.MyDFSClient=false
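
Optionally, before touching any Hadoop source, this wiring can be verified with a small standalone program. It is not part of the original steps; the class name MyDFSClientLogCheck and the /tmp log directory are made up for illustration, and the program has to run with the log4j and Hadoop jars on the classpath so the appender classes referenced in log4j.properties resolve:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.log4j.PropertyConfigurator;

public class MyDFSClientLogCheck {
    public static void main(String[] args) {
        // hadoop.log.dir is normally set by the Hadoop start-up scripts; set it here
        // so that ${hadoop.log.dir}/DFSClient.log resolves to a real path.
        System.setProperty("hadoop.log.dir", "/tmp");
        // Load the edited configuration file.
        PropertyConfigurator.configure("conf/log4j.properties");
        // Same lookup that DFSClient will use in step 3.
        Log myLog = LogFactory.getLog("MyDFSClient");
        myLog.info("MyDFSClient appender smoke test");  // expected to appear in /tmp/DFSClient.log
    }
}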

2. Save the file and copy it to the hadoop/conf directory on every node of the cluster, replacing the existing file.


3. Modify the DFSClient class.
This is only a simple change to verify that the procedure works; more meaningful log content can be added later.
First, declare a LOG instance in the DFSClient class:
public static final Log myLOG = LogFactory.getLog("MyDFSClient");
Then, in the read(byte buf[], int off, int len) method, add the following code:
myLOG.info("Read Block!!!!");
if (currentBlock != null) {
    myLOG.info("Read block: " + currentBlock.getBlockId());
}
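
Put together, the change sits roughly as follows. This is only an outline, not compilable as-is: the DFSInputStream inner class, the Block type of currentBlock, and the surrounding read() body are assumptions about the old DFSClient source and have to be checked against the actual Hadoop version:

public class DFSClient {
    // Named logger matching the log4j.logger.MyDFSClient entry from step 1;
    // DFSClient already imports org.apache.commons.logging.Log and LogFactory.
    public static final Log myLOG = LogFactory.getLog("MyDFSClient");

    class DFSInputStream extends FSInputStream {
        private Block currentBlock;  // block currently being read (assumed field)

        public synchronized int read(byte buf[], int off, int len) throws IOException {
            myLOG.info("Read Block!!!!");
            if (currentBlock != null) {
                myLOG.info("Read block: " + currentBlock.getBlockId());
            }
            // ... original read(buf, off, len) body continues unchanged ...
        }
    }
}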

 

4. Restart Hadoop.

 

5. Test using a dfs command:
$bin/hadoop dfs -cat /user/XXX/out/part-r-00000
The contents of part-r-00000 are printed to the screen. At the same time, the log messages just added to the class can be found in /hadoop/logs/DFSClient.log. The change is verified.