Since the Spring Boot parent project already pulls in Logback, we only need to care about the Boot version, not the Logback version. Spring Boot can print logs out of the box, so why standardize logging at all?
- A unified log format makes logs easier to read and manage.
- Log archiving.
- Log persistence.
- Distributed log viewing (ELK), making logs easy to search and analyze.
We will skip the introduction to Logback itself and go straight to the code. This article covers the following:
- Redefining the log output format.
- Customizing the log level for specific packages.
- Emitting logs per module.
- Pushing logs to Kafka asynchronously.
POM file
To push logs to Kafka, add the following two dependencies (they can stay on the classpath even when Kafka pushing is disabled).
<properties>
<logback-kafka-appender.version>0.2.0-RC1</logback-kafka-appender.version>
<janino.version>2.7.8</janino.version>
</properties>
<!-- Ships log events to Kafka -->
<dependency>
<groupId>com.github.danielwegener</groupId>
<artifactId>logback-kafka-appender</artifactId>
<version>${logback-kafka-appender.version}</version>
<scope>runtime</scope>
</dependency>
<!-- Required for <if condition> support in the Logback XML -->
<dependency>
<groupId>org.codehaus.janino</groupId>
<artifactId>janino</artifactId>
<version>${janino.version}</version>
</dependency>
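Logback itself needs no explicit coordinates here: any starter that pulls in spring-boot-starter-logging (for example spring-boot-starter-web) already provides logback-classic, with the version managed by the Boot parent:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>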
Configuration files
There are three configuration files under the resources folder (logback-spring.xml sits at the classpath root; the other two live in a logging/ subdirectory, matching the include paths below):
- logback-defaults.xml
- logback-pattern.xml
- logback-spring.xml
logback-spring.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<include resource="logging/logback-pattern.xml"/>
<include resource="logging/logback-defaults.xml"/>
</configuration>
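Note that the file is deliberately named logback-spring.xml rather than logback.xml: only the -spring variant is loaded through Spring Boot's logging system, which is what makes the <springProperty> lookups in logback-pattern.xml work.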
logback-defaults.xml
<?xml version="1.0" encoding="UTF-8"?>
<included>
<!-- Default Spring Boot log file -->
<property name="LOG_FILE" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/spring.log}"/>
<!-- Root path for log file output -->
<property name="LOG_HOME" value="${LOG_PATH:-/tmp}"/>
<!--
Append logs to the console, reusing the appender Spring Boot already ships for Logback.
In the included files, <logger> elements set the print level for a specific package or class.
-->
<include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
<include resource="logback-pattern.xml"/>
<!-- Max size of a single log file; beyond this it is compressed and archived -->
<property name="INFO_MAX_FILE_SIZE" value="100MB"/>
<property name="ERROR_MAX_FILE_SIZE" value="100MB"/>
<property name="TRACE_MAX_FILE_SIZE" value="100MB"/>
<property name="WARN_MAX_FILE_SIZE" value="100MB"/>
<!-- Max number of rolling periods (days, with the daily pattern below) to keep archived logs -->
<property name="INFO_MAX_HISTORY" value="9"/>
<property name="ERROR_MAX_HISTORY" value="9"/>
<property name="TRACE_MAX_HISTORY" value="9"/>
<property name="WARN_MAX_HISTORY" value="9"/>
<!-- Total size cap for all archived logs; the oldest archives are deleted once it is exceeded -->
<property name="INFO_TOTAL_SIZE_CAP" value="5GB"/>
<property name="ERROR_TOTAL_SIZE_CAP" value="5GB"/>
<property name="TRACE_TOTAL_SIZE_CAP" value="5GB"/>
<property name="WARN_TOTAL_SIZE_CAP" value="5GB"/>
<!-- Roll log files daily (and additionally by size) -->
<appender name="INFO_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<!-- Active log file name -->
<file>${LOG_HOME}/info.log</file>
<!-- Archive/compression settings -->
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>${LOG_HOME}/backup/info/info.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
<maxHistory>${INFO_MAX_HISTORY}</maxHistory>
<maxFileSize>${INFO_MAX_FILE_SIZE}</maxFileSize>
<totalSizeCap>${INFO_TOTAL_SIZE_CAP}</totalSizeCap>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>${FILE_LOG_PATTERN}</pattern>
</encoder>
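<!--
    ThresholdFilter accepts the given level and above, so WARN and ERROR events
    are also written to info.log; the WARN/ERROR appenders below use LevelFilter
    to match exactly one level instead.
-->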
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>INFO</level>
</filter>
</appender>
<appender name="WARN_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<!-- Active log file name -->
<file>${LOG_HOME}/warn.log</file>
<!-- Archive/compression settings -->
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>${LOG_HOME}/backup/warn/warn.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
<maxHistory>${WARN_MAX_HISTORY}</maxHistory>
<maxFileSize>${WARN_MAX_FILE_SIZE}</maxFileSize>
<totalSizeCap>${WARN_TOTAL_SIZE_CAP}</totalSizeCap>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>${FILE_LOG_PATTERN}</pattern>
</encoder>
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>WARN</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<appender name="ERROR_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<!-- Active log file name -->
<file>${LOG_HOME}/error.log</file>
<!-- Archive/compression settings -->
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>${LOG_HOME}/backup/error/error.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
<maxHistory>${ERROR_MAX_HISTORY}</maxHistory>
<maxFileSize>${ERROR_MAX_FILE_SIZE}</maxFileSize>
<totalSizeCap>${ERROR_TOTAL_SIZE_CAP}</totalSizeCap>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>${FILE_LOG_PATTERN}</pattern>
</encoder>
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>ERROR</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<!-- Kafka appender -->
<appender name="KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>${FILE_LOG_PATTERN}</pattern>
</encoder>
<topic>${kafka_env}applog_${spring_application_name}</topic>
<keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
<deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
<producerConfig>bootstrap.servers=${kafka_broker}</producerConfig>
<!-- don't wait for a broker to ack the reception of a batch. -->
<producerConfig>acks=0</producerConfig>
<!-- wait up to 1000ms and collect log messages before sending them as a batch -->
<producerConfig>linger.ms=1000</producerConfig>
<!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
<producerConfig>max.block.ms=0</producerConfig>
<!-- Optional parameter to use a fixed partition -->
<partition>8</partition>
</appender>
<appender name="KAFKA_ASYNC" class="ch.qos.logback.classic.AsyncAppender">
<appender-ref ref="KAFKA" />
</appender>
<root level="INFO">
<appender-ref ref="CONSOLE"/>
<appender-ref ref="INFO_FILE"/>
<appender-ref ref="WARN_FILE"/>
<appender-ref ref="ERROR_FILE"/>
<if condition='"true".equals(property("kafka_enabled"))'>
<then>
<appender-ref ref="KAFKA_ASYNC"/>
</then>
</if>
</root>
</included>
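The KAFKA_ASYNC wrapper above uses Logback's AsyncAppender with its default settings. If log bursts are a concern, the standard AsyncAppender options can be tuned; a minimal sketch (the values are illustrative, not part of the original setup):
<appender name="KAFKA_ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <!-- In-memory queue for pending events (default 256) -->
    <queueSize>512</queueSize>
    <!-- When fewer free slots than this remain, TRACE/DEBUG/INFO events are dropped; 0 disables dropping -->
    <discardingThreshold>0</discardingThreshold>
    <!-- Never block the application thread, even when the queue is full -->
    <neverBlock>true</neverBlock>
    <appender-ref ref="KAFKA" />
</appender>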
Note:
- <partition>8</partition> pins every message to that partition. If your topic only has partitions 0~7, sending will fail; either remove the property or point it at a valid partition.
- HostNameKeyingStrategy controls how the record key is generated. Kafka picks the target partition from the key, so keying by hostname puts all logs from one server in the same partition, preserving their time ordering. The default, NoKeyKeyingStrategy, spreads messages randomly across partitions, which loses the time ordering and is not recommended for logging.
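For reference, a topic with enough partitions for the configuration above could be created like this (the broker address and partition count are illustrative; on Kafka versions before 2.2, use --zookeeper instead of --bootstrap-server):
kafka-topics.sh --create \
  --bootstrap-server 127.0.0.1:9092 \
  --replication-factor 1 \
  --partitions 9 \
  --topic testapplog_logback-framework-project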
logback-pattern.xml
<?xml version="1.0" encoding="UTF-8"?>
<included>
<!-- Rendering rules, e.g. colored output and exception formatting -->
<conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter" />
<conversionRule conversionWord="wex" converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter" />
<conversionRule conversionWord="wEx" converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter" />
<!-- Custom conversion rules (implemented below) -->
<conversionRule conversionWord="ip" converterClass="com.ryan.utils.IPAddressConverter" />
<conversionRule conversionWord="module" converterClass="com.ryan.utils.ModuleConverter" />
<!-- Context properties -->
<springProperty scope="context" name="spring_application_name" source="spring.application.name" />
<springProperty scope="context" name="server_port" source="server.port" />
<!-- Kafka properties -->
<springProperty scope="context" name="kafka_enabled" source="ryan.web.logging.kafka.enabled"/>
<springProperty scope="context" name="kafka_broker" source="ryan.web.logging.kafka.broker"/>
<springProperty scope="context" name="kafka_env" source="ryan.web.logging.kafka.env"/>
<!-- The log line format is as follows: -->
<!-- appID | module | dateTime | level | requestID | traceID | requestIP | userIP | serverIP | serverPort | processID | thread | location | detailInfo-->
<!-- CONSOLE_LOG_PATTERN is referenced from console-appender.xml -->
<property name="CONSOLE_LOG_PATTERN" value="%clr(${spring_application_name}){cyan}|%clr(%module){blue}|%clr(%d{ISO8601}){faint}|%clr(%p)|%X{requestId}|%X{X-B3-TraceId:-}|%X{requestIp}|%X{userIp}|%ip|${server_port}|${PID}|%clr(%t){faint}|%clr(%.40logger{39}){cyan}.%clr(%method){cyan}:%L|%m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
<!-- FILE_LOG_PATTERN is referenced from logback-defaults.xml -->
<property name="FILE_LOG_PATTERN" value="${spring_application_name}|%module|%d{ISO8601}|%p|%X{requestId}|%X{X-B3-TraceId:-}|%X{requestIp}|%X{userIp}|%ip|${server_port}|${PID}|%t|%.40logger{39}.%method:%L|%m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
<!--
Default logger levels copied over from org/springframework/boot/logging/logback/defaults.xml
-->
<logger name="org.apache.catalina.startup.DigesterFactory" level="ERROR"/>
<logger name="org.apache.catalina.util.LifecycleBase" level="ERROR"/>
<logger name="org.apache.coyote.http11.Http11NioProtocol" level="WARN"/>
<logger name="org.apache.sshd.common.util.SecurityUtils" level="WARN"/>
<logger name="org.apache.tomcat.util.net.NioSelectorPool" level="WARN"/>
<logger name="org.eclipse.jetty.util.component.AbstractLifeCycle" level="ERROR"/>
<logger name="org.hibernate.validator.internal.util.Version" level="WARN"/>
</included>
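The %module and %ip conversion words used in the two patterns above are registered by the conversionRule entries at the top of this file and implemented by the two converters shown next.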
Custom module converter
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.pattern.ClassicConverter;

/**
 * Resolves the module name for a log event (rendered by the %module conversion word).
 *
 * @author zhangjianbing
 * time 2019/7/9
 */
public class ModuleConverter extends ClassicConverter {

    /** Logger names longer than this are treated as class names rather than module names. */
    private static final int MAX_LENGTH = 20;

    @Override
    public String convert(ILoggingEvent event) {
        // A short logger name (e.g. an @Slf4j topic) is shown as the module;
        // a fully-qualified class name leaves the module column blank.
        if (event.getLoggerName().length() > MAX_LENGTH) {
            return "";
        } else {
            return event.getLoggerName();
        }
    }
}
Custom IP converter
import java.net.InetAddress;
import java.net.UnknownHostException;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.pattern.ClassicConverter;
import lombok.extern.slf4j.Slf4j;

/**
 * Resolves the server IP address (rendered by the %ip conversion word).
 *
 * @author zhangjianbing
 * time 2019/7/9
 */
@Slf4j
public class IPAddressConverter extends ClassicConverter {

    private static String ipAddress;

    static {
        // Resolve once at class-load time; the address does not change per event.
        try {
            ipAddress = InetAddress.getLocalHost().getHostAddress();
        } catch (UnknownHostException e) {
            log.error("fetch localhost host address failed", e);
            ipAddress = "UNKNOWN";
        }
    }

    @Override
    public String convert(ILoggingEvent event) {
        return ipAddress;
    }
}
Per-module output
Add a topic to @Slf4j:
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import lombok.extern.slf4j.Slf4j;

/**
 * @author zhangjianbing
 * time 2019/7/9
 */
@RestController
@RequestMapping(value = "/portal")
@Slf4j(topic = "LogbackController")
public class LogbackController {

    @RequestMapping(value = "/gohome")
    public void m1() {
        log.info("buddy,we go home~");
    }
}
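The topic attribute only changes the logger name that Lombok generates, so the same effect is available without Lombok; a minimal equivalent sketch:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogbackController {
    // Equivalent to @Slf4j(topic = "LogbackController"): the name is short enough
    // (<= 20 characters) for ModuleConverter to render it in the module column.
    private static final Logger log = LoggerFactory.getLogger("LogbackController");
}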
Custom log levels
To print SQL statements, set the DAO package to the debug level:
logging.path = /tmp
logging.level.com.ryan.trading.account.dao = debug
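The same can be expressed directly in the Logback XML instead of application.properties, for example:
<logger name="com.ryan.trading.account.dao" level="DEBUG"/>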
Pushing to Kafka
ryan.web.logging.kafka.enabled=true
# separate multiple brokers with commas
ryan.web.logging.kafka.broker=127.0.0.1:9092
# used as the environment prefix when composing the Kafka topic name
ryan.web.logging.kafka.env=test
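With these settings and spring.application.name=logback-framework-project, the <topic> pattern ${kafka_env}applog_${spring_application_name} in the KAFKA appender resolves to testapplog_logback-framework-project, which is the topic created in the CLI example earlier.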
Log format field reference
Name | Meaning | Format |
---|---|---|
AppID | Application identifier | |
Module | Module/subsystem | |
DateTime | Date and time | TimeStamp |
Level | Log level | Level |
RequestID | Request identifier | |
TraceID | Call-chain (trace) identifier | |
RequestIP | Request IP | IP |
UserIP | User IP | IP |
ServerIP | Server IP | IP |
ServerPort | Server port | Port |
ProcessID | Process identifier | |
Thread | Thread name | |
Location | Code location | |
DetailInfo | Detailed log message | |
Startup example
logback-framework-project||2019-07-09 21:25:48,135|INFO|||||192.168.0.102|8080|49877|main|com.ryan.LogbackBootStrap.logStarting:50|Starting LogbackBootStrap on bjw0101020035.lhwork.net with PID 49877 (/Users/zhangjianbing/learn-note/logback-learn-note/target/classes started by zhangjianbing in /Users/zhangjianbing/learn-note)
logback-framework-project||2019-07-09 21:25:48,138|INFO|||||192.168.0.102|8080|49877|main|com.ryan.LogbackBootStrap.logStartupProfileInfo:652|No active profile set, falling back to default profiles: default
logback-framework-project||2019-07-09 21:25:48,248|INFO|||||192.168.0.102|8080|49877|main|ConfigServletWebServerApplicationContext.prepareRefresh:589|Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@1d2bd371: startup date [Tue Jul 09 21:25:48 CST 2019]; root of context hierarchy
logback-framework-project||2019-07-09 21:25:50,155|INFO|||||192.168.0.102|8080|49877|main|o.s.b.w.embedded.tomcat.TomcatWebServer.initialize:91|Tomcat initialized with port(s): 8080 (http)
logback-framework-project||2019-07-09 21:25:50,249|INFO|||||192.168.0.102|8080|49877|main|o.apache.catalina.core.StandardService.log:180|Starting service [Tomcat]
logback-framework-project||2019-07-09 21:25:50,249|INFO|||||192.168.0.102|8080|49877|main|org.apache.catalina.core.StandardEngine.log:180|Starting Servlet Engine: Apache Tomcat/8.5.28
logback-framework-project||2019-07-09 21:25:50,256|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.a.catalina.core.AprLifecycleListener.log:180|The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/Users/zhangjianbing/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.]
logback-framework-project||2019-07-09 21:25:50,405|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.a.c.c.C.[Tomcat].[localhost].[/].log:180|Initializing Spring embedded WebApplicationContext
logback-framework-project||2019-07-09 21:25:50,406|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.s.web.context.ContextLoader.prepareWebApplicationContext:285|Root WebApplicationContext: initialization completed in 2159 ms
logback-framework-project||2019-07-09 21:25:50,566|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.s.b.w.servlet.ServletRegistrationBean.addRegistration:185|Servlet dispatcherServlet mapped to [/]
logback-framework-project||2019-07-09 21:25:50,572|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.s.b.w.servlet.FilterRegistrationBean.configure:243|Mapping filter: 'characterEncodingFilter' to: [/*]
logback-framework-project||2019-07-09 21:25:50,573|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.s.b.w.servlet.FilterRegistrationBean.configure:243|Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
logback-framework-project||2019-07-09 21:25:50,573|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.s.b.w.servlet.FilterRegistrationBean.configure:243|Mapping filter: 'httpPutFormContentFilter' to: [/*]
logback-framework-project||2019-07-09 21:25:50,573|INFO|||||192.168.0.102|8080|49877|localhost-startStop-1|o.s.b.w.servlet.FilterRegistrationBean.configure:243|Mapping filter: 'requestContextFilter' to: [/*]
logback-framework-project||2019-07-09 21:25:50,966|INFO|||||192.168.0.102|8080|49877|main|s.w.s.m.m.a.RequestMappingHandlerAdapter.initControllerAdviceCache:567|Looking for @ControllerAdvice: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@1d2bd371: startup date [Tue Jul 09 21:25:48 CST 2019]; root of context hierarchy
logback-framework-project||2019-07-09 21:25:51,097|INFO|||||192.168.0.102|8080|49877|main|s.w.s.m.m.a.RequestMappingHandlerMapping.register:548|Mapped "{[/portal/gohome]}" onto public void com.ryan.logback.LogbackController.m1()
logback-framework-project||2019-07-09 21:25:51,111|INFO|||||192.168.0.102|8080|49877|main|s.w.s.m.m.a.RequestMappingHandlerMapping.register:548|Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
logback-framework-project||2019-07-09 21:25:51,113|INFO|||||192.168.0.102|8080|49877|main|s.w.s.m.m.a.RequestMappingHandlerMapping.register:548|Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)
logback-framework-project||2019-07-09 21:25:51,164|INFO|||||192.168.0.102|8080|49877|main|o.s.w.s.handler.SimpleUrlHandlerMapping.registerHandler:373|Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
logback-framework-project||2019-07-09 21:25:51,165|INFO|||||192.168.0.102|8080|49877|main|o.s.w.s.handler.SimpleUrlHandlerMapping.registerHandler:373|Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
logback-framework-project||2019-07-09 21:25:51,262|INFO|||||192.168.0.102|8080|49877|main|o.s.w.s.handler.SimpleUrlHandlerMapping.registerHandler:373|Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
logback-framework-project||2019-07-09 21:25:51,540|INFO|||||192.168.0.102|8080|49877|main|o.s.j.e.a.AnnotationMBeanExporter.afterSingletonsInstantiated:434|Registering beans for JMX exposure on startup
logback-framework-project||2019-07-09 21:25:51,603|INFO|||||192.168.0.102|8080|49877|main|o.s.b.w.embedded.tomcat.TomcatWebServer.start:205|Tomcat started on port(s): 8080 (http) with context path ''
logback-framework-project||2019-07-09 21:25:51,608|INFO|||||192.168.0.102|8080|49877|main|com.ryan.LogbackBootStrap.logStarted:59|Started LogbackBootStrap in 5.001 seconds (JVM running for 7.818)
The end. The appender used in this article comes from: https://github.com/danielwegener/logback-kafka-appender