In recent interviews I've been getting a lot of questions about log management, so today let's talk about local log output and full-chain (distributed) trace logging.

Goal: introduce the local logging system and the distributed trace logging system used in our framework. Local logging uses Logback, the logger Spring Boot recommends, in an essentially zero-configuration form. Full-chain trace logging, also called distributed logging, uses Zipkin. The principle is simple:

1. Each module integrates Zipkin's log collection and writes its trace logs to Kafka (producer);
2. The Zipkin server scrapes Kafka (consumer) and stores the logs in Elasticsearch;
3. The traces are displayed with the Zipkin UI or Kibana; on the projects I worked on before, we used both.
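The three steps above can be sketched with in-memory stand-ins (plain Java collections, not the real Kafka or Elasticsearch clients; the span strings are made-up examples):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Conceptual sketch of the trace pipeline: services produce spans to a topic,
// the Zipkin collector consumes them and indexes them into a store that the
// UI later queries. Illustration only, not the real clients.
public class TracePipelineSketch {

    // Drain every span from the "topic" into the "index", as the Zipkin
    // collector does when it consumes Kafka and writes to Elasticsearch.
    static List<String> collect(Queue<String> kafkaTopic) {
        List<String> elasticsearch = new ArrayList<>();
        while (!kafkaTopic.isEmpty()) {
            elasticsearch.add(kafkaTopic.poll());
        }
        return elasticsearch;
    }

    public static void main(String[] args) {
        Queue<String> topic = new ArrayDeque<>();
        // 1. service modules publish their spans (producers)
        topic.add("traceId=948cb6650eb020a6 span=gateway");
        topic.add("traceId=948cb6650eb020a6 span=user-service");
        // 2. the Zipkin server consumes and indexes them
        List<String> index = collect(topic);
        // 3. Zipkin UI / Kibana reads from the index to render the trace
        System.out.println(index.size()); // prints 2
    }
}
```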
I. Local log output configuration
1. Add the log configuration file [src/main/resources/logback-spring.xml]

You must use this configuration; it was agreed with the ops team in advance. The log file format, storage path, retention period, naming, and so on are all specified, so don't roll your own format that ops won't accept.

Note that the configuration uses a variable, ${app.dir}, whose value must be set first.

There are two ways to do that. The first is to pull in our pre-packaged project module via pom.xml; the current version is 0.0.2.

The other way is to write the code yourself: add the following Java code to your application's startup class.
```java
log.info("Initializing System.setProperty(\"app.dir\")");
String userDir = System.getProperty("user.dir");
System.setProperty("app.dir", userDir.substring(userDir.lastIndexOf(File.separator)));
```
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="CONSOLE_LOG_PATTERN"
              value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:[%L]){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
    <property name="FILE_LOG_PATTERN"
              value="${FILE_LOG_PATTERN:-%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } --- [%t] %-40.40logger{39} :[%L] %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}"/>
    <property name="LOG_PATH" value="/log/web/${app.dir}"/>
    <property name="LOG_FILE"
              value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/${app.dir}-info}"/>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_FILE}.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!-- roll over hourly: the %d pattern below includes the hour -->
            <fileNamePattern>${LOG_FILE}-%d{yyyy-MM-dd-HH00}.%i.log</fileNamePattern>
            <!-- each file at most 100MB; keep 360 rolled files (15 days at
                 hourly rollover), capped at 20GB in total -->
            <maxFileSize>100MB</maxFileSize>
            <maxHistory>360</maxHistory>
            <totalSizeCap>20GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
    </appender>
    <logger name="org.springframework.security" level="DEBUG"/>
    <logger name="org.springframework.cloud.sleuth.instrument.web.client.feign.TraceFeignClient" level="DEBUG"/>
    <logger name="org.springframework.web.servlet.DispatcherServlet" level="DEBUG"/>
    <logger name="org.springframework.cloud.sleuth.instrument.web.TraceFilter" level="DEBUG"/>
    <logger name="com.jarvis.cache" level="DEBUG"/>
    <logger name="com.youli" level="DEBUG"/>
    <logger name="org.springframework.security.web.util.matcher" level="INFO"/>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>
```
2. Add the environment-variable setup code to the Spring Boot startup class
```java
public static void main(String[] args) {
    // Only these two lines need to be added: they capture the name of the
    // directory the jar runs from. Log files are then written under
    // /log/web/{app.dir}.
    String userDir = System.getProperty("user.dir");
    System.setProperty("app.dir", userDir.substring(userDir.lastIndexOf(File.separator)));
    // End
    SpringApplication.run(AppGatewayApplication.class, args);
}
```
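To see what those two lines actually compute, here is the same substring logic in isolation (the deployment path is a hypothetical example):

```java
public class AppDirDemo {

    // Same logic as the startup class: everything from the last separator on.
    // Note the result keeps the leading separator, so /log/web/${app.dir}
    // resolves with a doubled slash, which is harmless on Linux.
    static String appDir(String userDir, char separator) {
        return userDir.substring(userDir.lastIndexOf(separator));
    }

    public static void main(String[] args) {
        // Hypothetical deployment path, for illustration only.
        System.out.println(appDir("/home/deploy/app-gateway", '/')); // prints /app-gateway
    }
}
```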
3. Code logic: the Lombok way

Lombok is a very handy toolkit (annotations such as @Setter and @Getter, among others). It's too big a topic to cover here; if you're not familiar with it, look it up.

Installation is also simple: download the jar from the official site and run it locally with java -jar.

Add the Lombok dependency to pom.xml:
```xml
<!-- Lombok -->
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
</dependency>
```
Annotate the class with @Slf4j, and you can start logging right away:
```java
package com.youli.demo.controller;

import javax.servlet.http.HttpServletRequest;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import lombok.extern.slf4j.Slf4j;

@RestController
@Slf4j // add this annotation
@RequestMapping(value = "/demo/log")
public class LogController {

    // This line is no longer needed:
    // private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @RequestMapping(value = "/logMe")
    public String logMe(HttpServletRequest request) {
        String a = request.getParameter("a");
        String b = request.getParameter("b");
        // Never build log messages by string concatenation; use placeholders,
        // so no formatting cost is paid when the log level is raised.
        // logger.debug("params: a={}, b={}", a, b);
        // logger is gone -- use the log field injected by @Slf4j
        log.debug("params: a={}, b={}", a, b);
        return "ok";
    }
}
```
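The placeholder rule in the comments above matters because SLF4J checks the log level before formatting the arguments, whereas string concatenation pays the formatting cost unconditionally. A minimal sketch of the idea (my own stand-in, not SLF4J's real internals):

```java
public class LazyLogDemo {

    static boolean debugEnabled = false; // pretend DEBUG is off in production
    static int formatCalls = 0;

    // Stand-in for SLF4J's placeholder substitution.
    static String format(String pattern, Object arg) {
        formatCalls++;
        return pattern.replace("{}", String.valueOf(arg));
    }

    // Stand-in for log.debug(pattern, arg): the level is checked first,
    // so the argument is never formatted when DEBUG is disabled.
    static void debug(String pattern, Object arg) {
        if (!debugEnabled) {
            return;
        }
        System.out.println(format(pattern, arg));
    }

    public static void main(String[] args) {
        debug("params: a={}", 42);        // no formatting work happens
        System.out.println(formatCalls);  // prints 0
    }
}
```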
After the application starts, you can see the log files under D:\log\web\[your project directory name].
4. Logging scenarios

Requests returning 404

Modify the log configuration file [src/main/resources/logback-spring.xml] and add a logger entry that sets the RequestMappingHandlerMapping class to DEBUG output:
```xml
<logger name="org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping" level="DEBUG"/>
```
Then, whenever a 404 occurs, the server log shows output like this:
```
2017-12-14 11:35:17.691 DEBUG 4484 --- [nio-9050-exec-1] s.w.s.m.m.a.RequestMappingHandlerMapping : Looking up handler method for path /abcdsdfsdf
2017-12-14 11:35:17.694 DEBUG 4484 --- [nio-9050-exec-1] s.w.s.m.m.a.RequestMappingHandlerMapping : Did not find handler method for [/abcdsdfsdf]
2017-12-14 11:35:17.699 DEBUG 4484 --- [nio-9050-exec-1] s.w.s.m.m.a.RequestMappingHandlerMapping : Looking up handler method for path /error
2017-12-14 11:35:17.701 DEBUG 4484 --- [nio-9050-exec-1] s.w.s.m.m.a.RequestMappingHandlerMapping : Returning handler method [public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)]
```
II. Distributed trace log configuration

1. Goal

Two simple configuration steps are all it takes. The sign of success is that every log line carries a global trace ID, like the sample below ([bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] is the global trace ID and span ID):
```
2017-12-15 15:07:55.570 INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] s.c.a.AnnotationConfigApplicationContext :[583] Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@40c35b01: startup date [Fri Dec 15 15:07:55 CST 2017]; parent: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@162be91c
2017-12-15 15:07:55.636 INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] f.a.AutowiredAnnotationBeanPostProcessor :[155] JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2017-12-15 15:07:55.920 INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] c.netflix.config.ChainedDynamicProperty :[115] Flipping property: common-external-platform.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2017-12-15 15:07:55.945 INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] c.n.u.concurrent.ShutdownEnabledTimer :[58] Shutdown hook installed for: NFLoadBalancer-PingTimer-common-external-platform
2017-12-15 15:07:55.965 INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] c.netflix.loadbalancer.BaseLoadBalancer :[192] Client: common-external-platform instantiated a LoadBalancer: DynamicServerListLoadBalancer:{NFLoadBalancer:name=common-external-platform,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:null
2017-12-15 15:07:55.971 INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] c.n.l.DynamicServerListLoadBalancer :[214] Using serverListUpdater PollingServerListUpdater
2017-12-15 15:07:55.999 INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] c.netflix.config.ChainedDynamicProperty :[115] Flipping property: common-external-platform.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2017-12-15 15:07:56.001 INFO [bootstrap,948cb6650eb020a6,948cb6650eb020a6,true] 5856 --- [nio-9001-exec-1] c.n.l.DynamicServerListLoadBalancer :[150] DynamicServerListLoadBalancer for client common-external-platform initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=common-external-platform,current list of Servers=[10.18.2.82:9050, 10.18.2.81:9050],Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone; Instance count:2; Active connections count: 0; Circuit breaker tripped count: 0; Active connections per server: 0.0;]
```
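The bracketed block in each line above has a fixed shape: [service-name, traceId, spanId, exportable], where the last flag says whether the span is exported to Zipkin. A small sketch (my own helper, not part of Sleuth) that pulls the fields apart:

```java
public class SleuthTagDemo {

    // Split the Sleuth bracket [service,traceId,spanId,exportable] into fields.
    static String[] fields(String tag) {
        return tag.substring(1, tag.length() - 1).split(",");
    }

    public static void main(String[] args) {
        String[] f = fields("[bootstrap,948cb6650eb020a6,948cb6650eb020a6,true]");
        System.out.println("service=" + f[0]);   // prints service=bootstrap
        System.out.println("traceId=" + f[1]);   // prints traceId=948cb6650eb020a6
        System.out.println("exported=" + f[3]);  // prints exported=true
    }
}
```

Grepping for one traceId across all services' logs reconstructs the whole request path, which is the entire point of the exercise.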
2. Steps

1. Modify [pom.xml]: add the Zipkin dependencies
```xml
<!-- distributed trace logging with Zipkin, collected via Kafka -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin-stream</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
```
2. Modify [src/main/resources/application.yml]: configure the Kafka and ZooKeeper nodes (note: the values below are for the SIT environment) and set the Sleuth sample rate to 1 (100% sampling).
```yaml
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: 192.168.1.3:9092,192.168.1.4:9092,192.168.1.5:9092
          zkNodes: 192.168.1.3:2181
  sleuth:
    sampler:
      percentage: 1.0
```
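percentage: 1.0 keeps every trace, which is fine for SIT but usually too chatty for production. Conceptually the sampler is just a probability gate; here is a sketch of the idea (my own simplification, not Sleuth's actual sampler class):

```java
import java.util.Random;

public class SamplerSketch {

    // Keep a trace with the configured probability; 1.0 keeps everything.
    static boolean isSampled(double percentage, Random rnd) {
        if (percentage >= 1.0) {
            return true;
        }
        return rnd.nextDouble() < percentage;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        System.out.println(isSampled(1.0, rnd)); // prints true: 100% sampling
        // at 0.1 only roughly one trace in ten would be kept
    }
}
```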
And that's it. Simple, right?