Preface
Following up on the previous post (Microservice logging: using NLog with Kafka for log collection in .NET Core, https://www.cnblogs.com/maxzhang1985/p/9522017.html), which covered the .NET/Core side, our goal is to collect logs uniformly from both the dotnet and java services in a microservice environment.
In the Java world, Spring Boot + Logback integrates with Kafka for log collection with very little effort.
Spring Boot Integration
Maven dependency management
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-core</artifactId>
            <version>1.2.3</version>
        </dependency>
    </dependencies>
</dependencyManagement>
Dependency references:
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC1</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.0</version>
</dependency>
logback-spring.xml
Add a logback-spring.xml configuration file under the Spring Boot project's resources directory. Note: be sure to change {"appname":"webdemo"}; this value can also be set as a variable in the configuration. STDOUT is the fallback output used when the Kafka connection fails, so each project should adapt that part to its own situation. Normal log output uses the asynchronous delivery strategy for better performance:
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
        <customFields>{"appname":"webdemo"}</customFields>
        <includeMdc>true</includeMdc>
        <includeContext>true</includeContext>
        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
            <maxDepthPerThrowable>30</maxDepthPerThrowable>
            <rootCauseFirst>true</rootCauseFirst>
        </throwableConverter>
    </encoder>
    <topic>loges</topic>
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
    <producerConfig>bootstrap.servers=127.0.0.1:9092</producerConfig>
    <!-- don't wait for a broker to ack the reception of a batch -->
    <producerConfig>acks=0</producerConfig>
    <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
    <producerConfig>linger.ms=1000</producerConfig>
    <!-- even if the producer buffer runs full, do not block the application but start to drop messages
         (block.on.buffer.full was removed in newer Kafka clients; max.block.ms=0 is its replacement) -->
    <producerConfig>max.block.ms=0</producerConfig>
    <!-- fall back to this appender when the Kafka connection fails -->
    <appender-ref ref="STDOUT" />
</appender>
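The appender-ref above points at a STDOUT appender that is not shown in this post; a minimal console appender sketch that satisfies the reference might look like the following (the log pattern is only an illustrative assumption, adjust it to taste):

```xml
<!-- Fallback console appender referenced by the Kafka appenders.
     The pattern below is an assumed example, not from the original post. -->
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder charset="UTF-8">
        <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
</appender>
```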
Again, be sure to change {"appname":"webdemo"} (or set it as a configuration variable). If errors and exceptions thrown by third-party frameworks or libraries also need to be written to the log, configure an error appender as follows:
<appender name="kafkaAppenderERROR" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
        <customFields>{"appname":"webdemo"}</customFields>
        <includeMdc>true</includeMdc>
        <includeContext>true</includeContext>
        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
            <maxDepthPerThrowable>30</maxDepthPerThrowable>
            <rootCauseFirst>true</rootCauseFirst>
        </throwableConverter>
    </encoder>
    <topic>ep_component_log</topic>
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.BlockingDeliveryStrategy">
        <!-- wait indefinitely until the kafka producer is able to send the message -->
        <timeout>0</timeout>
    </deliveryStrategy>
    <producerConfig>bootstrap.servers=127.0.0.1:9092</producerConfig>
    <!-- don't wait for a broker to ack the reception of a batch -->
    <producerConfig>acks=0</producerConfig>
    <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
    <producerConfig>linger.ms=1000</producerConfig>
    <appender-ref ref="STDOUT" />
    <filter class="ch.qos.logback.classic.filter.LevelFilter"><!-- only pass ERROR-level events -->
        <level>ERROR</level>
        <onMatch>ACCEPT</onMatch>
        <onMismatch>DENY</onMismatch>
    </filter>
</appender>
The error appender uses a blocking (synchronous) delivery strategy to guarantee that error logs are reliably collected; of course, this can be adjusted to fit your project.
Logging configuration suggestions:
Pointing the root logger at the error appender is enough to capture exception logs from third-party frameworks:
<root level="INFO">
    <appender-ref ref="kafkaAppenderERROR" />
</root>
For normal logs, it is recommended to ship only your own application's output, for example (for reference only):
<logger name="your.project.package" additivity="false">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="kafkaAppender" />
</logger>
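As noted earlier, the appname value does not have to be hard-coded; in a logback-spring.xml it can be read from the Spring environment via the springProperty extension. A sketch, assuming spring.application.name is set in application.yml:

```xml
<!-- Assumes spring.application.name is defined in application.yml;
     falls back to "webdemo" when it is not. -->
<springProperty scope="context" name="appname"
                source="spring.application.name" defaultValue="webdemo"/>
```

The encoder of either Kafka appender can then reference it as &lt;customFields&gt;{"appname":"${appname}"}&lt;/customFields&gt;. Note that springProperty only works in a file named logback-spring.xml (not plain logback.xml), because it is processed by Spring Boot rather than by logback itself.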
Finally
GitHub: https://github.com/maxzhang1985/YOYOFx . If you find it useful, please give it a Star; feedback and discussion are welcome.
.NET Core open-source study QQ group: 214741894