[Kafka Learning Part 5] Kafka Operations: Operation Log Configuration and Topic Deletion


1. Operation Logs

First, here is the Kafka operation log configuration file, log4j.properties.

Adjust the levels below to match your logging needs.

# Log level override rules. Priority: ALL < DEBUG < INFO < WARN < ERROR < FATAL < OFF
# 1. A child logger (log4j.logger) overrides the root logger (log4j.rootLogger); it sets the output level of the logger itself, while Threshold sets the level an appender accepts.
# 2. If the log4j.logger level is lower (more verbose) than the appender's Threshold, the appender's effective level is the Threshold.
# 3. If the log4j.logger level is higher than the Threshold, the effective level is the logger's level, since nothing below it is emitted in the first place.
# 4. Child loggers are mainly used to separate output from the root logger, usually together with log4j.additivity.
# log4j.additivity controls whether a logger inherits its parent's appenders; the default is true.
# With true, messages sent to stateChangeAppender would also go to stdout and kafkaAppender; here we want separate output, so it is set to false and messages go to stateChangeAppender only.
# If a log4j.logger entry names no appender, it falls back to the appenders of log4j.rootLogger.
# With org.apache.log4j.RollingFileAppender you can use MaxFileSize to cap the file size and MaxBackupIndex to cap the number of files.
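For example, a size-based alternative to the date-based appenders below might look like this (a minimal sketch; the appender name sizeAppender and the 100MB/10-file limits are illustrative, not part of the original config):

log4j.appender.sizeAppender=org.apache.log4j.RollingFileAppender
log4j.appender.sizeAppender.File=${kafka.logs.dir}/server.log
log4j.appender.sizeAppender.MaxFileSize=100MB
log4j.appender.sizeAppender.MaxBackupIndex=10
log4j.appender.sizeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.sizeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n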

# Root logger
log4j.rootLogger=ERROR, stdout, kafkaAppender

# Console appender and layout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

# kafkaAppender: broker server log
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.Append=true
log4j.appender.kafkaAppender.Threshold=ERROR
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# State change log
log4j.logger.state.change.logger=ERROR, stateChangeAppender
log4j.additivity.state.change.logger=false
log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Request handling
log4j.logger.kafka.request.logger=ERROR, requestAppender
log4j.additivity.kafka.request.logger=false
log4j.logger.kafka.network.Processor=ERROR, requestAppender
log4j.additivity.kafka.network.Processor=false
log4j.logger.kafka.server.KafkaApis=ERROR, requestAppender
log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.network.RequestChannel$=ERROR, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false
log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Log cleaner (kafka-logs cleanup)
log4j.logger.kafka.log.LogCleaner=ERROR, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false
log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Controller
log4j.logger.kafka.controller=ERROR, controllerAppender
log4j.additivity.kafka.controller=false
log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Authorizer
log4j.logger.kafka.authorizer.logger=ERROR, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# ZkClient
log4j.logger.org.I0Itec.zkclient.ZkClient=ERROR
# ZooKeeper
log4j.logger.org.apache.zookeeper=ERROR
# Kafka
log4j.logger.kafka=ERROR
# org.apache.kafka
log4j.logger.org.apache.kafka=ERROR
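The broker picks this file up through the KAFKA_LOG4J_OPTS JVM option, which the stock startup scripts set to the config directory by default. To point at a custom copy, you can export the option before starting (a sketch; the path assumes the install directory used elsewhere in this post):

export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/home/cluster/kafka211/config/log4j.properties"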

 

Second, Kafka prints GC logs by default, as shown below:

[cluster@PCS102 logs]$ ls
kafka-authorizer.log          kafkaServer-gc.log.3  kafkaServer-gc.log.8      server.log.2018-10-22-14
kafka-request.log             kafkaServer-gc.log.4  kafkaServer-gc.log.9      server.log.2018-10-22-15
kafkaServer-gc.log.0          kafkaServer-gc.log.5  kafkaServer.out
kafkaServer-gc.log.1          kafkaServer-gc.log.6  server.log
kafkaServer-gc.log.2.current  kafkaServer-gc.log.7  server.log.2018-10-22-13

 

These are not needed in production, so turn them off. The Kafka home bin directory contains a kafka-run-class.sh script; open it with vim

and add the line KAFKA_GC_LOG_OPTS=" " (a single space) after the original assignment. Once Kafka is restarted, GC logs are no longer written.

[cluster@PCS102 bin]$ vim kafka-run-class.sh

GC_FILE_SUFFIX='-gc.log'
GC_LOG_FILE_NAME=''
if [ "x$GC_LOG_ENABLED" = "xtrue" ]; then
  GC_LOG_FILE_NAME=$DAEMON_NAME$GC_FILE_SUFFIX
  KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M"
  # added line: override the options above so no GC log is written
  KAFKA_GC_LOG_OPTS=" "
fi
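After saving the change, restart the broker for it to take effect, for example (assuming a broker started with the stock scripts and the config path used on this cluster):

cd /home/cluster/kafka211/bin
./kafka-server-stop.sh
./kafka-server-start.sh -daemon ../config/server.properties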


You can also write a scheduled cleanup script and run it from crontab, e.g. daily at 02:00 with the entry: 0 2 * * * /home/cluster/kafka211/bin/cleanupkafkalog.sh

#!/bin/bash

# Kafka log directory
logDir=/home/cluster/kafka211/logs

# keep the newest 60 rotated server.log files
count=60
count=$((count + 1))

LOGNUM=$(ls $logDir/server.log.* 2>/dev/null | wc -l)
if [ "$LOGNUM" -gt 0 ]; then
    # list by modification time, newest first, and remove everything past the cutoff
    ls -t $logDir/server.log.* | tail -n +$count | xargs rm -f
fi

# kafkaServer.out grows without rotation, so remove it outright
if [ -e "$logDir/kafkaServer.out" ]; then
    rm -f "$logDir/kafkaServer.out"
fi
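To schedule it, make the script executable and append the entry to the current user's crontab (a minimal sketch):

chmod +x /home/cluster/kafka211/bin/cleanupkafkalog.sh
(crontab -l 2>/dev/null; echo "0 2 * * * /home/cluster/kafka211/bin/cleanupkafkalog.sh") | crontab -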

2. Deleting a Topic and Its Message Data

As an example, delete the topic t1205.
(1) Delete the topic from the Kafka cluster; at this point the topic is only marked for deletion.
./kafka-topics.sh --zookeeper node3:2181,node4:2181,node5:2181 --delete --topic t1205
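Whether the brokers actually carry out the deletion is governed by delete.topic.enable in server.properties (false by default on older releases, which is why the topic only gets marked and the manual cleanup below is needed). To let Kafka delete topics on its own, set on every broker:

delete.topic.enable=true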

(2) On every broker node, delete the topic's on-disk data.
Remove the topic's partition directories (named t1205-0, t1205-1, and so on) from the Kafka data directory; the path comes from the broker configuration: server.properties -> log.dirs=/var/kafka/log/tmp
rm -r /var/kafka/log/tmp/t1205*

(3) In the ZooKeeper client, delete the topic metadata:
rmr /brokers/topics/t1205

(4) Delete the topic's "marked for deletion" entry:
rmr /admin/delete_topics/t1205
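Put together, a minimal zkCli.sh session for steps (3) and (4) might look like this (a sketch; on ZooKeeper 3.5+ the rmr command has been replaced by deleteall):

./zkCli.sh -server node3:2181
rmr /brokers/topics/t1205
rmr /admin/delete_topics/t1205
quit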


Finally, restart ZooKeeper and the Kafka cluster, then list the topics to confirm the topic is gone:
./kafka-topics.sh --list --zookeeper node3:2181,node4:2181,node5:2181

