Any application that serves as a backend must produce proper logs; otherwise, how would you ever troubleshoot a problem?
And how logs get written is a craft in itself. Otherwise the Java world would not have so many vendors competing to provide logging frameworks!
Log rolling, in turn, is usually a hard requirement, since nobody can guarantee the volume and readability of logs otherwise. There are two main directions for implementing it:
1. Let the logging framework roll the logs itself, managing its own files. The upside: the application is self-contained and needs no external help. The downside: you depend entirely on that application; rolling affects its performance, and if the application has a bug, the feature cannot be guaranteed. (I will walk through logback's rolling shortly.)
2. Delegate rolling to a third-party tool, typically via the console or an agent. The upside: the rolling function is independent and non-intrusive to your code; if it really misbehaves, you can simply kill it with no harm done. Also, a third-party tool will not fail to roll because of a bug in the application itself, which guarantees you always have enough material to troubleshoot with. (I will use cronolog as the example shortly.)
Concrete implementations of log rolling
1. Rolling from within the application: logback's rolling policies, for instance, provide this out of the box. But the pitfalls are plenty!
1.1. First, let's look at the rolling configuration (in logback.xml):
```xml
<!-- Output to a file -->
<appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${log_path}/api.ln.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${log_path}/api.%d{yyyy-MM-dd_HH}.log</fileNamePattern>
        <!-- keep 10 days' worth of history capped at 8GB total size -->
        <maxHistory>10</maxHistory>
        <totalSizeCap>8GB</totalSizeCap>
    </rollingPolicy>
    <encoder>
        <pattern>%d{MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
</appender>
```
This configures time-based rolling, once every hour! History is limited by maxHistory=10 and the total size is capped at 8GB. (Whether maxHistory really means "10 days" here, as the comment claims, is a pitfall we will get to.) Let's see how it behaves!
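As a side note, the way the `%d{...}` token inside fileNamePattern expands into an archived file name can be illustrated with a small sketch. `expand()` here is a hypothetical helper, not a logback API; logback's `FileNamePattern` class does the real work and handles many more tokens:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class ArchiveName {
    // Expand a single logback-style %d{...} token in a fileNamePattern for a
    // given date. Simplified sketch: assumes exactly one %d{...} is present.
    static String expand(String pattern, Date date) {
        int start = pattern.indexOf("%d{");
        int end = pattern.indexOf('}', start);
        String datePattern = pattern.substring(start + 3, end);
        String formatted = new SimpleDateFormat(datePattern).format(date);
        return pattern.substring(0, start) + formatted + pattern.substring(end + 1);
    }
}
```

With the pattern above and an event at 03:00 on 2020-01-02, the archived file would be named `api.2020-01-02_03.log`.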
1.2. Now, the rolling code!
First, you would expect log rolling to have a dedicated thread running at all times (whether the rolling is implemented by the application or by a third-party tool; how else would it detect the rolling moment?)! In EventPlayer there is a play() method, which checks whether each parsed configuration event is an EndEvent; if so, the corresponding end actions are invoked, and that is where the rolling policy gets started!
```java
// ch.qos.logback.core.joran.spi.EventPlayer
public void play(List<SaxEvent> aSaxEventList) {
    eventList = aSaxEventList;
    SaxEvent se;
    for (currentIndex = 0; currentIndex < eventList.size(); currentIndex++) {
        se = eventList.get(currentIndex);

        if (se instanceof StartEvent) {
            interpreter.startElement((StartEvent) se);
            // invoke fireInPlay after startElement processing
            interpreter.getInterpretationContext().fireInPlay(se);
        }
        if (se instanceof BodyEvent) {
            // invoke fireInPlay before characters processing
            interpreter.getInterpretationContext().fireInPlay(se);
            interpreter.characters((BodyEvent) se);
        }
        // the rollingPolicy is started from here, via the end actions
        if (se instanceof EndEvent) {
            // invoke fireInPlay before endElement processing
            interpreter.getInterpretationContext().fireInPlay(se);
            interpreter.endElement((EndEvent) se);
        }
    }
}
```
Then, after a few hops, we reach the Interpreter, which iterates over the applicable end actions and invokes each one!
```java
// ch.qos.logback.core.joran.spi.Interpreter
private void callEndAction(List<Action> applicableActionList, String tagName) {
    if (applicableActionList == null) {
        return;
    }
    // logger.debug("About to call end actions on node: [" + localName + "]");
    Iterator<Action> i = applicableActionList.iterator();

    while (i.hasNext()) {
        Action action = i.next();
        // now let us invoke the end method of the action. We catch and report
        // any eventual exceptions
        try {
            action.end(interpretationContext, tagName);
        } catch (ActionException ae) {
            // at this point endAction, there is no point in skipping children as
            // they have been already processed
            cai.addError("ActionException in Action for tag [" + tagName + "]", ae);
        } catch (RuntimeException e) {
            // no point in setting skip
            cai.addError("RuntimeException in Action for tag [" + tagName + "]", e);
        }
    }
}
```
Finally, RollingPolicy's start() gets invoked; here that is TimeBasedRollingPolicy.
```java
// ch.qos.logback.core.rolling.TimeBasedRollingPolicy
public void start() {
    // set the LR for our utility object
    renameUtil.setContext(this.context);

    // find out period from the filename pattern
    if (fileNamePatternStr != null) {
        fileNamePattern = new FileNamePattern(fileNamePatternStr, this.context);
        determineCompressionMode();
    } else {
        addWarn(FNP_NOT_SET);
        addWarn(CoreConstants.SEE_FNP_NOT_SET);
        throw new IllegalStateException(FNP_NOT_SET + CoreConstants.SEE_FNP_NOT_SET);
    }

    compressor = new Compressor(compressionMode);
    compressor.setContext(context);

    // wcs : without compression suffix
    fileNamePatternWithoutCompSuffix = new FileNamePattern(
            Compressor.computeFileNameStrWithoutCompSuffix(fileNamePatternStr, compressionMode), this.context);

    addInfo("Will use the pattern " + fileNamePatternWithoutCompSuffix + " for the active file");

    if (compressionMode == CompressionMode.ZIP) {
        String zipEntryFileNamePatternStr = transformFileNamePattern2ZipEntry(fileNamePatternStr);
        zipEntryFileNamePattern = new FileNamePattern(zipEntryFileNamePatternStr, context);
    }

    // by default, DefaultTimeBasedFileNamingAndTriggeringPolicy drives the rolling
    if (timeBasedFileNamingAndTriggeringPolicy == null) {
        timeBasedFileNamingAndTriggeringPolicy = new DefaultTimeBasedFileNamingAndTriggeringPolicy<E>();
    }
    timeBasedFileNamingAndTriggeringPolicy.setContext(context);
    timeBasedFileNamingAndTriggeringPolicy.setTimeBasedRollingPolicy(this);
    timeBasedFileNamingAndTriggeringPolicy.start();

    if (!timeBasedFileNamingAndTriggeringPolicy.isStarted()) {
        addWarn("Subcomponent did not start. TimeBasedRollingPolicy will not start.");
        return;
    }

    // the maxHistory property is given to TimeBasedRollingPolicy instead of to
    // the TimeBasedFileNamingAndTriggeringPolicy. This makes it more convenient
    // for the user at the cost of inconsistency here.
    if (maxHistory != UNBOUND_HISTORY) {
        archiveRemover = timeBasedFileNamingAndTriggeringPolicy.getArchiveRemover();
        archiveRemover.setMaxHistory(maxHistory);
        archiveRemover.setTotalSizeCap(totalSizeCap.getSize());
        if (cleanHistoryOnStart) {
            addInfo("Cleaning on start up");
            Date now = new Date(timeBasedFileNamingAndTriggeringPolicy.getCurrentTime());
            cleanUpFuture = archiveRemover.cleanAsynchronously(now);
        }
    } else if (!isUnboundedTotalSizeCap()) {
        addWarn("'maxHistory' is not set, ignoring 'totalSizeCap' option with value [" + totalSizeCap + "]");
    }

    // call the parent's start(): sets the started flag so initialization cannot run twice
    super.start();
}

// DefaultTimeBasedFileNamingAndTriggeringPolicy mostly delegates to its base
// class; itself it handles a few error cases and creates the archive remover
// for the concrete policy to use
@Override
public void start() {
    super.start();
    if (!super.isErrorFree())
        return;

    if (tbrp.fileNamePattern.hasIntegerTokenConverter()) {
        addError("Filename pattern [" + tbrp.fileNamePattern
                + "] contains an integer token converter, i.e. %i, INCOMPATIBLE with this configuration. Remove it.");
        return;
    }

    archiveRemover = new TimeBasedArchiveRemover(tbrp.fileNamePattern, rc);
    archiveRemover.setContext(context);
    started = true;
}

// TimeBasedFileNamingAndTriggeringPolicyBase.start() does the actual set-up
// of the rolling logic
public void start() {
    DateTokenConverter<Object> dtc = tbrp.fileNamePattern.getPrimaryDateTokenConverter();
    if (dtc == null) {
        throw new IllegalStateException("FileNamePattern [" + tbrp.fileNamePattern.getPattern()
                + "] does not contain a valid DateToken");
    }

    if (dtc.getTimeZone() != null) {
        rc = new RollingCalendar(dtc.getDatePattern(), dtc.getTimeZone(), Locale.getDefault());
    } else {
        rc = new RollingCalendar(dtc.getDatePattern());
    }
    addInfo("The date pattern is '" + dtc.getDatePattern() + "' from file name pattern '"
            + tbrp.fileNamePattern.getPattern() + "'.");
    rc.printPeriodicity(this);

    if (!rc.isCollisionFree()) {
        addError("The date format in FileNamePattern will result in collisions in the names of archived log files.");
        addError(CoreConstants.MORE_INFO_PREFIX + COLLIDING_DATE_FORMAT_URL);
        withErrors();
        return;
    }

    setDateInCurrentPeriod(new Date(getCurrentTime()));
    if (tbrp.getParentsRawFileProperty() != null) {
        File currentFile = new File(tbrp.getParentsRawFileProperty());
        if (currentFile.exists() && currentFile.canRead()) {
            setDateInCurrentPeriod(new Date(currentFile.lastModified()));
        }
    }
    addInfo("Setting initial period to " + dateInCurrentPeriod);
    computeNextCheck();
}
```
After all this initialization, we find that no polling thread has been started at all, which defies the naive expectation! No matter; let's press on. Let's look at RollingFileAppender's append() logic, since that is the entry point for every log event!
```java
// ch.qos.logback.core.rolling.RollingFileAppender; the entry point is
// UnsynchronizedAppenderBase.doAppend()

// ch.qos.logback.core.OutputStreamAppender
@Override
protected void append(E eventObject) {
    if (!isStarted()) {
        return;
    }
    // delegates to the RollingFileAppender implementation
    subAppend(eventObject);
}

// ch.qos.logback.core.rolling.RollingFileAppender
@Override
protected void subAppend(E event) {
    // The roll-over check must precede actual writing. This is the
    // only correct behavior for time driven triggers.

    // We need to synchronize on triggeringPolicy so that only one rollover
    // occurs at a time
    synchronized (triggeringPolicy) {
        if (triggeringPolicy.isTriggeringEvent(currentlyActiveFile, event)) {
            rollover();
        }
    }

    super.subAppend(event);
}
```
Here, rollover() is the rolling logic itself!
So there you have it: file rolling is driven by writes from outside. The reason is to keep writes thread-safe and the file intact!
In other words, when the rolling moment arrives, the file only gets rolled if some write comes in; otherwise, nothing rolls the file on its own. If no log line is ever written, no log rolling will ever happen!
Let's look at the rolling condition first: triggeringPolicy.isTriggeringEvent(currentlyActiveFile, event)
```java
// ch.qos.logback.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy
public boolean isTriggeringEvent(File activeFile, final E event) {
    long time = getCurrentTime();
    if (time >= nextCheck) {
        Date dateOfElapsedPeriod = dateInCurrentPeriod;
        addInfo("Elapsed period: " + dateOfElapsedPeriod);
        elapsedPeriodsFileName = tbrp.fileNamePatternWithoutCompSuffix.convert(dateOfElapsedPeriod);
        setDateInCurrentPeriod(time);
        computeNextCheck();
        return true;
    } else {
        return false;
    }
}
```
As shown, the check simply compares the current time against the next scheduled rollover time; once we have passed it, it returns true and computes the next rollover time for later use!
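Stripped of the file-naming details, the triggering idea can be sketched as follows. TimeTrigger is a hypothetical class, not logback code; logback computes the boundary via RollingCalendar rather than plain arithmetic, but the check-and-advance shape is the same:

```java
public class TimeTrigger {
    long nextCheck;          // millis at which the current period ends
    final long periodMillis; // length of one rolling period, e.g. one hour

    TimeTrigger(long startMillis, long periodMillis) {
        this.periodMillis = periodMillis;
        this.nextCheck = (startMillis / periodMillis + 1) * periodMillis;
    }

    // Same spirit as isTriggeringEvent(): roll only when "now" has passed the
    // precomputed boundary, then compute the next boundary for later use.
    boolean shouldRoll(long now) {
        if (now >= nextCheck) {
            nextCheck = (now / periodMillis + 1) * periodMillis;
            return true;
        }
        return false;
    }
}
```

Notice that the boundary is recomputed from `now`, not just incremented, so a long quiet stretch still triggers exactly one roll on the next write.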
Next, the actual file rolling. Two main steps: 1. rename the file away, which is the roll itself; 2. create a fresh target file so that subsequent writes can proceed!
```java
/**
 * Implemented by delegating most of the rollover work to a rolling policy.
 */
public void rollover() {
    // lock is a ReentrantLock, i.e. mutually exclusive: one thread at a time!
    lock.lock();
    try {
        // Note: This method needs to be synchronized because it needs exclusive
        // access while it closes and then re-opens the target file.
        //
        // make sure to close the hereto active log file! Renaming under windows
        // does not work for open files.
        this.closeOutputStream();
        attemptRollover();
        attemptOpenFile();
    } finally {
        lock.unlock();
    }
}

// rolling is delegated to the configured policy implementation; here that is
// TimeBasedRollingPolicy
private void attemptRollover() {
    try {
        rollingPolicy.rollover();
    } catch (RolloverFailure rf) {
        addWarn("RolloverFailure occurred. Deferring roll-over.");
        // we failed to roll-over, let us not truncate and risk data loss
        this.append = true;
    }
}

// ch.qos.logback.core.rolling.TimeBasedRollingPolicy
public void rollover() throws RolloverFailure {
    // when rollover is called the elapsed period's file has
    // been already closed. This is a working assumption of this method.

    String elapsedPeriodsFileName = timeBasedFileNamingAndTriggeringPolicy.getElapsedPeriodsFileName();
    String elapsedPeriodStem = FileFilterUtil.afterLastSlash(elapsedPeriodsFileName);

    if (compressionMode == CompressionMode.NONE) {
        if (getParentsRawFileProperty() != null) {
            renameUtil.rename(getParentsRawFileProperty(), elapsedPeriodsFileName);
        } // else { nothing to do if CompressionMode == NONE and parentsRawFileProperty == null }
    } else {
        if (getParentsRawFileProperty() == null) {
            compressionFuture = compressor.asyncCompress(elapsedPeriodsFileName, elapsedPeriodsFileName, elapsedPeriodStem);
        } else {
            compressionFuture = renameRawAndAsyncCompress(elapsedPeriodsFileName, elapsedPeriodStem);
        }
    }

    if (archiveRemover != null) {
        Date now = new Date(timeBasedFileNamingAndTriggeringPolicy.getCurrentTime());
        this.cleanUpFuture = archiveRemover.cleanAsynchronously(now);
    }
}
```
TimeBasedRollingPolicy rolls simply by renaming: it takes the externally configured active file, builds a new path from the archive naming pattern, and renames the file to it! The rename itself has some subtleties; if you are curious, take a look at its implementation!
```java
// ch.qos.logback.core.rolling.helper.RenameUtil
/**
 * A relatively robust file renaming method which in case of failure due to
 * src and target being on different volumes, falls back onto
 * renaming by copying.
 */
public void rename(String src, String target) throws RolloverFailure {
    if (src.equals(target)) {
        addWarn("Source and target files are the same [" + src + "]. Skipping.");
        return;
    }
    File srcFile = new File(src);

    if (srcFile.exists()) {
        // if the target directory does not exist, it gets created first; so you
        // can roll to anywhere without preparing directories (permissions aside)
        File targetFile = new File(target);
        createMissingTargetDirsIfNecessary(targetFile);

        addInfo("Renaming file [" + srcFile + "] to [" + targetFile + "]");

        boolean result = srcFile.renameTo(targetFile);

        // if the direct rename fails and the two paths are on different volumes,
        // a second attempt renames by copying: the file is copied to the new
        // location and the original is then deleted
        if (!result) {
            addWarn("Failed to rename file [" + srcFile + "] as [" + targetFile + "].");
            Boolean areOnDifferentVolumes = areOnDifferentVolumes(srcFile, targetFile);
            if (Boolean.TRUE.equals(areOnDifferentVolumes)) {
                addWarn("Detected different file systems for source [" + src + "] and target ["
                        + target + "]. Attempting rename by copying.");
                renameByCopying(src, target);
                return;
            } else {
                addWarn("Please consider leaving the [file] option of "
                        + RollingFileAppender.class.getSimpleName() + " empty.");
                addWarn("See also " + RENAMING_ERROR_URL);
            }
        }
    } else {
        throw new RolloverFailure("File [" + src + "] does not exist.");
    }
}
```
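For comparison, the same rename-with-copy-fallback pattern can be sketched with the standard java.nio.file API. This is a sketch of the technique, not logback's code (RenameUtil probes volumes explicitly and reports warnings instead of silently falling back):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SafeRename {
    // Try a direct move first; if it fails (e.g. src and target live on
    // different file systems), fall back to copy-then-delete.
    static boolean rename(Path src, Path target) {
        try {
            if (target.getParent() != null) {
                Files.createDirectories(target.getParent()); // create missing target dirs
            }
            Files.move(src, target);
            return true;
        } catch (IOException moveFailed) {
            try {
                Files.copy(src, target, StandardCopyOption.REPLACE_EXISTING);
                Files.delete(src);
                return true;
            } catch (IOException copyFailed) {
                return false;
            }
        }
    }

    // Self-contained demo on a temp directory: returns true when the source
    // ends up at the target path and is gone from its original location.
    static boolean demo() {
        try {
            Path dir = Files.createTempDirectory("ren");
            Path src = Files.createFile(dir.resolve("a.log"));
            Path target = dir.resolve("archive").resolve("a.2020-01-02.log");
            return rename(src, target) && Files.exists(target) && !Files.exists(src);
        } catch (IOException e) {
            return false;
        }
    }
}
```

Note that the copy-then-delete fallback is not atomic: a reader can briefly observe both files, which is exactly why logback prefers a plain rename when it can.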
After the rename has rolled the log away, there is one more possible task: deleting expired logs! This is done by archiveRemover, the instance created earlier in DefaultTimeBasedFileNamingAndTriggeringPolicy, via archiveRemover.cleanAsynchronously(now):
```java
public Future<?> cleanAsynchronously(Date now) {
    ArhiveRemoverRunnable runnable = new ArhiveRemoverRunnable(now);
    ExecutorService executorService = context.getScheduledExecutorService();
    Future<?> future = executorService.submit(runnable);
    return future;
}
```
To delete expired logs, an ExecutorService is obtained first and the deletion runs asynchronously; by default this executor keeps 8 resident threads for log housekeeping!
Running the deletion asynchronously avoids holding up the business. The cleanup process goes as follows:
```java
public class ArhiveRemoverRunnable implements Runnable {
    Date now;

    ArhiveRemoverRunnable(Date now) {
        this.now = now;
    }

    @Override
    public void run() {
        // first clean the elapsed periods, then enforce the configured size cap
        clean(now);
        if (totalSizeCap != UNBOUNDED_TOTAL_SIZE_CAP && totalSizeCap > 0) {
            capTotalSize(now);
        }
    }
}

public void clean(Date now) {
    long nowInMillis = now.getTime();
    // for a live appender periodsElapsed is expected to be 1
    int periodsElapsed = computeElapsedPeriodsSinceLastClean(nowInMillis);
    lastHeartBeat = nowInMillis;
    if (periodsElapsed > 1) {
        addInfo("Multiple periods, i.e. " + periodsElapsed
                + " periods, seem to have elapsed. This is expected at application start.");
    }
    for (int i = 0; i < periodsElapsed; i++) {
        // the deletion target is offset by maxHistory; i.e. only periodsElapsed
        // historical periods get cleaned per run
        int offset = getPeriodOffsetForDeletionTarget() - i;
        Date dateOfPeriodToClean = rc.getEndOfNextNthPeriod(now, offset);
        cleanPeriod(dateOfPeriodToClean);
    }
}

public void cleanPeriod(Date dateOfPeriodToClean) {
    // collect the files to delete and delete them one by one; if the containing
    // directory ends up empty, delete the directory as well
    File[] matchingFileArray = getFilesInPeriod(dateOfPeriodToClean);
    for (File f : matchingFileArray) {
        addInfo("deleting " + f);
        f.delete();
    }
    if (parentClean && matchingFileArray.length > 0) {
        File parentDir = getParentDir(matchingFileArray[0]);
        removeFolderIfEmpty(parentDir);
    }
}

// match the files to delete for the given period
protected File[] getFilesInPeriod(Date dateOfPeriodToClean) {
    String filenameToDelete = fileNamePattern.convert(dateOfPeriodToClean);
    File file2Delete = new File(filenameToDelete);
    if (fileExistsAndIsFile(file2Delete)) {
        return new File[] { file2Delete };
    } else {
        return new File[0];
    }
}

// size-based history cleanup: note that unless totalSizeCap is set, no
// size-based cleanup happens at all!
void capTotalSize(Date now) {
    long totalSize = 0;
    long totalRemoved = 0;
    for (int offset = 0; offset < maxHistory; offset++) {
        Date date = rc.getEndOfNextNthPeriod(now, -offset);
        File[] matchingFileArray = getFilesInPeriod(date);
        descendingSortByLastModified(matchingFileArray);
        for (File f : matchingFileArray) {
            long size = f.length();
            if (totalSize + size > totalSizeCap) {
                addInfo("Deleting [" + f + "]" + " of size " + new FileSize(size));
                totalRemoved += size;
                f.delete();
            }
            totalSize += size;
        }
    }
    addInfo("Removed " + new FileSize(totalRemoved) + " of files");
}
```
That is the expired-log deletion logic. A few key points:
1. Only maxHistory periods' worth of logs get cleaned, i.e. the remover only looks back n periods;
2. Size-based cleanup only deletes files once the accumulated size exceeds totalSizeCap (this strongly depends on the file ordering, which here is by last-modified time);
3. maxHistory is not a maximum number of retention days; don't trust sloppy documentation! It is merely a scan window counted in rollover periods, though the size-cap pass above does use it too!
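The asynchronous hand-off behind cleanAsynchronously() can be sketched like this. It is a simplified stand-in: the deletion is simulated by counting names, and logback reuses the context's scheduled executor rather than creating one per call:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncCleanup {
    // Run the "deletion" on a pool thread; in logback the returned Future is
    // kept (cleanUpFuture) and awaited on stop(), not immediately.
    static int cleanAsynchronously(List<String> expired) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<Integer> future = executor.submit(() -> {
                int deleted = 0;
                for (String f : expired) {
                    deleted++; // a real remover would call new File(f).delete() here
                }
                return deleted;
            });
            return future.get(); // wait only for the demo; logback does not block here
        } catch (Exception e) {
            return -1;
        } finally {
            executor.shutdown();
        }
    }
}
```

The point of the Future is that the appending thread never pays for the disk I/O of deletion; it only triggers it.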
One more detail deserves a look: the rolling boundary itself. By day, by hour, by minute?
```java
// determining the rolling boundary
// ch.qos.logback.core.rolling.helper.RollingCalendar
public Date getEndOfNextNthPeriod(Date now, int periods) {
    return innerGetEndOfNextNthPeriod(this, this.periodicityType, now, periods);
}

static private Date innerGetEndOfNextNthPeriod(Calendar cal, PeriodicityType periodicityType, Date now, int numPeriods) {
    cal.setTime(now);
    switch (periodicityType) {
    case TOP_OF_MILLISECOND:
        cal.add(Calendar.MILLISECOND, numPeriods);
        break;
    case TOP_OF_SECOND:
        cal.set(Calendar.MILLISECOND, 0);
        cal.add(Calendar.SECOND, numPeriods);
        break;
    case TOP_OF_MINUTE:
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        cal.add(Calendar.MINUTE, numPeriods);
        break;
    case TOP_OF_HOUR:
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        cal.add(Calendar.HOUR_OF_DAY, numPeriods);
        break;
    case TOP_OF_DAY:
        cal.set(Calendar.HOUR_OF_DAY, 0);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        cal.add(Calendar.DATE, numPeriods);
        break;
    case TOP_OF_WEEK:
        cal.set(Calendar.DAY_OF_WEEK, cal.getFirstDayOfWeek());
        cal.set(Calendar.HOUR_OF_DAY, 0);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        cal.add(Calendar.WEEK_OF_YEAR, numPeriods);
        break;
    case TOP_OF_MONTH:
        cal.set(Calendar.DATE, 1);
        cal.set(Calendar.HOUR_OF_DAY, 0);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        cal.add(Calendar.MONTH, numPeriods);
        break;
    default:
        throw new IllegalStateException("Unknown periodicity type.");
    }
    return cal.getTime();
}
```
You can see the available granularities: TOP_OF_MILLISECOND / TOP_OF_SECOND / TOP_OF_MINUTE / TOP_OF_HOUR / TOP_OF_DAY / TOP_OF_WEEK / TOP_OF_MONTH. Quite fine-grained, as it goes! Whether the finest of them are actually useful is another matter!
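For instance, the TOP_OF_HOUR branch boils down to the following (a cut-down sketch of innerGetEndOfNextNthPeriod covering only the hourly case): zero out the smaller fields, then add the requested number of periods.

```java
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

public class PeriodBoundary {
    // End of the Nth hour after (or, with negative periods, before) "now":
    // truncate minutes/seconds/millis, then step by whole hours.
    static Date endOfNextNthHour(Date now, int periods) {
        Calendar cal = new GregorianCalendar();
        cal.setTime(now);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        cal.add(Calendar.HOUR_OF_DAY, periods);
        return cal.getTime();
    }
}
```

Called at 03:45:10 with periods = 1 this yields 04:00:00, which is exactly the nextCheck value the triggering policy compares writes against.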
To sum up logback's rolling!
1. The rolling check happens at write time; if the moment has passed, the file is rolled;
2. The rollover operation is synchronized, which keeps it thread-safe;
3. Rolling is done by renaming the file; if that fails across file systems, a single copy-based fallback is attempted;
4. Expired-log deletion has two parts; the first looks at the n periods preceding the current one and deletes whatever files it finds;
5. When a maximum total size is configured, files within the allowed periods are additionally checked against it, and once exceeded, the oldest-modified files are deleted;
6. Deletion runs asynchronously after a rollover triggers, so it generally does not affect the business;
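Point 5 deserves a closer look: the cap pass walks files newest-first and deletes every file that would push the running total over totalSizeCap. A simplified sketch over (name → size) pairs — a hypothetical helper operating on a map instead of real files; note that, matching capTotalSize() above, a doomed file's size still counts toward the running total:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SizeCap {
    // Entries must be ordered newest-first (logback sorts descending by
    // last-modified). Every file whose size would push the running total over
    // the cap is marked for deletion.
    static List<String> filesToDelete(LinkedHashMap<String, Long> newestFirst, long cap) {
        List<String> doomed = new ArrayList<>();
        long total = 0;
        for (Map.Entry<String, Long> e : newestFirst.entrySet()) {
            if (total + e.getValue() > cap) {
                doomed.add(e.getKey()); // would exceed the cap: delete it
            }
            total += e.getValue();      // counted even when deleted, as in logback
        }
        return doomed;
    }
}
```

The consequence of the newest-first order is exactly the behavior summarized above: once the cap is hit, everything older goes.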
2. Third-party tools, such as: the classic cronolog, or the trendier logrotate (more hassle)
cronolog is a venerable log rolling tool (it is probably no longer maintained). It accepts an application's output log and stores it according to a rule, e.g. splitting files by year, month, day, hour, minute, or second!
There is not much material about it left online, and many people rack their brains just to find an installation package! Let me offer a convenient package too: download here;
Its GitHub project address is https://github.com/fordmason/cronolog ; you are entirely free to grab a complete copy there and build it yourself!
Still, let me mention the other installation routes:
1. Install directly from yum (you may need the EPEL repository installed first) (recommended):

```shell
yum install cronolog -y
```

2. Use the package downloaded above; just untar it:

```shell
tar -zxvf cronolog-bin.tar.gz -C /
```

3. Build from source packages others provide online:

hehe...
All this talk is, after all, about using it: how does it hook up to the application?
You only need to append the following command to your original application launch line!
```shell
... | /usr/local/sbin/cronolog -S /var/logs/ai_ln.out /var/logs/ai.%Y-%m-%d-%H.out
```
A complete working example looks like this:

```shell
exec nohup java -jar /www/aproj.jar 2>&1 | /usr/local/sbin/cronolog -S /var/logs/ai_ln.out /var/logs/ai.%Y-%m-%d-%H.out >> /dev/null &
```
That is how most people online write it, but in some situations it breaks. For instance, when I start the service remotely, the call never returns a result! Why? In any case, the following variant works perfectly: simply add one more output redirection, 2>&1, after the cronolog part.

```shell
exec nohup java -jar /www/aproj.jar 2>&1 | /usr/local/sbin/cronolog -S /var/logs/ai_ln.out /var/logs/ai.%Y-%m-%d-%H.out >> /dev/null 2>&1 &
```
So what does this tool buy you compared to letting the application write its own logs? And how is it implemented?
The benefits were covered earlier: no code intrusion, and more flexible control!
The implementation principle: accept a standard input stream and write it to the appropriate file. It does not handle file deletion, so cleaning up expired files still requires a separate script!
Its core source code is as follows:
```c
/* Loop, waiting for data on standard input */
for (;;) {
    /** Read a buffer's worth of log file data, exiting on errors
     *  or end of file.
     */
    n_bytes_read = read(0, read_buf, sizeof read_buf);
    if (n_bytes_read == 0) {
        exit(3);
    }
    if (errno == EINTR) {
        continue;
    } else if (n_bytes_read < 0) {
        exit(4);
    }

    time_now = time(NULL) + time_offset;

    /** If the current period has finished and there is a log file
     *  open, close the log file
     */
    if ((time_now >= next_period) && (log_fd >= 0)) {
        close(log_fd);
        log_fd = -1;
    }

    /** If there is no log file open then open a new one. */
    if (log_fd < 0) {
        log_fd = new_log_file(template, linkname, linktype, prevlinkname,
                              periodicity, period_multiple, period_delay,
                              filename, sizeof (filename), time_now, &next_period);
    }

    DEBUG(("%s (%d): wrote message; next period starts at %s (%d) in %d secs\n",
           timestamp(time_now), time_now,
           timestamp(next_period), next_period,
           next_period - time_now));

    /** Write out the log data to the current log file. */
    if (write(log_fd, read_buf, n_bytes_read) != n_bytes_read) {
        perror(filename);
        exit(5);
    }
}
```
Roughly, the operation is:
1. Once the cronolog process starts, it loops forever, unless it hits an error such as the application shutting down;
2. It blocks reading from standard input; once data arrives, it proceeds with the file work;
3. After each read it checks whether a new rolling period has begun; if so, it closes the old file and creates a fresh one for writing;
4. It then simply writes the buffered content into the open file;
5. All input data arrives over a pipe: simple and practical;
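The loop above translates almost line for line into Java. This is a toy stand-in for cronolog, with a hypothetical `app.<hour>.log` name template and none of cronolog's symlink or template handling:

```java
import java.io.BufferedReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.text.SimpleDateFormat;
import java.util.Date;

public class MiniCronolog {
    // Compute the target file name for a point in time; when the hour changes,
    // the name changes, and that is what drives the "rolling".
    static String fileNameFor(Date d) {
        return "app." + new SimpleDateFormat("yyyy-MM-dd-HH").format(d) + ".log";
    }

    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String currentName = null;
        PrintWriter out = null;
        String line;
        while ((line = in.readLine()) != null) {    // block on piped stdin
            String name = fileNameFor(new Date());
            if (!name.equals(currentName)) {        // period boundary crossed
                if (out != null) out.close();       // close the elapsed file
                out = new PrintWriter(new FileWriter(name, true), true);
                currentName = name;
            }
            out.println(line);                      // append the buffered line
        }
        if (out != null) out.close();
    }
}
```

Usage would mirror cronolog: `java MiniCronolog` on the receiving end of a pipe. The design point is the same as in the C code: no timer thread; the period check rides along with each read.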
Looks simple, doesn't it! Could anything go wrong? Probably not: it has stood the test of time, and the simpler something is, the more reliable it usually is!
Looking at the code above, some of you will surely say: anyone can write code this simple, I could knock out a shell script for it in a minute. Leaving aside whether your shell script would be reliable, yours would be shell and this is C; they are hardly in the same league!
Finally, there is one more problem to deal with: cleaning up expired logs.
This simple tool will not do it for you; or at least I have not found such a feature. So we have to script the cleanup ourselves. One line does it!
```shell
# vim clean_log.sh
find /var/logs/ai -mtime +8 -name "ai.*out" -exec rm -rf {} \;

# then add a schedule in crontab, typically once a day:
0 0 * * * sh clean_log.sh
```
Done!
Of course, you can also make it a bit more thorough:
```shell
#!/bin/bash
log_path_prefix=/opt/springboot/logs
expire_hours=3
expire_minutes=$((expire_hours * 60))
now_time=`date "+%Y-%m-%d %H:%M:%S"`
echo "-At $now_time"

# delete files under $1 older than $2 minutes, optionally matching name pattern $3
function del_expire_logs() {
    find_cmd="find $1 -mmin +${2} -type f "
    if [ "$3" != "" ]; then
        find_cmd="$find_cmd -name '$3'"
    fi
    echo "  -Cmd: $find_cmd"
    f_expired_files=`eval $find_cmd`
    echo "  -Find result: $f_expired_files"
    if [ "$f_expired_files" != "" ]; then
        file_list=($f_expired_files)
        for item in ${file_list[@]}; do
            echo "  -Del file: $item"
            rm -rf $item
        done
    fi
}

del_expire_logs $log_path_prefix $expire_minutes "*.out"

log_path_prefix2=/opt/logs
expire_minutes2=2880   # i.e. 2 days
del_expire_logs $log_path_prefix2 $expire_minutes2
```
And that wraps up these log rolling implementations and how they work! Feeling suddenly enlightened? Haha...
Things really are not as hard as they seem!