A Brief Look at Log Collection


Preface

The logs our programs produce are obviously important: with them we can analyze how a program is running, what users are doing, and so on. The classic log monitoring stack is ELK, the combination of ElasticSearch (search and retrieval), Logstash (collection), and Kibana (presentation). As the ecosystem evolved, new acronyms appeared, such as EFK, where the F may stand for Filebeat or sometimes Fluentd. The collectors all work on roughly the same principles; they differ in implementation language and feature set, so choose according to your actual situation. In K8S and docker environments the lighter-weight fluent-bit is a good fit, while on cloud VMs and physical machines the more capable fluentd works well. I have been running the fluentd family in production for a long time, and it has behaved reliably.
In practice we often do not need everything that ELK (or EFK) offers. For cost reasons, it may be enough to aggregate the logs from many machines (or containers) onto one dedicated log machine, organized by date and directory, so that engineers can simply log in and read them; this saves the ES and Kibana resources. That is just the frugal option, of course. The rest of this article walks through complete collection scenarios and how to configure them.
 

Scenario 1: Collecting logs from multiple cloud hosts onto a central LogServer

Since I am already familiar with fluentd, we will use Fluentd here. The principle: Fluentd tails the log files on each AppServer and forwards the new lines to the fluentd instance on the LogServer, which writes them to disk according to its rules. A simple sketch:
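AppServer 1 (fluentd, in_tail) ──┐
AppServer 2 (fluentd, in_tail) ──┼── forward (TCP 24224) ──> LogServer (fluentd, in_forward) ──> /mnt/logs/...
AppServer 3 (fluentd, in_tail) ──┘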
Since logs from several AppServers are being centralized onto one machine, the applications' log output locations should be regular and fixed (agreed upon with the developers). Below is a concrete requirement I once handled:
Log directory structure:
├── public
│   └── release-v0.2.20
│       └── sys_log.log
├── serv_arena
│   └── release-v0.1.10
├── serv_guild
│   └── release-v0.1.10
│       └── sys_log.log
├── serv_name
│   └── release-v0.1.10
│       └── sys_log.log

From the directory structure we can see that the server runs microservices such as public and serv_arena; each microservice directory contains one or more program-version directories, and those version directories hold the actual log files.
 
Raw log format:
The logs here are plain text; other logs may be JSON or some other format, and you choose the appropriate parser for each format.
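Switching formats is just a matter of the <parse> section, for example (a minimal sketch):

<parse>
  @type json   # parse each line as JSON; use "none" to forward raw lines unchanged
</parse>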
 
The developers' requirement:
After the logs reach the LogServer, the directory structure must stay the same.
 
The concrete configuration process follows.
step1:
Install Fluentd on the three App Servers and on the LogServer:
  $ curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh | sh 
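After installation, start the service and confirm the version (assuming a systemd-based host):

  $ sudo systemctl start td-agent
  $ td-agent --version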

 

With the version I installed, setup creates a td-agent.conf file and a plugin directory under /etc/td-agent/. I usually also create a conf.d directory, split the configuration into small files there, and include them from td-agent.conf; this is a personal habit that keeps things tidy. The final layout:

# tree /etc/td-agent/
/etc/td-agent/
├── conf.d
├── plugin
└── td-agent.conf

 

step2:
Fluentd is installed; now it needs configuring. The configuration on the three App Servers is identical, while the LogServer's differs slightly, because Fluentd's role on the LogServer is that of the receiver.
 
LogServer-side Fluentd configuration:
# cat td-agent.conf
<system>
  log_level info
</system>
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
@include /etc/td-agent/conf.d/*.conf 
# cat conf.d/raid.conf
<match raid.**>
  @type file
  path /mnt/logs/raid/%Y%m%d/${tag[4]}/${tag[5]}.${tag[6]}.${tag[7]}/${tag[8]}_%Y%m%d%H
  append true
  <buffer time,tag>
    @type file
    path /mnt/logs/raid/buffer/
    timekey 1h
    chunk_limit_size 5MB
    flush_interval 5s
    flush_mode interval
    flush_thread_count 8
    flush_at_shutdown true
  </buffer>
</match>

Worth mentioning here is the path option, which assembles the output path from parts of the tag; the tag itself is set by the AppServer-side Fluentd configuration and is carried with every event.
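For example, with a tag like raid.mnt.logs.raid.public.release-v0.2.20.sys_log.log (how this tag is produced is explained in the AppServer section below), the placeholders expand as:

  ${tag[4]}                     -> public
  ${tag[5]}.${tag[6]}.${tag[7]} -> release-v0.2.20
  ${tag[8]}_%Y%m%d%H            -> sys_log_2020062307

and the file output appends a .log suffix. The final path layout: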

# tree /mnt/logs/
/mnt/logs/
└── raid
    ├── 20200623
    │   ├── public
    │   └── serv_guild
    │       └── release-v0.1.10
    │           └── sys_log_2020062307.log
    └── buffer

 
AppServer-side Fluentd configuration:
# cat td-agent.conf
@include /etc/td-agent/conf.d/*.conf 
<source>
  @type tail
  path /mnt/logs/raid/public/*/*
  pos_file /var/log/td-agent/public.log.pos
  tag raid.*
  <parse>
    @type none
    time_format %Y-%m-%dT%H:%M:%S.%L
  </parse>
  refresh_interval 5s
</source>


<source>
  @type tail
  path /mnt/logs/raid/serv_arena/*/*
  pos_file /var/log/td-agent/serv_arena.log.pos
  tag raid.*
  <parse>
    @type none
    time_format %Y-%m-%dT%H:%M:%S.%L
  </parse>
</source>


<source>
  @type tail
  path /mnt/logs/raid/serv_guild/*/*
  pos_file /var/log/td-agent/serv_guild.log.pos
  tag raid.*
  <parse>
    @type none
    time_format %Y-%m-%dT%H:%M:%S.%L
  </parse>
</source>


<source>
  @type tail
  path /mnt/logs/raid/serv_name/*/*
  pos_file /var/log/td-agent/serv_name.log.pos
  tag raid.*
  <parse>
    @type none
    time_format %Y-%m-%dT%H:%M:%S.%L
  </parse>
</source>


<filter raid.**>
  @type record_transformer
  <record>
    host_param "#{Socket.gethostname}"
  </record>
</filter>


<match raid.**>
  @type forward
  <server>
    name raid-logserver
    host 10.83.36.106
    port 24224
  </server>
  <format>
    @type single_value
    message_key message
    add_newline true
  </format>
</match>
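After changing the configuration, restart the agent on each host and watch its own log for startup errors:

  $ sudo systemctl restart td-agent
  $ tail -f /var/log/td-agent/td-agent.log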

Once again the interesting parts are the path and tag options. We configured the tag as raid.*, but what does the tag actually contain once it reaches the LogServer, so that we can operate on it there? The * in raid.* expands to the tailed file's path with each / replaced by a ., so tag raid.* actually yields something like: raid.mnt.logs.raid.public.release-v0.2.20.sys_log.log_2020062304.log
This is what makes the later tag-based directory layout possible.
The format single_value parameter also deserves a mention. Without single_value, every stored line is prefixed with a date and the tag, which is usually not what we want: the logs should land on the LogServer verbatim. The message key is added by fluentd automatically (in docker the message key may be called log instead). If the tailed file's records have no message key, do not specify one here, otherwise nothing will match. add_newline controls whether a newline is appended.
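For illustration, without single_value a stored line would look roughly like this (timestamp and record contents are hypothetical):

2020-06-23 04:00:00.000000000 +0000 raid.mnt.logs.raid.public.release-v0.2.20.sys_log.log: {"message":"original log line","host_param":"app-server-1"}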

 

Scenario 2: Shipping the centralized logs from the LogServer into elasticSearch

With scenario 1 in place, the logs are already centralized on the LogServer, and from there they can be pushed on into es or other systems.
Here the project team had a new requirement: chart the online user count in kibana. The log format:
{"host":"5x.24x.6x.x","idleLoad":20000,"intVer":2000,"mode":1,"online":0,"onlineLimit":20000,"port":1000,"serverId":2,"serverName":"-game server-","serverType":21,"serviceMode":0,"sn":173221053,"status":1,"updateTime":1594278043935,"ver":"0.2.40.77","zoneId":2}

 
step1:
Write the fluentd configuration that sends the logs to elasticsearch (LogServer-side configuration):
<match online.**>
  @type elasticsearch
  host search-xxxxxxxxxxxxxxxx.es.amazonaws.com
  port 80
  logstash_format true
  logstash_prefix game-ccu
  default_elasticsearch_version 5
  reconnect_on_error true
  reload_connections false
  type_name doc
  <buffer>
    @type file
    path /mnt/logs/raid/online-buffer/
    chunk_limit_size 5MB
    flush_interval 30s
    flush_mode interval
    flush_thread_count 4
    flush_at_shutdown true
  </buffer>
</match>
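With logstash_format true, the plugin writes into daily indices named game-ccu-YYYY.MM.DD (the logstash_prefix plus the date), which is the naming scheme Kibana index patterns expect.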

Collector-side configuration:

<source>
  @type tail
  path /mnt/logs/raid/game/*/monitor/*
  pos_file /var/log/td-agent/online.log.pos
  tag online
  <parse>
    @type json
  </parse>
  refresh_interval 5s
</source>


<match online.**>
  @type forward
  <server>
    name raid-logserver
    host 10.83.3x.xx
    port 24224
  </server>
</match>

On the collector side, note that the parse type must be json rather than the earlier none. Otherwise every event reaches es wrapped in a message log key, and format cannot strip it because the elasticsearch plugin does not support format; a record that arrives in es as one opaque message string will not be fully parsed by es.
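Roughly speaking (record contents abridged for illustration):

# with @type none, es receives one opaque string field:
{"message": "{\"host\":\"5x.24x.6x.x\",\"online\":0,...}"}

# with @type json, every field is parsed and individually indexed:
{"host": "5x.24x.6x.x", "online": 0, ...}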
 
Once the logs are in es, you can create an index pattern and build the charts:

 

 

Scenario 3: Processing and splitting logs that share a single file, and shipping them to ES

Logs living in different files can each be captured with their own tail, but what if several kinds of logs are mixed into one file (docker's standard output, for instance, can only go to a single file)? This is where fluentd's grep and re-tagging features come in: filter the records by keyword, re-tag them, and route each tag to its own action.
 
The approach: have fluentd watch the relevant log files, filter out every record whose output field is not es, re-tag the rest by keyword, and finally ship them into es. One caveat: fluentd does not allow several match blocks to consume the same tag; only the first one takes effect.
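A minimal illustration of that pitfall (the second match never fires):

<match multi>
  @type stdout            # consumes every event tagged "multi"
</match>
<match multi>
  @type elasticsearch     # never reached: the events were already consumed above
</match>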
 
The developers' requirement:
Watch the following three log files:
/mnt/server/videoslotDevServer/logs/pomelo-tracking-social-server-1.log
/mnt/server/videoslotDevServer/logs/pomelo.log
/mnt/server/videoslotDevServer/pomelo-product-social-server-1.log
 
Sample log line:
{"LOGMSG":"MSGRouter-STC","LOGINDEX":"router","LOGOBJ":"{'type':'s2c_poll_check','token':'','qid':0,'errorCode':0,'list':{'s2c_get_server_time':{'type':'s2c_get_server_time','token':'','qid':0,'serverTime':1595246357723}}}","user_mid":3534,"OUTPUT":"ES","lft":"info","date":"2020-07-20T11:59:17.723Z","time":1595246357723}

1. Only log lines whose OUTPUT value is ES need to be shipped into es.
2. Build a separate es index per keyword.
 
step1:
Fluentd configuration file:
# cat slots-multi.conf
<source>
  @type tail
  keep_time_key true
  path /mnt/server/videoslotDevServer/logs/pomelo-tracking-social-server-1.log, /mnt/server/videoslotDevServer/logs/pomelo.log, /mnt/server/videoslotDevServer/logs/pomelo-product-social-server-1.log
  pos_file /var/log/td-agent/slots_multi.log.pos
  tag multi
  <parse>
    @type json
  time_key @timestamp # the event time that reached es by default was wrong, so I supply my own time field
  </parse>
  refresh_interval 5s
</source>


<filter multi>          # first keep only records whose OUTPUT is es
  @type grep
  <regexp>
    key OUTPUT
    pattern "ES"
  </regexp>
  emit_invalid_record_to_error false
</filter>


<match multi>
  @type rewrite_tag_filter
  <rule>
    key     LOGINDEX
    pattern /(.+)/           # matches the router / coin / logic keywords; any other keyword also matches and automatically gets its own index in es
    tag     videoslots.$1
  </rule>
  emit_invalid_record_to_error false
</match>


<match videoslots.**>
  @log_level debug
  @type elasticsearch
  host search-xxxxxxxxxxxxxxxxx.es.amazonaws.com
  port 80
  logstash_format true
  logstash_prefix sandbox.${tag}
  default_elasticsearch_version 7
  reconnect_on_error true
  reload_connections false
  <buffer tag>   # the tag chunk key is needed so the ${tag} placeholder in logstash_prefix can be resolved
    @type file
    path /sgn/logs/videoslots/buffer_multi/
    chunk_limit_size 5MB
    flush_interval 30s
    flush_mode interval
    flush_thread_count 4
    flush_at_shutdown true
  </buffer>
</match>

Fluentd is genuinely flexible; here is a second, more long-winded way to write the same thing:

<source>
  @type tail
  keep_time_key true
  path /mnt/server/videoslotDevServer/logs/pomelo-tracking-social-server-1.log, /mnt/server/videoslotDevServer/logs/pomelo.log, /mnt/server/videoslotDevServer/logs/pomelo-product-social-server-1.log
  pos_file /var/log/td-agent/slots_multi.log.pos
  tag multi
  <parse>
    @type json
    time_key @timestamp
  </parse>
  refresh_interval 5s
</source>


<filter multi>
  @type grep
  <regexp>
    key OUTPUT
    pattern "ES"
  </regexp>
</filter>


<match multi>
  @type copy
  <store>
    @type rewrite_tag_filter
    <rule>
      key     LOGINDEX
      pattern /^logic$/
      tag     logic
    </rule>
  </store>
  <store>
    @type rewrite_tag_filter
    <rule>
      key     LOGINDEX
      pattern /^router$/
      tag     router
    </rule>
  </store>
  <store>
    @type rewrite_tag_filter
    <rule>
      key     LOGINDEX
      pattern /^coin$/
      tag     coin
    </rule>
  </store>
</match>


<match logic>
  @type elasticsearch
  @log_level debug
  host search-xxxxxxxx.es.amazonaws.com
  port 80
  logstash_format true
  logstash_prefix videoslots-logic
  default_elasticsearch_version 7
  reconnect_on_error true
  reload_connections false
  <buffer>
    @type file
    path /sgn/logs/videoslots/buffer_logic/
    chunk_limit_size 5MB
    flush_interval 30s
    flush_mode interval
    flush_thread_count 4
    flush_at_shutdown true
  </buffer>
</match>


<match router>
  @log_level debug
  @type elasticsearch
  host search-videoslots-xxxxxxxxxxx.us-west-2.es.amazonaws.com
  port 80
  logstash_format true
  logstash_prefix videoslots-router
  default_elasticsearch_version 7
  reconnect_on_error true
  reload_connections false
  <buffer>
    @type file
    path /sgn/logs/videoslots/buffer_router/
    chunk_limit_size 5MB
    flush_interval 30s
    flush_mode interval
    flush_thread_count 4
    flush_at_shutdown true
  </buffer>
</match>


<match coin>
  @log_level debug
  @type elasticsearch
  host search-videoslots-xxxxxxxxxxx.us-west-2.es.amazonaws.com
  port 80
  logstash_format true
  logstash_prefix videoslots-coin
  default_elasticsearch_version 7
  reconnect_on_error true
  reload_connections false
  <buffer>
    @type file
    path /sgn/logs/videoslots/buffer_coin/
    timekey_use_utc true
    chunk_limit_size 5MB
    flush_interval 30s
    flush_mode interval
    flush_thread_count 4
    flush_at_shutdown true
  </buffer>
</match>

 

step2:

Create the index pattern in es.

 

Create a template to set the default configuration applied when matching indices are created:

PUT _template/sandbox_videoslots
{
  "index_patterns": ["sandbox.videoslots*"], # 要給哪些索引生效此類配置
  "settings": {
    "number_of_shards": 1                    # shard值默認1000, 當前為測試環境,只有一個節點,所以調整shard數量為1
  },
  "mappings": {
    "properties": {
      "LOGOBJ": {
        "type": "text"                       # 設置LOGOBJ字段的格式為字串格式
      }
    }
  }
}
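The template can be checked afterwards from the Kibana Dev Tools console:

GET _template/sandbox_videoslots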

Version 7 of es provides index management (ILM in open-source es; ISM on AWS Elasticsearch), which controls how long index data is retained, for example keeping the router logs for 30 days.
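A minimal sketch of such a retention policy using the open-source ILM API (the policy name router-30d is hypothetical; AWS Elasticsearch exposes the equivalent through its ISM API instead):

PUT _ilm/policy/router-30d
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}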

 

Scenario 4: Shipping JAVA logs to ES in a specified format

Fluentd provides a parser for multi-line records, which can be used to parse Java Stacktrace Logs.

The current service produces two types of java log format:

 

# format1
2020-07-28 09:47:37,609 DEBUG [system.server] branch:release/v0.2.61 getHead name:HEAD,objectId:AnyObjectId[c9766419a4a7b691b4156fbf50]

# format2
2020-07-28 00:30:59,520 ERROR [SceneHeartbeat-39] [system.error] hanlder execute error msgId:920
java.lang.IndexOutOfBoundsException: readerIndex(21) + length(1) exceeds writerIndex(21): PooledSlicedByteBuf(ridx: 21, widx: 21, cap: 21/21, unwrapped: PooledUnsafeDirectByteBuf(ridx: 3, widx: 15, cap: 64))
        at io.netty.buffer.AbstractByteBuf.checkReadableBytes0(AbstractByteBuf.java:1451)
        at io.netty.buffer.AbstractByteBuf.readByte(AbstractByteBuf.java:738)
        at com.cg.raid.core.msg.net.NetMsgHelper.getU32(NetMsgHelper.java:7)
        at com.cg.raid.core.msg.net.NetMsgBase.getU32(NetMsgBase.java:79)
        at com.cg.raid.core.msg.net.INetMsg.getInts(INetMsg.java:177

The developers expect the logs to be presented in the following format:

"grok": {
        "field": "message",
        "patterns": ["%{TIMESTAMP_ISO8601:event_time} %{DATA:level} %{DATA:thread} %{DATA:logger} (?m)%{GREEDYDATA:msg}"],
        "on_failure": [
          {
            "set": {
              "field": "grok_error",
              "value": "{{ _ingest.on_failure_message }}"
            }
          }
        ]
      },

Fluentd configuration file (the named captures time, level, thread, logger, and message below correspond to the grok fields above):

<source>
  @type tail
  path /mnt/logs/raid/%Y%m%d/publish/*/*
  pos_file /var/log/td-agent/publish.log.pos
  tag es.raid.publish
  <parse>
    @type multiline
    format_firstline /\d{4}-\d{1,2}-\d{1,2}/
    format1 /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) (?<level>[^\s]+) \[(?<thread>.*)\] (?<message>.*)/
  </parse>
  refresh_interval 5s
</source>


<source>
  @type tail
  path /mnt/logs/raid/%Y%m%d/game/*/*
  pos_file /var/log/td-agent/game.log.pos
  tag es.raid.game
  <parse>
    @type multiline
    format_firstline /\d{4}-\d{1,2}-\d{1,2}/
    format1 /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) (?<level>[^\s]+) \[(?<thread>.*)\] \[(?<logger>.*)\] (?<message>.*)/
  </parse>
  refresh_interval 5s
</source>


<source>
  @type tail
  path /mnt/logs/raid/%Y%m%d/server/*/*
  pos_file /var/log/td-agent/server.log.pos
  tag es.raid.server
  <parse>
    @type multiline
    format_firstline /\d{4}-\d{1,2}-\d{1,2}/
    format1 /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) (?<level>[^\s]+) \[(?<thread>.*)\] \[(?<logger>.*)\] (?<message>.*)/
  </parse>
  refresh_interval 5s
</source>


<source>
  @type tail
  path /mnt/logs/raid/%Y%m%d/inter/*
  pos_file /var/log/td-agent/inter.log.pos
  tag es.raid.inter
  <parse>
    @type multiline
    format_firstline /\d{4}-\d{1,2}-\d{1,2}/
    format1 /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) (?<level>[^\s]+) \[(?<thread>.*)\] (?<message>.*)/
  </parse>
  refresh_interval 5s
</source>


<source>
  @type tail
  path /mnt/logs/raid/%Y%m%d/public/*/*
  pos_file /var/log/td-agent/public.log.pos
  tag es.raid.public
  <parse>
    @type multiline
    format_firstline /\d{4}-\d{1,2}-\d{1,2}/
    format1 /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) (?<level>[^\s]+) \[(?<thread>.*)\] \[(?<logger>.*)\] (?<message>.*)/
  </parse>
  refresh_interval 5s
</source>


<source>
  @type tail
  path /mnt/logs/raid/%Y%m%d/arena/*/*
  pos_file /var/log/td-agent/arena.log.pos
  tag es.raid.arena
  <parse>
    @type multiline
    format_firstline /\d{4}-\d{1,2}-\d{1,2}/
    format1 /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) (?<level>[^\s]+) \[(?<thread>.*)\] \[(?<logger>.*)\] (?<message>.*)/
  </parse>
  refresh_interval 5s
</source>


<source>
  @type tail
  path /mnt/logs/raid/%Y%m%d/guild/*/*
  pos_file /var/log/td-agent/guild.log.pos
  tag es.raid.guild
  <parse>
    @type multiline
    format_firstline /\d{4}-\d{1,2}-\d{1,2}/
    format1 /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) (?<level>[^\s]+) \[(?<thread>.*)\] \[(?<logger>.*)\] (?<message>.*)/
  </parse>
  refresh_interval 5s
</source>


<source>
  @type tail
  path /mnt/logs/raid/%Y%m%d/name/*/*
  pos_file /var/log/td-agent/name.log.pos
  tag es.raid.name
  <parse>
    @type multiline
    format_firstline /\d{4}-\d{1,2}-\d{1,2}/
    format1 /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) (?<level>[^\s]+) \[(?<thread>.*)\] \[(?<logger>.*)\] (?<message>.*)/
  </parse>
  refresh_interval 5s
</source>


<filter es.raid.**>
  @log_level debug
  @type grep
  <regexp>
    key level
    pattern "ERROR"
  </regexp>
</filter>


<match es.raid.**>
  @log_level debug
  @type elasticsearch
  host search-xxxxxx.es.amazonaws.com
  port 80
  logstash_format true
  logstash_prefix raid-log-${tag[2]}
  default_elasticsearch_version 5
  reconnect_on_error true
  reload_connections false
  type_name doc
  <buffer tag>
    @type file
    path /mnt/logs/raid/raid_es_buffer/
    chunk_limit_size 5MB
    flush_interval 5s
    flush_mode interval
    flush_thread_count 4
    flush_at_shutdown true
  </buffer>
</match>
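Note that the tag chunk key in <buffer tag> is what lets the ${tag[2]} placeholder in logstash_prefix resolve per chunk, producing indices such as raid-log-publish and raid-log-game.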

ES index settings:

PUT _template/raid-log
{
  "index_patterns": ["raid-log-*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "properties": {
      "level": {
        "type": "keyword",
        "doc_values": true
      },
      "logger": {
        "type": "keyword",
        "doc_values": true
      },
      "thread": {
        "type": "keyword",
        "doc_values": true
      },
      "message": {
        "type": "text"
      }
    }
  }
}

# number_of_shards of 1 is fine for a test environment
# the text type costs more cpu than keyword, but supports more flexible queries

Finally, just create the index pattern.

 

