ELK (Logstash + Elasticsearch + Kibana): How It Works and a Detailed Setup Guide


1. Elastic Stack

  The Elastic Stack is the official name for ELK (https://www.elastic.co/cn/products). In Elastic's words: "Built on an open source foundation, the Elastic Stack lets you reliably and securely take data from any source, in any format, and search, analyze, and visualize it in real time."

It has three main components:

  • Beats + Logstash: collect data of any format, from any source.

    Beats: Beats is the platform for lightweight data shippers that send data from edge machines to Logstash and Elasticsearch.

    Beats are the workhorses of data collection. Install these shippers on your servers and they forward data to Elasticsearch; if you need more processing power, Beats can also ship the data to Logstash for transformation and parsing. Elastic provides ready-made Beats for different kinds of data:

      Filebeat: log files

      Metricbeat: metrics

      Packetbeat: network data

      Winlogbeat: Windows event logs

      Auditbeat: audit logs

      Heartbeat: uptime/heartbeat monitoring

     Beats are customizable: every open source shipper is built on libbeat, the common library for forwarding data. Need to monitor a proprietary protocol? You can build your own Beat; libbeat provides the building blocks.

     Logstash: Logstash is a dynamic data collection pipeline with an extensible plugin ecosystem and strong synergy with Elasticsearch.

      Logstash is an open source, server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to your favorite "stash" (ours is, of course, Elasticsearch).

      Many inputs: data is often scattered or siloed across many systems, in many formats. Logstash supports a wide variety of inputs and can pull in events from numerous common sources at the same time, continuously streaming data from your logs, metrics, web applications, data stores, and various AWS services.

      

      Outputs: although Elasticsearch is our preferred output and opens up endless possibilities for search and analytics, it is not the only choice. Logstash offers many output plugins, so you can route data wherever you want and unlock a wide range of downstream use cases.

      

      Filters: as data travels from source to stash, Logstash filters parse each event, identify named fields to build structure, and transform them into a common format for easier, faster analysis and business value.

      Extensible: Logstash has a pluggable framework with more than 200 plugins. Mix and match inputs, filters, and outputs and orchestrate them to work harmoniously in a pipeline. Collecting data from a custom application and can't find the plugin you need? Logstash plugins are easy to build; there is a plugin development API and a plugin generator to help you get started and share your work.
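
      To get a feel for the input/filter/output pipeline before wiring up real sources, Logstash can be started with an inline configuration (a minimal sketch: it only echoes stdin lines back as structured events, no Elasticsearch involved):

bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'

      Typing a line prints it back as an event with message, host, @timestamp, and @version fields added.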

  • Elasticsearch 

    Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data and helps you discover the expected and uncover the unexpected.

    Scalability: prototype and production environments switch seamlessly; whether Elasticsearch runs on a single node or on a 300-node cluster, you talk to it in exactly the same way.

    Speed: because every field is indexed, you never have to worry about some data not being searchable, and you can query and access all of your data blazingly fast.
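
    To make the "RESTful" part concrete, here is a minimal sketch of indexing and then searching a document over HTTP (the index name test-index and the localhost address are purely illustrative):

# index a document
curl -X PUT 'http://localhost:9200/test-index/_doc/1' -H 'Content-Type: application/json' -d '{"message": "hello elk"}'
# search it back
curl 'http://localhost:9200/test-index/_search?q=message:hello&pretty'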

  • Kibana

    Kibana presents your data as charts and graphs, and provides an extensible user interface for configuring and managing every part of the Elastic Stack.

    Visualize and explore: Kibana lets you decide how to present your data. You may not even know what you are looking for at first; with Kibana's interactive visualizations you can start from a question and see where it leads.

    Rich visualizations: Kibana core ships with the classics: histograms, line graphs, pie charts, sunbursts, and more, all built on Elasticsearch's aggregation capabilities.
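
    To show what those charts are built on, here is a sketch of a terms aggregation query against Elasticsearch (the index and field names are illustrative, borrowed from the nginx example later in this article):

curl -H 'Content-Type: application/json' 'http://localhost:9200/logstash-nginx-*/_search?size=0&pretty' -d '
{
  "aggs": {
    "errors_by_severity": { "terms": { "field": "err_severity.keyword" } }
  }
}'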

2. Deployment preparation

  • Deployment plan

    Machines: two hosts, 10.1.4.54 and 10.1.4.55, both running CentOS 7

    Layout:

      10.1.4.54:kibana,elasticsearch,logstash,filebeat

      10.1.4.55:elasticsearch,logstash,filebeat

    Packages: download all relevant packages from https://www.elastic.co/cn/products

    Java: Elasticsearch 6.x requires JDK 1.8 or later; JDK 1.8 is used here
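
    A quick check of the JDK on both machines before installing:

java -version
# expect output similar to: java version "1.8.0_102"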

3. Installation steps

  • elasticsearch

  Install Elasticsearch on 10.1.4.54: create a new user elk and upload the package:

sts-MacBook-Pro:Downloads garfield$ scp elasticsearch-6.3.2.tar elk@10.1.4.54:/home/elk

   Extract it:

tar -xvf elasticsearch-6.3.2.tar

   Edit the configuration file:

 vi config/elasticsearch.yml

 

  Changes made to the configuration file:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /home/elk/elasticsearch-6.3.2/data
#
# Path to log files:
#
path.logs: /home/elk/elasticsearch-6.3.2/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.1.4.54
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
# Unicast host list used for master discovery; nodes are master-eligible by default
discovery.zen.ping.unicast.hosts: ["10.1.4.54"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
# Recommended value: (number of master-eligible nodes / 2) + 1 (with the two nodes in this setup that would be 2; 1 is kept here)
discovery.zen.minimum_master_nodes: 1
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

  Start it from the bin directory:

./elasticsearch -d

  Checking port 9200 shows it did not start successfully; the log explains why:

[2018-08-07T14:38:00,757][ERROR][o.e.b.Bootstrap          ] [node-1] node validation exception
[3] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: memory locking requested for elasticsearch process but memory is not locked
[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2018-08-07T14:38:00,759][INFO ][o.e.n.Node               ] [node-1] stopping ...
[2018-08-07T14:38:00,795][INFO ][o.e.n.Node               ] [node-1] stopped
[2018-08-07T14:38:00,796][INFO ][o.e.n.Node               ] [node-1] closing ...
[2018-08-07T14:38:00,848][INFO ][o.e.n.Node               ] [node-1] closed
[2018-08-07T14:38:00,850][INFO ][o.e.x.m.j.p.NativeController] Native controller process has stopped - no new native processes can be started

  As root, raise the limits behind the first two checks (file descriptors and memory locking):

vi /etc/security/limits.conf

  Add the following entries:

# End of file
*           soft   nofile       65536
*           hard   nofile       131072
*           soft    memlock unlimited
*           hard    memlock unlimited
*           hard    nproc   4096
*           soft    nproc   4096

  The new limits take effect after logging in again.
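
  After logging back in as the elk user, they can be verified:

ulimit -n    # max open files, should now print 65536
ulimit -l    # max locked memory, should now print unlimited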

  Then address the remaining check (vm.max_map_count) in another file:

vi /etc/sysctl.conf

  Add the following:

vm.max_map_count=262144
vm.swappiness=1
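
  To apply the sysctl changes without rebooting, run as root:

sysctl -p                 # reload /etc/sysctl.conf
sysctl vm.max_map_count   # should print vm.max_map_count = 262144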

  Start Elasticsearch again and open:

http://10.1.4.54:9200

 

  which returns:

{
  "name" : "node-1",
  "cluster_name" : "my-application",
  "cluster_uuid" : "hIYg-sDBToa0D4C9lzD-cQ",
  "version" : {
    "number" : "6.3.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "053779d",
    "build_date" : "2018-07-20T05:20:23.451332Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

  接着在10.1.4.55上進行同樣的操作(配置文件只需節點名與ip不同即可)

  The Elasticsearch cluster is now up. Some handy commands:

Search all data:      curl http://10.1.4.54:9200/_search?pretty
Cluster health:       curl -XGET http://10.1.4.54:9200/_cluster/health?pretty
Delete all data:      curl -X DELETE 'http://10.1.4.54:9200/_all'
Delete a given index: curl -X DELETE 'http://10.1.4.54:9200/<index-name>'
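
  To confirm that both nodes joined the cluster (both IPs should be listed):

curl http://10.1.4.54:9200/_cat/nodes?v
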
  • logstash

  Extract the package:

tar -xvf logstash-6.3.2.tar

 

  In the config directory, create a configuration file, stash.conf. It tells Logstash, at startup, where the collected logs come from, how they are processed, and where they are sent, corresponding to the input, filter, and output sections. Below is an example configuration found online, with my own explanatory comments added; my actual configuration follows further down.

input {
    beats {
       port => 5044  # listen on this port for events shipped from Beats
   }
 }
  
 filter {
 if [type] == "app_test" { # test application logs
    grok {
      match => { "message" => "((?<logdate>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:?%{MINUTE}(?::?%{SECOND})) %{WORD:level} (?<srcCode>\[(?:[a-zA-Z0-9-])+\]\[(?:[a-zA-Z0-9-\.])+:%{NUMBER}\]) - )?(?<srcLog>.+)"  } # grok pattern to match
     }
     mutate {remove_field => [ "@timestamp", "@version", "message" ]  } # field changes: drop these fields
 } else if [type] == "mysql_test" { # mysql logs
   grok {
     match => { "message" => "((?<logdate>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:?%{MINUTE}(?::?%{SECOND})) %{WORD:level} (?<srcCode>\[(?:[a-zA-Z0-9-])+\]\[(?:[a-zA-Z0-9-\.])+:%{NUMBER}\]) - )?(?<srcLog>.+)" } # the parsing rule is written directly in the configuration file
    }
    mutate {remove_field => [ "@version", "message" ] }
 } else if [type] == "nginx_access_test" {
    grok {
      match => { "message" => "MAINNGINXLOG %{COMBINEDAPACHELOG} %{QS:x_forwarded_for}" } 
    # the MAINNGINXLOG pattern is defined under $logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.0.0/patterns/
   }
 }
 date { match => ["logdate", "yyyy-MM-dd HH:mm:ss.SSS"] }
 if "_grokparsefailure" in [tags] {
 } else {
   mutate { remove_field => [ "logdate", "@version", "message" ] }
 }
 if !([level]) {
   mutate { add_field => { "level" => "other" } }
 }
}

output {
  if "_grokparsefailure" in [tags] {  # when the filter fails to parse an event, write it to this file
    file { path => "/var/log/logstash/grokparsefailure-%{type}-%{+YYYY.MM.dd}.log" }
  }
  elasticsearch {  # Elasticsearch destination
    hosts => ["10.1.4.54:9200"]
    index => "test_%{type}-%{+YYYY.MM.dd}"
    document_type => "%{type}"
    template_overwrite => true
  }
}

  Three sections: input, filter, and output. The example filter above covers nginx, mysql, and test application logs; my own test only handles the nginx error log, configured as follows:

input {
    beats {
       port => 5044
   }
 }
filter {
 if [type] == "nginx-error" { 
        grok {
        match => [
            "message", "(?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}(%{NUMBER:pid:int}#%{NUMBER}:\s{1,}\*%{NUMBER}|\*%{NUMBER}) %{DATA:err_message}(?:,\s{1,}client:\s{1,}(?<client_ip>%{IP}|%{HOSTNAME}))(?:,\s{1,}server:\s{1,}%{IPORHOST:server})(?:, request: %{QS:request})?(?:, host: %{QS:client_ip})?(?:, referrer: \"%{URI:referrer})?",
            "message", "(?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}%{GREEDYDATA:err_message}"]
        }
        date{
            match=>["time","yyyy/MM/dd HH:mm:ss"]
            target=>"logdate"
        }
        ruby{
            code => "event.set('logdateunix',event.get('logdate').to_i)"
        }
    }
}
output{
   elasticsearch{
        hosts => ["10.1.4.54:9200"]
        index => "logstash-nginx-%{+YYYY.MM.dd}"
    }
}

 

 

  Start Logstash:

 nohup ./bin/logstash -f config/stash.conf &
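
  If the pipeline fails to load, the configuration can be syntax-checked first (a sketch; the --config.test_and_exit flag is available in Logstash 5.x/6.x):

./bin/logstash -f config/stash.conf --config.test_and_exit
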
  • Kibana

  Extract the package:

tar -xvf kibana-6.3.2-linux-x86_64.tar 

  Since my Kibana and the Elasticsearch master node are on the same machine, for convenience everything is kept at the defaults, including the server port, host, and the elasticsearch.url setting, and Kibana is started directly:

# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
#elasticsearch.url: "http://localhost:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"

 

  Start it:

nohup ./bin/kibana &

  Er, it fails to start:

{"type":"log","@timestamp":"2018-08-15T08:33:23Z","tags":["warning","elasticsearch","admin"],"pid":28642,"message":"No living connections"}
{"type":"log","@timestamp":"2018-08-15T08:33:25Z","tags":["warning","elasticsearch","admin"],"pid":28642,"message":"Unable to revive connection: http://localhost:9200/"}
{"type":"log","@timestamp":"2018-08-15T08:33:25Z","tags":["warning","elasticsearch","admin"],"pid":28642,"message":"No living connections"}
{"type":"log","@timestamp":"2018-08-15T08:33:27Z","tags":["warning","elasticsearch","data"],"pid":28642,"message":"Unable to revive connection: http://localhost:9200/"}

 

  It cannot reach that address; resolving localhost is probably the problem. Let's edit the configuration again and set the server IP and the Elasticsearch address explicitly:

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "10.1.4.54"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://10.1.4.54:9200"

 

  Start it again and verify at: http://10.1.4.54:5601
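
  From the command line, Kibana's built-in status endpoint can also be queried to confirm it is up (a sketch):

curl http://10.1.4.54:5601/api/status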

 

  • filebeat

  Extract it:

[elk@localhost ~]$ tar -xvf filebeat-6.3.2-linux-x86_64.tar 

  Edit filebeat.yml. Like Logstash, Filebeat can apply some simple configuration and filtering to the logs, as the comments below explain:

#filebeat#
filebeat.prospectors:
#nginx
- input_type: log
  enable: yes
  #tags: nginx-error
  paths:
    - /home/elk/filebeat-6.3.2-linux-x86_64/nginx/error/error*.log  # paths lists the log files to monitor
  document_type: nginx-error  # sets the document type field in the Elasticsearch output; it can also be used to categorize logs. Default: log
  exclude_lines: ["^$"] # drop input lines matching any regex in the list (blank lines here)
  fields:   # add extra information to each event (e.g. "level: debug") for later grouping and statistics; by default the new fields are nested under a fields sub-object, e.g. fields.level
    type: "nginx-error"
  fields_under_root: true   # if true, the added fields become top-level fields instead of living under fields; custom fields override Filebeat's default fields of the same name

output.logstash:
  hosts: ["10.1.4.54:5044"]
  #index: filebeat  # write to a specific index; default is "filebeat"; patterns like [filebeat-]YYYY.MM.DD can be used
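
  Before starting Filebeat, the configuration and the connection to Logstash can be checked (a sketch; the test subcommand exists in Filebeat 6.x):

./filebeat test config -c filebeat.yml   # validate filebeat.yml
./filebeat test output -c filebeat.yml   # check connectivity to 10.1.4.54:5044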

 

   Start it:

nohup ./filebeat &

 

 

 

   Now for Kibana: open the web UI and add an Index Pattern under Management.

  Enter an index pattern whose name matches (or contains) the index configured in Logstash, and you have a working sample. Then put the prepared nginx error logs into the corresponding directory, i.e. the one Filebeat is watching:

[elk@test error]$ ls
error11.log  error13.log  error1.log  error2.log  error5.log  error6.log  error7.log  error.log
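
  At this point the index created by Logstash should be visible in Elasticsearch; a quick check from the command line (a sketch, using the index name pattern from the Logstash configuration):

curl 'http://10.1.4.54:9200/_cat/indices/logstash-nginx-*?v'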

  Awkwardly, my Kibana Discover page shows no logs at all; it stays blank. Checking the Logstash and Elasticsearch logs reveals an error:

[2018-08-24T13:55:34,727][DEBUG][o.e.a.b.TransportShardBulkAction] [logstash-nginx-2018.08.24][4] failed to execute bulk item (index) BulkShardRequest [[logstash-nginx-2018.08.24][4]] containing [17] requests
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [host]
        at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:302) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:481) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:496) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:390) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:380) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:95) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:69) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:261) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:708) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:685) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:666) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.lambda$executeIndexRequestOnPrimary$2(TransportShardBulkAction.java:553) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.executeOnPrimaryWhileHandlingMappingUpdates(TransportShardBulkAction.java:572) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:551) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequest(TransportShardBulkAction.java:142) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:248) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnPrimary(TransportShardBulkAction.java:125) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:112) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:74) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:1018) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:996) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:103) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:357) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:297) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:959) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:956) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:270) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:237) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationPermit(IndexShard.java:2221) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:968) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.access$500(TransportReplicationAction.java:98) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:318) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:293) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:280) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:259) [x-pack-security-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:317) [x-pack-security-6.3.2.jar:6.3.2]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:664) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:725) [elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.3.2.jar:6.3.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_102]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_102]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_102]
Caused by: java.lang.IllegalStateException: Can't get text on a START_OBJECT at 1:205
        at org.elasticsearch.common.xcontent.json.JsonXContentParser.text(JsonXContentParser.java:86) ~[elasticsearch-x-content-6.3.2.jar:6.3.2]
        at org.elasticsearch.common.xcontent.support.AbstractXContentParser.textOrNull(AbstractXContentParser.java:269) ~[elasticsearch-x-content-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.mapper.TextFieldMapper.parseCreateField(TextFieldMapper.java:564) ~[elasticsearch-6.3.2.jar:6.3.2]
        at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:297) ~[elasticsearch-6.3.2.jar:6.3.2]
        ... 44 more
[2018-08-24T13:55:34,718][DEBUG][o.e.a.b.TransportShardBulkAction] [logstash-nginx-2018.08.24][4] failed to execute bulk item (index) BulkShardRequest [[logstash-nginx-2018.08.24][4]] containing [34] requests
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [host]

  Clearly there is a host field that Elasticsearch cannot parse. This problem simply would not go away; after a lot of fiddling with the grok patterns and the logs, the log finally printed the document it was trying to index:

[2018-08-24T14:42:54,642][DEBUG][o.e.a.b.TransportShardBulkAction] [logstash-nginx-2018.08.24][3] failed to execute bulk item (index) BulkShardRequest [[logstash-nginx-2018.08.24][3]] containing [index {[logstash-nginx-2018.08.24][doc][2XeramUBZh4nWTGM5PIx], source[{"@version":"1","message":"2018/08/20 12:05:35 [error] 14965#0: *8117 connect() failed (111: Connection refused) while connecting to upstream, client: 111.207.251.32, server: localhost, request: \"POST /dc/v1/token/updateToken HTTP/1.1\", upstream: \"http://10.1.0.170:7077/dc/v1/token/updateToken\"","err_message":"14965#0: *8117 connect() failed (111: Connection refused) while connecting to upstream, client: 111.207.251.32, server: localhost, request: \"POST /dc/v1/token/updateToken HTTP/1.1\", upstream: \"http://10.1.0.170:7077/dc/v1/token/updateToken\"","@timestamp":"2018-08-24T06:42:52.521Z","offset":0,"logdate":"2018-08-20T04:05:35.000Z","logdateunix":1534737935,"type":"nginx-error","err_severity":"error","beat":{"hostname":"test","version":"6.3.2","name":"test"},"source":"/home/elk/filebeat-6.3.2-linux-x86_64/nginx/error/error119.log","tags":["beats_input_codec_plain_applied"],"time":"2018/08/20 12:05:35","host":{"name":"test"}}]}]
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [host]
        at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:302) ~[elasticsearch-6.3.2.jar:6.3.2]

  The content itself is fine, but at the end there is a host field whose value is an object with name test, and test happens to be the hostname, so the existing parsing no longer fits. The root cause is explained in the Elasticsearch community: https://elasticsearch.cn/question/4671 — starting with version 6.3, Beats sends host as an object. Modify the Logstash configuration as follows and restart:

input {
    beats {
       port => 5044
   }
 }
filter {
 if [type] == "nginx-error" { 
        grok {
        match => [
             "message", "(?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}(%{NUMBER:pid:int}#%{NUMBER}:\s{1,}\*%{NUMBER}|\*%{NUMBER}) %{DATA:err_message}(?:,\s{1,}client:\s{1,}(?<client_ip>%{IP}|%{HOSTNAME}))(?:,\s{1,}server:\s{1,}%{IPORHOST:server})(?:, request: %{QS:request})?(?:, host: %{QS:client_ip})?(?:, referrer: \"%{URI:referrer})?",
            "message", "(?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}%{GREEDYDATA:err_message}"]
        }
        mutate { 
             rename => { "[host][name]" => "host" } 
        }
        date{
            match=>["time","yyyy/MM/dd HH:mm:ss"]
            target=>"logdate"
        }
        ruby{
            code => "event.set('logdateunix',event.get('logdate').to_i)"
        }
    }
}
output{
   elasticsearch{
        hosts => ["10.1.4.54:9200"]
        index => "logstash-nginx-%{+YYYY.MM.dd}"
    }
}
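
  To double-check that host is now indexed as a plain string rather than an object, the field mapping can be inspected (a sketch):

curl 'http://10.1.4.54:9200/logstash-nginx-*/_mapping/field/host?pretty'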

  Finally, the logs show up as expected:

  

 

4. Variations

  Switch the Logstash input from Beats to reading files directly, this time on 10.1.4.55: copy the Logstash package to 10.1.4.55, create the configuration file config/nginx.conf as below, and start it (start command shown after the configuration).

input {
    file {
    type => "nginx-error" 
    path => [ "/home/elk/filebeat-6.3.2-linux-x86_64/nginx/error/error*.log" ]
    tags => [ "nginx","error"]
    start_position => beginning
}
}
filter {
     if [type] == "nginx-error" { 
        grok {
        match => [
            "message", "(?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}(%{NUMBER:pid:int}#%{NUMBER}:\s{1,}\*%{NUMBER}|\*%{NUMBER}) %{DATA:err_message}(?:,\s{1,}client:\s{1,}(?<client_ip>%{IP}|%{HOSTNAME}))(?:,\s{1,}server:\s{1,}%{IPORHOST:server})(?:, request: %{QS:request})?(?:, host: %{QS:client_ip})?(?:, referrer: \"%{URI:referrer})?",
            "message", "(?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}%{GREEDYDATA:err_message}"]
        }
        date{
            match=>["time","yyyy/MM/dd HH:mm:ss"]
            target=>"logdate"
        }
        ruby{
            code => "event.set('logdateunix',event.get('logdate').to_i)"
        }
    }
}
output{
   elasticsearch{
        hosts => ["10.1.4.54:9200"]
        index => "logstash-nginx-%{+YYYY.MM.dd}"
    }
}
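
  Start this pipeline against the new configuration file, the same way as before:

nohup ./bin/logstash -f config/nginx.conf &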

 

  Add another error log, error11.log, to the /home/elk/filebeat-6.3.2-linux-x86_64/nginx/error/ directory, and the Kibana page changes accordingly:

  

  This shows the logs are being ingested straight from the directory.

  Filebeat can also ship logs to Elasticsearch directly; modify its configuration and start it:

#filebeat#
filebeat.prospectors:
#nginx
- input_type: log
  enable: yes
  #tags: nginx-error
  paths:
    - /home/elk/filebeat-6.3.2-linux-x86_64/nginx/error/error*.log
  document_type: nginx-error
  exclude_lines: ["^$"]
  fields:
    type: "nginx-error"
  fields_under_root: true
output.elasticsearch:
  hosts: ["10.1.4.54:9200"]
  #index: filebeat
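
  With output.elasticsearch and no explicit index, Filebeat writes to its default daily indices (typically filebeat-6.3.2-YYYY.MM.DD); a quick check (a sketch):

curl 'http://10.1.4.54:9200/_cat/indices/filebeat-*?v'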

  Check the Kibana page:

  

 5. Extension: X-Pack

 

 

To be continued...

 

