docker: Setting Up the ELK Open-Source Log Analysis Stack


ELK is a log analysis stack made up of three components:

Elasticsearch: a JSON-based analytics and search engine. Elasticsearch is an open-source distributed search engine; its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful API, multiple data sources, and automatic search load balancing.

Logstash: a dynamic data collection pipeline. Logstash is a fully open-source tool that collects and parses your logs and stores them for later use.

Kibana: the visualization layer, presenting the data collected in Elasticsearch as dashboards. Kibana is a free, open-source tool that provides a friendly web interface for analyzing the logs Logstash ships into Elasticsearch, helping you aggregate, analyze, and search important log data.

 

I. Using the integrated Docker image
Install Docker.
The integrated ELK image is named sebp/elk.


1. Install and start Docker

yum install docker

service docker start
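Once installed, a quick check confirms both the client and the daemon are usable. A minimal sketch (the exact service command depends on your distribution; the state names are made up for illustration):

```shell
# Sanity check after installation (sketch; output depends on the host).
if command -v docker >/dev/null 2>&1; then
  DOCKER_STATE="installed"
  # The client binary may exist while the daemon is stopped; probe it:
  docker version >/dev/null 2>&1 || DOCKER_STATE="daemon-down"
else
  DOCKER_STATE="missing"
fi
echo "docker state: $DOCKER_STATE"
```

If the state is daemon-down, `service docker start` (or `systemctl start docker` on systemd hosts) should bring it up.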

 

2. Pull sebp/elk

docker pull sebp/elk

If the pull fails with the error:
unauthorized: authentication required
this is a network issue reaching the overseas registry.

 

 

Fix 1: use the NetEase registry mirror
vim /etc/docker/daemon.json — this JSON file may not exist yet; that's fine, just create and edit it.
Paste in the following, save, and restart Docker:

{
"registry-mirrors": [ "http://hub-mirror.c.163.com"]
}

# service docker restart

This had no effect; the image still would not pull.


Fix 2: use the Aliyun registry mirror

Register an Aliyun account, log in,
and copy your personal accelerator address from the console.

Then:
Install/upgrade your Docker client
  • Docker client 1.10.0 or later is recommended; see the docker-ce documentation.
How to configure the registry mirror
  • For Docker clients newer than 1.10.0:

    You can enable the accelerator by editing the daemon config file /etc/docker/daemon.json:

    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://34sii7qf.mirror.aliyuncs.com"]
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker

Pull the image:
# docker pull sebp/elk
……

617ab16bcfa1: Pull complete
Digest: sha256:b6b8dd20b1a9aaf47b11fe9c66395a462f5e65c50dcff725e9bf83576d4ed241
Status: Downloaded newer image for docker.io/sebp/elk:latest

List images:
[root@bogon docker]# docker images
REPOSITORY            TAG       IMAGE ID       CREATED       SIZE
docker.io/sebp/elk    latest    4b52312ebe8d   12 days ago   1.15 GB

 

3. Start a container from the image

# docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk sebp/elk

 

This failed with the error:

Couln't start Elasticsearch. Exiting.
Elasticsearch log follows below.

Cause:

If the waiting for Elasticsearch to be up (xx/30) counter goes up to 30 and the container exits with Couln't start Elasticsearch. Exiting. (sic) and Elasticsearch's logs are dumped, read the recommendations in the logs and apply them.

In particular, the message max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144] means that the host's limit on mmap counts must be raised to at least 262144.

Docker needs at least 3 GB of memory allocated;

Elasticsearch alone needs at least 2 GB of memory;

vm.max_map_count must be at least 262144, so adjust the vm.max_map_count kernel parameter.

 

Fix:

      # vi /etc/sysctl.conf

      Add this line at the end:
      vm.max_map_count=262144

      Apply and verify:
        # sysctl -p

         vm.max_map_count = 262144
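The same check can be scripted. A sketch that reads the current value and reports whether it meets the 262144 minimum (note that sysctl -w applies the setting immediately but does not survive a reboot; the /etc/sysctl.conf line above makes it persistent):

```shell
# Check whether the host already satisfies Elasticsearch's mmap requirement.
REQUIRED=262144
CURRENT=$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)
if [ "$CURRENT" -ge "$REQUIRED" ]; then
  echo "vm.max_map_count=$CURRENT: OK"
else
  echo "vm.max_map_count=$CURRENT is too low; as root run: sysctl -w vm.max_map_count=$REQUIRED"
fi
```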

Start the container again:

# docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk sebp/elk


/usr/bin/docker-current: Error response from daemon: Conflict. The container name "/elk" is already in use by container ece053f704db663e03355a679e25ec732bd08668dae1a87a89567e0e7c950749. You have to remove (or rename) that container to be able to reuse that name..
See '/usr/bin/docker-current run --help'.
[root@bogon ~]# docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk123 sebp/elk
/usr/bin/docker-current: Error response from daemon: Conflict. The container name "/elk123" is already in use by container caf492e3acff7506c5e70b896bd82a33d757816ca3f55911a2aa9aef4cd74670. You have to remove (or rename) that container to be able to reuse that name..
See '/usr/bin/docker-current run --help'.
[root@bogon ~]# docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk1 sebp/elk
* Starting periodic command scheduler cron [ OK ]
* Starting Elasticsearch Server [ OK ]
waiting for Elasticsearch to be up (1/30)
waiting for Elasticsearch to be up (2/30)
waiting for Elasticsearch to be up (3/30)
waiting for Elasticsearch to be up (4/30)
waiting for Elasticsearch to be up (5/30)
waiting for Elasticsearch to be up (6/30)
waiting for Elasticsearch to be up (7/30)
waiting for Elasticsearch to be up (8/30)
waiting for Elasticsearch to be up (9/30)
waiting for Elasticsearch to be up (10/30)
Waiting for Elasticsearch cluster to respond (1/30)
logstash started.
* Starting Kibana5 [ OK ]
==> /var/log/elasticsearch/elasticsearch.log <==
[2018-04-03T06:58:14,192][INFO ][o.e.d.DiscoveryModule ] [JI2Uv6k] using discovery type [zen]
[2018-04-03T06:58:15,132][INFO ][o.e.n.Node ] initialized
[2018-04-03T06:58:15,132][INFO ][o.e.n.Node ] [JI2Uv6k] starting ...
[2018-04-03T06:58:15,358][INFO ][o.e.t.TransportService ] [JI2Uv6k] publish_address {172.17.0.2:9300}, bound_addresses {[::]:9300}
[2018-04-03T06:58:15,378][INFO ][o.e.b.BootstrapChecks ] [JI2Uv6k] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-04-03T06:58:18,517][INFO ][o.e.c.s.MasterService ] [JI2Uv6k] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {JI2Uv6k}{JI2Uv6kAQ1ymPZoVhL5AAg}{Rr9Wa_H-RKGaBC4IDpjAMA}{172.17.0.2}{172.17.0.2:9300}
[2018-04-03T06:58:18,529][INFO ][o.e.c.s.ClusterApplierService] [JI2Uv6k] new_master {JI2Uv6k}{JI2Uv6kAQ1ymPZoVhL5AAg}{Rr9Wa_H-RKGaBC4IDpjAMA}{172.17.0.2}{172.17.0.2:9300}, reason: apply cluster state (from master [master {JI2Uv6k}{JI2Uv6kAQ1ymPZoVhL5AAg}{Rr9Wa_H-RKGaBC4IDpjAMA}{172.17.0.2}{172.17.0.2:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-04-03T06:58:18,563][INFO ][o.e.h.n.Netty4HttpServerTransport] [JI2Uv6k] publish_address {172.17.0.2:9200}, bound_addresses {[::]:9200}
[2018-04-03T06:58:18,563][INFO ][o.e.n.Node ] [JI2Uv6k] started
[2018-04-03T06:58:18,577][INFO ][o.e.g.GatewayService ] [JI2Uv6k] recovered [0] indices into cluster_state

==> /var/log/logstash/logstash-plain.log <==

==> /var/log/kibana/kibana5.log <==
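The name conflicts in the transcript above happen because the exited container from the failed run still owns the name. Rather than inventing new names (elk123, elk1, ...), you can remove the old container and reuse the original name. A sketch, assuming a running Docker daemon (guarded so it is a no-op on machines without Docker):

```shell
# Free the container name held by the failed run, then start again.
NAME="elk"
if command -v docker >/dev/null 2>&1; then
  docker rm -f "$NAME" 2>/dev/null || true   # frees the name; -f also stops it if still running
  docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name "$NAME" sebp/elk || true
else
  echo "docker not available here; run these commands on the Docker host"
fi
```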

 

4. Verify and test

Open a browser and go to http://<your-host>:5601; if you see the Kibana landing page, the installation succeeded.

How the ports map to services:

5601 (Kibana web interface)                front-end UI

9200 (Elasticsearch JSON interface)        search

5044 (Logstash Beats interface, receives logs from Beats such as Filebeat)    log shipping
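As a memory aid, the port-to-role mapping above can be expressed as a tiny shell helper (purely illustrative; the function name is made up):

```shell
# Map an ELK port to the service listening on it.
elk_port_role() {
  case "$1" in
    5601) echo "Kibana web interface" ;;
    9200) echo "Elasticsearch JSON/REST interface" ;;
    5044) echo "Logstash Beats input" ;;
    *)    echo "unknown" ;;
  esac
}
elk_port_role 5044   # prints: Logstash Beats input
```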

 

Test:
Send one message through the pipeline.

1. Enter the container with: docker exec -it <container-name> /bin/bash

2. Run: /opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'

   If you see the error: Logstash could not be started because there is already another instance using the configured data directory.
   If you wish to run multiple instances, you must change the "path.data" setting.

   Fix: run service logstash stop, then run the command again.

   On the retry, once you see Successfully started Logstash API endpoint {:port=>9600}, it is ready.

3. Type a test message: this is a test

4. Open a browser and go to http://<your-host>:9200/_search?pretty; as shown in the screenshot, you will see the log entry you just typed.
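Step 4 can also be done from the command line. A sketch, assuming the stack is reachable at localhost (set ES_HOST to your Docker host; the curl call is commented out so the snippet is safe to run anywhere):

```shell
ES_HOST="${ES_HOST:-localhost}"            # hypothetical variable; replace with your host
URL="http://${ES_HOST}:9200/_search?pretty"
echo "$URL"
# curl -s "$URL"   # returns JSON; the hits should include the "this is a test" message
```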

 

 

