II. ELK Stack Cluster Architecture Design


I. ELK Stack Introduction and Getting-Started Practice

II. Elasticsearch Cluster Architecture Diagram

(figure: Elasticsearch cluster architecture diagram)

 

Server configuration: CentOS 6.6 x86_64, CPU: 1 core, MEM: 2 GB (this is a lab setup, so the specs are fairly low)

Note: three servers are used here for the elasticsearch cluster; adjust the number to fit your own situation.

III. Installing and Configuring nginx and logstash

Note: yum is used for the installation here; build from source if you need a newer version.

Work on 10.0.18.144; 10.0.18.145 is configured exactly the same way as 144.

1. Install nginx

Configure the yum repository and install nginx

#vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
Install:
#yum install nginx -y
Check the version:
#rpm -qa nginx
nginx-1.10.1-1.el6.ngx.x86_64

Edit the nginx configuration file so it looks like this:

user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log  notice;       #default is warn
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for $request_length $msec $connection_requests $request_time';
    ##added $request_length $msec $connection_requests $request_time to the default format
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;
        access_log   /var/log/nginx/access.log  main;

        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
}
Edit the nginx default page
#vi /usr/share/nginx/html/index.html
<body>
<h1>Welcome to nginx!</h1>
change to
<body>
<h1>Welcome to nginx! 144</h1>
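Before starting nginx, it does not hurt to validate the edited configuration. This is an optional sanity check that is not part of the original steps; it assumes the default configuration path:

#nginx -t        #reports whether /etc/nginx/nginx.conf parses cleanly; fix any reported error before starting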

Start nginx and test access:

#service nginx start
#chkconfig --add nginx
#chkconfig nginx on
Check that it is listening:
#netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1023/sshd
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      1101/master
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      1353/nginx
tcp        0      0 :::22                       :::*                        LISTEN      1023/sshd
tcp        0      0 ::1:25                      :::*                        LISTEN      1101/master

Test access in a browser, as shown below:

(figure: browser showing the "Welcome to nginx! 144" default page)

2. Install and configure the Java environment

Installing directly from the rpm package is the most convenient way:
#rpm -ivh jdk-8u92-linux-x64.rpm
Preparing...                 ########################################### [100%]
    1:jdk1.8.0_92             ########################################### [100%]
Unpacking JAR files...
         tools.jar...
         plugin.jar...
         javaws.jar...
         deploy.jar...
         rt.jar...
         jsse.jar...
         charsets.jar...
         localedata.jar...
#java -version
java version "1.8.0_92"
Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)

3. Install and configure logstash

Configure the logstash yum repository, as follows:

#vim /etc/yum.repos.d/logstash.repo
[logstash-2.3]
name=Logstash repository for 2.3.x packages
baseurl=https://packages.elastic.co/logstash/2.3/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install logstash:
#yum install logstash -y
Check the version:
#rpm -qa logstash
logstash-2.3.4-1.noarch

Create the logstash configuration file

#cd /etc/logstash/conf.d
#vim logstash.conf
input {
      file {
           path => [ "/var/log/nginx/access.log" ]
           type => "nginx_log"
           start_position => "beginning"
         }
}
output {
      stdout {
      codec => rubydebug
       }
}
Check the configuration for syntax errors:
#/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --configtest
Configuration OK     #syntax is OK

Start it and watch the nginx log entries being collected:

#only part of the output is shown
#/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf
Settings: Default pipeline workers: 1
Pipeline main started
{
        "message" => "10.0.90.8 - - [26/Aug/2016:15:30:18 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E)\" \"-\" 415 1472196618.085 1 0.000",
       "@version" => "1",
     "@timestamp" => "2016-08-26T07:30:32.699Z",
           "path" => "/var/log/nginx/access.log",
           "host" => "0.0.0.0",
           "type" => "nginx_log"
}
{
        "message" => "10.0.90.8 - - [26/Aug/2016:15:30:18 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E)\" \"-\" 415 1472196618.374 2 0.000",
       "@version" => "1",
     "@timestamp" => "2016-08-26T07:30:32.848Z",
           "path" => "/var/log/nginx/access.log",
           "host" => "0.0.0.0",
           "type" => "nginx_log"
}
………………
PS: other logstash versions reportedly default to 4 pipeline workers, but the 2.3.4 build installed here defaults to 1.
That is because the default equals the server's CPU core count; the servers here have a single core, so the default is 1.
Run /opt/logstash/bin/logstash -h to see the available options.

Modify the logstash configuration to send the log data to redis

#cat /etc/logstash/conf.d/logstash.conf
input {
      file  {
           path => [ "/var/log/nginx/access.log" ]
           type  =>  "nginx_log"
           start_position =>  "beginning" 
         }
}
output {
      redis {
             host =>  "10.0.18.146"
             key =>  'logstash-redis'
             data_type =>  'list'
       }
}

Check the syntax and start the service

#/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --configtest
Configuration OK
#service logstash start
logstash started.
Check the running process:
#ps -ef | grep logstash
logstash  2029     1 72 15:37 pts/0    00:00:18 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -Xmx1g -Xss2048k -Djffi.boot.library.path=/opt/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -XX:HeapDumpPath=/opt/logstash/heapdump.hprof -Xbootclasspath/a:/opt/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/opt/logstash/vendor/jruby -Djruby.lib=/opt/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /opt/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log
root      2076  1145  0 15:37 pts/0    00:00:00 grep logstash

IV. Install and Configure redis

Download and install redis

#yum install wget gcc gcc-c++ -y   #skip anything that is already installed
#wget http://download.redis.io/releases/redis-3.0.7.tar.gz
#tar xf redis-3.0.7.tar.gz
#cd redis-3.0.7
#make
After make finishes without errors, create the directories:
#mkdir -p /usr/local/redis/{conf,bin}
#cp ./*.conf /usr/local/redis/conf/
#cp runtest* /usr/local/redis/
#cd utils/
#cp mkrelease.sh /usr/local/redis/bin/
#cd ../src
#cp redis-benchmark redis-check-aof redis-check-dump redis-cli redis-sentinel redis-server redis-trib.rb /usr/local/redis/bin/
Create the redis data and log directories:
#mkdir -pv /data/redis/db
#mkdir -pv /data/log/redis

Modify the redis configuration file

#cd /usr/local/redis/conf
#vi redis.conf
change dir ./ to dir /data/redis/db/
save and exit
Start redis:
#nohup /usr/local/redis/bin/redis-server /usr/local/redis/conf/redis.conf &
Check the redis process:
#ps -ef | grep redis
root      4425  1149  0 16:21 pts/0    00:00:00 /usr/local/redis/bin/redis-server *:6379
root      4435  1149  0 16:22 pts/0    00:00:00 grep redis
#netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1402/sshd
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      1103/master
tcp        0      0 0.0.0.0:6379                0.0.0.0:*                   LISTEN      4425/redis-server
tcp        0      0 :::22                       :::*                        LISTEN      1402/sshd
tcp        0      0 ::1:25                      :::*                        LISTEN      1103/master
tcp        0      0 :::6379                     :::*                        LISTEN      4425/redis-server *
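With redis running and the logstash shipper on 144/145 started, you can confirm from the command line that events are actually being queued. This is a quick check that is not in the original write-up; the key name is the one configured in the shipper above:

#/usr/local/redis/bin/redis-cli -h 10.0.18.146 ping                  #should reply PONG
#/usr/local/redis/bin/redis-cli -h 10.0.18.146 llen logstash-redis   #length of the queued list; it drains back toward 0 once the logstash server starts consuming it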

V. Install and Configure the logstash Server

1. Install the JDK

#rpm -ivh jdk-8u92-linux-x64.rpm 
Preparing...                 ########################################### [100%]
    1:jdk1.8.0_92             ########################################### [100%]
Unpacking JAR files...
         tools.jar...
         plugin.jar...
         javaws.jar...
         deploy.jar...
         rt.jar...
         jsse.jar...
         charsets.jar...
         localedata.jar...

2. Install logstash

Configure the yum repository:
#vim /etc/yum.repos.d/logstash.repo
[logstash-2.3]
name=Logstash repository for 2.3.x packages
baseurl=https://packages.elastic.co/logstash/2.3/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install logstash:
#yum install logstash -y

Configure the logstash server

The configuration file is as follows:
#cd /etc/logstash/conf.d
#vim logstash_server.conf
input {
     redis {
         port => "6379"
         host => "10.0.18.146"
         data_type => "list"
         key => "logstash-redis"
         type => "redis-input"
    }
}
output {
     stdout {
     codec => rubydebug
     }
}
Check the syntax:
#/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash_server.conf --configtest
Configuration OK

Once the syntax check passes, run it and watch the collected nginx log entries come through, as shown below:

#/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash_server.conf 
Settings: Default pipeline workers: 1
Pipeline main started
 
{
        "message"  =>  "10.0.90.8 - - [26/Aug/2016:15:42:01 +0800] \"GET /favicon.ico HTTP/1.1\" 404 571 \"-\" \"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36\" \"-\" 263 1472197321.350 1 0.000" ,
       "@version"  =>  "1" ,
     "@timestamp"  =>  "2016-08-26T08:45:25.214Z" ,
           "path"  =>  "/var/log/nginx/access.log" ,
           "host"  =>  "0.0.0.0" ,
           "type"  =>  "nginx_log"
}
{
        "message"  =>  "10.0.90.8 - - [26/Aug/2016:16:40:53 +0800] \"GET / HTTP/1.1\" 200 616 \"-\" \"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36\" \"-\" 374 1472200853.324 1 0.000" ,
       "@version"  =>  "1" ,
     "@timestamp"  =>  "2016-08-26T08:45:25.331Z" ,
           "path"  =>  "/var/log/nginx/access.log" ,
           "host"  =>  "0.0.0.0" ,
           "type"  =>  "nginx_log"
}
{
        "message"  =>  "10.0.90.8 - - [26/Aug/2016:16:40:53 +0800] \"GET /favicon.ico HTTP/1.1\" 404 571 \"http://10.0.18.144/\" \"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36\" \"-\" 314 1472200853.486 2 0.000" ,
       "@version"  =>  "1" ,
     "@timestamp"  =>  "2016-08-26T08:45:25.332Z" ,
           "path"  =>  "/var/log/nginx/access.log" ,
           "host"  =>  "0.0.0.0" ,
           "type"  =>  "nginx_log"
}
{
        "message"  =>  "10.0.90.8 - - [26/Aug/2016:16:42:05 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36\" \"-\" 481 1472200925.259 1 0.000" ,
       "@version"  =>  "1" ,
     "@timestamp"  =>  "2016-08-26T08:45:25.332Z" ,
           "path"  =>  "/var/log/nginx/access.log" ,
           "host"  =>  "0.0.0.0" ,
           "type"  =>  "nginx_log"
}
{
        "message"  =>  "10.0.90.9 - - [26/Aug/2016:16:47:35 +0800] \"GET / HTTP/1.1\" 200 616 \"-\" \"Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko\" \"-\" 298 1472201255.813 1 0.000" ,
       "@version"  =>  "1" ,
     "@timestamp"  =>  "2016-08-26T08:47:36.623Z" ,
           "path"  =>  "/var/log/nginx/access.log" ,
           "host"  =>  "0.0.0.0" ,
           "type"  =>  "nginx_log"
}
{
        "message"  =>  "10.0.90.9 - - [26/Aug/2016:16:47:42 +0800] \"GET /favicon.ico HTTP/1.1\" 404 169 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64; Trident/7.0; rv:11.0) like Gecko\" \"-\" 220 1472201262.653 1 0.000" ,
       "@version"  =>  "1" ,
     "@timestamp"  =>  "2016-08-26T08:47:43.649Z" ,
           "path"  =>  "/var/log/nginx/access.log" ,
           "host"  =>  "0.0.0.0" ,
           "type"  =>  "nginx_log"
}
{
        "message"  =>  "10.0.90.8 - - [26/Aug/2016:16:48:09 +0800] \"GET / HTTP/1.1\" 200 616 \"-\" \"Mozilla/5.0 (Windows; U; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; BIDUBrowser 8.4)\" \"-\" 237 1472201289.662 1 0.000" ,
       "@version"  =>  "1" ,
     "@timestamp"  =>  "2016-08-26T08:48:09.684Z" ,
           "path"  =>  "/var/log/nginx/access.log" ,
           "host"  =>  "0.0.0.0" ,
           "type"  =>  "nginx_log"
}
…………………………

Note: no output appears immediately after running this command; wait a moment, or refresh the nginx pages on 144 and 145 in a browser (or hit 144/145 from another machine on the same subnet), and entries like the above will show up.
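As an aside (not part of the original setup), the events above carry the whole nginx line in the message field. If you want the fields broken out before they reach Elasticsearch, a grok filter can be added to the server-side configuration. Below is a minimal sketch that only parses the standard combined-log portion with the stock COMBINEDAPACHELOG pattern; the extra fields appended to the nginx log_format ($request_length, $msec, $connection_requests, $request_time) would need additional patterns:

filter {
  if [type] == "nginx_log" {
    grok {
      # parse the leading combined-log portion; the trailing custom fields remain in the raw message
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}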

3. Modify the logstash configuration to send the collected data to the ES cluster

#vim /etc/logstash/conf.d/logstash_server.conf
input {
     redis {
         port => "6379"
         host => "10.0.18.146"
         data_type => "list"
         key => "logstash-redis"
         type => "redis-input"
    }
}
output {
      elasticsearch {
          hosts => "10.0.18.149"                #one of the ES servers
          index => "nginx-log-%{+YYYY.MM.dd}"   #the index name, used again later
     }
}
Start logstash:
#service logstash start
logstash started.
Check the logstash server process:
#ps -ef | grep logstash
logstash  1740     1 24 17:24 pts/0    00:00:25 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -Xmx1g -Xss2048k -Djffi.boot.library.path=/opt/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -XX:HeapDumpPath=/opt/logstash/heapdump.hprof -Xbootclasspath/a:/opt/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/opt/logstash/vendor/jruby -Djruby.lib=/opt/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /opt/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log
root      1783  1147  0 17:25 pts/0    00:00:00 grep logstash
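Once Elasticsearch is up (next section) and some nginx traffic has been shipped, you can confirm from any host that documents are landing in the index. This is a sanity check of my own, not in the original steps; it assumes port 9200 on 10.0.18.149 is reachable:

#curl 'http://10.0.18.149:9200/nginx-log-*/_count?pretty'    #returns a JSON body whose "count" is the number of indexed events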

VI. Install and Configure Elasticsearch

Install the JDK and Elasticsearch on the three ES servers 10.0.18.148, 10.0.18.149 and 10.0.18.150. The JDK installation is the same as before, so it is not repeated here.

1. Add an elasticsearch user, because Elasticsearch must be started under a non-root account.

#adduser elasticsearch
#passwd elasticsearch   #set a password for the user
#su - elasticsearch
Download the Elasticsearch package:
$wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.3.4/elasticsearch-2.3.4.tar.gz
$tar xf elasticsearch-2.3.4.tar.gz
$cd elasticsearch-2.3.4

Append the following to the end of the elasticsearch configuration file:

#vim config/elasticsearch.yml
cluster.name: serverlog       #cluster name, can be customized
node.name: node-1             #node name, also customizable
path.data: /home/elasticsearch/elasticsearch-2.3.4/data        #data storage path
path.logs: /home/elasticsearch/elasticsearch-2.3.4/logs        #log storage path
network.host: 10.0.18.148     #this node's IP
http.port: 9200               #this node's HTTP port
discovery.zen.ping.unicast.hosts: [ "10.0.18.149","10.0.18.150" ]   #list of the other cluster node IPs
discovery.zen.minimum_master_nodes: 3                               #number of cluster nodes
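One caveat worth flagging (my note, not from the original text): the usual recommendation for discovery.zen.minimum_master_nodes is a quorum of master-eligible nodes, (N / 2) + 1, which for this three-node cluster is 2 rather than 3. Setting it to the full node count means the cluster cannot elect a master if even one node is down. A sketch of the adjusted line:

discovery.zen.minimum_master_nodes: 2    #quorum for 3 master-eligible nodes: (3 / 2) + 1 = 2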

Start the service

$cd elasticsearch-2.3.4
$./bin/elasticsearch -d
Check the process:
$ps -ef | grep elasticsearch
root      1550  1147  0 17:44 pts/0    00:00:00 su - elasticsearch
500       1592     1  4 17:56 pts/0    00:00:13 /usr/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/home/elasticsearch/elasticsearch-2.3.4 -cp /home/elasticsearch/elasticsearch-2.3.4/lib/elasticsearch-2.3.4.jar:/home/elasticsearch/elasticsearch-2.3.4/lib/* org.elasticsearch.bootstrap.Elasticsearch start -d
500       1649  1551  0 18:00 pts/0    00:00:00 grep elasticsearch
Check the ports:
$netstat -tunlp
(Not all processes could be identified, non-owned process info
  will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      -
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      -
tcp        0      0 ::ffff:10.0.18.148:9300     :::*                        LISTEN      1592/java
tcp        0      0 :::22                       :::*                        LISTEN      -
tcp        0      0 ::1:25                      :::*                        LISTEN      -
tcp        0      0 ::ffff:10.0.18.148:9200     :::*                        LISTEN      1592/java

Two ports are opened: 9200 is the HTTP REST API port, and 9300 is the transport port used for communication between cluster nodes, including master election.

After startup, check the logs on all three Elasticsearch nodes and you will see which node was "elected" master.

First node: 10.0.18.148

$tail -f logs/serverlog.log
…………………………
[2016-08-26 17:56:05,771][INFO ][env                      ] [node-1] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-08-26 17:56:05,774][WARN ][env                      ] [node-1] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-08-26 17:56:09,416][INFO ][node                     ] [node-1] initialized
[2016-08-26 17:56:09,416][INFO ][node                     ] [node-1] starting ...
[2016-08-26 17:56:09,594][INFO ][transport                ] [node-1] publish_address {10.0.18.148:9300}, bound_addresses {10.0.18.148:9300}
[2016-08-26 17:56:09,611][INFO ][discovery                ] [node-1] serverlog/py6UOr4rRCCuK3KjA-Aj-Q
[2016-08-26 17:56:39,622][WARN ][discovery                ] [node-1] waited for 30s and no initial state was set by the discovery
[2016-08-26 17:56:39,633][INFO ][http                     ] [node-1] publish_address {10.0.18.148:9200}, bound_addresses {10.0.18.148:9200}
[2016-08-26 17:56:39,633][INFO ][node                     ] [node-1] started
[2016-08-26 17:59:33,303][INFO ][cluster.service          ] [node-1] detected_master {node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300}, added {{node-3}{lRKjIPpFSd-_NVn7-0-JeA}{10.0.18.150}{10.0.18.150:9300},{node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300},}, reason: zen-disco-receive(from master [{node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300}])

You can see that node-2 (10.0.18.149) was automatically "elected" as the master node.

Second node: 10.0.18.149

$tail -f logs/serverlog.log
……………………
[2016-08-26 17:58:20,854][WARN ][bootstrap                ] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
[2016-08-26 17:58:21,480][INFO ][node                     ] [node-2] version[2.3.4], pid[1552], build[e455fd0/2016-06-30T11:24:31Z]
[2016-08-26 17:58:21,491][INFO ][node                     ] [node-2] initializing ...
[2016-08-26 17:58:22,537][INFO ][plugins                  ] [node-2] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-08-26 17:58:22,574][INFO ][env                      ] [node-2] using [1] data paths, mounts [[/ (/dev/mapper/vg_template-lv_root)]], net usable_space [14.9gb], net total_space [17.1gb], spins? [possibly], types [ext4]
[2016-08-26 17:58:22,575][INFO ][env                      ] [node-2] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-08-26 17:58:22,578][WARN ][env                      ] [node-2] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-08-26 17:58:26,437][INFO ][node                     ] [node-2] initialized
[2016-08-26 17:58:26,440][INFO ][node                     ] [node-2] starting ...
[2016-08-26 17:58:26,783][INFO ][transport                ] [node-2] publish_address {10.0.18.149:9300}, bound_addresses {10.0.18.149:9300}
[2016-08-26 17:58:26,815][INFO ][discovery                ] [node-2] serverlog/k0vpt0khTOG0Kmen8EepAg
[2016-08-26 17:58:56,838][WARN ][discovery                ] [node-2] waited for 30s and no initial state was set by the discovery
[2016-08-26 17:58:56,853][INFO ][http                     ] [node-2] publish_address {10.0.18.149:9200}, bound_addresses {10.0.18.149:9200}
[2016-08-26 17:58:56,854][INFO ][node                     ] [node-2] started
[2016-08-26 17:59:33,130][INFO ][cluster.service          ] [node-2] new_master {node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300}, added {{node-1}{py6UOr4rRCCuK3KjA-Aj-Q}{10.0.18.148}{10.0.18.148:9300},{node-3}{lRKjIPpFSd-_NVn7-0-JeA}{10.0.18.150}{10.0.18.150:9300},}, reason: zen-disco-join (elected_as_master, [2] joins received)
[2016-08-26 17:59:33,686][INFO ][gateway                  ] [node-2] recovered [0] indices into cluster_state

Here too node-2 (10.0.18.149) shows up as the "elected" master node.

Third node: 10.0.18.150

$tail -f logs/serverlog.log
…………………………
[2016-08-26 17:59:25,644][INFO ][node                     ] [node-3] initializing ...
[2016-08-26 17:59:26,652][INFO ][plugins                  ] [node-3] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-08-26 17:59:26,689][INFO ][env                      ] [node-3] using [1] data paths, mounts [[/ (/dev/mapper/vg_template-lv_root)]], net usable_space [14.9gb], net total_space [17.1gb], spins? [possibly], types [ext4]
[2016-08-26 17:59:26,689][INFO ][env                      ] [node-3] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-08-26 17:59:26,693][WARN ][env                      ] [node-3] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-08-26 17:59:30,398][INFO ][node                     ] [node-3] initialized
[2016-08-26 17:59:30,398][INFO ][node                     ] [node-3] starting ...
[2016-08-26 17:59:30,549][INFO ][transport                ] [node-3] publish_address {10.0.18.150:9300}, bound_addresses {10.0.18.150:9300}
[2016-08-26 17:59:30,564][INFO ][discovery                ] [node-3] serverlog/lRKjIPpFSd-_NVn7-0-JeA
[2016-08-26 17:59:33,924][INFO ][cluster.service          ] [node-3] detected_master {node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300}, added {{node-1}{py6UOr4rRCCuK3KjA-Aj-Q}{10.0.18.148}{10.0.18.148:9300},{node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300},}, reason: zen-disco-receive(from master [{node-2}{k0vpt0khTOG0Kmen8EepAg}{10.0.18.149}{10.0.18.149:9300}])
[2016-08-26 17:59:33,999][INFO ][http                     ] [node-3] publish_address {10.0.18.150:9200}, bound_addresses {10.0.18.150:9200}
[2016-08-26 17:59:34,000][INFO ][node                     ] [node-3] started

Again, node-2 (10.0.18.149) is shown as the automatically "elected" master node.

2. Checking other information

Check the cluster health:

#curl -XGET 'http://10.0.18.148:9200/_cluster/health?pretty'
{
  "cluster_name" : "serverlog",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

3. Check the nodes

#curl -XGET 'http://10.0.18.148:9200/_cat/nodes?v'
host        ip          heap.percent ram.percent load node.role master name
10.0.18.148 10.0.18.148            7          51 0.00 d         m      node-1
10.0.18.150 10.0.18.150            5          50 0.00 d         m      node-3
10.0.18.149 10.0.18.149            7          51 0.00 d         *      node-2

Note: * marks the current master node.

4. Check the index/shard information

#curl -XGET 'http://10.0.18.148:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size

No shard information is shown yet; the reason is explained later.

5. Install plugins on all three Elasticsearch nodes, as follows:

#su - elasticsearch
$cd elasticsearch-2.3.4
$./bin/plugin install license          #the license plugin
-> Installing license...
Trying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/license/2.3.4/license-2.3.4.zip ...
Downloading .......DONE
Verifying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/license/2.3.4/license-2.3.4.zip checksums if available ...
Downloading .DONE
Installed license into /home/elasticsearch/elasticsearch-2.3.4/plugins/license
$./bin/plugin install marvel-agent    #the marvel-agent plugin
-> Installing marvel-agent...
Trying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/marvel-agent/2.3.4/marvel-agent-2.3.4.zip ...
Downloading ..........DONE
Verifying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/marvel-agent/2.3.4/marvel-agent-2.3.4.zip checksums if available ...
Downloading .DONE
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.lang.RuntimePermission setFactory
* javax.net.ssl.SSLPermission setHostnameVerifier
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.

Continue with installation? [y/N]y         #answer y to allow the plugin to be installed
Installed marvel-agent into /home/elasticsearch/elasticsearch-2.3.4/plugins/marvel-agent
$./bin/plugin install mobz/elasticsearch-head      #install the head plugin
-> Installing mobz/elasticsearch-head...
Trying https://github.com/mobz/elasticsearch-head/archive/master.zip ...
Downloading ....................DONE
Verifying https://github.com/mobz/elasticsearch-head/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed head into /home/elasticsearch/elasticsearch-2.3.4/plugins/head
Install the bigdesk plugin:
$cd plugins/
$mkdir bigdesk
$cd bigdesk
$git clone https://github.com/lukas-vlcek/bigdesk _site
Initialized empty Git repository in /home/elasticsearch/elasticsearch-2.3.4/plugins/bigdesk/_site/.git/
remote: Counting objects: 5016, done.
remote: Total 5016 (delta 0), reused 0 (delta 0), pack-reused 5016
Receiving objects: 100% (5016/5016), 17.80 MiB | 1.39 MiB/s, done.
Resolving deltas: 100% (1860/1860), done.
Edit the _site/js/store/BigdeskStore.js file, at roughly line 142:
return (major == 1 && minor >= 0 && maintenance >= 0 && (build != 'Beta1' || build != 'Beta2'));
change it to:
return (major >= 1 && minor >= 0 && maintenance >= 0 && (build != 'Beta1' || build != 'Beta2'));
Add the plugin's properties file:
$cat >plugin-descriptor.properties<<EOF
description=bigdesk - Live charts and statistics for Elasticsearch cluster.
version=2.5.1
site=true
name=bigdesk
EOF
Install the kopf plugin:
$./bin/plugin install lmenezes/elasticsearch-kopf
-> Installing lmenezes/elasticsearch-kopf...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...
Downloading ....................DONE
Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed kopf into /home/elasticsearch/elasticsearch-2.3.4/plugins/kopf

List the installed plugins, as follows:

$cd elasticsearch-2.3.4
$./bin/plugin list
Installed plugins in /home/elasticsearch/elasticsearch-2.3.4/plugins:
    - head
    - license
    - bigdesk
    - marvel-agent
    - kopf

VII. Install and Configure kibana

Note: kibana is installed on the 10.0.18.150 server.

1. Configure the yum repository

#vi /etc/yum.repos.d/kibana.repo
[kibana-4.5]
name=Kibana repository for 4.5.x packages
baseurl=http://packages.elastic.co/kibana/4.5/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install kibana:
#yum install kibana -y
Check kibana:
#rpm -qa kibana
kibana-4.5.4-1.x86_64
Note: the yum-installed kibana goes into the /opt directory by default.

2. Install the Marvel plugin

#cd /opt/kibana/bin
#./kibana plugin --install elasticsearch/marvel/latest
Installing marvel
Attempting to transfer from https://download.elastic.co/elasticsearch/marvel/marvel-latest.tar.gz
Transferring 2421607 bytes....................
Transfer complete
Extracting plugin archive
Extraction complete
Optimizing and caching browser bundles...
Plugin installation complete

3. Modify the kibana configuration file

#vim /opt/kibana/config/kibana.yml   #change the following 3 parameters
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://10.0.18.150:9200"

4. Start kibana

#service kibana start
kibana started
Check the process:
#ps -ef | grep kibana
kibana    2050     1 12 20:40 pts/0    00:00:03 /opt/kibana/bin/../node/bin/node /opt/kibana/bin/../src/cli
root      2075  1149  0 20:40 pts/0    00:00:00 grep kibana
Enable it at boot:
#chkconfig --add kibana
#chkconfig kibana on
Check the listening port:
#netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1025/sshd
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      1103/master
tcp        0      0 0.0.0.0:5601                0.0.0.0:*                   LISTEN      2050/node     #kibana is up
tcp        0      0 ::ffff:10.0.18.150:9300     :::*                        LISTEN      1547/java
tcp        0      0 :::22                       :::*                        LISTEN      1025/sshd
tcp        0      0 ::1:25                      :::*                        LISTEN      1103/master
tcp        0      0 ::ffff:10.0.18.150:9200     :::*                        LISTEN      1547/java
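Before switching to the browser, a quick command-line check that kibana is answering HTTP can be run from any host (my addition, not in the original steps):

#curl -I http://10.0.18.150:5601/    #any HTTP response headers coming back confirm the kibana web server is serving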

Open the kibana port in a browser and create an index pattern, as shown below:

(figure: kibana "Configure an index pattern" page showing the error "Unable to fetch mapping…")

The index name in the red box is the one configured in the logstash server configuration file, but it could not be created; the message "Unable to fetch mapping…" means Elasticsearch has not seen this index yet. Troubleshooting step by step through the logs, the logstash logs on 10.0.18.144 and 10.0.18.145 finally showed the following errors:

#tail logstash.log 
{:timestamp=> "2016-08-26T20:33:28.404000+0800" , :message=> "failed to open /var/log/nginx/access.log: Permission denied - /var/log/nginx/access.log" , :level=>:warn}
{:timestamp=> "2016-08-26T20:38:29.110000+0800" , :message=> "failed to open /var/log/nginx/access.log: Permission denied - /var/log/nginx/access.log" , :level=>:warn}
{:timestamp=> "2016-08-26T20:43:30.834000+0800" , :message=> "failed to open /var/log/nginx/access.log: Permission denied - /var/log/nginx/access.log" , :level=>:warn}
{:timestamp=> "2016-08-26T20:48:31.559000+0800" , :message=> "failed to open /var/log/nginx/access.log: Permission denied - /var/log/nginx/access.log" , :level=>:warn}
{:timestamp=> "2016-08-26T20:53:32.298000+0800" , :message=> "failed to open /var/log/nginx/access.log: Permission denied - /var/log/nginx/access.log" , :level=>:warn}
{:timestamp=> "2016-08-26T20:58:33.028000+0800" , :message=> "failed to open /var/log/nginx/access.log: Permission denied - /var/log/nginx/access.log" , :level=>:warn}
Run this on both nginx servers:
#chmod 755 /var/log/nginx/access.log

Refresh the kibana page and create the index pattern nginx-log-*; this time it works, as shown below:

(figure: kibana index pattern creation page with nginx-log-* resolving correctly)

Click the green "Create" button and the index pattern is created. Then open "Discover" in the kibana UI and you will see the collected nginx logs, as shown below:

(figure: kibana Discover view listing the collected nginx log entries)

 

You can see that log data is now being collected.

5. Open the head plugin and check that the cluster state is consistent, as shown below:

(figure: elasticsearch-head overview of the serverlog cluster)

6. Open bigdesk and view its information, as shown below:

(figure: bigdesk node view of the cluster)

The figure also marks node-2 as the master node (with a star), and the data shown refreshes continuously.

7. Open kopf and view its information, as shown below:

(figure: kopf view of the serverlog cluster)

Earlier, the index/shard check returned no data because the cluster had just been set up and no index had been created yet. Running the same check again now shows the data:

#curl -XGET '10.0.18.148:9200/_cat/indices?v'
health status index                pri rep docs.count docs.deleted store.size pri.store.size 
green   open    .kibana                1   1          3            0     45.2kb         23.9kb 
green   open    nginx-log-2016.08.26   5   1        222            0    549.7kb        272.4kb

8. In the kibana UI you can now browse the nginx log data collected under the nginx-log-* index, as well as the cluster information that Elasticsearch stores in the .marvel-es-1-* index, as shown below:

(figure: kibana showing data from the nginx-log-* and .marvel-es-1-* indices)

VIII. Problems Encountered with ELK

1. The kibana port

Anyone who has configured kibana knows that its default port is 5601. I wanted to change it to 80, but kibana then failed to start with the following error:

#cat /var/log/kibana/kibana.stderr 
FATAL { [Error: listen EACCES 0.0.0.0:80]
   cause: 
    { [Error: listen EACCES 0.0.0.0:80]
      code:  'EACCES' ,
      errno:  'EACCES' ,
      syscall:  'listen' ,
      address:  '0.0.0.0' ,
      port: 80 },
   isOperational:  true ,
   code:  'EACCES' ,
   errno:  'EACCES' ,
   syscall:  'listen' ,
   address:  '0.0.0.0' ,
   port: 80 }
FATAL { [Error: listen EACCES 10.0.18.150:80]
   cause: 
    { [Error: listen EACCES 10.0.18.150:80]
      code:  'EACCES' ,
      errno:  'EACCES' ,
      syscall:  'listen' ,
      address:  '10.0.18.150' ,
      port: 80 },
   isOperational:  true ,
   code:  'EACCES' ,
   errno:  'EACCES' ,
   syscall:  'listen' ,
   address:  '10.0.18.150' ,
   port: 80 }
   #tail /var/log/kibana/kibana.stdout 
   { "type" : "log" , "@timestamp" : "2016-08-29T02:54:21+00:00" , "tags" :[ "fatal" ], "pid" :3217, "level" : "fatal" , "message" : "listen EACCES 10.0.18.150:80" , "error" :{ "message" : "listen EACCES 10.0.18.150:80" , "name" : "Error" , "stack" : "Error: listen EACCES 10.0.18.150:80\n    at Object.exports._errnoException (util.js:873:11)\n    at exports._exceptionWithHostPort (util.js:896:20)\n    at Server._listen2 (net.js:1237:19)\n    at listen (net.js:1286:10)\n    at net.js:1395:9\n    at nextTickCallbackWith3Args (node.js:453:9)\n    at process._tickDomainCallback (node.js:400:17)" , "code" : "EACCES" }}

I did not find a fix at the time, so the port stayed at the default 5601. (The EACCES error occurs because the kibana service runs as the non-root kibana user, and non-root processes are not allowed to bind to ports below 1024.)
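If you do want kibana reachable on port 80 without running it as root, one common workaround (my suggestion, not part of the original write-up) is to keep kibana on 5601 and redirect port 80 to it with iptables on the kibana host:

#iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 5601
#service iptables save    #persist the rule across reboots on CentOS 6

A reverse proxy (for example nginx listening on 80 and proxying to 127.0.0.1:5601) would achieve the same result.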

2. The nginx log permission problem

This experiment used the yum-installed nginx 1.10.1. At first the nginx log file permissions prevented logstash from reading the log; changing the permissions to 755 fixed it. However, nginx logs are rotated daily by logrotate, and the newly created log files come back with 640 permissions, so log collection would break again. The only option was to modify nginx's default logrotate file (an alternative is sketched after the listing below):

#cd /etc/logrotate.d
#cat nginx       #the default is as follows
/var/log/nginx/*.log {
        daily
        missingok
        rotate 52
        compress
        delaycompress
        notifempty
        create 640 nginx adm      #default permissions are 640, owner nginx, group adm
        sharedscripts
        postrotate
                [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
        endscript
}
After the change:
/var/log/nginx/*.log {
        daily
        missingok
        rotate 52
        compress
        delaycompress
        notifempty
        create 755 nginx nginx    #changed to 755, owner and group both nginx
        sharedscripts
        postrotate
                [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
        endscript
}
Then restart nginx; from now on the rotated logs are created with 755 permissions.
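A less invasive alternative (my note, not from the original) is to leave the 640 nginx adm default alone and instead add the logstash user to the adm group, which already has read access to the rotated logs:

#usermod -a -G adm logstash
#service logstash restart    #the new group membership applies to the restarted process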

3. The Marvel problem

Note: Marvel is a monitoring tool for the Elasticsearch cluster. As the official description puts it:

Marvel is the best way to monitor your Elasticsearch cluster and provide actionable insights to help you get the most out of your cluster. It is free to use in both development and production.

Problem: after the Elasticsearch cluster was set up, opening Marvel in the browser to view the monitoring data produced a page error, roughly a "no data" message. Going through the logs of the three Elasticsearch nodes turned up some errors I never fully pinned down, so I restarted the elasticsearch service on all three nodes; after that the Marvel monitoring page loaded fine, as shown below:

(figure: Marvel overview page listing the serverlog cluster)

You can see serverlog, the cluster name configured earlier; click into it for more detail, as shown below:

(figure: Marvel detail view of the serverlog cluster)

4. Index/shard information issues

During this experiment the first check of the shard information returned nothing, simply because no index had been created yet. After indices were created, the index information appeared, but some cluster information was still missing; the cause appears to have been the same as the Marvel problem above, something wrong inside Elasticsearch that a restart cleared. After restarting, the data showed up:

Check the index/shard information:
#curl -XGET '10.0.18.148:9200/_cat/indices?v'
health status index                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   nginx-log-2016.08.29      5   1       2374            0      1.7mb        902.9kb
green  open   nginx-log-2016.08.27      5   1       2323            0        1mb        528.6kb
green  open   .marvel-es-data-1         1   1          5            3     17.6kb          8.8kb
green  open   .kibana                   1   1          3            0     45.2kb         21.3kb
green  open   .marvel-es-1-2016.08.29   1   1      16666          108     12.1mb          6.1mb
green  open   nginx-log-2016.08.26      5   1       1430            0    800.4kb        397.8kb

5. Creating multiple index names to store different types of logs

nginx is probably not the only log we want to collect and analyse; there are also httpd, tomcat, mysql and other logs. If everything lands in the single nginx-log-* index it becomes messy and hard to troubleshoot, whereas one index per log type keeps things organised. This is done by creating several conf files on the logstash server and starting them one by one, as follows (an alternative single-pipeline approach is sketched after the listing):

#cd /etc/logstash/conf.d/
#cat logstash_server.conf
input {
     redis {
         port => "6379"
         host => "10.0.18.146"
         data_type => "list"
         key => "logstash-redis"
         type => "redis-input"
    }
}
output {
      elasticsearch {
          hosts => "10.0.18.149"
          index => "nginx-log-%{+YYYY.MM.dd}"
     }
}
#cat logstash_server1.conf
input {
     redis {
         port => "6379"
         host => "10.0.18.146"
         data_type => "list"
         key => "logstash-redisa"
         type => "redis-input"
    }
}
output {
      elasticsearch {
          hosts => "10.0.18.149"
          index => "httpd-log-%{+YYYY.MM.dd}"
     }
}
For any other log types, copy the pattern of the conf files above; only the index name and the key differ.
Then start them one by one:
#nohup /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash_server.conf &
#nohup /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash_server1.conf &
Then configure a conf file on the corresponding log server (the client) itself, as follows:
#cat /etc/logstash/conf.d/logstash-web.conf
input {
      file {
           path => [ "/var/log/httpd/access_log" ]
           type => "httpd_log"              #the type
           start_position => "beginning"
         }
}
output {
       redis {
               host => "10.0.18.146"
               key => 'logstash-redisa'      #the key
               data_type => 'list'
       }
}
Then start the logstash service, create the new index pattern httpd-log-* in the kibana UI, and the collected httpd logs will show up under that index.
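An alternative to running one logstash process per conf file (a sketch of my own, not from the original) is a single server-side pipeline that reads both redis keys and routes on the event type set by the shippers; the hosts, keys, types and index names below are the ones defined above:

input {
  redis { host => "10.0.18.146" port => "6379" data_type => "list" key => "logstash-redis" }
  redis { host => "10.0.18.146" port => "6379" data_type => "list" key => "logstash-redisa" }
  # no type is set here, so the nginx_log / httpd_log type assigned by the shippers is kept
}
output {
  if [type] == "httpd_log" {
    elasticsearch { hosts => "10.0.18.149" index => "httpd-log-%{+YYYY.MM.dd}" }
  } else {
    elasticsearch { hosts => "10.0.18.149" index => "nginx-log-%{+YYYY.MM.dd}" }
  }
}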

6. The "max file descriptors too low" warning after starting elasticsearch

After the ELK cluster was set up and elasticsearch was started, the following warning appeared:

[WARN ][env                      ] [node-1] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
To fix it, edit /etc/security/limits.conf and add the following:
* soft nofile 65536
* hard nofile 65536
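The new limits only apply to sessions opened after the change, so log back in as the elasticsearch user and verify before restarting the node (a quick check, not in the original):

#su - elasticsearch
$ulimit -n    #should now print 65536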

