5: ELK + Kafka Cluster Installation


I. Architecture

1. Topology

 

II. Preparation

1. Software versions

Note: the Logstash Kafka input plugin has version requirements for Kafka; the newest Kafka build that can be used here is kafka_2.10-0.10.1.0.

 

2. Environment plan

 

 

III. Installation and Configuration

1. ES cluster installation (on both ES hosts)

=========================================ELK1 - 1.225 configuration
rpm -ivh elasticsearch-6.2.3.rpm
cat /etc/elasticsearch/elasticsearch.yml  |  grep -Ev "^$|^#"
cluster.name: es
node.name: es01
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.system_call_filter: false
network.host: 192.168.1.225
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.225", "192.168.1.226"]
discovery.zen.minimum_master_nodes: 1
node.master: true
node.data: true
transport.tcp.compress: true

vim /etc/sysconfig/elasticsearch
JAVA_HOME=/usr/local/jdk1.8.0_131/

chkconfig --add elasticsearch
service elasticsearch start
=========================================ELK2 - 1.226 configuration
cat /etc/elasticsearch/elasticsearch.yml  |  grep -Ev "^$|^#"
cluster.name: es
node.name: es02
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.system_call_filter: false
network.host: 192.168.1.226
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.225", "192.168.1.226"]
discovery.zen.minimum_master_nodes: 1
node.master: false
node.data: true
transport.tcp.compress: true

vim /etc/sysconfig/elasticsearch
JAVA_HOME=/usr/local/jdk1.8.0_131/

chkconfig --add elasticsearch
service elasticsearch start
=========================================Check ES cluster status
curl -XGET 'http://192.168.1.225:9200/_cluster/health?pretty'
{
  "cluster_name" : "es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
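
To further confirm that both nodes joined the cluster, the node list can also be queried; a quick extra check using the standard _cat API (column layout varies slightly between ES versions):
curl -XGET 'http://192.168.1.225:9200/_cat/nodes?v'   #should list both 192.168.1.225 (es01, master) and 192.168.1.226 (es02)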

 

2. Kibana installation and Chinese localization

rpm -ivh kibana-6.2.3-x86_64.rpm
cat /etc/kibana/kibana.yml  | grep -Ev "^$|^#"
server.port: 5601
server.host: "192.168.1.225"
elasticsearch.url: "http://192.168.1.225:9200"    #any one of the ES node IPs will do
logging.dest: /var/log/kibana.log

touch /var/log/kibana.log ; chmod 777 /var/log/kibana.log
git clone https://github.com/anbai-inc/Kibana_Hanization.git
cd Kibana_Hanization/
python main.py /usr/share/kibana/
chkconfig --add kibana
/etc/init.d/kibana start 
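
To confirm Kibana came up, check the listener and hit its status endpoint (assuming the server.host/server.port values above):
netstat -lntp | grep 5601
curl http://192.168.1.225:5601/api/status   #returns JSON describing the overall state when Kibana is healthy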

 

3. Logstash installation (on both ES hosts; for production workloads it is recommended to run ES and Logstash on separate hosts)

ln -s /usr/local/jdk1.8.0_131/bin/java  /usr/bin/java
rpm -ivh logstash-6.2.3.rpm
cat /etc/logstash/logstash.yml  | grep -Ev "^$|^#"
path.data: /var/lib/logstash
http.host: "192.168.1.225"    #use each host's own IP (different on the two hosts)
path.logs: /var/log/logstash

chkconfig --add logstash
/etc/init.d/logstash start 
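
As a quick sanity check, Logstash's monitoring API (default port 9600, bound to the http.host set above) can be queried; this is just a node-info probe, not part of the original steps:
curl 'http://192.168.1.225:9600/?pretty'   #use 192.168.1.226 on the second host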

 

4. ZooKeeper and Kafka cluster installation (on all three ZK hosts)

===============================================ZooKeeper installation
cd /usr/local/src
tar xf zookeeper-3.4.10.tar.gz 
mv zookeeper-3.4.10 /usr/local/
ln -s /usr/local/zookeeper-3.4.10/ /usr/local/zookeeper 
cp /usr/local/zookeeper/conf/zoo_sample.cfg  /usr/local/zookeeper/conf/zoo.cfg 
mkdir -p /data/{zookeeper,logs/zookeeper,logs/kafka}
cat /usr/local/zookeeper/conf/zoo.cfg |  grep -Ev "^$|^#"
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/logs/zookeeper
clientPort=2181
server.1=192.168.1.227:2888:3888
server.2=192.168.1.228:2888:3888
server.3=192.168.1.229:2888:3888

echo 1 > /data/zookeeper/myid   #the other two nodes use the same zoo.cfg; set each node's myid to match its server.N entry
/usr/local/zookeeper/bin/zkServer.sh start   #start zookeeper
/usr/local/zookeeper/bin/zkServer.sh status   #check cluster status on all three nodes; a healthy cluster has one leader and two followers
Mode: leader
/usr/local/zookeeper/bin/zkServer.sh status  
Mode: follower
/usr/local/zookeeper/bin/zkServer.sh status  
Mode: follower
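
Each node can also be probed with ZooKeeper's built-in four-letter commands (requires nc; assumes the commands are not disabled by a whitelist):
echo ruok | nc 192.168.1.227 2181   #a healthy server answers "imok"
echo stat | nc 192.168.1.227 2181   #shows the mode (leader/follower) and client connections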

===============================================Kafka installation
cd /usr/local/src
tar xf kafka_2.10-0.10.1.0.tgz
mv kafka_2.10-0.10.1.0 /usr/local/
ln -s /usr/local/kafka_2.10-0.10.1.0/ /usr/local/kafka 
cat /usr/local/kafka/config/server.properties |  grep -Ev "^$|^#"
broker.id=1    #unique ID on each broker
listeners=PLAINTEXT://192.168.1.227:9092  
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/logs/kafka
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.1.227:2181,192.168.1.228:2181,192.168.1.229:2181
zookeeper.connection.timeout.ms=6000

#start the service
/usr/local/kafka/bin/kafka-server-start.sh  -daemon /usr/local/kafka/config/server.properties  #start on all three servers

#check that the services are up; the default listening ports are 2181 (zookeeper) and 9092 (kafka)
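#for example (a quick check, assuming net-tools is installed):
netstat -lntp | egrep '2181|9092'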

#test message production and consumption
/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.1.227:2181 --replication-factor 1 --partitions 1 --topic test     #create a topic
/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.1.227:2181     #list topics
/usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.1.227:9092 --topic test   #simulate a producer sending messages
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.227:9092 --topic test  --from-beginning    #simulate a consumer receiving messages
/usr/local/kafka/bin/kafka-topics.sh --delete --zookeeper 192.168.1.227:2181 --topic test    #delete the topic
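
If needed, a topic's partition/replica layout can also be inspected (a small extra check, not part of the original steps):
/usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.1.227:2181 --topic test   #shows partitions, leader and ISR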

 

5. Filebeat installation (on all application hosts)

rpm -ivh filebeat-6.2.3-x86_64.rpm
chkconfig --add filebeat
/etc/init.d/filebeat start 
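
A quick sanity check after installation (the version printed should match the rpm above):
/usr/share/filebeat/bin/filebeat version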

 

6. Configure Filebeat to collect application logs

vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /opt/efang_hr/efanghr_warn.log
  fields:
    log_topics: efang
  max_bytes: 1048576

- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    log_topics: syslog
  max_bytes: 1048576

- type: log
  enabled: true
  paths:
    - /usr/local/tomcat/logs/catalina.out
  fields:
    log_topics: tomcat_efang
  max_bytes: 1048576

output.kafka:
  hosts: ["192.168.1.227:9092","192.168.1.228:9092","192.168.1.229:9092"]
  topic: '%{[fields.log_topics]}'
  compression: gzip
  max_message_bytes: 1000000

chkconfig --add filebeat
/usr/share/filebeat/bin/filebeat test config -c /etc/filebeat/filebeat.yml   #check config file syntax
/etc/init.d/filebeat restart  #restart the service
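
Before wiring up Logstash, it is worth confirming that events are actually reaching Kafka; one simple check is to consume a few messages from one of the topics defined above:
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.227:9092 --topic syslog --from-beginning   #raw JSON events from Filebeat should appear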

 

7. Configure Logstash to ship logs from Kafka into ES

cat /etc/logstash/conf.d/logstash.conf
input {
  kafka {
    bootstrap_servers => "192.168.1.227:9092,192.168.1.228:9092,192.168.1.229:9092"
    topics => ["efang","tomcat_efang","syslog"]
    group_id => "test-consumer-group"
    codec => "plain"
    consumer_threads => 1
    decorate_events => true

  }
}

output {
 if[fields][log_topics]=="efang"{
  elasticsearch {
    action => "index"
    index => "efang"
    hosts => ["192.168.1.225:9200","192.168.1.226:9200"]
  }
 }

 if[fields][log_topics]=="tomcat_efang"{
  elasticsearch {
    action => "index"
    index => "tomcat_efang"
    hosts => ["192.168.1.225:9200","192.168.1.226:9200"]
  }
 }

 if[fields][log_topics]=="syslog"{
  elasticsearch {
    action => "index"
    index => "syslog"
    hosts => ["192.168.1.225:9200","192.168.1.226:9200"]
  } 
 }
}
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit   #check configuration syntax
/etc/init.d/logstash restart    #restart the service
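
Once Logstash is running, the three indices should start appearing in ES; a simple check (index names match the output section above):
curl 'http://192.168.1.225:9200/_cat/indices?v' | egrep 'efang|tomcat_efang|syslog'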

 

8. Viewing a topic and its message queue

# Consume via ZooKeeper (old consumer)
/usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.1.227:2181,192.168.1.228:2181,192.168.1.229:2181  --topic efang --from-beginning

# Consume via Kafka itself (new consumer)
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.227:9092,192.168.1.228:9092,192.168.1.229:9092 --topic efang --from-beginning

# If you do not want to read from the beginning, drop --from-beginning to start from the latest messages

 

9. Pitfalls

(1) ES fails to start

[2018-04-09T10:25:18,400][ERROR][o.e.b.Bootstrap          ] [es01] node validation exception
[2] bootstrap checks failed
[1]: max number of threads [1024] for user [elasticsearch] is too low, increase to at least [4096]
[2]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

Fix:

vim /etc/elasticsearch/elasticsearch.yml  #bootstrap.system_call_filter defaults to true, so the bootstrap check runs, fails, and prevents ES from starting
bootstrap.system_call_filter: false
vim /etc/security/limits.d/90-nproc.conf   #the thread/process limit for the elasticsearch user is too low; raise it
*          soft    nproc     4096
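
To verify the new limit takes effect (a sketch; the elasticsearch user normally has a nologin shell, hence the explicit -s, and a fresh session is needed for pam_limits to apply it):
su -s /bin/bash -c 'ulimit -u' elasticsearch   #should now print 4096 or higher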

(2) Logstash startup warnings and missing init script

which: no java in (/sbin:/bin:/usr/sbin:/usr/bin:/usr/X11R6/bin)
could not find java; set JAVA_HOME or ensure java is in PATH
Fix:
ln -s /usr/local/jdk1.8.0_131/bin/java  /usr/bin/java

chmod: cannot access `/etc/default/logstash': No such file or directory
warning: %post(logstash-1:6.2.3-1.noarch) scriptlet failed, exit status 1
Fix:
/usr/share/logstash/bin/system-install /etc/logstash/startup.options sysv
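
Afterwards, confirm the init script exists and is registered (if it is not listed, add it with chkconfig --add logstash):
ls -l /etc/init.d/logstash
chkconfig --list logstash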

 

