Written for a technical audience, so minimal prose — straight to the procedure.
First prepare several machines, named centos01, centos02, ... (this chapter uses four machines; change a hostname with: hostnamectl set-hostname centos01)
1. Install Java: remove OpenJDK, then install the matching JDK
vim /etc/profile
export JAVA_HOME=/home/java/jdk1.8.0_201
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
source /etc/profile
chmod 777 /home/java/jdk1.8.0_201/bin/java
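After sourcing /etc/profile it is worth confirming the JDK actually resolves before moving on; a minimal sketch, assuming the install path used above:

```shell
# Sanity check: does the JDK path from /etc/profile exist and is java executable?
JAVA_HOME=/home/java/jdk1.8.0_201    # install path assumed from the steps above
if [ -x "$JAVA_HOME/bin/java" ]; then
  "$JAVA_HOME/bin/java" -version     # prints the JDK version to stderr
  echo "JDK OK"
else
  echo "JDK not found under $JAVA_HOME" >&2
fi
```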
2. Install ZooKeeper
cd /home
wget http://mirror.bit.edu.cn/apache/zookeeper/stable/apache-zookeeper-3.5.8-bin.tar.gz
tar -xf apache-zookeeper-3.5.8-bin.tar.gz
mv apache-zookeeper-3.5.8-bin zookeeper
mkdir /home/zookeeper/dataDir
mkdir /home/zookeeper/dataLogDir
3. Configure ZooKeeper
cd /home/zookeeper/conf
mv zoo_sample.cfg zoo.cfg
vim /home/zookeeper/conf/zoo.cfg
dataDir=/home/zookeeper/dataDir
dataLogDir=/home/zookeeper/dataLogDir
server.1=centos01:2888:3888
server.2=centos02:2888:3888
server.3=centos03:2888:3888
server.4=centos04:2888:3888
4. Set the cluster node ID (2, 3, 4 on the other nodes)
echo "1" > /home/zookeeper/dataDir/myid
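Rather than typing a different number on each box, the myid can be derived from the hostname; a sketch that assumes the centos0N naming used throughout this guide:

```shell
# Derive the ZooKeeper myid from a centos0N hostname (centos01 -> 1, centos03 -> 3).
host=${1:-centos01}     # pass the node's hostname; on the node itself use $(hostname)
myid=${host#centos0}    # strip the "centos0" prefix, leaving the node number
echo "$myid"
# on the real node: echo "$myid" > /home/zookeeper/dataDir/myid
```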
5. Map the server names (02, 03, 04 on the other machines). A java.net.ConnectException: Connection refused error here is almost always caused by not removing the 127.0.0.1 mapping for the hostname.
vim /etc/hosts
ip centos01
6. Start / stop / check the ZooKeeper cluster
/home/zookeeper/bin/zkServer.sh start
/home/zookeeper/bin/zkServer.sh status
/home/zookeeper/bin/zkServer.sh stop
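To see at a glance which node is leader, the output of ZooKeeper's `srvr` four-letter command can be parsed (on 3.5+ it must be whitelisted via 4lw.commands.whitelist in zoo.cfg). A sketch using a canned response for illustration — on the live cluster the input would come from `echo srvr | nc centos01 2181`; `zk_mode` is a hypothetical helper name:

```shell
# Extract the Mode line from a 'srvr' response.
zk_mode() { awk '/^Mode:/ {print $2}'; }
# canned sample of what 'srvr' returns; replace with the live nc pipeline
sample='Zookeeper version: 3.5.8
Mode: leader
Node count: 5'
printf '%s\n' "$sample" | zk_mode     # prints: leader
# live cluster: for h in centos01 centos02 centos03 centos04; do echo srvr | nc "$h" 2181 | zk_mode; done
```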
-------------------------------kafka--------------------------------------
1. Download Kafka and prepare the directories
cd /home
wget http://mirror.bit.edu.cn/apache/kafka/2.5.0/kafka_2.13-2.5.0.tgz
scp -r kafka_2.13-2.5.0.tgz root@centos02:/home
scp -r kafka_2.13-2.5.0.tgz root@centos03:/home
scp -r kafka_2.13-2.5.0.tgz root@centos04:/home
tar -xf kafka_2.13-2.5.0.tgz
mv kafka_2.13-2.5.0 kafka
mkdir /home/kafka/kafkalogs
2. Configuration (change the settings below) — edit on every machine; broker.id and the centos01 hostnames must be adjusted per node
vim /home/kafka/config/server.properties
broker.id=1
listeners=PLAINTEXT://centos01:9092
advertised.listeners=PLAINTEXT://centos01:9092
log.dirs=/home/kafka/kafkalogs
zookeeper.connect=centos01:2181,centos02:2181,centos03:2181,centos04:2181
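The per-node values (broker.id plus the two listener lines) can be generated from the centos0N hostname instead of edited by hand; a sketch under that naming assumption:

```shell
# Emit the per-node server.properties overrides for a centos0N host.
host=${1:-centos02}      # the node being configured; use $(hostname) on the node itself
id=${host#centos0}       # centos02 -> 2, giving a unique broker.id per machine
echo "broker.id=$id"
echo "listeners=PLAINTEXT://$host:9092"
echo "advertised.listeners=PLAINTEXT://$host:9092"
# merge these into /home/kafka/config/server.properties on that node
```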
3. Start Kafka
/home/kafka/bin/kafka-server-start.sh -daemon /home/kafka/config/server.properties
/home/kafka/bin/kafka-server-stop.sh
4. Test Kafka
cd /home/kafka
Create a topic named TestTopic with 4 replicas and 4 partitions
./bin/kafka-topics.sh --create --zookeeper centos01:2181,centos02:2181,centos03:2181,centos04:2181 --replication-factor 4 --partitions 4 --topic TestTopic
List the topics
./bin/kafka-topics.sh --list --zookeeper centos01:2181,centos02:2181,centos03:2181,centos04:2181
Show topic details
./bin/kafka-topics.sh --describe --zookeeper centos01:2181,centos02:2181,centos03:2181,centos04:2181 --topic TestTopic
Simulate a producer sending messages (run this on centos01)
./bin/kafka-console-producer.sh --broker-list centos01:9092 --topic TestTopic
Simulate a consumer reading messages (run this on centos02)
./bin/kafka-console-consumer.sh --bootstrap-server centos01:9092,centos02:9092,centos03:9092,centos04:9092 --topic TestTopic
Delete the topic named TestTopic
./bin/kafka-topics.sh --delete --topic TestTopic --zookeeper centos01:2181,centos02:2181,centos03:2181,centos04:2181
Check whether TestTopic still exists
./bin/kafka-topics.sh --describe --zookeeper centos01:2181,centos02:2181,centos03:2181,centos04:2181
----------------------------------------------------ES集群-------------------------------------------
1. Download ES, copy it to nodes 02/03/04, and install
cd /home
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.0-x86_64.rpm
yum localinstall elasticsearch-7.6.0-x86_64.rpm
2. Configuration file
mkdir /home/elasticsearch
mkdir /home/elasticsearch/data
mkdir /home/elasticsearch/logs
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: ESCluster
node.name: node-1
node.attr.hotwarm_type: hot   # hot or cold, depending on the node's role
path.data: /home/elasticsearch/data
path.logs: /home/elasticsearch/logs
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.seed_hosts: ["centos01", "centos02", "centos03", "centos04"]
cluster.initial_master_nodes: ["node-1","node-2","node-3","node-4"]
gateway.recover_after_nodes: 2
gateway.recover_after_time: 5m
gateway.expected_nodes: 3
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
indices.fielddata.cache.size: 20%
3. Set permissions
chmod 777 /home/elasticsearch/data
chmod 777 /home/elasticsearch/logs
4. Start / stop the service
systemctl start elasticsearch
systemctl stop elasticsearch
systemctl status elasticsearch
systemctl enable elasticsearch
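Once all four nodes are up, cluster health is the quickest check — on a live node that is `curl -s http://centos01:9200/_cluster/health?pretty`. A sketch of pulling the status field out of the response, using a canned sample here for illustration:

```shell
# health=$(curl -s http://centos01:9200/_cluster/health)   # live cluster
health='{"cluster_name":"ESCluster","status":"green","number_of_nodes":4}'  # canned sample
# extract the "status" value from the JSON with sed
status=$(printf '%s' "$health" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "cluster status: $status"    # a healthy 4-node cluster reports green
```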
----------------------------------------------------------kibana------------------------------------
1. Install Kibana
cd /home
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.6.0-x86_64.rpm
yum localinstall kibana-7.6.0-x86_64.rpm
2. Configuration
vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "the node's IP"
elasticsearch.hosts: ["http://centos01:9200"]
i18n.locale: "zh-CN"
3. Start the service (access it at http://ip:5601/)
systemctl start kibana
systemctl status kibana
systemctl enable kibana
-------------------------------------------------------logstash--------------------------------------------
1. Download and install
cd /home
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.6.0.rpm
yum localinstall logstash-7.6.0.rpm
2. Write the conf file below and place it under /etc/logstash/conf.d/
input {
  # Kafka input
  kafka {
    bootstrap_servers => ["centos01:9092,centos02:9092,centos03:9092,centos04:9092"]
    # Topic to read from; must match the topic filebeat writes to
    topics => ["filebeat-logstash"]
    # Kafka consumer group ID, default "logstash". Kafka delivers every message to
    # each group, but only once within a group: with groups G1 {A, B} and G2 {C, D},
    # 10 incoming messages go to both G1 and G2; inside G1, A and B each receive a
    # subset whose union is all 10, and likewise for C and D.
    group_id => "filebeat-logstash"
    # ID of this consumer within the group, default "logstash"
    client_id => "logstashnode1"
    # Consumer threads: one thread maps to one Kafka partition. Summed across all
    # logstash instances in the group, consumer_threads should not exceed the
    # topic's partition count (ideally equal); anything above that is wasted.
    consumer_threads => 1
    # Beats attach many tags when shipping to Kafka; by default logstash treats
    # them as part of the message, which complicates later processing. Decode as
    # JSON to recover the original message.
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    # daily index named from the beat metadata
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
3. Start logstash
systemctl start logstash
systemctl status logstash
If it will not start via systemd, run it in the foreground instead (add --config.test_and_exit to only validate the pipeline):
/usr/share/logstash/bin/logstash --path.settings /etc/logstash
------------------------------------------------------------------------filebeat------------------------------------------
1. Download and install
cd /home
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.0-x86_64.rpm
yum localinstall filebeat-7.6.0-x86_64.rpm
2. Configuration (edit /etc/filebeat/filebeat.yml)
# Change to true to enable this input configuration.
enabled: true
paths:
- /home/log/*.log
#------------------------------Kafka output -----------------------------------
output.kafka:
enabled: true
hosts: ["centos01:9092","centos02:9092","centos03:9092","centos04:9092"]
topic: filebeat-logstash
3. Start filebeat
systemctl start filebeat
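A quick end-to-end check: append a line under the watched path and confirm it shows up on the Kafka topic. A sketch — it falls back to a temp directory so the snippet runs anywhere, but on the cluster /home/log is the path filebeat actually watches:

```shell
LOGDIR=/home/log    # the path filebeat watches in the config above
# fall back to a temp dir if /home/log cannot be created or written (for illustration only)
mkdir -p "$LOGDIR" 2>/dev/null && [ -w "$LOGDIR" ] || LOGDIR=$(mktemp -d)
msg="filebeat smoke test $(date +%s)"
echo "$msg" >> "$LOGDIR/test.log"
tail -n 1 "$LOGDIR/test.log"
# then, on any broker node, the line should arrive on the topic:
# /home/kafka/bin/kafka-console-consumer.sh --bootstrap-server centos01:9092 --topic filebeat-logstash --from-beginning
```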
That's it — if nothing errored, you are done. I hit plenty of problems along the way; those will get their own chapter.