Written for engineers, so not much prose: straight to the procedure.
First prepare several machines, named centos01, centos02, ... (this chapter uses four; to rename a host: hostnamectl set-hostname centos01)
1. Install Java: remove OpenJDK, then install the JDK you need
vim /etc/profile
export JAVA_HOME=/home/java/jdk1.8.0_201
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
source /etc/profile
chmod 777 /home/java/jdk1.8.0_201/bin/java
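A quick sanity check, using the paths from this guide, that the profile exports took effect:

```shell
# Re-create the exports from /etc/profile and confirm the JDK's bin dir
# now leads the PATH (values are the paths used above).
export JAVA_HOME=/home/java/jdk1.8.0_201
export PATH=${JAVA_HOME}/bin:$PATH
echo "$PATH" | tr ':' '\n' | head -n 1
```

The first PATH entry printed should be the JDK's bin directory; `java -version` should then report 1.8.0_201.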
2. Install ZooKeeper
cd /home
wget http://mirror.bit.edu.cn/apache/zookeeper/stable/apache-zookeeper-3.5.8-bin.tar.gz
tar -xf apache-zookeeper-3.5.8-bin.tar.gz
mv apache-zookeeper-3.5.8-bin zookeeper
mkdir /home/zookeeper/dataDir
mkdir /home/zookeeper/dataLogDir
3. Configure ZooKeeper
cd /home/zookeeper/conf
mv zoo_sample.cfg zoo.cfg
vim /home/zookeeper/conf/zoo.cfg
dataDir=/home/zookeeper/dataDir
dataLogDir=/home/zookeeper/dataLogDir
server.1=centos01:2888:3888
server.2=centos02:2888:3888
server.3=centos03:2888:3888
server.4=centos04:2888:3888
4. Set the node id (use 2, 3, 4 on the other machines)
echo "1" > /home/zookeeper/dataDir/myid
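Instead of typing the id on each box, it can be derived from the hostname. A small sketch, assuming the centos01...centos04 naming from above:

```shell
# Derive this node's ZooKeeper id from its hostname suffix.
# host is hard-coded here for illustration; on a real node use: host=$(hostname)
host=centos03
id=$((10#${host#centos}))   # strip the "centos" prefix; 10# forces base 10 so "08" parses
echo "$id"
# on a real node you would then write it out:
# echo "$id" > /home/zookeeper/dataDir/myid
```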
5. Map the host names (02, 03, 04 on the other machines). A java.net.ConnectException: Connection refused at this stage is almost always caused by leaving the hostname mapped to 127.0.0.1 here.
vim /etc/hosts
<node ip> centos01    (one line per node; every machine gets all four entries)
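For example, with made-up addresses (replace with each machine's real IPs; the 127.0.0.1 line must not also carry the hostname):

```
192.168.121.11 centos01
192.168.121.12 centos02
192.168.121.13 centos03
192.168.121.14 centos04
```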
6. Commands to start / check / stop the ZooKeeper cluster
/home/zookeeper/bin/zkServer.sh start
/home/zookeeper/bin/zkServer.sh status
/home/zookeeper/bin/zkServer.sh stop
-------------------------------kafka--------------------------------------
1. Download Kafka and prepare directories
cd /home
wget http://mirror.bit.edu.cn/apache/kafka/2.5.0/kafka_2.13-2.5.0.tgz
scp -r kafka_2.13-2.5.0.tgz root@centos02:/home
scp -r kafka_2.13-2.5.0.tgz root@centos03:/home
scp -r kafka_2.13-2.5.0.tgz root@centos04:/home
tar -xf kafka_2.13-2.5.0.tgz
mv kafka_2.13-2.5.0 kafka
mkdir /home/kafka/kafkalogs
2. Configuration (change the following on every machine)
vim /home/kafka/config/server.properties
broker.id=1
listeners=PLAINTEXT://centos01:9092
advertised.listeners=PLAINTEXT://centos01:9092
log.dirs=/home/kafka/kafkalogs
zookeeper.connect=centos01:2181,centos02:2181,centos03:2181,centos04:2181
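broker.id and the listener host must differ on every machine. A sketch of stamping them in with sed, run here against a throw-away copy so it is safe to try anywhere (the real file is /home/kafka/config/server.properties):

```shell
# Stamp this node's broker.id and listener host into a scratch copy of the config.
host=centos02                      # on a real node: host=$(hostname)
id=$((10#${host#centos}))          # same numbering as the ZooKeeper myid
props=$(mktemp)
printf 'broker.id=1\nlisteners=PLAINTEXT://centos01:9092\nadvertised.listeners=PLAINTEXT://centos01:9092\n' > "$props"
sed -i "s/^broker\.id=.*/broker.id=$id/; s|//centos01:|//$host:|g" "$props"
cat "$props"
```

On centos02 this yields broker.id=2 and both listeners pointing at centos02:9092.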
3. Start and stop Kafka
/home/kafka/bin/kafka-server-start.sh -daemon /home/kafka/config/server.properties
/home/kafka/bin/kafka-server-stop.sh
4. Test Kafka
cd /home/kafka
Create a topic named TestTopic with 4 partitions and a replication factor of 4
./bin/kafka-topics.sh --create --zookeeper centos01:2181,centos02:2181,centos03:2181,centos04:2181 --replication-factor 4 --partitions 4 --topic TestTopic
List topics
./bin/kafka-topics.sh --list --zookeeper centos01:2181,centos02:2181,centos03:2181,centos04:2181
Show a topic's details
./bin/kafka-topics.sh --describe --zookeeper centos01:2181,centos02:2181,centos03:2181,centos04:2181 --topic TestTopic
Simulate a producer sending messages (run this on centos01)
./bin/kafka-console-producer.sh --broker-list centos01:9092 --topic TestTopic
Simulate a consumer reading messages (run this on centos02)
./bin/kafka-console-consumer.sh --bootstrap-server centos01:9092,centos02:9092,centos03:9092,centos04:9092 --topic TestTopic
Delete the topic TestTopic
./bin/kafka-topics.sh --delete --topic TestTopic --zookeeper centos01:2181,centos02:2181,centos03:2181,centos04:2181
Check whether TestTopic still exists
./bin/kafka-topics.sh --describe --zookeeper centos01:2181,centos02:2181,centos03:2181,centos04:2181 --topic TestTopic
----------------------------------------------------ES cluster-------------------------------------------
1. Download ES, copy it to nodes 2/3/4, and install
cd /home
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.0-x86_64.rpm
yum localinstall elasticsearch-7.6.0-x86_64.rpm
2. Configuration
mkdir /home/elasticsearch
mkdir /home/elasticsearch/data
mkdir /home/elasticsearch/logs
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: ESCluster
node.name: node-1                  # node-2 / node-3 / node-4 on the other machines
node.attr.hotwarm_type: hot        # hot or cold, chosen per node
path.data: /home/elasticsearch/data
path.logs: /home/elasticsearch/logs
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
node.master: true
node.data: true
discovery.seed_hosts: ["centos01", "centos02", "centos03", "centos04"]
cluster.initial_master_nodes: ["node-1","node-2","node-3","node-4"]
gateway.recover_after_nodes: 2
gateway.recover_after_time: 5m
gateway.expected_nodes: 3
indices.fielddata.cache.size: 20%
3. Set permissions
chmod 777 /home/elasticsearch/data
chmod 777 /home/elasticsearch/logs
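bootstrap.memory_lock: true above only takes effect if systemd lets the service lock memory; otherwise ES logs "memory locking requested ... but memory is not locked". A sketch of the usual drop-in (written under /tmp here so it is safe to run; the real path for the RPM install is /etc/systemd/system/elasticsearch.service.d/override.conf, followed by systemctl daemon-reload):

```shell
# Allow the elasticsearch unit to lock memory via a systemd drop-in.
dropin=/tmp/elasticsearch.service.d   # real path: /etc/systemd/system/elasticsearch.service.d
mkdir -p "$dropin"
cat > "$dropin/override.conf" <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
cat "$dropin/override.conf"
```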
4. Start / stop the service
systemctl start elasticsearch
systemctl stop elasticsearch
systemctl status elasticsearch
systemctl enable elasticsearch
----------------------------------------------------------kibana------------------------------------
1. Install Kibana
cd /home
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.6.0-x86_64.rpm
yum localinstall kibana-7.6.0-x86_64.rpm
2. Configuration
vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "<this node's IP>"
elasticsearch.hosts: ["http://centos01:9200"]    # any ES node, on the http.port set above
i18n.locale: "zh-CN"
3. Start the service (then browse to http://ip:5601/)
systemctl start kibana
systemctl status kibana
systemctl enable kibana
-------------------------------------------------------logstash--------------------------------------------
1. Download and install
cd /home
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.6.0.rpm
yum localinstall logstash-7.6.0.rpm
2. Write a pipeline .conf file and place it in /etc/logstash/conf.d
input {
  # Kafka input
  kafka {
    bootstrap_servers => ["centos01:9092,centos02:9092,centos03:9092,centos04:9092"]
    # Topic to read from; must match the topic filebeat publishes to
    topics => ["filebeat-logstash"]
    # Consumer group id (default "logstash"). Kafka delivers every message to each
    # group, but within a group each message goes to only one member, so the members
    # of one group split the stream between them without duplicates.
    group_id => "filebeat-logstash"
    # Id of this consumer within the group (default "logstash")
    client_id => "logstashnode1"
    # Roughly one consumer thread per Kafka partition. Across all Logstash instances
    # in the group, consumer_threads should sum to at most the topic's partition
    # count; equal is the usual recommendation, more is wasted.
    consumer_threads => 1
    # Beats ships its payload as JSON with extra tags; decode it here so the
    # original message fields arrive instead of one raw JSON string.
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    # daily index named after the shipping beat and its version
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
3.启动logstash
systemctl start logstash
systemctl status logstash
If it will not start via systemd, run it in the foreground with the command below to see the errors
/usr/share/logstash/bin/logstash --path.settings /etc/logstash
------------------------------------------------------------------------filebeat------------------------------------------
1. Download and install
cd /home
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.0-x86_64.rpm
yum localinstall filebeat-7.6.0-x86_64.rpm
2. Configuration (edit /etc/filebeat/filebeat.yml)
# Change to true to enable this input configuration.
enabled: true
paths:
- /home/log/*.log
#------------------------------Kafka output -----------------------------------
output.kafka:
enabled: true
hosts: ["centos01:9092","centos02:9092","centos03:9092","centos04:9092"]
topic: filebeat-logstash
3. Start filebeat
systemctl start filebeat
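With everything running, appending a line under the watched path should flow filebeat -> Kafka -> Logstash -> ES. A sketch (uses /tmp instead of the /home/log path from the config above, so it can be tried anywhere):

```shell
# Append a recognizable test line where filebeat is watching.
logdir=/tmp/filebeat-demo    # real path from the filebeat config: /home/log
mkdir -p "$logdir"
echo "pipeline-test $(date +%s)" >> "$logdir/test.log"
tail -n 1 "$logdir/test.log"
```

A few seconds later the line should be searchable in Kibana under the filebeat index.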
That's it: if nothing errored along the way, you're done. I hit plenty of problems too; those get a chapter of their own.