OS: CentOS 7.4 x64
Host IP: 192.168.0.160
Package: elasticsearch-7.3.0-linux-x86_64.tar.gz
Configuration steps
vim /etc/security/limits.conf
* soft nofile 65537
* hard nofile 65537
* soft nproc 65537
* hard nproc 65537
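The limits.conf changes only apply to new login sessions. A quick check after logging in again (the values should match the entries above):
ulimit -n
ulimit -u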
vim /etc/sysctl.conf
vm.max_map_count = 262144
net.core.somaxconn = 65535
net.ipv4.ip_forward = 1
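Reload the kernel parameters so they take effect without a reboot, then verify the one Elasticsearch cares about most:
sysctl -p
sysctl vm.max_map_count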
cd /usr/local/src
tar -zxv -f elasticsearch-7.3.0-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local/
cp -r elasticsearch-7.3.0 elasticsearch-7.3.0_node1
cp -r elasticsearch-7.3.0 elasticsearch-7.3.0_node2
cp -r elasticsearch-7.3.0 elasticsearch-7.3.0_node3
useradd elastic
chown -R elastic:elastic elasticsearch-7.3.0_node1
chown -R elastic:elastic elasticsearch-7.3.0_node2
chown -R elastic:elastic elasticsearch-7.3.0_node3
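A quick sanity check that all three node directories exist and are owned by the elastic user:
ls -ld /usr/local/elasticsearch-7.3.0_node*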
# Below is the configuration file content for each node
# Node 1: elasticsearch-7.3.0_node1/config/elasticsearch.yml
cluster.name: my-application
node.name: node-1
# master-eligible node
node.master: true
# data node
node.data: true
network.host: 192.168.0.160
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["192.168.0.160:9300", "192.168.0.160:9301","192.168.0.160:9302"]
cluster.initial_master_nodes: ["node-1"] # bootstrap the cluster with node-1 as the initial master
http.cors.enabled: true
http.cors.allow-origin: "*"
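One way to apply these settings is to append them to the node's elasticsearch.yml, for example for node-1 (a sketch; repeat for node-2 and node-3 with their own node names and ports):
cat >> /usr/local/elasticsearch-7.3.0_node1/config/elasticsearch.yml <<'EOF'
cluster.name: my-application
node.name: node-1
node.master: true
node.data: true
network.host: 192.168.0.160
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["192.168.0.160:9300", "192.168.0.160:9301", "192.168.0.160:9302"]
cluster.initial_master_nodes: ["node-1"]
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF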
# Node 2: elasticsearch-7.3.0_node2/config/elasticsearch.yml
cluster.name: my-application
node.name: node-2
node.master: false
node.data: true
network.host: 192.168.0.160
http.port: 9201
transport.port: 9301
discovery.seed_hosts: ["192.168.0.160:9300", "192.168.0.160:9301","192.168.0.160:9302"]
cluster.initial_master_nodes: ["node-1"] # list only master-eligible nodes, same value on every node
http.cors.enabled: true
http.cors.allow-origin: "*"
# Node 3: elasticsearch-7.3.0_node3/config/elasticsearch.yml
cluster.name: my-application
node.name: node-3
node.master: false
node.data: true
network.host: 192.168.0.160
http.port: 9202
transport.port: 9302
discovery.seed_hosts: ["192.168.0.160:9300", "192.168.0.160:9301","192.168.0.160:9302"]
cluster.initial_master_nodes: ["node-1"]
http.cors.enabled: true
http.cors.allow-origin: "*"
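Before starting, it can help to confirm that none of the chosen ports are already in use (a quick check using ss, which is available on CentOS 7; no output means the ports are free):
ss -lnt | grep -E ':(920[0-2]|930[0-2])'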
If startup fails with an error that no master node can be found, start the master node (node-1) first; once it has formed the cluster, start the other nodes and watch their logs to confirm they join.
To start the nodes individually, first switch to the elastic user (Elasticsearch refuses to run as root), then run /usr/local/elasticsearch-7.3.0_node1/bin/elasticsearch, and likewise for node2 and node3.
Start each node individually first to check its status; once the configuration is confirmed correct, use the scripts below.
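For example, start node-1 in the foreground as the elastic user, and in a second terminal watch another node's log to see it join the cluster (the main log file is named after cluster.name):
su - elastic
/usr/local/elasticsearch-7.3.0_node1/bin/elasticsearch
tail -f /usr/local/elasticsearch-7.3.0_node2/logs/my-application.log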
Start script
#!/bin/bash
/usr/bin/su - elastic -c '/usr/local/elasticsearch-7.3.0_node1/bin/elasticsearch -p /tmp/elasticsearch_9200_pid -d'
/usr/bin/su - elastic -c '/usr/local/elasticsearch-7.3.0_node2/bin/elasticsearch -p /tmp/elasticsearch_9201_pid -d'
/usr/bin/su - elastic -c '/usr/local/elasticsearch-7.3.0_node3/bin/elasticsearch -p /tmp/elasticsearch_9202_pid -d'
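Assuming the start script is saved as /usr/local/start_es_cluster.sh (the file name is arbitrary), run it as root, since it uses su to drop to the elastic user:
chmod +x /usr/local/start_es_cluster.sh
/usr/local/start_es_cluster.sh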
Stop script
#!/bin/bash
# Shut down all three nodes cleanly using the PID files written by the -p option above
kill `cat /tmp/elasticsearch_9200_pid /tmp/elasticsearch_9201_pid /tmp/elasticsearch_9202_pid`
Testing
Check the cluster's node roles and master election
http://192.168.0.160:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.0.160 35 97 0 0.04 0.23 0.22 di - node-3
192.168.0.160 15 97 0 0.04 0.23 0.22 dim * node-1
192.168.0.160 31 97 0 0.04 0.23 0.22 di - node-2
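The same information can be fetched from the shell with curl; in the node.role column, d, i and m stand for data, ingest and master, and the * in the master column marks the elected master (node-1 here):
curl -s 'http://192.168.0.160:9200/_cat/nodes?v'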
Check cluster health
http://192.168.0.160:9200/_cluster/health?pretty
{
  "cluster_name": "my-application",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 3,
  "number_of_data_nodes": 3,
  "active_primary_shards": 0,
  "active_shards": 0,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100.0
}
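The health endpoint can likewise be queried with curl; a green status together with number_of_data_nodes 3 confirms all three nodes have joined:
curl -s 'http://192.168.0.160:9200/_cluster/health?pretty'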
Other deployment approaches
Another approach is to keep a single installation and start each node with a different configuration directory; it is more cumbersome, so it is not demonstrated here.