Installing an elasticsearch + kibana cluster on CentOS 7.4
Host environment

Configuration:

| Item | Value |
|---|---|
| Number of nodes | 4 |
| Operating system | CentOS Linux release 7.4.1708 (Core) |
| Memory | 16 GB |
Software environment

| Software | Version | Download |
|---|---|---|
| jdk | jdk-8u172-linux-x64 | download link |
| elasticsearch | elasticsearch-6.3.1 | download link |
| kibana | kibana-6.3.1-linux-x86_64 | download link |
Host planning

The roles of the 4 nodes are planned as follows:

| Hostname | pycdhnode1 | pycdhnode2 | pycdhnode3 | pycdhnode4 |
|---|---|---|---|---|
| IP | 192.168.0.158 | 192.168.0.159 | 192.168.0.160 | 192.168.0.161 |
| master node | yes | yes | yes | yes |
| data node | yes | yes | yes | yes |
| kibana | yes | no | no | no |

Note: in real production deployments it is still recommended to separate master-eligible nodes from data nodes (a hedged sketch of such a split follows).
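As a rough illustration of such a split (a sketch only, not part of this cluster's configuration; hostnames and node counts would be your own), the dedicated roles are expressed in elasticsearch.yml like this:

# on a dedicated master-eligible node
node.master: true
node.data: false

# on a dedicated data node
node.master: false
node.data: true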
Pre-installation preparation

- Disable SELinux on all nodes

sed -i 's/^SELINUX=.*$/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

- Disable the firewall (firewalld or iptables) on all nodes

systemctl disable firewalld; systemctl stop firewalld
systemctl disable iptables; systemctl stop iptables

- Enable time synchronization on all nodes with ntpdate

echo "*/5 * * * * /usr/sbin/ntpdate asia.pool.ntp.org | logger -t NTP" >> /var/spool/cron/root
- Set the locale and time zone on all nodes

echo 'export TZ=Asia/Shanghai' >> /etc/profile
echo 'export LANG=en_US.UTF-8' >> /etc/profile
. /etc/profile

- Add the elasticsearch user on all nodes

useradd -m elasticsearch
echo 'elasticsearch' | passwd --stdin elasticsearch

Change its home directory:

mkdir -p /application
mv /home/elasticsearch /application
chown -R elasticsearch. /application/elasticsearch

Then edit /etc/passwd with vi and change the elasticsearch user's home directory to:

elasticsearch:x:1001:1001::/application/elasticsearch:/bin/bash

Set PS1:

su - elasticsearch
echo 'export PS1="\u@\h:\$PWD>"' >> ~/.bash_profile
echo "alias mv='mv -i'" >> ~/.bash_profile
echo "alias rm='rm -i'" >> ~/.bash_profile
. ~/.bash_profile
- Set up passwordless SSH login between the elasticsearch users. First generate a key pair on pycdhnode1:

su - elasticsearch
ssh-keygen -t rsa   # press Enter at every prompt to generate the elasticsearch user's key pair
cd .ssh
vi id_rsa.pub       # remove the trailing hostname comment elasticsearch@pycdhnode1 from the public key
cat id_rsa.pub > authorized_keys
chmod 600 authorized_keys

Archive the .ssh directory:

su - elasticsearch
zip -r ssh.zip .ssh

Then distribute ssh.zip to the elasticsearch user's home directory on pycdhnode2-4 and unzip it there; passwordless login is then in place (one possible way to do this is sketched below).
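A minimal sketch of that distribution step, assuming zip/unzip are installed on every node (you will still be prompted for the elasticsearch password here, since key-based login is not active yet):

# run on pycdhnode1 as the elasticsearch user
for host in pycdhnode2 pycdhnode3 pycdhnode4; do
  scp ~/ssh.zip ${host}:~/                                   # copy the key archive to the remote home directory
  ssh ${host} 'unzip -o ssh.zip && chmod 700 .ssh && chmod 600 .ssh/authorized_keys && rm -f ssh.zip'
done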
- Tune kernel parameters and limits such as the maximum number of open files and processes. The optimal values differ from host to host, so no single recipe is given here, but if the elasticsearch environment goes to production these parameters must be tuned: the Linux defaults may prevent elasticsearch from starting at all or leave the cluster performing poorly. (A hedged example follows; the settings that are strictly required are covered in the "Tune the host parameters" step further below.)
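Purely as an illustration (the values below are assumptions, not a tuning recommendation for your hardware), a production host is typically at least told not to swap and given a higher mmap limit:

# example sysctl tuning for an elasticsearch host -- adjust to your environment
cat >> /etc/sysctl.conf <<'EOF'
vm.swappiness = 1          # discourage the kernel from swapping out the JVM heap
vm.max_map_count = 655360  # elasticsearch needs a large mmap count (also set in the tuning step below)
EOF
sysctl -p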
Note: the operations above must be performed as the root user. At this point the operating system environment is ready. The actual installation starts below; unless stated otherwise, the remaining steps are performed as the elasticsearch user.
Install JDK 1.8

All nodes need the JDK and the procedure is identical on each. Unpack jdk-8u172-linux-x64.tar.gz:

tar zxvf jdk-8u172-linux-x64.tar.gz
mkdir -p /application/elasticsearch/app
mv jdk1.8.0_172 /application/elasticsearch/app/jdk   # the tarball unpacks to a jdk1.8.0_172 directory
rm -f jdk-8u172-linux-x64.tar.gz

Configure the environment variables with vi ~/.bash_profile and add the following:

#java
export JAVA_HOME=/application/elasticsearch/app/jdk
export CLASSPATH=.:$JAVA_HOME/lib:$CLASSPATH
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

Load the environment variables:

. ~/.bash_profile

Check the installation with java -version:

java version "1.8.0_172"
Java(TM) SE Runtime Environment (build 1.8.0_172-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)

If you see output like the above, the installation succeeded.
Install elasticsearch

Install on pycdhnode1 first. Unpack elasticsearch-6.3.1.tar.gz:

tar zxvf elasticsearch-6.3.1.tar.gz
mv elasticsearch-6.3.1 /application/elasticsearch/app/elasticsearch
rm -f elasticsearch-6.3.1.tar.gz

Set the environment variables with vi ~/.bash_profile and add the following:

#elasticsearch
export ELASTICSEARCH_HOME=/application/elasticsearch/app/elasticsearch
export PATH=$PATH:$ELASTICSEARCH_HOME/bin

Load the environment variables:

. ~/.bash_profile

Edit the configuration file with vi /application/elasticsearch/app/elasticsearch/config/elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: py_es_6.3
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: pyesnode-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

# Name of the cluster
cluster.name: pyes6.3
# Node name; the other 3 nodes use pyesnode-2, pyesnode-3 and pyesnode-4 respectively
node.name: pyesnode-1
# Whether this node is eligible to be elected master. Defaults to true. By default es makes the
# first machine in the cluster the master; if that machine dies, a new master is elected.
node.master: true
# Allow this node to store data (enabled by default)
node.data: true
# In real production deployments the master and data roles can be separated
# Storage path for index data; separate multiple directories with ,
path.data: /application/elasticsearch/data/esdata
# Storage path for log files
path.logs: /application/elasticsearch/app/elasticsearch/logs
# Set to true to lock the memory. Swapping to disk is fatal for server performance: once the JVM
# starts swapping, es slows down badly, so make sure it never swaps.
#bootstrap.memory_lock: true
bootstrap.memory_lock: false  # these servers have little memory, so swap is allowed here
# Bind IP address
network.host: 0.0.0.0
# HTTP port for external services, default 9200
http.port: 9200
# TCP port for inter-node communication, default 9300
transport.tcp.port: 9300
# Elasticsearch binds to the available loopback addresses and scans ports 9300 to 9305 to try to
# connect to other nodes running on the same server. This gives an automatic clustering experience
# without any configuration. The setting takes an array or a comma-separated list; each value
# should be of the form host:port or host (if no port is given it falls back to
# transport.profiles.default.port and then transport.tcp.port). Note that IPv6 hosts must be
# bracketed. Defaults to 127.0.0.1, [::1].
discovery.zen.ping.unicast.hosts: ["pycdhnode1:9300", "pycdhnode2:9300", "pycdhnode3:9300", "pycdhnode4:9300"]
# Without this setting, a cluster hit by a network failure can split into two independent clusters
# (a split brain), which leads to data loss. Usually set to (N/2)+1.
discovery.zen.minimum_master_nodes: 3
# To let newly joined nodes locate the master quickly, discovery on data nodes can be switched from
# the default multicast to unicast (optional):
#discovery.zen.ping.multicast.enabled: false
#discovery.zen.ping.unicast.hosts: ["pycdhnode1", "pycdhnode2", "pycdhnode3", "pycdhnode4"]
- The node.name setting must be different on every node.
Set the node heap size with vi /application/elasticsearch/app/elasticsearch/config/jvm.options:

-Xms3g
-Xmx3g

- The minimum and maximum must be set to the same value.
- Because of how JVM garbage collection works (compressed object pointers stop being usable above roughly 32 GB), performance drops once the heap exceeds 32 GB, so at most 31 GB per node is recommended.
- In elasticsearch 2.x the heap was configured in $ELASTICSEARCH_HOME/bin/elasticsearch.in.sh via ES_MIN_MEM=3g and ES_MAX_MEM=3g.
Create the required data directory:

mkdir -p /application/elasticsearch/data/esdata

Copy elasticsearch to pycdhnode2-4:

scp ~/.bash_profile pycdhnode2:/application/elasticsearch
scp ~/.bash_profile pycdhnode3:/application/elasticsearch
scp ~/.bash_profile pycdhnode4:/application/elasticsearch
scp -pr /application/elasticsearch/app/elasticsearch pycdhnode2:/application/elasticsearch/app
scp -pr /application/elasticsearch/app/elasticsearch pycdhnode3:/application/elasticsearch/app
scp -pr /application/elasticsearch/app/elasticsearch pycdhnode4:/application/elasticsearch/app
ssh pycdhnode2 "mkdir -p /application/elasticsearch/data/esdata"
ssh pycdhnode3 "mkdir -p /application/elasticsearch/data/esdata"
ssh pycdhnode4 "mkdir -p /application/elasticsearch/data/esdata"

- On pycdhnode1-4, adjust node.name in /application/elasticsearch/app/elasticsearch/config/elasticsearch.yml: pycdhnode1 uses pyesnode-1, pycdhnode2 uses pyesnode-2, pycdhnode3 uses pyesnode-3 and pycdhnode4 uses pyesnode-4 (one way to script this is sketched below).
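As a sketch of making that change remotely from pycdhnode1 (assuming passwordless SSH is already working and every copied config still contains node.name: pyesnode-1, as set above):

# fix node.name on the other three nodes without logging in to each one
for i in 2 3 4; do
  ssh pycdhnode$i "sed -i 's/^node.name: .*/node.name: pyesnode-$i/' /application/elasticsearch/app/elasticsearch/config/elasticsearch.yml"
done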
Tune the host parameters on all hosts (as root), otherwise elasticsearch will not start. Edit /etc/sysctl.conf:

vm.max_map_count=655360

Apply it:

sysctl -p

Edit /etc/security/limits.conf and add:

* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536

Edit /etc/security/limits.d/20-nproc.conf and add:

* soft nproc 65536
root soft nproc unlimited

Log in again and run ulimit -a to confirm the new limits are in effect:

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63488
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Start elasticsearch (on all 4 nodes):

/application/elasticsearch/app/elasticsearch/bin/elasticsearch -d

- -d starts it in the background as a daemon
- If startup fails, check the log at /application/elasticsearch/app/elasticsearch/logs/pyes6.3.log (a quick sanity check is sketched below)
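A minimal way to confirm a node came up (a sketch; run against any of this cluster's hostnames):

curl -s http://pycdhnode1:9200                                             # should return a JSON blob with the node and cluster name
tail -n 50 /application/elasticsearch/app/elasticsearch/logs/pyes6.3.log   # look for "started" and any ERROR lines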
Check the process with jps; the process listed as Elasticsearch is elasticsearch.

To stop elasticsearch, kill its pid.
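For example, the pid can be taken straight from the jps output (a sketch; plain kill sends SIGTERM, which lets the node shut down cleanly):

kill $(jps | awk '/Elasticsearch/ {print $1}')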
Check the cluster status:

$ curl pycdhnode1:9200/_cat/health?v
epoch      timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1531123674 16:07:54  pyes6.3 green           4         4      0   0    0    0        0             0                  -                100.0%

- An es cluster has 3 possible states: green, yellow, red
- Here the cluster has 4 nodes and its status is green, i.e. healthy (a per-node check is sketched below)
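If you also want to see the individual nodes and which one is the elected master, the _cat/nodes API gives that view (same hostname assumption as above):

curl pycdhnode1:9200/_cat/nodes?v   # the master column marks the elected master with *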
Installing the head plugin

ElasticSearch-head is an HTML5 tool for operating and managing an ElasticSearch cluster that makes cluster operations trivially easy.

- Displays the cluster topology and can perform index- and node-level operations
- Its search interface can query the cluster and show results as raw JSON or in tabular form
- Gives quick access to the cluster's status
- Offers an input window for arbitrary RESTful API calls, with several options that can be combined to produce interesting results
- Before version 5.0 it could be installed as an elasticsearch plugin and run straight from the unpacked archive; since 5.0 it requires nodejs and has to be started as a standalone service, which is less convenient. Alternatively you can simply install the Chrome extension elasticsearch-head-chrome.

First, on every node of the es cluster add the following to /application/elasticsearch/app/elasticsearch/config/elasticsearch.yml:

http.cors.enabled: true
http.cors.allow-origin: "*"
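Note that elasticsearch only reads elasticsearch.yml at startup, so each node has to be restarted for the CORS settings to take effect; a sketch, reusing the start/stop commands above:

kill $(jps | awk '/Elasticsearch/ {print $1}')                        # stop the running node
/application/elasticsearch/app/elasticsearch/bin/elasticsearch -d    # start it again with the new settings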
Install it on pycdhnode1; it can optionally be installed on the other hosts as well, using the same procedure.

Install NodeJS:

wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.5.0-linux-x64.tar.gz
tar zxvf node-v4.5.0-linux-x64.tar.gz
mv node-v4.5.0-linux-x64 app/node
rm -f node-v4.5.0-linux-x64.tar.gz

Add the environment variables with vi ~/.bash_profile:

#node
export NODE_HOME=/application/elasticsearch/app/node
export PATH=$PATH:$NODE_HOME/bin
export NODE_PATH=$NODE_HOME/lib/node_modules

Load the environment variables:

. ~/.bash_profile

Install npm packages and grunt:

npm install -g cnpm --registry=https://registry.npm.taobao.org
npm install -g grunt
npm install -g grunt-cli --registry=https://registry.npm.taobao.org --no-proxy

Download and install the head plugin:

wget https://github.com/mobz/elasticsearch-head/archive/master.zip
unzip master.zip
mv elasticsearch-head-master app
Edit the configuration with vi /application/elasticsearch/app/elasticsearch-head-master/Gruntfile.js and change the following:

connect: {
    server: {
        options: {
            hostname: '0.0.0.0',
            port: 9100,
            base: '.',
            keepalive: true
        }
    }
}

- This change is optional; head listens on port 9100 by default.

Continue with vi /application/elasticsearch/app/elasticsearch-head-master/_site/app.js and change the following:

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://pycdhnode1:9200";

- With this change head connects to http://pycdhnode1:9200 by default; any host in the cluster can be used here.
Download and install the dependencies:

cd /application/elasticsearch/app/elasticsearch-head-master
npm install

- This must be run inside the head plugin directory.

Start the head plugin. Method 1: with npm

cd /application/elasticsearch/app/elasticsearch-head-master
npm run start

Method 2: directly with grunt

cd /application/elasticsearch/app/elasticsearch-head-master
grunt server

- This must be run inside the head plugin directory.
- The npm start method essentially just calls grunt under the hood.
- Neither method runs in the background; if you need head to keep running, use nohup (see the sketch below).
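A possible nohup invocation (a sketch; the log file name is an arbitrary choice):

cd /application/elasticsearch/app/elasticsearch-head-master
nohup grunt server > head.log 2>&1 &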
Access head: open http://pycdhnode1:9100 in a browser (9100 is the port configured in Gruntfile.js above).
Stop head: first find the pid with ps aux|grep grunt, then kill pid.
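An equivalent one-liner, assuming head was started with "grunt server" as above:

pkill -f "grunt server"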
Installing the ElasticHQ management tool

ElasticHQ is an open-source management and monitoring tool for ElasticSearch with a polished, intuitive and powerful interface. It offers real-time monitoring, full cluster management, search and querying, with no extra software to install. The latest version supports ElasticSearch 2.x, 5.x and 6.x. Features:

1. Real-time monitoring of ES clusters and nodes
2. Management of indices, shards, mappings, aliases and nodes
3. A query UI that can query multiple indices
4. A REST UI, with no need for cURL or hand-written JSON
5. 100% browser-based, nothing to download
6. Free

ElasticHQ is written in Python, and the latest version needs Python 3.4 or newer. Installing and starting the application itself is simple, but getting a Python 3.4+ environment in place is tedious, so here we use the official docker image instead, which is quick and convenient.

First pull the latest official image:

docker pull elastichq/elasticsearch-hq

Start the container:

docker run -d -p 9999:5000 --name es elastichq/elasticsearch-hq

Access it: open http://<docker-host>:9999 in a browser (9999 is the host port mapped above).

- On the home page, enter the address of any node of the es cluster in the input box and confirm.

For more details see: https://github.com/ElasticHQ/elasticsearch-HQ
Installing kibana

Kibana is an open-source analytics and visualization platform designed to work with Elasticsearch. Kibana lets you search, view and interact with data stored in Elasticsearch indices. Developers and operators can easily perform advanced data analysis and visualize the data in a variety of charts, tables and maps.

kibana itself only supports a single-instance installation; to avoid a single point of failure it must be combined with a load balancer such as lvs, haproxy or nginx. Here we install it only on pycdhnode1; it can optionally be installed on the other hosts as well, using the same procedure.

Install kibana:

tar -zxvf kibana-6.3.1-linux-x86_64.tar.gz
mv kibana-6.3.1-linux-x86_64 app/kibana
rm -f kibana-6.3.1-linux-x86_64.tar.gz

Add the environment variables with vi ~/.bash_profile:

#kibana
export KIBANA_HOME=/application/elasticsearch/app/kibana
export PATH=$PATH:$KIBANA_HOME/bin

Load the environment variables:

. ~/.bash_profile

Edit the configuration file with vi /application/elasticsearch/app/kibana/config/kibana.yml:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601  # listening port

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"  # listening address

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
server.name: "pycdhnode1"

# The URL of the Elasticsearch instance to use for all your queries.
# es connection address; only one node can be configured here. For high availability the es
# cluster needs to sit behind lvs/haproxy load balancing.
elasticsearch.url: "http://pycdhnode1:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"

# Disable xpack security; this cluster has no xpack configured, so it must be off or kibana
# cannot connect to the es cluster.
xpack.security.enabled: false
Start kibana. Method 1: in the foreground

kibana

- It exits when the session ends or on ctrl + c.

Method 2: in the background with nohup

cd /application/elasticsearch/app/kibana
mkdir logs
nohup kibana > logs/server.log 2>&1 &
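Once it is up, a simple check (a sketch; pycdhnode1:5601 matches the server.host/server.port configured above) is Kibana's status endpoint:

curl -s http://pycdhnode1:5601/api/status   # returns JSON including the overall state ("green" when healthy)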
Access kibana: open http://pycdhnode1:5601 in a browser (5601 is the server.port configured above).
Stop kibana: first find the pid with ps aux|grep kibana, then kill pid.
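An equivalent one-liner, assuming kibana was started from $KIBANA_HOME as above (make sure the pattern does not match anything else running on the host):

pkill -f "app/kibana"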
For more on using kibana, see the official documentation: https://www.elastic.co/guide/en/kibana/6.3/index.html