Elasticsearch Access Control (Auth + Transport SSL)


Reposted from: https://knner.wang/2019/11/26/install-elasticsearch-cluster-7-4.html

 

In recent versions of the Elastic Stack, the free Basic tier already includes the core security features and can be used in production, so an Nginx + Basic Auth proxy in front of the cluster is no longer needed.

 

Security is disabled by default in Elasticsearch. In this article we use the Basic tier (with the automatically applied Basic license), then enable Auth and SSL-encrypted communication between the nodes.

 

Download:

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.4.2-linux-x86_64.tar.gz
$ tar xf elasticsearch-7.4.2-linux-x86_64.tar.gz

Single-node test run:

$  cd elasticsearch-7.4.2
$ ./bin/elasticsearch

 

By default this starts in development mode: prerequisites that are not met only produce warnings, the node still starts, and it listens only on 127.0.0.1:9200, so it is suitable for testing only. As soon as you change the network.host setting in elasticsearch.yml, the node starts in production mode.

We will run in production mode, which means all of the bootstrap prerequisites must be satisfied, otherwise the node will not start.

 

Directory layout:

  • home: Elasticsearch home directory, $ES_HOME. Default: the directory created by unpacking the archive. Setting: ES_HOME
  • bin: binary scripts, including elasticsearch to start a node and elasticsearch-plugin to install plugins. Default: $ES_HOME/bin
  • conf: configuration files, including elasticsearch.yml. Default: $ES_HOME/config. Setting: ES_PATH_CONF
  • data: the data files of each index/shard allocated on the node. Can hold multiple locations. Default: $ES_HOME/data. Setting: path.data
  • logs: log files. Default: $ES_HOME/logs. Setting: path.logs
  • plugins: plugin files; each plugin is contained in a subdirectory. Default: $ES_HOME/plugins
  • repo: shared file system repository locations. Can hold multiple locations. A file system repository can be placed into any subdirectory of any directory specified here. Default: not configured. Setting: path.repo
  • script: location of script files. Default: $ES_HOME/scripts. Setting: path.scripts

 

System settings:

ulimits

Edit /etc/security/limits.conf. I run ES with the default ec2-user account, so the entries below use ec2-user; substitute your own user, or use an asterisk to cover all users.

# - nofile - max number of open file descriptors
# - memlock - max locked-in-memory address space (KB)
# - nproc - max number of processes
$ vim /etc/security/limits.conf
ec2-user  -  nofile  65535
ec2-user  -  memlock  unlimited
ec2-user  -  nproc  4096

# then log out and log back in for the new limits to take effect

  

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63465
max locked memory       (kbytes, -l) unlimited ## now in effect
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535 ## now in effect
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096 ## now in effect
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

 

Disable swap

Disable swap immediately:

$ sudo swapoff -a

This is only temporary; swap will come back after a reboot. Edit the following file and remove (or comment out) the swap mount entry:

$ sudo vim /etc/fstab
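One way to do this non-interactively (a sketch that assumes a standard fstab layout; review the file before and after running it) is to comment out any swap lines:

# comment out every swap entry so it is not mounted at boot
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab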

  

Configure swappiness and virtual memory

This reduces the kernel's tendency to swap and should not cause swapping under normal conditions, while still allowing the system as a whole to swap under emergency memory pressure.

# add the following two lines
$ sudo vim /etc/sysctl.conf
vm.swappiness=1
vm.max_map_count=262144

# apply the settings
$ sudo sysctl -p

Enable memory locking in ES:

Add the following line to the ES configuration file config/elasticsearch.yml:

bootstrap.memory_lock: true
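Once a node is running, you can confirm that the lock took effect: mlockall should report true (run this before auth is enabled, or add -u elastic afterwards):

$ curl 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'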

 

Elasticsearch basic concepts

Cluster

An Elasticsearch cluster, made up of one or more Elasticsearch nodes.

Node

An Elasticsearch node can be thought of as one Elasticsearch service process; starting two Elasticsearch instances (processes) on the same machine gives you two nodes.

Index

An index is a collection of documents with the same structure, similar to a database instance in a relational database (after types were deprecated in 6.0.0, an index is conceptually closer to a single database table). A cluster can hold multiple indices.

Type

A logical subdivision within an index; deprecated in recent versions of Elasticsearch.

Document

A document is the smallest unit of data stored in Elasticsearch, in JSON format; many documents with the same structure make up an index. A document is similar to a row in a relational database table.

Shard

A single index is split into multiple shards that are stored across multiple nodes. Shards allow horizontal scaling to store more data, and because they are spread across nodes they also improve overall cluster throughput and performance. The number of shards is specified when the index is created and cannot be changed afterwards.

Replica

An index replica is a full copy of a shard. A shard can have one or more replicas; a replica holds the same data as its shard and provides redundancy.

A replica serves three purposes:

  • when a shard fails or its node goes down, one of its replicas can be promoted to primary
  • replicas prevent data loss and provide high availability
  • replicas can serve search requests, increasing cluster throughput and performance

A shard's full name is primary shard and a replica's full name is replica shard. The number of primary shards is fixed when the index is created and cannot be changed later; the number of replica shards can be changed at any time. Before 7.0 an index defaulted to 5 primary shards and 1 replica, i.e. 5 primary plus 5 replica shards for 10 shards in total; since 7.0 the default is 1 primary shard and 1 replica. Because a replica is never placed on the same node as its primary, the minimal highly available setup is 2 servers.
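For example, once the cluster is up you could create an index with 3 primary shards and 1 replica and index a document into it (the index name and field below are made up for illustration; add -u elastic once authentication is enabled later):

$ curl -X PUT "localhost:9200/my-index?pretty" -H 'Content-Type: application/json' -d'
{
  "settings": { "number_of_shards": 3, "number_of_replicas": 1 }
}
'
$ curl -X POST "localhost:9200/my-index/_doc?pretty" -H 'Content-Type: application/json' -d'
{
  "message": "hello elasticsearch"
}
'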

Elasticsearch node types

See the official documentation.

Nodes in an ES cluster can have the following roles:

  • Master-eligible: a node with node.master: true, eligible to be elected as the master node that controls the cluster. The master node is responsible for lightweight cluster-wide actions such as creating or deleting an index, tracking which nodes are part of the cluster, and deciding which shards to allocate to which nodes.

  • Data: a node with node.data: true. Data nodes hold the data and perform data-related operations such as CRUD, search, and aggregations.

  • Ingest: a node with node.ingest: true, able to apply a pipeline to documents in order to transform and enrich them before indexing.

  • Machine learning: a node with xpack.ml.enabled and node.ml set to true. Only applicable to the x-pack distribution; the OSS distribution does not support these settings and will fail to start if they are set.

  • Coordinating node: requests such as search requests or bulk-indexing requests may involve data held on different data nodes. A search request, for example, is executed in two phases, coordinated by the node that receives the client request (the coordinating node).

    In the scatter phase, the coordinating node forwards the request to the data nodes that hold the data. Each data node executes the request locally and returns its results to the coordinating node. In the gather phase, the coordinating node reduces each data node's results into a single global result set.

    Every node is implicitly a coordinating node. This means that a node with node.master, node.data, and node.ingest all set to false acts only as a coordinating node, and this role cannot be disabled. Consequently, such a node needs enough memory and CPU to handle the gather phase.


Defaults:

  • node.master: true
  • node.voting_only: false
  • node.data: true
  • node.ml: true
  • xpack.ml.enabled: true
  • cluster.remote.connect: false

Master-eligible node

The master node is responsible for lightweight cluster-wide actions such as creating or deleting an index, tracking which nodes are part of the cluster, and deciding which shards to allocate to which nodes. A stable master node is very important for cluster health.

Any master-eligible node that is not a voting-only node can be elected master through the master election process.

Indexing and searching data is CPU-, memory-, and I/O-intensive work and can put pressure on a node's resources. To make sure your master nodes are stable and not under pressure, it is a good idea in larger clusters to separate dedicated master-eligible nodes from dedicated data nodes.

Although a master node can also act as a coordinating node and route search and indexing requests from clients to data nodes, it is better not to use dedicated master nodes for this purpose. It is important for cluster stability that master-eligible nodes do as little work as possible.

To configure a dedicated master-eligible node:

node.master: true 
node.voting_only: false 
node.data: false 
node.ingest: false 
node.ml: false 
xpack.ml.enabled: true 
cluster.remote.connect: false

For the OSS distribution:

node.master: true 
node.data: false 
node.ingest: false 
cluster.remote.connect: false

Voting-only node

A voting-only node participates in master elections but cannot become master itself; it acts as a tiebreaker in elections.

To configure a voting-only node:

node.master: true 
node.voting_only: true 
node.data: false 
node.ingest: false 
node.ml: false 
xpack.ml.enabled: true 
cluster.remote.connect: false

 

Notes:

  • The OSS distribution does not support this setting; if it is set, the node will not start.

  • Only master-eligible nodes can be marked as voting-only.

A high-availability (HA) cluster requires at least three master-eligible nodes, at least two of which are not voting-only; the third can be made a voting-only node. Such a cluster can still elect a master even if one of these nodes fails.

 

Data node

Data nodes hold the shards that contain the documents you have indexed. Data nodes handle data-related operations such as CRUD, search, and aggregations. These operations are I/O-, memory-, and CPU-intensive, so it is important to monitor these resources and add more data nodes when they become overloaded.

The main benefit of having dedicated data nodes is the separation of the master and data roles.

To create a dedicated data node in the default distribution, set:

node.master: false 
node.voting_only: false 
node.data: true 
node.ingest: false 
node.ml: false 
cluster.remote.connect: false

 

Ingest node

Ingest nodes can execute pre-processing pipelines composed of one or more ingest processors. Depending on the type of operations performed by the ingest processors and the resources required, it may make sense to have dedicated ingest nodes that only perform this specific task.

To create a dedicated ingest node in the default distribution, set:

node.master: false 
node.voting_only: false 
node.data: false 
node.ingest: true 
node.ml: false 
cluster.remote.connect: false

For the OSS distribution:

node.master: false 
node.data: false 
node.ingest: true 
cluster.remote.connect: false

 

Coordinating-only node

If you take away the ability to handle master duties, hold data, and pre-process documents, you are left with a coordinating-only node that can only route requests, handle the search reduce phase, and distribute bulk indexing. Essentially, coordinating-only nodes behave as smart load balancers.

Coordinating-only nodes can benefit large clusters by offloading the coordinating role from data nodes and master-eligible nodes. They join the cluster and receive the full cluster state like every other node, and they use the cluster state to route requests directly to the appropriate place.

Adding too many coordinating-only nodes to a cluster increases the burden on the whole cluster, because the elected master has to wait for cluster-state update acknowledgements from every node! The benefit of coordinating-only nodes should not be overstated: data nodes can happily serve the same purpose.

To configure a coordinating-only node:

node.master: false 
node.voting_only: false 
node.data: false 
node.ingest: false 
node.ml: false 
cluster.remote.connect: false

For the OSS distribution:

node.master: false 
node.data: false 
node.ingest: false 
cluster.remote.connect: false

 

Machine learning node

The machine learning features provide machine learning nodes, which run jobs and handle machine learning API requests. If xpack.ml.enabled is set to true and node.ml is set to false, the node can handle API requests but cannot run jobs.

If you want to use machine learning features in your cluster, you must enable machine learning (set xpack.ml.enabled to true) on all master-eligible nodes. Do not use these settings if you only have the OSS distribution.

For more information, see the machine learning settings documentation.

To create a dedicated machine learning node in the default distribution, set:

node.master: false 
node.voting_only: false 
node.data: false 
node.ingest: false 
node.ml: true 
xpack.ml.enabled: true 
cluster.remote.connect: false

 

Configuring Elasticsearch

Make three copies of the ES directory:

$ ls
elasticsearch-7.4.2
$ mv elasticsearch-7.4.2{,-01}
$ ls
elasticsearch-7.4.2-01
$ cp -a elasticsearch-7.4.2-01 elasticsearch-7.4.2-02
$ cp -a elasticsearch-7.4.2-01 elasticsearch-7.4.2-03
$ ln -s elasticsearch-7.4.2-01 es01
$ ln -s elasticsearch-7.4.2-02 es02
$ ln -s elasticsearch-7.4.2-03 es03
$ ll
total 0
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-01
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-02
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-03
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es01 -> elasticsearch-7.4.2-01
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es02 -> elasticsearch-7.4.2-02
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es03 -> elasticsearch-7.4.2-03

 

Configure name resolution for Elasticsearch

I simply use the hosts file here:

cat >> /etc/hosts <<EOF
172.17.0.87 es01 es02 es03
EOF

 

Edit the ES configuration file config/elasticsearch.yml

The default configuration file is $ES_HOME/config/elasticsearch.yml. It is in YAML format, and settings can be written in three styles:

path:
    data: /var/lib/elasticsearch
    logs: /var/log/elasticsearch

Or in flattened, single-line form:
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
Or through environment variable substitution, which is very useful in Docker and Kubernetes environments:
node.name:    ${HOSTNAME}
network.host: ${ES_NETWORK_HOST}

 

Elasticsearch configuration in detail

ES paths: path.data and path.logs

If not configured, these default to the data and logs subdirectories of $ES_HOME:

path:
  logs: /var/log/elasticsearch
  data: /var/data/elasticsearch

path.data can be set to multiple directories:

path:
  logs: /data/ES01/logs
  data:
    - /data/ES01-A
    - /data/ES01-B
    - /data/ES01-C
Cluster name: cluster.name

A node can join only one cluster. Nodes configured with the same cluster.name form a cluster; make sure different clusters use different cluster.name values.

cluster.name: logging-prod
Node name: node.name

node.name is the human-readable name used to tell nodes apart; if not set, it defaults to the hostname.

node.name: prod-data-002
Node listen address: network.host

If not configured, the node listens on 127.0.0.1 and [::1] and starts in development mode.

# listen on a specific IP
network.host: 192.168.1.10

# listen on all IPs
network.host: 0.0.0.0

 

Special values accepted by network.host (an example follows the list):

  • _[networkInterface]_ : addresses of a network interface, for example _eth0_
  • _local_ : any loopback address on the system, for example 127.0.0.1
  • _site_ : any site-local (private) address on the system, for example 192.168.0.1
  • _global_ : any globally-scoped (public) address on the system, for example 8.8.8.8
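For example, to bind to whatever site-local address the machine happens to have (a small sketch; pick the value that matches your network):

network.host: _site_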

 

 

Node discovery and cluster formation settings

There are two main settings here, discovery and cluster formation, which let the nodes discover each other and elect a master so that they can form a cluster.

discovery.seed_hosts

If not configured, ES listens on the loopback address at startup and scans local ports 9300-9305 to discover other nodes started on the same machine.

So with no configuration at all, you can copy $ES_HOME three times, start all the copies, and they will form a cluster by default, which is fine for testing. If you want to start ES nodes on several machines and have them form a cluster, this setting must be configured so the nodes can find each other.

discovery.seed_hosts is a comma-separated list whose elements can be written as (see the example snippet after this list):

  • host:port, using a custom transport port for inter-node communication
  • host, using the default transport port range 9300-9400; see the reference
  • a domain name that resolves to multiple IPs; every resolved IP will be probed
  • any other custom resolvable name
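A hypothetical example mixing these forms (the hosts and ports are made up for illustration):

discovery.seed_hosts:
  - 192.168.1.10:9300
  - 192.168.1.11
  - seeds.mydomain.com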
cluster.initial_master_nodes

In development mode this is configured automatically among the nodes discovered on a single host, but in production mode it must be configured explicitly.

This setting is only used the very first time a brand-new cluster starts, to provide the list of master-eligible nodes (node.master: true) allowed to take part in the first election. It has no effect when the cluster is restarted or when new nodes are added, because by then every node already holds the cluster state.

cluster.initial_master_nodes is also a comma-separated list whose elements can be written as (see the reference):

  • the configured node.name
  • the full hostname, if node.name is not configured
  • the FQDN
  • host, i.e. the publish address derived from network.host, if node.name is not configured
  • host:port, where port is the transport port, if node.name is not configured
HTTP and transport settings

http and transport

Configuration references: http and transport.

http exposes the Elasticsearch API so that clients can talk to ES; transport is used for communication between the nodes of the cluster.

http settings reference:

Setting - Description
http.port - HTTP port. A bind port range. Defaults to 9200-9300.
http.publish_port - The port that HTTP clients should use when communicating with this node. Useful when a cluster node is behind a proxy or firewall and the http.port is not directly addressable from the outside. Defaults to the actual port assigned via http.port.
http.bind_host - The host address to bind the HTTP service to. Defaults to http.host (if set) or network.bind_host.
http.publish_host - The host address to publish for HTTP clients to connect to. Defaults to http.host (if set) or network.publish_host.
http.host - Used to set the http.bind_host and the http.publish_host.
http.max_content_length - The max content of an HTTP request. Defaults to 100mb.
http.max_initial_line_length - The max length of an HTTP URL. Defaults to 4kb.
http.max_header_size - The max size of allowed headers. Defaults to 8kB.
http.compression - Support for compression when possible (with Accept-Encoding). Defaults to true.
http.compression_level - Defines the compression level to use for HTTP responses. Valid values are in the range of 1 (minimum compression) and 9 (maximum compression). Defaults to 3.
http.cors.enabled - Enable or disable cross-origin resource sharing, i.e. whether a browser on another origin can execute requests against Elasticsearch. Set to true to enable Elasticsearch to process pre-flight CORS requests. Elasticsearch will respond to those requests with the Access-Control-Allow-Origin header if the Origin sent in the request is permitted by the http.cors.allow-origin list. Set to false (the default) to make Elasticsearch ignore the Origin request header, effectively disabling CORS requests because Elasticsearch will never respond with the Access-Control-Allow-Origin response header. Note that if the client does not send a pre-flight request with an Origin header or it does not check the response headers from the server to validate the Access-Control-Allow-Origin response header, then cross-origin security is compromised. If CORS is not enabled on Elasticsearch, the only way for the client to know is to send a pre-flight request and realize the required response headers are missing.
http.cors.allow-origin - Which origins to allow. Defaults to no origins allowed. If you prepend and append a / to the value, this will be treated as a regular expression, allowing you to support HTTP and HTTPS; for example using /https?:\/\/localhost(:[0-9]+)?/ would return the request header appropriately in both cases. * is a valid value but is considered a security risk as your Elasticsearch instance is open to cross-origin requests from anywhere.
http.cors.max-age - Browsers send a "preflight" OPTIONS-request to determine CORS settings. max-age defines how long the result should be cached for. Defaults to 1728000 (20 days).
http.cors.allow-methods - Which methods to allow. Defaults to OPTIONS, HEAD, GET, POST, PUT, DELETE.
http.cors.allow-headers - Which headers to allow. Defaults to X-Requested-With, Content-Type, Content-Length.
http.cors.allow-credentials - Whether the Access-Control-Allow-Credentials header should be returned. Note: this header is only returned when the setting is set to true. Defaults to false.
http.detailed_errors.enabled - Enables or disables the output of detailed error messages and stack traces in response output. Note: when set to false and the error_trace request parameter is specified, an error will be returned; when error_trace is not specified, a simple message will be returned. Defaults to true.
http.pipelining.max_events - The maximum number of events to be queued up in memory before an HTTP connection is closed. Defaults to 10000.
http.max_warning_header_count - The maximum number of warning headers in client HTTP responses. Defaults to unbounded.
http.max_warning_header_size - The maximum total size of warning headers in client HTTP responses. Defaults to unbounded.
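As an illustration of the CORS settings above (a sketch only; nothing in this walkthrough requires it), allowing a browser app served from localhost to query ES directly might look like this in elasticsearch.yml:

http.cors.enabled: true
http.cors.allow-origin: "/https?:\/\/localhost(:[0-9]+)?/"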

transport settings reference:

Setting - Description
transport.port - Transport port. A bind port range. Defaults to 9300-9400.
transport.publish_port - The port that other nodes in the cluster should use when communicating with this node. Useful when a cluster node is behind a proxy or firewall and the transport.port is not directly addressable from the outside. Defaults to the actual port assigned via transport.port.
transport.bind_host - The host address to bind the transport service to. Defaults to transport.host (if set) or network.bind_host.
transport.publish_host - The host address to publish for nodes in the cluster to connect to. Defaults to transport.host (if set) or network.publish_host.
transport.host - Used to set the transport.bind_host and the transport.publish_host.
transport.connect_timeout - The connect timeout for initiating a new connection (in time setting format). Defaults to 30s.
transport.compress - Set to true to enable compression (DEFLATE) between all nodes. Defaults to false.
transport.ping_schedule - Schedule a regular application-level ping message to ensure that transport connections between nodes are kept alive. Defaults to 5s in the transport client and -1 (disabled) elsewhere. It is preferable to correctly configure TCP keep-alives instead of using this feature, because TCP keep-alives apply to all kinds of long-lived connections and not just to transport connections.

 

Configure the JVM for the ES nodes

The default JVM configuration file is $ES_HOME/config/jvm.options.

# set both the minimum and maximum heap size to 1 GB
$ vim jvm.options
-Xms1g
-Xmx1g

Note:

In production, size the heap according to the actual workload; different node roles need different amounts of resources.

It is recommended not to exceed 32 GB; if you have enough memory, 26-30 GB is a reasonable range. See the reference.

The JVM options can also be set through an environment variable:

$ ES_JAVA_OPTS="-Xms1g -Xmx1g" ./bin/elasticsearch

 

Notes:

  • node.attr.xxx: yyy sets arbitrary attributes on a node, such as rack or availability zone; features like hot-warm data tiering and shard allocation awareness are built on top of these attributes (see the sketch after this list).
  • Because my environment only has a single host, the three nodes are separated by port: each gets its own http.port and transport.tcp.port.
  • Discovery here uses custom resolvable names defined in /etc/hosts, which makes it easy to change IP addresses later.
  • All three nodes here are master-eligible (node.master: true) at first startup; in production, choose your master-eligible nodes carefully.
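For instance, a minimal sketch of shard allocation awareness built on the zone attribute used below (assuming you want copies of a shard spread across zones) would add the following to every node's elasticsearch.yml:

node.attr.zone: A   # B or C on the other nodes
cluster.routing.allocation.awareness.attributes: zone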

 

es01

$ cat es01/config/elasticsearch.yml |grep -Ev "^$|^#"
cluster.name: es-cluster01
node.name: es01
node.attr.rack: r1
node.attr.zone: A
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9331
discovery.seed_hosts: ["es02:9332", "es03:9333"]
cluster.initial_master_nodes: ["es01", "es02", "es03"]

es02

$ cat es02/config/elasticsearch.yml |grep -Ev "^$|^#"
cluster.name: es-cluster01
node.name: es02
node.attr.rack: r1
node.attr.zone: B
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9201
transport.tcp.port: 9332
discovery.seed_hosts: ["es01:9331", "es03:9333"]
cluster.initial_master_nodes: ["es01", "es02", "es03"]

es03

$ cat es03/config/elasticsearch.yml |grep -Ev "^$|^#"
cluster.name: es-cluster01
node.name: es03
node.attr.rack: r1
node.attr.zone: C
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9202
transport.tcp.port: 9333
discovery.seed_hosts: ["es02:9332", "es01:9331"]
cluster.initial_master_nodes: ["es01", "es02", "es03"]

 

 

Starting Elasticsearch

First, look at the elasticsearch command help:

$ ./es01/bin/elasticsearch --help
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
starts elasticsearch

Option                Description                                               
------                -----------                                               
-E <KeyValuePair>     Configure a setting                                       
-V, --version         Prints elasticsearch version information and exits        
-d, --daemonize       Starts Elasticsearch in the background     # run in the background
-h, --help            show help                                                 
-p, --pidfile <Path>  Creates a pid file in the specified path on start     # path of the pid file
-q, --quiet           Turns off standard output/error streams logging in console  # quiet mode
-s, --silent          show minimal output                                       
-v, --verbose         show verbose output

 

Start the three ES nodes:

$ ll
total 0
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-01
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-02
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-03
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es01 -> elasticsearch-7.4.2-01
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es02 -> elasticsearch-7.4.2-02
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es03 -> elasticsearch-7.4.2-03

$ ./es01/bin/elasticsearch &
$ ./es02/bin/elasticsearch &
$ ./es03/bin/elasticsearch &

Logs are written to $ES_HOME/logs/<CLUSTER_NAME>.log.

As a test, list the nodes in the cluster:

$ curl localhost:9200/_cat/nodes?v
ip          heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.17.0.87           32          92  15    0.01    0.04     0.17 dilm      -      es03
172.17.0.87           17          92  15    0.01    0.04     0.17 dilm      *      es02
172.17.0.87           20          92  15    0.01    0.04     0.17 dilm      -      es01

Check the cluster health:

There are three states:

  • green: all primary and replica shards are allocated; everything is healthy.
  • yellow: all primary shards are allocated but some replicas are not; no data is lost and the cluster can recover to green.
  • red: at least one primary shard is unassigned, so part of the data is currently unavailable.

 

$ curl localhost:9200
{
  "name" : "es01", # 當前節點名稱
  "cluster_name" : "es-cluster01", # 集群名稱
  "cluster_uuid" : "n7DDNexcTDik5mU9Y_qrcA",
  "version" : { # 版本
    "number" : "7.4.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
    "build_date" : "2019-10-28T20:40:44.881551Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

$ curl localhost:9200/_cat/health
1574835925 06:25:25 es-cluster01 green 3 3 0 0 0 0 0 0 - 100.0%

$ curl localhost:9200/_cat/health?v
epoch      timestamp cluster      status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1574835928 06:25:28  es-cluster01 green           3         3      0   0    0    0        0             0                  -                100.0%

 

List all of the /_cat endpoints:

$ curl localhost:9200/_cat
=^.^=
/_cat/allocation
/_cat/shards
/_cat/shards/{index}
/_cat/master
/_cat/nodes
/_cat/tasks
/_cat/indices
/_cat/indices/{index}
/_cat/segments
/_cat/segments/{index}
/_cat/count
/_cat/count/{index}
/_cat/recovery
/_cat/recovery/{index}
/_cat/health
/_cat/pending_tasks
/_cat/aliases
/_cat/aliases/{alias}
/_cat/thread_pool
/_cat/thread_pool/{thread_pools}
/_cat/plugins
/_cat/fielddata
/_cat/fielddata/{fields}
/_cat/nodeattrs
/_cat/repositories
/_cat/snapshots/{repository}
/_cat/templates

 

Check the custom attributes we defined for each node earlier:

$ curl localhost:9200/_cat/nodeattrs
es03 172.17.0.87 172.17.0.87 ml.machine_memory 16673112064
es03 172.17.0.87 172.17.0.87 rack              r1 # custom attribute
es03 172.17.0.87 172.17.0.87 ml.max_open_jobs  20
es03 172.17.0.87 172.17.0.87 xpack.installed   true
es03 172.17.0.87 172.17.0.87 zone              C # custom attribute
es02 172.17.0.87 172.17.0.87 ml.machine_memory 16673112064
es02 172.17.0.87 172.17.0.87 rack              r1 # custom attribute
es02 172.17.0.87 172.17.0.87 ml.max_open_jobs  20
es02 172.17.0.87 172.17.0.87 xpack.installed   true
es02 172.17.0.87 172.17.0.87 zone              B # custom attribute
es01 172.17.0.87 172.17.0.87 ml.machine_memory 16673112064
es01 172.17.0.87 172.17.0.87 rack              r1 # custom attribute
es01 172.17.0.87 172.17.0.87 ml.max_open_jobs  20
es01 172.17.0.87 172.17.0.87 xpack.installed   true
es01 172.17.0.87 172.17.0.87 zone              A # custom attribute

 

Notice that all of these API endpoints are accessible directly, with no authentication at all, and any node could join the cluster; that is far too insecure for production. The rest of this article shows how to enable auth and SSL between nodes.

Enabling Auth and inter-node SSL for the ES cluster

Enabling Auth

In recent versions the X-Pack code has been open sourced, but open source != free; some of the basic security features are free, however, including the Auth and inter-node SSL covered here.

First, let's try to generate the passwords with $ES_HOME/bin/elasticsearch-setup-passwords. The command help:

$ ./es01/bin/elasticsearch-setup-passwords --help
Sets the passwords for reserved users

Commands
--------
auto - Uses randomly generated passwords
interactive - Uses passwords entered by a user

Non-option arguments:
command              

Option         Description        
------         -----------        
-h, --help     show help          
-s, --silent   show minimal output
-v, --verbose  show verbose output

# auto-generate the passwords; this fails
$ ./es01/bin/elasticsearch-setup-passwords auto

Unexpected response code [500] from calling GET http://172.17.0.87:9200/_security/_authenticate?pretty
It doesn't look like the X-Pack security feature is enabled on this Elasticsearch node.
Please check if you have enabled X-Pack security in your elasticsearch.yml configuration file.

ERROR: X-Pack Security is disabled by configuration.

The es01 log shows an error:

[2019-11-27T14:35:13,391][WARN ][r.suppressed             ] [es01] path: /_security/_authenticate, params: {pretty=}
org.elasticsearch.ElasticsearchException: Security must be explicitly enabled when using a [basic] license. Enable security by setting [xpack.security.enabled] to [true] in the elasticsearch.yml file and restart the node.
......

The message says security has to be enabled explicitly first.

Following the hint, add this setting on all three ES nodes:

$ echo "xpack.security.enabled: true" >> es01/config/elasticsearch.yml
$ echo "xpack.security.enabled: true" >> es02/config/elasticsearch.yml
$ echo "xpack.security.enabled: true" >> es03/config/elasticsearch.yml

Then restart them:

$ ps -ef|grep elasticsearch
# find the pid of each ES node and kill it; do not use -9

Now the nodes refuse to start, with this error:

ERROR: [1] bootstrap checks failed
[1]: Transport SSL must be enabled if security is enabled on a [basic] license. Please set [xpack.security.transport.ssl.enabled] to [true] or disable security by setting [xpack.security.enabled] to [false]

Fine, add that setting too:

$ echo "xpack.security.transport.ssl.enabled: true" >> es01/config/elasticsearch.yml
$ echo "xpack.security.transport.ssl.enabled: true" >> es02/config/elasticsearch.yml
$ echo "xpack.security.transport.ssl.enabled: true" >> es03/config/elasticsearch.yml

Start again. This time, as soon as the second node comes up, both nodes keep logging errors like this:

[2019-11-27T14:50:58,643][WARN ][o.e.t.TcpTransport       ] [es01] exception caught on transport layer [Netty4TcpChannel{localAddress=/172.17.0.87:9331, remoteAddress=/172.17.0.87:56654}], closing connection
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: No available authentication scheme
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:475) ~[netty-codec-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:283) ~[netty-codec-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1421) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:697) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:597) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:551) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:511) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918) [netty-common-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.38.Final.jar:4.1.38.Final]
    at java.lang.Thread.run(Thread.java:830) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: No available authentication scheme
    at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
......

No authentication material is configured yet. Let's continue with the configuration:

Configuring inter-node SSL

Note: this sets up SSL for the transport layer between ES cluster nodes only. The HTTP API is not covered, so clients do not need to present a certificate when calling ES over HTTP.

Official references:

https://www.elastic.co/guide/en/elasticsearch/reference/current/ssl-tls.html

https://www.elastic.co/guide/en/elasticsearch/reference/7.4/configuring-tls.html

Create the SSL/TLS certificates with $ES_HOME/bin/elasticsearch-certutil:

# command help
$ ./es01/bin/elasticsearch-certutil --help
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.bouncycastle.jcajce.provider.drbg.DRBG (file:/opt/elk74/elasticsearch-7.4.2-01/lib/tools/security-cli/bcprov-jdk15on-1.61.jar) to constructor sun.security.provider.Sun()
WARNING: Please consider reporting this to the maintainers of org.bouncycastle.jcajce.provider.drbg.DRBG
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Simplifies certificate creation for use with the Elastic Stack

Commands
--------
csr - generate certificate signing requests
cert - generate X.509 certificates and keys
ca - generate a new local certificate authority

Non-option arguments:
command              

Option         Description        
------         -----------        
-h, --help     show help          
-s, --silent   show minimal output
-v, --verbose  show verbose output

Create the CA certificate:

# command help:
$ ./bin/elasticsearch-certutil ca --help
generate a new local certificate authority

Option               Description                                             
------               -----------                                             
-E <KeyValuePair>    Configure a setting                                     
--ca-dn              distinguished name to use for the generated ca. defaults
                       to CN=Elastic Certificate Tool Autogenerated CA       
--days <Integer>     number of days that the generated certificates are valid
-h, --help           show help                                               
--keysize <Integer>  size in bits of RSA keys                                
--out                path to the output file that should be produced         
--pass               password for generated private keys                     
--pem                output certificates and keys in PEM format instead of PKCS#12                                               ## 默認創建PKCS#12格式的,使用--pem可以創建pem格式的,key,crt,ca分開的。
-s, --silent         show minimal output                                     
-v, --verbose        show verbose output

# create the CA certificate
$ ./es01/bin/elasticsearch-certutil ca -v
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]:  # file name to save the CA as; press Enter for the default
Enter password for elastic-stack-ca.p12 : # password for the CA; we leave it empty here

# by default the CA file is written to $ES_HOME
$ ll es01/
total 560
drwxr-xr-x  2 ec2-user ec2-user   4096 Oct 29 04:45 bin
drwxr-xr-x  2 ec2-user ec2-user    178 Nov 27 13:45 config
drwxrwxr-x  3 ec2-user ec2-user     19 Nov 27 13:46 data
-rw-------  1 ec2-user ec2-user   2527 Nov 27 15:05 elastic-stack-ca.p12 # here it is
drwxr-xr-x  9 ec2-user ec2-user    107 Oct 29 04:45 jdk
drwxr-xr-x  3 ec2-user ec2-user   4096 Oct 29 04:45 lib
-rw-r--r--  1 ec2-user ec2-user  13675 Oct 29 04:38 LICENSE.txt
drwxr-xr-x  2 ec2-user ec2-user   4096 Nov 27 14:48 logs
drwxr-xr-x 37 ec2-user ec2-user   4096 Oct 29 04:45 modules
-rw-r--r--  1 ec2-user ec2-user 523209 Oct 29 04:45 NOTICE.txt
drwxr-xr-x  2 ec2-user ec2-user      6 Oct 29 04:45 plugins
-rw-r--r--  1 ec2-user ec2-user   8500 Oct 29 04:38 README.textile

This command generates a PKCS#12 keystore file named elastic-stack-ca.p12 containing the CA certificate and its private key.

Create the certificate used for authentication between nodes:

# command help:
$ ./bin/elasticsearch-certutil cert --help
generate X.509 certificates and keys

Option               Description                                             
------               -----------                                             
-E <KeyValuePair>    Configure a setting                                     
--ca                 path to an existing ca key pair (in PKCS#12 format)     
--ca-cert            path to an existing ca certificate                      
--ca-dn              distinguished name to use for the generated ca. defaults
                       to CN=Elastic Certificate Tool Autogenerated CA       
--ca-key             path to an existing ca private key                      
--ca-pass            password for an existing ca private key or the generated
                       ca private key                                        
--days <Integer>     number of days that the generated certificates are valid
--dns                comma separated DNS names   # DNS names to embed in the certificate
-h, --help           show help                                               
--in                 file containing details of the instances in yaml format 
--ip                 comma separated IP addresses   # IP addresses to embed in the certificate
--keep-ca-key        retain the CA private key for future use                
--keysize <Integer>  size in bits of RSA keys                                
--multiple           generate files for multiple instances                   
--name               name of the generated certificate                       
--out                path to the output file that should be produced         
--pass               password for generated private keys                     
--pem                output certificates and keys in PEM format instead of   
                       PKCS#12                                               
-s, --silent         show minimal output                                     
-v, --verbose        show verbose output

# create the node certificate
$ cd es01
$ ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use
       on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple
       instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing
       the details of each instance in a YAML file

    * An instance is any piece of the Elastic Stack that requires an SSL certificate.
      Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
      may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the
      hostname, which will be used as the Common Name of the certificate. A full
      distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the
      name would result in an invalid file or directory name. The name provided here
      is used as the directory name (within the zip) and the prefix for the key and
      certificate files. The filename is required if you are prompted and the name
      is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a
      comma separated string. If no IP addresses or DNS names are provided, you may
      disable hostname verification in your SSL configuration.

    * All certificates generated by this tool will be signed by a certificate authority (CA).
    * The tool can automatically generate a new CA for you, or you can provide your own with the
         -ca or -ca-cert command line options.

By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate

If you specify any of the following options:
    * -pem (PEM formatted output)
    * -keep-ca-key (retain generated CA key)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files

Enter password for CA (elastic-stack-ca.p12) :  # the CA password; we did not set one, so just press Enter
Please enter the desired output file [elastic-certificates.p12]:  # output file name; press Enter to keep the default
Enter password for elastic-certificates.p12 :  # password for the certificate; leave empty and press Enter

Certificates written to /opt/elk74/elasticsearch-7.4.2-01/elastic-certificates.p12 # output location

This file should be properly secured as it contains the private key for 
your instance.

This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.

For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.
$ ll
total 564
drwxr-xr-x  2 ec2-user ec2-user   4096 Oct 29 04:45 bin
drwxr-xr-x  2 ec2-user ec2-user    178 Nov 27 13:45 config
drwxrwxr-x  3 ec2-user ec2-user     19 Nov 27 13:46 data
-rw-------  1 ec2-user ec2-user   3451 Nov 27 15:10 elastic-certificates.p12 # here
-rw-------  1 ec2-user ec2-user   2527 Nov 27 15:05 elastic-stack-ca.p12 # and here
drwxr-xr-x  9 ec2-user ec2-user    107 Oct 29 04:45 jdk
drwxr-xr-x  3 ec2-user ec2-user   4096 Oct 29 04:45 lib
-rw-r--r--  1 ec2-user ec2-user  13675 Oct 29 04:38 LICENSE.txt
drwxr-xr-x  2 ec2-user ec2-user   4096 Nov 27 14:48 logs
drwxr-xr-x 37 ec2-user ec2-user   4096 Oct 29 04:45 modules
-rw-r--r--  1 ec2-user ec2-user 523209 Oct 29 04:45 NOTICE.txt
drwxr-xr-x  2 ec2-user ec2-user      6 Oct 29 04:45 plugins
-rw-r--r--  1 ec2-user ec2-user   8500 Oct 29 04:38 README.textile

This command generates a PKCS#12 keystore file named elastic-certificates.p12 containing the node certificate, its private key, and the CA certificate.

The certificate generated this way contains no hostname information (it has no Subject Alternative Name fields), so it can be used on any node, but you must then configure Elasticsearch to turn off hostname verification.
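If you would rather keep full hostname verification, one option (a sketch, not what this walkthrough does) is to generate one certificate per node with its DNS name and IP baked in, and later set xpack.security.transport.ssl.verification_mode: full:

$ ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 \
    --name es01 --dns es01 --ip 172.17.0.87 \
    --out es01-certificates.p12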

Configure the ES nodes to use the certificate:

$ mkdir config/certs
$ mv elastic-* config/certs/
$ ll config/certs/
total 8
-rw------- 1 ec2-user ec2-user 3451 Nov 27 15:10 elastic-certificates.p12
-rw------- 1 ec2-user ec2-user 2527 Nov 27 15:05 elastic-stack-ca.p12

# copy this directory to all of the ES nodes
$ cp -a config/certs /opt/elk74/es02/config/
$ cp -a config/certs /opt/elk74/es03/config/

# edit elasticsearch.yml; every node needs this. The configuration below uses the PKCS#12 certificate.
$ vim es01/config/elasticsearch.yml
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate # verify the certificate only
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

# if you generated PEM format certificates with --pem, use the following instead:
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate 
xpack.security.transport.ssl.key: /home/es/config/node01.key # private key
xpack.security.transport.ssl.certificate: /home/es/config/node01.crt # certificate
xpack.security.transport.ssl.certificate_authorities: [ "/home/es/config/ca.crt" ]  # CA certificate

# if you set a password on the node certificate, add it to the elasticsearch keystore
## PKCS#12 format:
bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password

## PEM format
bin/elasticsearch-keystore add xpack.security.transport.ssl.secure_key_passphrase

Note: the CA file does not actually need to be copied into config/certs; only the cert file is required. I copied both for convenience.

Also keep the CA certificate (and its password, if you set one) somewhere safe; you will need it when adding ES nodes later.

 

xpack.security.transport.ssl.verification_mode selects the verification mode (see the official reference):

  • full: verifies that the certificate was signed by a trusted CA and that the server's hostname or IP address matches the names inside the certificate.
  • certificate: the mode used here; only verifies that the certificate was signed by a trusted CA.
  • none: performs no verification at all, effectively switching SSL/TLS validation off; only for environments you trust completely.

 

With that in place, start the ES nodes again:

They now start normally. Back to generating the passwords; run this on any one node:

$ ./es01/bin/elasticsearch-setup-passwords auto
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y # enter y to continue


Changed password for user apm_system
PASSWORD apm_system = yc0GJ9QS4AP69pVzFKiX

Changed password for user kibana
PASSWORD kibana = UKuHceHWudloJk9NvHlX

Changed password for user logstash_system
PASSWORD logstash_system = N6pLSkNSNhT0UR6radrZ

Changed password for user beats_system
PASSWORD beats_system = BmsiDzgx1RzqHIWTri48

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = dflPnqGAQneqjhU1XQiZ

Changed password for user elastic
PASSWORD elastic = Tu8RPllSZz6KXkgZWFHv

List the cluster nodes:

$ curl -u elastic localhost:9200/_cat/nodes
Enter host password for user 'elastic': # enter the elastic user's password: Tu8RPllSZz6KXkgZWFHv
172.17.0.87 14 92 18 0.16 0.11 0.37 dilm - es02
172.17.0.87  6 92 17 0.16 0.11 0.37 dilm - es03
172.17.0.87  8 92 19 0.16 0.11 0.37 dilm * es01

Note:

Only the inter-node transport traffic is encrypted with certificates here; the HTTP API relies on username/password authentication. If you need SSL on the HTTP layer as well, see: TLS HTTP
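A minimal sketch of what that would involve, assuming the same PKCS#12 file is reused on each node (clients then have to switch to https:// and trust the CA):

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12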

For the full list of security settings, see the reference.

And with that, a reasonably secure Elasticsearch cluster is up and running.

Installing and configuring Kibana

Next, install Kibana so the cluster can be used from a browser.

Download:

$ wget -c "https://artifacts.elastic.co/downloads/kibana/kibana-7.4.2-linux-x86_64.tar.gz"
$ tar xf /opt/softs/elk7.4/kibana-7.4.2-linux-x86_64.tar.gz 
$ ln -s kibana-7.4.2-linux-x86_64 kibana

Configure Kibana:

$ cat kibana/config/kibana.yml |grep -Ev "^$|^#"
server.port: 5601
server.host: "0.0.0.0"
server.name: "mykibana"
elasticsearch.hosts: ["http://localhost:9200"]
kibana.index: ".kibana"
elasticsearch.username: "kibana"  # 這里使用的是 給kibana開通的連接賬號
elasticsearch.password: "UKuHceHWudloJk9NvHlX"
# i18n.locale: "en"
i18n.locale: "zh-CN"
xpack.security.encryptionKey: Hz*9yFFaPejHvCkhT*ddNx%WsBgxVSCQ # an arbitrary 32-character encryption key

Browse to the Kibana host on port 5601 to reach the login page.

 

 

That completes an Elasticsearch cluster running on the free, never-expiring Basic license with basic Auth and inter-node SSL/TLS enabled.

But wait: the Kibana configuration file still contains the username and password in plain text, protected only by Linux file permissions. Is there a safer way? Yes: the keystore.

 

Kibana keystore

See the official documentation.

Look at the kibana-keystore command help:

$ ./bin/kibana-keystore --help
Usage: bin/kibana-keystore [options] [command]

A tool for managing settings stored in the Kibana keystore

Options:
  -V, --version           output the version number
  -h, --help              output usage information

Commands:
  create [options]        Creates a new Kibana keystore
  list [options]          List entries in the keystore
  add [options] <key>     Add a string setting to the keystore
  remove [options] <key>  Remove a setting from the keystore

 

First, create the keystore:

$ bin/kibana-keystore create
Created Kibana keystore in /opt/elk74/kibana-7.4.2-linux-x86_64/data/kibana.keystore # default location

Add the settings:

We want to hide, or remove outright, the sensitive entries in kibana.yml, namely elasticsearch.username and elasticsearch.password.

So we add two keystore entries, elasticsearch.username and elasticsearch.password:

# help for the add command:
$ ./bin/kibana-keystore add --help
Usage: add [options] <key>

Add a string setting to the keystore

Options:
  -f, --force   overwrite existing setting without prompting
  -x, --stdin   read setting value from stdin
  -s, --silent  prevent all logging
  -h, --help    output usage information

# add the elasticsearch.username key; the name must match the key used in kibana.yml
$ ./bin/kibana-keystore add elasticsearch.username
Enter value for elasticsearch.username: ******  # enter the value, i.e. the account Kibana uses to connect to ES: kibana

# add the elasticsearch.password key
$ ./bin/kibana-keystore add elasticsearch.password
Enter value for elasticsearch.password: ******************** # enter the matching password: UKuHceHWudloJk9NvHlX

 

Now remove these two entries from kibana.yml and start Kibana; it picks the values up from the keystore automatically.

The final kibana.yml looks like this:

server.port: 5601
server.host: "0.0.0.0"
server.name: "mykibana"
elasticsearch.hosts: ["http://localhost:9200"]
kibana.index: ".kibana"
# i18n.locale: "en"
i18n.locale: "zh-CN"
xpack.security.encryptionKey: Hz*9yFFaPejHvCkhT*ddNx%WsBgxVSCQ # an arbitrary 32-character encryption key

Now no sensitive information appears in the configuration file, which is a clear security improvement.

The keystore mechanism is not specific to Kibana; the other Elastic Stack products support it as well.

 

Restarting the whole cluster or doing a rolling restart correctly in production

Later on you may need to restart the entire cluster, or change some settings and restart the nodes one at a time. Whenever a node goes down, the cluster automatically re-replicates that node's shards onto other nodes and rebalances shards across the remaining nodes, which causes a lot of I/O that is completely unnecessary for a planned restart.

Disable shard allocation:
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
'
Stop indexing and perform a synced flush:
curl -X POST "localhost:9200/_flush/synced?pretty"

After these two steps, shut the cluster down. Once the configuration change is done, start the cluster again and re-enable the shard allocation that was disabled earlier.
Re-enable shard allocation:
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
'

  

For a rolling restart of the cluster, repeat this per node: run the two steps above, stop the node, make your change, start it again, re-enable shard allocation, wait for the cluster to return to green, and only then move on to the next node.
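A convenient way to wait for the cluster to go green before moving on (a small sketch using the cluster health API; adjust the timeout as needed):

$ curl -u elastic "localhost:9200/_cluster/health?wait_for_status=green&timeout=120s&pretty"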

 

