A Complete Hands-On Record of Upgrading ES from 5.2 to 7.13 (Test and Production)
The Elasticsearch version currently in use is 5.2, in a project built with the Java Transport Client + Spring Boot. Upgrading the ES engine to the latest 7.13 requires code-level changes; since the project is on Spring Boot 1.4.2, the Java High Level REST Client [7.13] was chosen.
Because this is an upgrade across major versions, the ES official docs give the upgrade path:
If you are running a version prior to 6.0, upgrade to 6.8 and reindex your old indices or bring up a new 7.13.4 cluster and reindex from remote.
Migration approach chosen: bring up a new 7.13.4 cluster and reindex from remote.
ES in the Test Environment
1. Deploy the ES cluster
The test environment is a 2-node cluster; start two VMs with IPs 192.168.10.167 and 192.168.10.168.
- Download the tarball for the version you need, plus the matching version of Kibana.
- Extract ES to /usr/local/es:
tar -zxvf elasticsearch-7.13.0-linux-x86_64.tar.gz -C /usr/local/es
For security reasons, Elasticsearch refuses to start as root, so create a new user and give it the permissions needed to run the cluster:
useradd es
passwd es
# create directories for data and logs
mkdir -p /var/data/elasticsearch
mkdir -p /var/log/elasticsearch
# change ownership to the es user
chown -R es /usr/local/es/
chown -R es /var/log/elasticsearch
chown -R es /var/data/elasticsearch
- Configuration
JDK configuration: the new version ships with a bundled JDK, so here we simply point the startup script at it.
# add the following to ./bin/elasticsearch (vim ./bin/elasticsearch)
export JAVA_HOME=/usr/local/es/elasticsearch-7.13.0/jdk
export PATH=$JAVA_HOME/bin:$PATH
The ES configuration files live under $ES_HOME/config. The latest Elasticsearch has three main configuration files:
- elasticsearch.yml: ES settings
- jvm.options: ES JVM settings
- log4j2.properties: ES logging settings
Copy over the 5.2 configuration, watching for parameter names that changed between versions.
elasticsearch.yml is configured as follows:
# cluster name
cluster.name: microants-es-004
# node name
node.name: node-01
# data directory
path.data: /var/data/elasticsearch
# log directory
path.logs: /var/log/elasticsearch
# network
network.host: 192.168.10.167
http.port: 9200
# whitelist of remote hosts for reindex-from-remote
reindex.remote.whitelist: ["192.168.10.154:9200","192.168.10.155:9200"]
discovery.seed_hosts: ["192.168.10.167", "192.168.10.168"]
cluster.initial_master_nodes: ["node-01", "node-02"]
jvm.options
-Xms2g
-Xmx2g
Other settings you may need: 1. raise the file-descriptor limit; 2. raise the maximum number of memory map areas (vm.max_map_count).
# append to /etc/security/limits.conf:
* soft nofile 65536
* hard nofile 65536
# append to /etc/sysctl.conf:
vm.max_map_count=262144
# run /sbin/sysctl -p to apply immediately
- Start
# switch to the es user first, or startup will fail
su es
./bin/elasticsearch -d
Hitting the node's root endpoint (curl http://192.168.10.167:9200) returns:
{
"name": "node-01",
"cluster_name": "microants-es-004",
"cluster_uuid": "_na_",
"version": {
"number": "7.13.0",
"build_flavor": "default",
"build_type": "tar",
"build_hash": "5ca8591c6fcdb1260ce95b08a8e023559635c6f3",
"build_date": "2021-05-19T22:22:26.081971330Z",
"build_snapshot": false,
"lucene_version": "8.8.2",
"minimum_wire_compatibility_version": "6.8.0",
"minimum_index_compatibility_version": "6.0.0-beta1"
},
"tagline": "You Know, for Search"
}
Now start the second node, 192.168.10.168; when editing its elasticsearch.yml, keep the cluster name the same.
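A sketch of the only lines that differ on the second node (everything else, including cluster.name, discovery.seed_hosts, and cluster.initial_master_nodes, stays identical):

```
node.name: node-02
network.host: 192.168.10.168
```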
Verify: http://192.168.10.167:9200/_cat/nodes returns the cluster nodes:
192.168.10.168 19 45 15 0.43 0.19 0.11 cdfhilmrstw - node-02
192.168.10.167 23 49 0 0.24 0.11 0.07 cdfhilmrstw * node-01
- Deploy Kibana
tar -zxvf kibana-7.13.0-linux-x86_64.tar.gz -C /opt/soft
Configure (vim ./config/kibana.yml):
server.port: 5601
server.host: "192.168.10.167"
elasticsearch.hosts: ["http://192.168.10.167:9200"]
i18n.locale: "zh-CN"
Start it (ideally under a dedicated kibana user):
nohup ./bin/kibana &
Cluster security setup
- Minimal security setup: https://www.elastic.co/guide/en/elasticsearch/reference/7.13/security-minimal-setup.html
1. Add the following to each node's elasticsearch.yml (elastic-certificates.p12 is generated in a later step):
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
- Start
./bin/elasticsearch
- Auto-generate passwords for the built-in users:
./bin/elasticsearch-setup-passwords auto
The results are printed to the console; save them. For example:
Changed password for user apm_system
PASSWORD apm_system = IywjBT4YDU86NDw7ox
Changed password for user kibana_system
PASSWORD kibana_system = SQmORp23LcZyPZU48l
Changed password for user kibana
PASSWORD kibana = SQmORp2Nb3LcPZU48l
Changed password for user logstash_system
PASSWORD logstash_system = cEtJPTbzktxc7aPuQx
Changed password for user beats_system
PASSWORD beats_system = 5wDk7jQNu4iP5J7eLg
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = p1HkCWzYVtt8SPGEtN
Changed password for user elastic
PASSWORD elastic = PRmId87ogKerJboyLw
- On any node, generate a CA certificate:
./bin/elasticsearch-certutil ca
This produces an elastic-stack-ca.p12 file. Then generate a node certificate signed by it:
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
This produces elastic-certificates.p12.
- Copy elastic-certificates.p12 into the config directory of every node.
- If the certificate has a password, run:
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
Restart ES and verify that the nodes can still communicate.
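Once passwords are set, Kibana can no longer connect to ES anonymously; add the kibana_system credentials to kibana.yml (password taken from the setup-passwords output above) and restart Kibana:

```
elasticsearch.username: "kibana_system"
elasticsearch.password: "SQmORp23LcZyPZU48l"
```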
2. Data migration
- Plugin compatibility
Check which plugins the old cluster uses: http://192.168.10.154:9200/_cat/plugins
Returns:
node-0001 analysis-icu 5.2.0
node-0001 analysis-ik 5.2.0
node-0001 analysis-kuromoji 5.2.0
node-0001 analysis-pinyin 5.2.1
node-0002 analysis-icu 5.2.0
node-0002 analysis-ik 5.2.0
node-0002 analysis-pinyin 5.2.1
First check the official [plugin documentation](https://www.elastic.co/guide/en/elasticsearch/plugins/7.13/installation.html).
# plugin install syntax
sudo bin/elasticsearch-plugin install [plugin_name]
- Install analysis-icu
[root@localhost elasticsearch-7.13.0]# sudo bin/elasticsearch-plugin install analysis-icu
-> Installing analysis-icu
-> Downloading analysis-icu from elastic
[=================================================] 100%
-> Installed analysis-icu
-> Please restart Elasticsearch to activate any plugins installed
- Install ik and pinyin
./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.13.0/elasticsearch-analysis-ik-7.13.0.zip
# the download failed several times on a slow connection, so instead:
# manually download from GitHub releases: https://github.com/medcl/elasticsearch-analysis-ik/releases
# create the plugin folders: cd your-es-root/plugins/ && mkdir ik pinyin
yum install -y unzip zip
# unzip both plugins under plugins/
unzip elasticsearch-analysis-ik-7.13.0.zip -d /usr/local/es/elasticsearch-7.13.0/plugins/ik/
unzip elasticsearch-analysis-pinyin-7.13.0.zip -d /usr/local/es/elasticsearch-7.13.0/plugins/pinyin/
# restart es
ps -ef | grep elastic
kill -9 4093
su es
./bin/elasticsearch -d
Check the result: http://192.168.10.167:9200/_cat/plugins
node-01 analysis-icu 7.13.0
node-01 analysis-ik 7.13.0
node-01 analysis-pinyin 7.13.0
node-02 analysis-ik 7.13.0
node-02 analysis-pinyin 7.13.0
- Start the data migration. Official guidance: https://www.elastic.co/guide/en/elasticsearch/reference/7.13/reindex-upgrade-inplace.html
To manually reindex your old indices in place:
1. Create an index with 7.x compatible mappings.
2. Set the refresh_interval to -1 and the number_of_replicas to 0 for efficient reindexing.
3. Use the reindex API to copy documents from the 5.x index into the new index. You can use a script to perform any necessary modifications to the document data and metadata during reindexing.
4. Reset the refresh_interval and number_of_replicas to the values used in the old index.
5. Wait for the index status to change to green.
6. In a single update aliases request:
Delete the old index.
Add an alias with the old index name to the new index.
Add any aliases that existed on the old index to the new index.
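The single update-aliases request in step 6 can be sketched like this (index names here are hypothetical; `remove_index` deletes the old index and `add` points its name at the new one):

```
post /_aliases
{
  "actions": [
    { "remove_index": { "index": "old_index" } },
    { "add": { "index": "new_index", "alias": "old_index" } }
  ]
}
```

Note that in the remote-reindex flow used in this post the new cluster reuses the old index names, so only a plain alias add is needed.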
- Indices in the old 5.2 cluster that need to be migrated: http://192.168.10.156:9200/_cat/indices
green open seller_subject_index_v1 efIwSm31QAiNwEdus0BVNg 5 1 6002 23 3mb 1.5mb
green open search_keyword_index_v1 BTalZTHlRrCTaxgFrcH-jA 5 1 3415 0 2.1mb 1mb
green open platform_coupon_index_v1 lD15Hyl6TtSWV2GP79aX8Q 5 1 57 0 192.4kb 96.2kb
… (remaining indices omitted)
green open syscate_index_v1 dEWSZp1sSq-lA9TINGyt0A 5 1 4660 15 9mb 4.5mb
green open cars gSqMCkN-SSa3EhH6Vm1nCw 5 1 8 0 47.3kb 23.6kb
green open product_keyword_hint_v1 ubQgLk4FRVaAciEHP7imQw 1 1 280 0 324.5kb 162.2kb
green open subject_index_v1 u-M6le0JSxmuxLxGLkNCUQ 5 1 1445 79 17.1mb 8.5mb
This walkthrough uses the seller_subject_index_v1 index as the example.
Create an index with 7.x compatible mappings.
- The 5.2 settings and mapping (run in Kibana Dev Tools):
get seller_subject_index_v1/_settings
{
"seller_subject_index_v1": {
"settings": {
"index": {
"number_of_shards": "5",
"provided_name": "seller_subject_index_v1",
"creation_date": "1557301817602",
"analysis": {
"analyzer": {
"keyword_analyzer": {
"filter": [
"lowercase"
],
"type": "custom",
"tokenizer": "keyword"
},
"comma_analyzer": {
"filter": [
"lowercase"
],
"pattern": ",",
"type": "pattern"
},
"semicolon_analyzer": {
"filter": [
"lowercase"
],
"pattern": ";",
"type": "pattern"
}
}
},
"number_of_replicas": "1",
"uuid": "efIwSm31QAiNwEdus0BVNg",
"version": {
"created": "5020199"
}
}
}
}
}
get seller_subject_index_v1/_mapping
{
"seller_subject_index_v1": {
"mappings": {
"seller_subject": {
"_all": {
"enabled": false
},
"_routing": {
"required": true
},
"properties": {
"buyer_uid": {
"type": "long",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
},
"check_time": {
"type": "long"
},
"create_time": {
"type": "long"
},
"modify_time": {
"type": "long"
},
"origin": {
"type": "byte",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
},
"publish_type": {
"type": "short",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
},
"seller_uid": {
"type": "long",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
},
"sort_time": {
"type": "long"
},
"status": {
"type": "short",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
},
"status_reason": {
"type": "short",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
},
"subject_id": {
"type": "long",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
}
}
}
}
}
}
- Create the new index on 7.13. Note that the mapping type level (seller_subject) is dropped, since mapping types are removed in 7.x:
put seller_subject_index_v1
{
"settings": {
"index": {
"refresh_interval":-1,
"number_of_shards": "5",
"analysis": {
"analyzer": {
"keyword_analyzer": {
"filter": [
"lowercase"
],
"type": "custom",
"tokenizer": "keyword"
},
"comma_analyzer": {
"filter": [
"lowercase"
],
"pattern": ",",
"type": "pattern"
},
"semicolon_analyzer": {
"filter": [
"lowercase"
],
"pattern": ";",
"type": "pattern"
}
}
},
"number_of_replicas": "1"
}
},
"mappings": {
"_routing": {
"required": true
},
"properties": {
"buyer_uid": {
"type": "long",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
},
"check_time": {
"type": "long"
},
"create_time": {
"type": "long"
},
"modify_time": {
"type": "long"
},
"origin": {
"type": "byte",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
},
"publish_type": {
"type": "short",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
},
"seller_uid": {
"type": "long",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
},
"sort_time": {
"type": "long"
},
"status": {
"type": "short",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
},
"status_reason": {
"type": "short",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
},
"subject_id": {
"type": "long",
"fields": {
"comma": {
"type": "text",
"term_vector": "with_positions_offsets",
"analyzer": "comma_analyzer"
}
}
}
}
}
}
post /_aliases
{
"actions":[
{ "add":{ "index":"seller_subject_index_v1","alias":"seller_subject_index"}
}]
}
Finally, don't forget the alias mapping above.
# migrate the seller_subject_index_v1 data (run in Kibana Dev Tools)
post _reindex
{
"source":{
"remote":{
"host":"http://192.168.10.154:9200",
"socket_timeout": "1m",
"connect_timeout": "10s"
},
"index":"seller_subject_index_v1"
},
"dest":{
"index":"seller_subject_index_v1"
}
}
Once the reindex completes, reset the refresh interval:
put /seller_subject_index_v1/_settings
{
"index":{
"refresh_interval": "1s"
}
}
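A quick sanity check after each reindex is to compare document counts against the old cluster's _cat/indices output (6002 docs for this index above):

```
get seller_subject_index_v1/_count
```

The counts should match before moving on to the next index.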
- Repeat for the remaining indices.
- Summary and key points
1. If you hit {"statusCode":502,"error":"Bad Gateway","message":"Client request timeout"}, add the wait_for_completion=false parameter so the reindex runs as a background task.
2. When an old index contains multiple doc types, split the migration: reindex once per type, adding the type parameter to the source.
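The two points above can be sketched as Kibana Dev Tools requests (the type name comes from the 5.2 mapping shown earlier; the task id is illustrative):

```
# run the remote reindex as a background task to avoid the Kibana timeout,
# copying only one doc type of a multi-type 5.x index
post _reindex?wait_for_completion=false
{
  "source": {
    "remote": { "host": "http://192.168.10.154:9200" },
    "index": "seller_subject_index_v1",
    "type": "seller_subject"
  },
  "dest": { "index": "seller_subject_index_v1" }
}

# the response contains a task id; poll it until "completed": true
get _tasks/oTUltX4IQMOUUVeiohTt8A:12345
```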
Application-layer code changes
Official documentation: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html
This section describes how to migrate existing code from the TransportClient to the Java High Level REST Client released with the version 5.6.0 of Elasticsearch.
- How to migrate
Adapting existing code to use the RestHighLevelClient instead of the TransportClient requires the following steps:
1. Update dependencies
2. Update client initialization
3. Update application code
Since the Java High Level REST Client does not support request builders, applications that use them must be changed to use request constructors instead.
**The code-level change**, then, is that the TransportClient request builders are no longer available, e.g.:
IndexRequestBuilder indexRequestBuilder = transportClient.prepareIndex();
DeleteRequestBuilder deleteRequestBuilder = transportClient.prepareDelete();
SearchRequestBuilder searchRequestBuilder = transportClient.prepareSearch();
Replace them with Java High Level REST Client request constructors, e.g.:
IndexRequest request = new IndexRequest("index").id("id");
request.source("{\"field\":\"value\"}", XContentType.JSON);
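Putting the three steps together, here is a minimal sketch of the new client initialization plus a prepareSearch migration. The hosts, the elastic password, and the seller_uid query value are taken from the test setup above as illustrations, and the sketch assumes the elasticsearch-rest-high-level-client 7.13 dependency is on the classpath:

```java
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class EsClientMigration {
    public static void main(String[] args) throws Exception {
        // Basic auth for the secured cluster, using the password generated
        // by elasticsearch-setup-passwords above.
        BasicCredentialsProvider credentials = new BasicCredentialsProvider();
        credentials.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials("elastic", "PRmId87ogKerJboyLw"));

        // Old: TransportClient on port 9300. New: REST client over HTTP on 9200.
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(
                        new HttpHost("192.168.10.167", 9200, "http"),
                        new HttpHost("192.168.10.168", 9200, "http"))
                        .setHttpClientConfigCallback(
                                b -> b.setDefaultCredentialsProvider(credentials)));

        // Old: transportClient.prepareSearch("seller_subject_index_v1")...
        // New: build the SearchRequest explicitly and execute it synchronously.
        SearchRequest request = new SearchRequest("seller_subject_index_v1");
        request.source(new SearchSourceBuilder()
                .query(QueryBuilders.termQuery("seller_uid", 12345L)));
        SearchResponse response = client.search(request, RequestOptions.DEFAULT);
        System.out.println(response.getHits().getTotalHits());

        client.close();
    }
}
```

The transport port 9300 is no longer used at all; every request now goes over HTTP on 9200, so firewall rules may need adjusting as well.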