1. Full backup and import
Installation (npm must be installed first):
sudo yum install npm
git clone https://github.com/taskrabbit/elasticsearch-dump.git
cd elasticsearch-dump
npm install elasticdump -g
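After installing, it is worth confirming the command is on the PATH before starting a backup (a quick sanity check; the exact help output varies with the elasticdump version):

# confirm the global install is resolvable and the CLI responds
which elasticdump
elasticdump --help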
(1) Create the backup directory
mkdir /data/es_data_backup

(2) Migrate all indices from the source machine to the target machine
# Export the mapping and data of the source indices
elasticdump --input=http://10.200.57.118:9200/ --output=/data/es_data_backup/cmdb_dump-mapping.json --all=true --type=mapping
elasticdump --input=http://10.200.57.118:9200/ --output=/data/es_data_backup/cmdb_dump.json --all=true --type=data
# Import the mapping and data into the new cluster node
elasticdump --input=/data/es_data_backup/cmdb_dump-mapping.json --output=http://10.200.57.118:9200/ --bulk=true
elasticdump --input=/data/es_data_backup/cmdb_dump.json --output=http://10.200.57.118:9200/ --bulk=true
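The same four steps can be wrapped in a short shell script so the source host, target host, and backup directory are not hard-coded. This is a minimal sketch: SRC, DST, and BACKUP_DIR are placeholder values (the 10.200.57.119 target is hypothetical), and mappings are deliberately restored before data.

#!/bin/bash
# Sketch: full-cluster export from SRC and import into DST via elasticdump.
set -e
SRC=http://10.200.57.118:9200      # source cluster (placeholder)
DST=http://10.200.57.119:9200      # target cluster (hypothetical, adjust to your environment)
BACKUP_DIR=/data/es_data_backup

mkdir -p "$BACKUP_DIR"

# export mappings first, then data
elasticdump --input="$SRC/" --output="$BACKUP_DIR/cmdb_dump-mapping.json" --all=true --type=mapping
elasticdump --input="$SRC/" --output="$BACKUP_DIR/cmdb_dump.json" --all=true --type=data

# restore in the same order: mappings before data
elasticdump --input="$BACKUP_DIR/cmdb_dump-mapping.json" --output="$DST/" --bulk=true
elasticdump --input="$BACKUP_DIR/cmdb_dump.json" --output="$DST/" --bulk=true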
2. Backing up and importing a specific index
# List the existing indices
curl -XGET '192.168.11.10:9200/_cat/indices?v&pretty'
health status index      pri rep docs.count docs.deleted store.size pri.store.size
green  open   jyall-test 5   1   18908740   2077368      25gb       12.5gb

# Back up a single index to a file:
elasticdump --input=http://10.200.57.118:9200/ele_nginx_clusters --output=/data/es_data_backup/ele_nginx_clusters_mapping.json --type=mapping
elasticdump --input=http://10.200.57.118:9200/ele_nginx_clusters --output=/data/es_data_backup/ele_nginx_clusters.json --type=data

# Or use gzip; in my testing this saved more than 10x the space.
# Before importing, gunzip ele_nginx_clusters.json.gz and then import as usual.
# Back up an index to a gzip using stdout:
elasticdump --input=http://10.200.57.118:9200/ele_nginx_clusters --output=$ | gzip > /data/es_data_backup/ele_nginx_clusters.json.gz

Import:
elasticdump --input=/data/es_data_backup/ele_nginx_clusters_mapping.json --output=http://10.200.57.118:9200/ --bulk=true
elasticdump --input=/data/es_data_backup/ele_nginx_clusters.json --output=http://10.200.57.118:9200/ --bulk=true
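As noted above, the gzip dump has to be decompressed before it can be imported. A minimal sketch of that restore path, assuming the same index dump and target host as in the commands above:

# decompress with -c so the original .gz archive is kept, then import the JSON dump
gunzip -c /data/es_data_backup/ele_nginx_clusters.json.gz > /data/es_data_backup/ele_nginx_clusters.json
elasticdump --input=/data/es_data_backup/ele_nginx_clusters.json --output=http://10.200.57.118:9200/ --bulk=true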
3. Errors and issues encountered during export
(1) The error was as follows:
Thu, 26 Apr 2018 09:14:49 GMT | Error Emitted => read ECONNRESET
Thu, 26 Apr 2018 09:14:49 GMT | Total Writes: 19800
Thu, 26 Apr 2018 09:14:49 GMT | dump ended with error (get phase) => Error: read ECONNRESET
(2) Fix
The issue is caused by elasticdump opening too many sockets to the Elasticsearch cluster; use the --maxSockets option to limit the number of sockets it opens:
elasticdump --input http://192.168.2.222:9200/index1 --output http://192.168.2.222:9200/index2 --type=data --maxSockets=5
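If ECONNRESET persists, shrinking the batch size together with the socket cap reduces the concurrent load on the cluster. elasticdump's --limit option controls how many documents are moved per batch (default 100); the values below are illustrative rather than tuned:

# fewer documents per batch plus a socket cap puts less concurrent pressure on the source cluster
elasticdump --input=http://192.168.2.222:9200/index1 --output=http://192.168.2.222:9200/index2 --type=data --maxSockets=5 --limit=50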
References (ECONNRESET fix):
https://stackoverflow.com/questions/33248267/dump-ended-with-error-set-phase-error-read-econnreset
https://github.com/nodejs/node/issues/10563

References:
https://www.zhangluya.com/?p=543
https://github.com/taskrabbit/elasticsearch-dump
