Elasticdump: Data Import/Export


1. Installation

elasticdump repository: see the GitHub project for detailed usage.

This tool is mainly used to import/export data from ES and to migrate data between clusters. elasticdump is distributed via npm, so Node.js and npm must be installed first.
The ES version used here is 7.x.

Install Node.js

See the official Node.js/npm documentation for detailed installation instructions.

Installation commands:

$ wget https://nodejs.org/dist/v10.15.0/node-v10.15.0-linux-x64.tar.xz
$ tar -xf node-v10.15.0-linux-x64.tar.xz
# Add the Node.js bin directory to PATH
$ vim /etc/profile
> PATH=$PATH:/software/node-v10.15.0-linux-x64/bin
$ source /etc/profile
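
To confirm the installation took effect, check that both binaries are on the PATH:

# Verify the Node.js and npm installs; node should report the version from the tarball
$ node -v
$ npm -v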

Install elasticdump via npm

# The only difference between a local and a global install is whether the executable is put on your PATH automatically; otherwise they are identical
# Local install
$ npm install elasticdump
$ ./node_modules/.bin/elasticdump
# Global install
$ npm install elasticdump -g
$ elasticdump

Note: I installed the tool directly on one of the ES cluster nodes. It can just as well be installed on another host, as long as that host can reach the cluster over the network; running it locally simply means traffic stays on the local NIC, which is faster.
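
If you do run it from a separate host, it is worth confirming ES is reachable first (the IP below is the example node used throughout this post):

# A healthy node answers with a JSON blob containing the cluster name and version
$ curl http://192.168.56.104:9200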

2. Exporting Data with Elasticdump

Exporting data from ES to a local JSON file

# Format: elasticdump --input {protocol}://{host}:{port}/{index} --output ./test_index.json
# Example: export the test_index index from ES
# Export the index's mapping
$ elasticdump --input http://192.168.56.104:9200/test_index --output ./test_index_mapping.json --type=mapping
# Export all documents in the index
$ elasticdump --input http://192.168.56.104:9200/test_index --output ./test_index.json --type=data

Both exported files are needed when importing into ES: one is the mapping file, the other the actual data. The mapping can of course also be written by hand.
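
For reference, both outputs are newline-delimited JSON. A minimal sketch of what each might contain (the username field is hypothetical; the data shape matches the "Normal" format described under --sourceOnly in the parameter reference below):

# test_index_mapping.json: the index's mappings as a single JSON object, e.g.
# {"test_index":{"mappings":{"properties":{"username":{"type":"keyword"}}}}}
# test_index.json: one document per line, e.g.
# {"_index":"test_index","_type":"_doc","_id":"1","_source":{"username":"admin"}}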

Error: on first use after installation, the run failed with CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory.

Fix: I tried several approaches; what finally let the job run to completion was increasing the Node.js heap size.

# Only one setting is needed; of the two variants I tried, this quoted form is the syntactically correct one
$ export NODE_OPTIONS="--max-old-space-size=8192"

With the heap size sorted out, two more parameters deserve attention during use: limit (default 100) and scrollTime (default 10m).

limit: the number of objects fetched from ES per request. I once raised it to 1000 and ran out of memory again; the default is fine.

scrollTime: how long ES keeps the scroll context alive. A scroll behaves like a snapshot of the data at the moment it starts, so documents indexed afterwards are not included. The default is 10m (10 minutes); if the data set is large and the export will take a while, increase this value so the scroll does not expire before the export finishes.

How to change these parameters:

  • Edit the elasticdump script in the install directory (/root/node-v10.15.0-linux-x64/bin) and change the defaults there
  • Pass them directly on the command line, e.g.: elasticdump --limit=200 --input http://192.168.56.104:9200/test_index --output ./test_index (a fuller example follows below)
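
For instance, a sketch that raises both knobs for a large, slow export (the values are illustrative, not recommendations):

# Larger batches and a longer-lived scroll context
$ elasticdump --limit=200 --scrollTime=30m \
  --input=http://192.168.56.104:9200/test_index \
  --output=./test_index.json \
  --type=data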

3. Importing Local JSON Files into ES

The export above produced two files, a data file and a mapping file. Now we import them.

Checks before importing:

  • Create the index on the target ES, keeping the index name and type consistent with the mapping file

  • Check whether mapping.json exists; this depends on whether you exported it. If you didn't, you can write it by hand (not covered in detail here)

  • Check whether an index of the same name already exists (i.e., you are importing back into the same ES): if so, edit the index name in the exported mapping.json first (see the sketch below); if not, you can import directly
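
A minimal sketch of that rename, assuming the index name appears as the top-level key of test_index_mapping.json (sed is blunt here; a JSON-aware tool would be safer):

# Rewrite test_index to a hypothetical new name, test_index_new
$ sed 's/"test_index"/"test_index_new"/' test_index_mapping.json > test_index_new_mapping.json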

Data import:

# Create the index on the target ES
$ curl -XPUT http://192.168.56.105:9200/test_index
# We are importing the mapping, so set type to mapping
$ elasticdump --input ./test_index_mapping.json --output http://192.168.56.105:9200/ --type=mapping
# We are importing the data (the real documents), so set type to data
$ elasticdump --input ./test_index.json --output http://192.168.56.105:9200/ --type=data
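
To sanity-check the result, standard ES APIs can be queried on the target:

# Confirm the document count matches the source index
$ curl http://192.168.56.105:9200/test_index/_count
# Inspect the imported mapping
$ curl http://192.168.56.105:9200/test_index/_mapping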


Choose the import/export parameters according to the total data volume and the size of individual documents.

4. Import/Export with the Official Elasticdump Docker Image

Docker must already be installed; that is not covered here.

# Pull the image
$ docker pull taskrabbit/elasticsearch-dump
# Example: export data to the local machine via the image
# Create a directory to hold the exported data
$ mkdir -p /root/data
# Map the directory into the container and run (export the mapping)
$ docker run --rm -ti -v /root/data:/tmp taskrabbit/elasticsearch-dump \
  --input=http://192.168.56.104:9200/test_index \
  --output=/tmp/test_index_mapping.json \
  --type=mapping
# Export the data
$ docker run --rm -ti -v /root/data:/tmp taskrabbit/elasticsearch-dump \
  --input=http://192.168.56.104:9200/test_index \
  --output=/tmp/elasticdump_export.json \
  --type=data

# Example: ES -> ES data migration
$ docker run --rm -ti taskrabbit/elasticsearch-dump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=mapping
$ docker run --rm -ti taskrabbit/elasticsearch-dump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=data
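
If the heap error from section 2 also appears inside the container, one option is to pass NODE_OPTIONS through to the container's Node.js process (an assumption on my part; I have not verified it against this image, though it is the standard Node.js mechanism):

# Raise the Node.js heap limit inside the container (untested assumption)
$ docker run --rm -ti -e NODE_OPTIONS="--max-old-space-size=8192" \
  taskrabbit/elasticsearch-dump \
  --input=http://192.168.56.104:9200/test_index \
  --output=http://192.168.56.105:9200/test_index \
  --type=data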

Note: all of the commands above are the most basic usage. There are many more advanced options; experiment with the commands listed below, or go straight to the GitHub project for the full documentation. This post is only a record.

Reference examples from the GitHub repository:

# Copy an index from production to staging with analyzer and mapping:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=analyzer
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=mapping
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=data

# Backup index data to a file:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index_mapping.json \
  --type=mapping
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index.json \
  --type=data

# Backup an index to gzip using stdout:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=$ \
  | gzip > /data/my_index.json.gz

# Backup the results of a query to a file
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=query.json \
  --searchBody='{"query":{"term":{"username": "admin"}}}'

# Copy a single shard data:
elasticdump \
  --input=http://es.com:9200/api \
  --output=http://es.com:9200/api2 \
  --params='{"preference" : "_shards:0"}'

# Backup aliases to a file
elasticdump \
  --input=http://es.com:9200/index-name/alias-filter \
  --output=alias.json \
  --type=alias

# Import aliases into ES
elasticdump \
  --input=./alias.json \
  --output=http://es.com:9200 \
  --type=alias

# Backup templates to a file
elasticdump \
  --input=http://es.com:9200/template-filter \
  --output=templates.json \
  --type=template

# Import templates into ES
elasticdump \
  --input=./templates.json \
  --output=http://es.com:9200 \
  --type=template

# Split files into multiple parts
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index.json \
  --fileSize=10mb

# Import data from S3 into ES (using s3urls)
elasticdump \
  --s3AccessKeyId "${access_key_id}" \
  --s3SecretAccessKey "${access_key_secret}" \
  --input "s3://${bucket_name}/${file_name}.json" \
  --output=http://production.es.com:9200/my_index

# Export ES data to S3 (using s3urls)
elasticdump \
  --s3AccessKeyId "${access_key_id}" \
  --s3SecretAccessKey "${access_key_secret}" \
  --input=http://production.es.com:9200/my_index \
  --output "s3://${bucket_name}/${file_name}.json"

Parameter reference from the elasticdump GitHub README:

elasticdump: Import and export tools for elasticsearch
version: %%version%%

Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]

--input
                    Source location (required)
--input-index
                    Source index and type
                    (default: all, example: index/type)
--output
                    Destination location (required)
--output-index
                    Destination index and type
                    (default: all, example: index/type)
--overwrite
                    Overwrite output file if it exists
                    (default: false)                    
--limit
                    How many objects to move in batch per operation
                    limit is approximate for file streams
                    (default: 100)
--size
                    How many objects to retrieve
                    (default: -1 -> no limit)
--concurrency
                    The maximum number of requests that can be made concurrently to a specified transport.
                    (default: 1)       
--concurrencyInterval
                    The length of time in milliseconds in which up to <intervalCap> requests can be made
                    before the interval request count resets. Must be finite.
                    (default: 5000)       
--intervalCap
                    The maximum number of transport requests that can be made within a given <concurrencyInterval>.
                    (default: 5)
--carryoverConcurrencyCount
                    If true, any incomplete requests from a <concurrencyInterval> will be carried over to
                    the next interval, effectively reducing the number of new requests that can be created
                    in that next interval.  If false, up to <intervalCap> requests can be created in the
                    next interval regardless of the number of incomplete requests from the previous interval.
                    (default: true)                                                                                       
--throttleInterval
                    Delay in milliseconds between getting data from an inputTransport and sending it to an
                    outputTransport.
                     (default: 1)
--debug
                    Display the elasticsearch commands being used
                    (default: false)
--quiet
                    Suppress all messages except for errors
                    (default: false)
--type
                    What are we exporting?
                    (default: data, options: [settings, analyzer, data, mapping, alias, template])
--delete
                    Delete documents one-by-one from the input as they are
                    moved.  Will not delete the source index
                    (default: false)
--searchBody
                    Perform a partial extract based on search results
                    when ES is the input, default values are
                      if ES > 5
                        `'{"query": { "match_all": {} }, "stored_fields": ["*"], "_source": true }'`
                      else
                        `'{"query": { "match_all": {} }, "fields": ["*"], "_source": true }'`
--headers
                    Add custom headers to Elasticsearch requests (helpful when
                    your Elasticsearch instance sits behind a proxy)
                    (default: '{"User-Agent": "elasticdump"}')
--params
                    Add custom parameters to the Elasticsearch request URI. Helpful when, for example,
                    you want to use elasticsearch preference
                    (default: null)
--sourceOnly
                    Output only the json contained within the document _source
                    Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
                    sourceOnly: {SOURCE}
                    (default: false)
--ignore-errors
                    Will continue the read/write loop on write error
                    (default: false)
--scrollTime
                    How long the nodes will hold the requested search context open.
                    (default: 10m)
--maxSockets
                    How many simultaneous HTTP requests can we make?
                    (default:
                      5 [node <= v0.10.x] /
                      Infinity [node >= v0.11.x] )
--timeout
                    Integer containing the number of milliseconds to wait for
                    a request to respond before aborting the request. Passed
                    directly to the request library. Mostly used when you don't
                    care too much if you lose some data when importing
                    but rather have speed.
--offset
                    Integer containing the number of rows you wish to skip
                    ahead from the input transport.  When importing a large
                    index, things can go wrong, be it connectivity, crashes,
                    someone forgetting to `screen`, etc.  This allows you
                    to start the dump again from the last known line written
                    (as logged by the `offset` in the output).  Please be
                    advised that since no sorting is specified when the
                    dump is initially created, there's no real way to
                    guarantee that the skipped rows have already been
                    written/parsed.  This is more of an option for when
                    you want to get most data as possible in the index
                    without concern for losing some rows in the process,
                    similar to the `timeout` option.
                    (default: 0)
--noRefresh
                    Disable input index refresh.
                    Positive:
                      1. Greatly increases indexing speed
                      2. Much lower hardware requirements
                    Negative:
                      1. Recently added data may not be indexed
                    Recommended for big data indexing,
                    where speed and system health are a higher priority
                    than recently added data.
--inputTransport
                    Provide a custom js file to use as the input transport
--outputTransport
                    Provide a custom js file to use as the output transport
--toLog
                    When using a custom outputTransport, should log lines
                    be appended to the output stream?
                    (default: true, except for `$`)
--transform
                    A javascript snippet, which will be called to modify documents
                    before writing them to the destination. The global variable 'doc'
                    is available.
                    Example script for computing a new field 'f2' as doubled
                    value of field 'f1':
                        doc._source["f2"] = doc._source.f1 * 2;
                    May be used multiple times.
                    Additionally, transform may be performed by a module. See [Module Transform](#module-transform) below.
--awsChain
                    Use [standard](https://aws.amazon.com/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks/) location and ordering for resolving credentials including environment variables, config files, EC2 and ECS metadata locations
                    _Recommended option for use with AWS_
--awsAccessKeyId
--awsSecretAccessKey
                    When using Amazon Elasticsearch Service protected by
                    AWS Identity and Access Management (IAM), provide
                    your Access Key ID and Secret Access Key.
                    --sessionToken can also be optionally provided if using temporary credentials
--awsIniFileProfile
                    Alternative to --awsAccessKeyId and --awsSecretAccessKey,
                    loads credentials from a specified profile in aws ini file.
                    For greater flexibility, consider using --awsChain
                    and setting AWS_PROFILE and AWS_CONFIG_FILE
                    environment variables to override defaults if needed
--awsIniFileName
                    Override the default aws ini file name when using --awsIniFileProfile
                    Filename is relative to ~/.aws/
                    (default: config)
--support-big-int   
                    Support big integer numbers
--retryAttempts  
                    Integer indicating the number of times a request should be automatically re-attempted before failing
                    when a connection fails with one of the following errors `ECONNRESET`, `ENOTFOUND`, `ESOCKETTIMEDOUT`,
                    `ETIMEDOUT`, `ECONNREFUSED`, `EHOSTUNREACH`, `EPIPE`, `EAI_AGAIN`
                    (default: 0)
                    
--retryDelay   
                    Integer indicating the back-off/break period between retry attempts (milliseconds)
                    (default : 5000)            
--parseExtraFields
                    Comma-separated list of meta-fields to be parsed  
--fileSize
                    supports file splitting.  This value must be a string supported by the **bytes** module.     
                    The following abbreviations must be used to signify size in terms of units         
                    b for bytes
                    kb for kilobytes
                    mb for megabytes
                    gb for gigabytes
                    tb for terabytes
                    
                    e.g. 10mb / 1gb / 1tb
                    Partitioning helps to alleviate overflow/out-of-memory exceptions by efficiently segmenting files
                    into smaller chunks that can then be merged if need be.
--fsCompress
                    gzip data before outputting to file
--s3AccessKeyId
                    AWS access key ID
--s3SecretAccessKey
                    AWS secret access key
--s3Region
                    AWS region
--s3Endpoint        
                    AWS endpoint can be used for AWS compatible backends such as
                    OpenStack Swift and OpenStack Ceph
--s3SSLEnabled      
                    Use SSL to connect to AWS [default true]
                    
--s3ForcePathStyle  Force path style URLs for S3 objects [default false]
                    
--s3Compress
                    gzip data before sending to s3  

--retryDelayBase
                    The base number of milliseconds to use in the exponential backoff for operation retries. (s3)
--customBackoff
                    Activate custom customBackoff function. (s3)
--tlsAuth
                    Enable TLS X509 client authentication
--cert, --input-cert, --output-cert
                    Client certificate file. Use --cert if source and destination are identical.
                    Otherwise, use the one prefixed with --input or --output as needed.
--key, --input-key, --output-key
                    Private key file. Use --key if source and destination are identical.
                    Otherwise, use the one prefixed with --input or --output as needed.
--pass, --input-pass, --output-pass
                    Pass phrase for the private key. Use --pass if source and destination are identical.
                    Otherwise, use the one prefixed with --input or --output as needed.
--ca, --input-ca, --output-ca
                    CA certificate. Use --ca if source and destination are identical.
                    Otherwise, use the one prefixed with --input or --output as needed.
--inputSocksProxy, --outputSocksProxy
                    Socks5 host address
--inputSocksPort, --outputSocksPort
                    Socks5 host port
--help
                    This page
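
As a closing sketch, several of the flags above combined into one command (the hosts, paths, and query are illustrative):

# Export only documents matching a query, in 500-object batches, gzipped on disk
elasticdump \
  --input=http://192.168.56.104:9200/test_index \
  --output=/root/data/test_index_admin.json \
  --searchBody='{"query":{"term":{"username":"admin"}}}' \
  --limit=500 \
  --fsCompress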

