This article uses the latest elasticsearch-6.3.0.tar.gz as an example. To save resources, the number of replicas is set to 0 and no dedicated client (coordinating) node is deployed.
Reference: https://www.elastic.co/blog/hot-warm-architecture-in-elasticsearch-5-x
In ES 2.x you could set node.tag: hot in elasticsearch.yml; that setting no longer takes effect and has been replaced by:
node.attr.box_type: hot
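Once the nodes are up, you can confirm the attribute was picked up. A quick check with curl, assuming the master's HTTP endpoint from the configs below:
# List the custom attributes of every node in the cluster
curl -s 'http://192.168.2.11:9200/_cat/nodeattrs?v&h=node,attr,value'
# Expected output includes a line like: 192.168.2.12  box_type  hot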
ES architecture
ES configuration for each node
Master node:
[root@n1 ~]# cat /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: elk
node.master: true
node.data: false
node.name: 192.168.2.11
#node.attr.box_type: hot
#node.tag: hot
path.data: /data/es
path.logs: /data/log
network.host: 192.168.2.11
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["192.168.2.11"]
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
indices.fielddata.cache.size: 10%
indices.breaker.fielddata.limit: 30%
http.cors.enabled: true
http.cors.allow-origin: "*"
- Client node (not deployed in this setup; it would use the following settings)
node.master: false
node.data: false
- Hot node
[root@n2 ~]# cat /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: elk
node.master: false
node.data: true
node.name: 192.168.2.12
node.attr.box_type: hot
path.data: /data/es
network.host: 192.168.2.12
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["192.168.2.11"]
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
indices.fielddata.cache.size: 10%
indices.breaker.fielddata.limit: 30%
http.cors.enabled: true
http.cors.allow-origin: "*"
- Cold node
[root@n3 ~]# cat /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: elk
node.master: false
node.data: true
node.name: 192.168.2.13
node.attr.box_type: cold
path.data: /data/es
network.host: 192.168.2.13
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["192.168.2.11"]
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
indices.fielddata.cache.size: 10%
indices.breaker.fielddata.limit: 30%
http.cors.enabled: true
http.cors.allow-origin: "*"
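With all three nodes started, a quick sanity check (a sketch, assuming the cluster is reachable at the master's address) confirms every node joined with the expected role:
# Show each node's IP, roles (m = master-eligible, d = data) and name
curl -s 'http://192.168.2.11:9200/_cat/nodes?v&h=ip,node.role,name'
# Check overall cluster health
curl -s 'http://192.168.2.11:9200/_cluster/health?pretty'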
How do I write a given index's data to specific nodes? (By using the node tag/attribute.)
My hot node carries the tag:
node.attr.box_type: hot
Create a template (here I use Kibana Dev Tools to call the ES API):
PUT _template/test
{
"index_patterns": "test-*",
"settings": {
"index.number_of_replicas": "0",
"index.routing.allocation.require.box_type": "hot"
}
}
This means that any index whose name matches test-* will have its data allocated to the hot node(s).
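To verify the routing, create a matching index and inspect where its shards land; a minimal check with curl (the index name here is just the example used later in this article):
# Create an index that matches the template's pattern
curl -s -XPUT 'http://192.168.2.11:9200/test-2018.07.05'
# Show which node each shard of the index was allocated to
curl -s 'http://192.168.2.11:9200/_cat/shards/test-2018.07.05?v'
# With 0 replicas, all shards should sit on 192.168.2.12 (the hot node)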
How do I migrate data from hot nodes to cold nodes as it ages?
Take the index test-2018.07.05 as an example and migrate it from the hot node to the cold node.
In Kibana:
PUT /test-2018.07.05/_settings
{
"settings": {
"index.routing.allocation.require.box_type": "cold"
}
}
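Once the setting is applied, ES relocates the shards automatically; you can watch the move complete with the _cat APIs (a sketch against the same cluster):
# Shards show RELOCATING while the move is in progress
curl -s 'http://192.168.2.11:9200/_cat/shards/test-2018.07.05?v'
# Ongoing recoveries only; empty output means the migration is done
curl -s 'http://192.168.2.11:9200/_cat/recovery/test-2018.07.05?v&active_only=true'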
In production, a new index may be generated every day, or even every hour:
test-2018.07.01
test-2018.07.02
test-2018.07.03
test-2018.07.04
test-2018.07.05
...
I can write a shell cron job that migrates data every night. For example, if hot nodes keep only 7 days of data, I match the indices older than 7 days and run the migration command above against each of them every night (see the sketch below).
And keeping cold-node data for 1 month? The same nightly job can delete indices older than 30 days.
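A minimal cron-able sketch, assuming daily indices named test-YYYY.MM.dd, GNU date, and the master endpoint used throughout this article; the prefix and the 7-day/30-day thresholds are assumptions to adapt:
#!/bin/bash
# Nightly job: move the 7-day-old index to cold nodes, delete the 30-day-old one.
ES=http://192.168.2.11:9200
PREFIX=test   # assumed index prefix
move_day=$(date -d '7 days ago' +%Y.%m.%d)   # this day's index moves to cold
drop_day=$(date -d '30 days ago' +%Y.%m.%d)  # this day's index is deleted
# Re-tag the index so ES relocates its shards to box_type=cold nodes
curl -s -XPUT "$ES/$PREFIX-$move_day/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.routing.allocation.require.box_type": "cold"}'
# Drop the expired index entirely
curl -s -XDELETE "$ES/$PREFIX-$drop_day"
Run once per night (e.g. from crontab) and each index is handled exactly once as it crosses each age threshold.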
See also: https://www.cnblogs.com/iiiiher/p/8029062.html
Optimization tips:
1. To improve throughput, configure multiple data paths, mounting one disk per directory:
path.data: /data1,/data2,/data3,/data4,/data5
2. If there are 10 hot nodes, set the index to 10 shards so writes spread evenly across them (see the template sketch below).
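For example, the test template from earlier could be extended so each matching index gets 10 primaries, one per hot node; the shard count here is an assumption tied to that node count:
# Extend the template: 10 primary shards, 0 replicas, pinned to hot nodes
curl -s -XPUT 'http://192.168.2.11:9200/_template/test' \
  -H 'Content-Type: application/json' -d '
{
  "index_patterns": "test-*",
  "settings": {
    "index.number_of_shards": "10",
    "index.number_of_replicas": "0",
    "index.routing.allocation.require.box_type": "hot"
  }
}'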
Logstash test
input { stdin { } }
output {
  elasticsearch {
    index => "test-%{+YYYY.MM.dd}"
    hosts => ["192.168.2.11:9200"]
  }
  stdout { codec => rubydebug }
}
/usr/local/logstash/bin/logstash -f logstash.yaml --config.reload.automatic
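Type a line into stdin and Logstash indexes it into today's test-* index; a quick check that the event arrived and landed on the hot node (the exact index name depends on the current date):
# List the test-* indices that Logstash created
curl -s 'http://192.168.2.11:9200/_cat/indices/test-*?v'
# Confirm the new index's shards sit on the hot node (192.168.2.12)
curl -s 'http://192.168.2.11:9200/_cat/shards/test-*?v'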
About ES index templates
Whenever data is written into ES, it is matched against an index template; data shipped by Logstash matches the built-in template named logstash by default.
A template breaks down into two main parts: settings and mappings.
- settings controls index-level configuration such as the number of shards and replicas, translog sync conditions, refresh interval, and so on.
- mappings describes the documents themselves and is roughly divided into _all, _source, and properties: https://elasticsearch.cn/article/335
Which index template applies is decided by matching the index name against index_patterns. Templates are stored in the cluster state and apply cluster-wide, not per node; what is per-node are attributes such as box_type, and it is the template's allocation settings that route matching indices onto nodes carrying a given tag.
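To see what a template actually contains, fetch it by name; a quick look at both the default logstash template and the test template defined earlier:
# Fetch the built-in template that Logstash output applies by default
curl -s 'http://192.168.2.11:9200/_template/logstash?pretty'
# Fetch the custom template created earlier in this article
curl -s 'http://192.168.2.11:9200/_template/test?pretty'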