The previous post walked through building a MongoDB sharded cluster in detail: Sharded Cluster Setup.
This post covers the maintenance and day-to-day operation of a sharded cluster.
After the cluster was built, we used sh.status() to inspect the sharded data, as shown below:

#Connect to the mongos router
[root@test1 bin]# ./mongo --port 27017
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5be2a93b4c4972e711620a02")
}
shards: #the shards in the cluster
{ "_id" : "shard1", "host" : "shard1/10.0.102.202:30001,test4:30001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.0.102.220:30002,test3:30002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.0.102.202:30003,test2:30003", "state" : 1 }
active mongoses: #version of the active mongos instances
"3.4.2" : 1
autosplit: #whether autosplit is enabled
Currently enabled: yes
balancer: #the balancer (explained later)
Currently enabled: yes
Currently running: no
Balancer lock taken at Wed Nov 07 2018 18:26:23 GMT+0800 (CST) by ConfigServer:Balancer
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
4 : Success
databases: #the sharded databases
{ "_id" : "mytest", "primary" : "shard2", "partitioned" : true }
mytest.test
shard key: { "id" : 1 } #the shard key
unique: false #whether the shard key is unique
balancing: true #whether balancing is enabled
chunks: #number of chunks on each shard; the chunk ranges on each shard are listed in detail below
shard1 3
shard2 2
shard3 2
{ "id" : { "$minKey" : 1 } } -->> { "id" : 2 } on : shard1 Timestamp(5, 1)
{ "id" : 2 } -->> { "id" : 22 } on : shard3 Timestamp(3, 0)
{ "id" : 22 } -->> { "id" : 171218 } on : shard2 Timestamp(4, 1)
{ "id" : 171218 } -->> { "id" : 373212 } on : shard2 Timestamp(3, 3)
{ "id" : 373212 } -->> { "id" : 544408 } on : shard1 Timestamp(4, 2)
{ "id" : 544408 } -->> { "id" : 742999 } on : shard1 Timestamp(4, 3)
{ "id" : 742999 } -->> { "id" : { "$maxKey" : 1 } } on : shard3 Timestamp(5, 0)
mongos>
To check the shards, you can also connect to a config server and look at the sharding metadata.
shard_cfg:PRIMARY> show dbs;
admin   0.000GB
config  0.001GB
local   0.001GB
shard_cfg:PRIMARY> use config
switched to db config
shard_cfg:PRIMARY> show tables;    #the collections in this database are described in detail below
actionlog
changelog
chunks
collections
databases
lockpings
locks
migrations
mongos
shards
tags
version
shard_cfg:PRIMARY> db.chunks.count()    #7 chunks in total
7
shard_cfg:PRIMARY> db.chunks.find().limit(1).pretty()    #view the first chunk
{
"_id" : "mytest.test-id_MinKey",
"lastmod" : Timestamp(5, 1),
"lastmodEpoch" : ObjectId("5be2ce5986b5988b373c7cca"),
"ns" : "mytest.test", #命名空間
"min" : {
"id" : { "$minKey" : 1 } #分片鍵的最小值
},
"max" : { #分片鍵的最大值
"id" : 2
},
"shard" : "shard1" #在哪個分片
}
#The statistics from this command are the same as what sh.status() reports on mongos.
#You can also check directly how many chunks each shard holds:
shard_cfg:PRIMARY> db.chunks.find({"shard":"shard1"}).count()
3
shard_cfg:PRIMARY> db.chunks.find({"shard":"shard2"}).count()
2
shard_cfg:PRIMARY> db.chunks.find({"shard":"shard3"}).count()
2
MongoDB tries to keep the data on every shard as evenly balanced as possible.
#Document counts on each shard's replica set. [This counts as balanced; each document is only a few bytes, so the exact numbers are not important, what matters is that the data is spread across the shards.]
shard1:PRIMARY> db.test.count()
369788
shard2:PRIMARY> db.test.count()
373190
shard3:PRIMARY> db.test.count()
257022
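As an alternative to logging in to each shard's primary, the same distribution can be checked from mongos itself. A minimal sketch, assuming the mytest.test collection from above:

mongos> use mytest
switched to db mytest
mongos> db.test.getShardDistribution()
#Prints, for each shard, the estimated data size, document count and number of chunks,
#plus cluster-wide totals and the percentage of data held by each shard.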
MongoDB keeps the data balanced across the shards mainly by means of splitting and migration.
Splitting is the process of dividing one chunk into two smaller chunks. It only happens when a chunk grows past the maximum chunk size, which currently defaults to 64 MB. Splitting is necessary because a chunk that is too large is hard to distribute across the cluster.
Migration is the process of moving chunks between shards. When some shards hold considerably more chunks than others, a migration round is triggered: chunks are moved from those shards to other shards until the cluster looks reasonably balanced again.
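The 64 MB default can be tuned through the settings collection of the config database. A minimal sketch, run against mongos, assuming you want a 32 MB chunk size for an experiment (smaller chunks split and migrate sooner, which makes the behaviour easier to observe, but is not something to change lightly in production):

mongos> use config
switched to db config
mongos> db.settings.save({_id: "chunksize", value: 32})    #chunk size in MB; save() creates the document (and the collection) if it does not exist yet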
The config servers hold a changelog collection; through it we can see how many splits and migrations MongoDB has performed:
shard_cfg:PRIMARY> db.changelog.find({what: "split"}).count()             #number of splits (the data was only bulk-inserted once, so no splits have happened)
0
shard_cfg:PRIMARY> db.changelog.find({what: "moveChunk.commit"}).count()  #number of migrations
4
#Insert the same kind of data again to watch splitting and migration happen. [This inserts another 2 million documents and takes a while.]
mongos> use mytest
switched to db mytest
mongos> for(var i = 1; i < 2000000; i++){
... db.test.save({id: i, name: "test2"})
... }
WriteResult({ "nInserted" : 1 })
#Check the sharding status at this point
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5be2a93b4c4972e711620a02")
}
shards:
{ "_id" : "shard1", "host" : "shard1/10.0.102.202:30001,test4:30001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.0.102.220:30002,test3:30002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.0.102.202:30003,test2:30003", "state" : 1 }
active mongoses:
"3.4.2" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Balancer lock taken at Wed Nov 07 2018 18:26:23 GMT+0800 (CST) by ConfigServer:Balancer
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
7 : Success
1 : Failed with error 'aborted', from shard2 to shard3
databases:
{ "_id" : "mytest", "primary" : "shard2", "partitioned" : true }
mytest.test
shard key: { "id" : 1 }
unique: false
balancing: true
chunks:
shard1 5
shard2 4
shard3 4
{ "id" : { "$minKey" : 1 } } -->> { "id" : 2 } on : shard1 Timestamp(8, 1)
{ "id" : 2 } -->> { "id" : 22 } on : shard3 Timestamp(7, 1)
{ "id" : 22 } -->> { "id" : 171218 } on : shard2 Timestamp(6, 1)
{ "id" : 171218 } -->> { "id" : 256816 } on : shard3 Timestamp(6, 0)
{ "id" : 256816 } -->> { "id" : 342414 } on : shard2 Timestamp(5, 3)
{ "id" : 342414 } -->> { "id" : 373212 } on : shard2 Timestamp(5, 4)
{ "id" : 373212 } -->> { "id" : 544408 } on : shard1 Timestamp(4, 2)
{ "id" : 544408 } -->> { "id" : 742999 } on : shard1 Timestamp(4, 3)
{ "id" : 742999 } -->> { "id" : 828597 } on : shard3 Timestamp(6, 2)
{ "id" : 828597 } -->> { "id" : 1000000 } on : shard3 Timestamp(6, 3)
{ "id" : 1000000 } -->> { "id" : 1249999 } on : shard1 Timestamp(7, 2)
{ "id" : 1249999 } -->> { "id" : 1603980 } on : shard1 Timestamp(7, 3)
{ "id" : 1603980 } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(8, 0)
#Check whether any migrations or splits have happened
shard_cfg:PRIMARY> use config
switched to db config
shard_cfg:PRIMARY> db.changelog.find({what: "split"}).count()
0
shard_cfg:PRIMARY> db.changelog.find({what: "moveChunk.commit"}).count()
7
#7 migrations occurred, and no splits.
Query routing
When a query includes the shard key, mongos can consult the chunk metadata to work out which shard holds the matching data; this is called a targeted query. A query that does not include the shard key has to be sent to every shard to be satisfied; this is called a scatter-gather (global) query.
When queries are targeted at a single shard, a slow shard makes the whole cluster feel slow. This is where indexes come in to optimize the queries.
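Whether a query is targeted or scatter-gather can be checked with explain(). A minimal sketch from mongos, assuming the mytest.test collection sharded on id that was used earlier:

mongos> use mytest
switched to db mytest
mongos> db.test.find({id: 12345}).explain()      #includes the shard key: the winning plan stage is SINGLE_SHARD (targeted)
mongos> db.test.find({name: "test2"}).explain()  #no shard key: the winning plan stage is SHARD_MERGE and every shard is listed (scatter-gather)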
In a sharded cluster, keep the following points about indexes in mind:
- Each shard maintains its own indexes. When you declare an index on a sharded collection, every shard builds that index separately for its own portion of the collection.
- It follows that a sharded collection should have the same indexes on every shard; otherwise query performance cannot be consistent.
- A sharded collection only allows unique indexes on the _id field and on the shard key. Unique indexes elsewhere are forbidden, because enforcing uniqueness would require coordination between shards, which is ruled out by how MongoDB sharding works under the hood (see the sketch below).
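A minimal sketch of these rules, run from mongos against the mytest.test collection (sharded on id) used throughout this post:

mongos> use mytest
switched to db mytest
mongos> db.test.createIndex({name: 1})                  #each shard builds this index on its own portion of the collection
mongos> db.test.createIndex({id: 1}, {unique: true})    #allowed by the sharding rules: the unique index is on the shard key (the build still fails if existing id values are duplicated)
mongos> db.test.createIndex({name: 1}, {unique: true})  #rejected: uniqueness on a non-shard-key field cannot be enforced across shards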
Operating on the shards of a cluster
Adding a shard is done with sh.addShard() (a sketch follows). When adding a new shard this way, keep in mind that migrating data onto it takes time; expect roughly 100-200 MB of data to move per minute. So the time to add shards is before the cluster's capacity runs out: once the indexes and working set no longer fit in RAM, the application starts to stall. Especially when the application needs high read/write concurrency, database I/O is already high and read/write throughput drops, which makes adding a shard at that point much more painful.
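A minimal sketch of the command, assuming a hypothetical fourth replica set named shard4 listening on 10.0.102.210:30004 (not part of the cluster built earlier):

mongos> sh.addShard("shard4/10.0.102.210:30004")
#On success the response contains { "shardAdded" : "shard4", "ok" : 1 },
#after which the balancer starts migrating chunks onto the new shard.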
Removing a shard
The cluster we built has three shards storing data; their information is as follows:

shards:
{ "_id" : "shard1", "host" : "shard1/10.0.102.202:30001,test4:30001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.0.102.220:30002,test3:30002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.0.102.202:30003,test2:30003", "state" : 1 }
The chunks currently held by each shard:
chunks:
shard1 5
shard2 4
shard3 4
###########Suppose we now want to remove the last shard, shard3.
mongos> sh.setBalancerState("true")       #make sure the balancer is on
{ "ok" : 1 }
mongos> use admin                         #switch to the admin database
switched to db admin
mongos> db.runCommand({removeshard: "shard3/10.0.102.202:30003,test2:30003"})   #remove the shard
{ "msg" : "draining started successfully", "state" : "started", "shard" : "shard3", "note" : "you need to drop or movePrimary these databases", "dbsToMove" : [ ], "ok" : 1 }
#Draining has started: the shard's data is being migrated to the other shards.
##Run the same command again to check how the draining is going
mongos> db.runCommand({removeshard: "shard3/10.0.102.202:30003,test2:30003"})
{ "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(4), "dbs" : NumberLong(0) }, "note" : "you need to drop or movePrimary these databases", "dbsToMove" : [ ], "ok" : 1 }
#One puzzle here: the note suggests shard3 is the primary shard of the cluster [data is written to the primary shard first and then migrated to the other shards], yet the config database shows a different primary shard!
mongos> use config
switched to db config
mongos> db.databases.find()               #the primary shard shown here is shard2
{ "_id" : "mytest", "primary" : "shard2", "partitioned" : true }
[The command to move the primary shard is:
mongos> db.runCommand({movePrimary: "mytest", to : "shard1"})
{ "primary" : "shard1:shard1/10.0.102.202:30001,test4:30001", "ok" : 1 }
]
#After this the removal just stays stuck here; I have not found the cause yet.
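While the drain is stuck, a couple of things are worth checking (a sketch of a diagnostic approach, not a confirmed fix): whether the balancer is actually running migration rounds, and whether the number of chunks left on the draining shard keeps dropping.

mongos> sh.getBalancerState()        #must be true, otherwise draining cannot make progress
mongos> sh.isBalancerRunning()       #should periodically return true while chunks are being drained
mongos> use config
switched to db config
mongos> db.chunks.find({shard: "shard3"}).count()    #if this number stops decreasing, look for migration errors in the mongos and shard logs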
Notes on the config database:
The contents of the config database are the same whether you read it through mongos or directly on the config servers.

mongos> use config        #the data here is identical to what is stored on the config servers
switched to db config
mongos> show tables;
actionlog
changelog
chunks
collections
databases
lockpings
locks
migrations
mongos
shards
tags
version
mongos> db.shards.find()          #the shard information
{ "_id" : "shard1", "host" : "shard1/10.0.102.202:30001,test4:30001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.0.102.220:30002,test3:30002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.0.102.202:30003,test2:30003", "state" : 1 }
mongos> db.databases.find()       #the databases in the cluster; there is only this one, no test database
{ "_id" : "mytest", "primary" : "shard2", "partitioned" : true }
#primary: this shard is the primary shard of the database; data is written to the primary shard first and then migrated to the other shards.
mongos> db.collections.find().pretty()     #the sharded collections
{
"_id" : "mytest.test",
"lastmodEpoch" : ObjectId("5be4d47fd271124654f1411e"),
"lastmod" : ISODate("1970-02-19T17:02:47.412Z"),
"dropped" : false,
"key" : {              #the shard key
"id" : 1
},
"unique" : false       #whether the shard key is unique
}
mongos> db.mongos.find().pretty()          #all mongos routers in the cluster
{
"_id" : "test1:27017",
"ping" : ISODate("2018-11-09T02:18:35.796Z"),
"up" : NumberLong(69215),
"waiting" : true,
"mongoVersion" : "3.4.2"
}
{
"_id" : "test3:27017",
"ping" : ISODate("2018-11-09T02:18:35.594Z"),
"up" : NumberLong(68335),
"waiting" : true,
"mongoVersion" : "3.4.2"
}
mongos> db.locks.find().pretty()           #balancer lock information: config.locks records all cluster-wide locks and shows who is acting as the balancer
{
"_id" : "balancer",
"state" : 2,           #0 = idle, 1 = trying to acquire the lock, 2 = lock held, balancing in progress
"ts" : ObjectId("5be3df32c3d597874a374d77"),
"who" : "ConfigServer:Balancer",
"process" : "ConfigServer",
"when" : ISODate("2018-11-08T07:01:56.119Z"),
"why" : "CSRS Balancer"                #the cluster balancer
}
{
"_id" : "mytest",
"state" : 0,
"ts" : ObjectId("5be4d7eed271124654f14137"),
"who" : "test1:27017:1541660698:8505579169655688671:conn5",
"process" : "test1:27017:1541660698:8505579169655688671",
"when" : ISODate("2018-11-09T00:42:22.797Z"),
"why" : "enableSharding"               #enabling sharding on the database
}
{
"_id" : "mytest.test",
"state" : 0,
"ts" : ObjectId("5be3df32c3d597874a374d77"),
"who" : "ConfigServer:Balancer",
"process" : "ConfigServer",
"when" : ISODate("2018-11-09T01:51:38.065Z"),
"why" : "Migrating chunk(s) in collection mytest.test"         #chunk migration
}
mongos> db.chunks.find().count()           #211 chunks in total
211
mongos> db.chunks.find().limit(1).pretty() #view one of the chunk documents
{
"_id" : "mytest.test-id_MinKey",
"lastmod" : Timestamp(2, 0),
"lastmodEpoch" : ObjectId("5be4d47fd271124654f1411e"),
"ns" : "mytest.test",
"min" : {
"id" : { "$minKey" : 1 }
},
"max" : {
"id" : 250001
},
"shard" : "shard1"
}
mongos> db.changelog.find().count()        #the changelog records sharding operations
1167
mongos> db.changelog.find().limit(1).pretty()      #view one changelog document
{
"_id" : "test1-2018-11-08T15:05:38.318+0800-5be3e042c3d597874a3758b9",
"server" : "test1",
"clientAddr" : "10.0.102.220:22561",
"time" : ISODate("2018-11-08T07:05:38.318Z"),
"what" : "addShard",           #filter on the "what" field to pick out the events you need
"ns" : "",
"details" : {
"name" : "shard1",
"host" : "shard1/test4:30001,10.0.102.202:30001"
}
}
mongos> db.changelog.find({what: "split"}).count()                #number of splits
0
mongos> db.changelog.find({what: "moveChunk.commit"}).count()     #number of chunk moves
140
mongos> db.adminCommand({"connPoolStats":1})      #view the cluster's connection statistics
For some reason the cluster I built has no settings collection, even though both the MongoDB 3.4 and 3.6 official documentation list it; I do not know why it is missing here.
You can, however, inspect the chunks collection itself with the following command:

mongos> db.chunks.stats()
{
"sharded" : false,
"primary" : "config",
"ns" : "config.chunks",
"size" : 34746,
"count" : 211,
"avgObjSize" : 164,
"storageSize" : 53248,
"capped" : false,
"wiredTiger" : {
"metadata" : { "formatVersion" : 1 },
"creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
"type" : "file",
"uri" : "statistics:table:collection-14--1825262580711858791",
"LSM" : { "bloom filter false positives" : 0, "bloom filter hits" : 0, "bloom filter misses" : 0, "bloom filter pages evicted from cache" : 0, "bloom filter pages read into cache" : 0, "bloom filters in the LSM tree" : 0, "chunks in the LSM tree" : 0, "highest merge generation in the LSM tree" : 0, "queries that could have benefited from a Bloom filter that did not exist" : 0, "sleep for LSM checkpoint throttle" : 0, "sleep for LSM merge throttle" : 0, "total size of bloom filters" : 0 },
"block-manager" : { "allocations requiring file extension" : 11, "blocks allocated" : 180, "blocks freed" : 50, "checkpoint size" : 12288, "file allocation unit size" : 4096, "file bytes available for reuse" : 24576, "file magic number" : 120897, "file major version number" : 1, "file size in bytes" : 53248, "minor version number" : 0 },
"btree" : { "btree checkpoint generation" : 1335, "column-store fixed-size leaf pages" : 0, "column-store internal pages" : 0, "column-store variable-size RLE encoded values" : 0, "column-store variable-size deleted values" : 0, "column-store variable-size leaf pages" : 0, "fixed-record size" : 0, "maximum internal page key size" : 368, "maximum internal page size" : 4096, "maximum leaf page key size" : 2867, "maximum leaf page size" : 32768, "maximum leaf page value size" : 67108864, "maximum tree depth" : 3, "number of key/value pairs" : 0, "overflow pages" : 0, "pages rewritten by compaction" : 0, "row-store internal pages" : 0, "row-store leaf pages" : 0 },
"cache" : { "bytes currently in the cache" : 80421, "bytes read into cache" : 0, "bytes written from cache" : 1194155, "checkpoint blocked page eviction" : 0, "data source pages selected for eviction unable to be evicted" : 0, "hazard pointer blocked page eviction" : 0, "in-memory page passed criteria to be split" : 0, "in-memory page splits" : 0, "internal pages evicted" : 0, "internal pages split during eviction" : 0, "leaf pages split during eviction" : 0, "modified pages evicted" : 0, "overflow pages read into cache" : 0, "overflow values cached in memory" : 0, "page split during eviction deepened the tree" : 0, "page written requiring lookaside records" : 0, "pages read into cache" : 0, "pages read into cache requiring lookaside entries" : 0, "pages requested from the cache" : 5640, "pages written from cache" : 95, "pages written requiring in-memory restoration" : 0, "unmodified pages evicted" : 0 },
"cache_walk" : { "Average difference between current eviction generation when the page was last considered" : 0, "Average on-disk page image size seen" : 0, "Clean pages currently in cache" : 0, "Current eviction generation" : 0, "Dirty pages currently in cache" : 0, "Entries in the root page" : 0, "Internal pages currently in cache" : 0, "Leaf pages currently in cache" : 0, "Maximum difference between current eviction generation when the page was last considered" : 0, "Maximum page size seen" : 0, "Minimum on-disk page image size seen" : 0, "On-disk page image sizes smaller than a single allocation unit" : 0, "Pages created in memory and never written" : 0, "Pages currently queued for eviction" : 0, "Pages that could not be queued for eviction" : 0, "Refs skipped during cache traversal" : 0, "Size of the root page" : 0, "Total number of pages currently in cache" : 0 },
"compression" : { "compressed pages read" : 0, "compressed pages written" : 52, "page written failed to compress" : 0, "page written was too small to compress" : 43, "raw compression call failed, additional data available" : 0, "raw compression call failed, no additional data available" : 0, "raw compression call succeeded" : 0 },
"cursor" : { "bulk-loaded cursor-insert calls" : 0, "create calls" : 19, "cursor-insert key and value bytes inserted" : 89436, "cursor-remove key bytes removed" : 0, "cursor-update value bytes updated" : 0, "insert calls" : 538, "next calls" : 669, "prev calls" : 1, "remove calls" : 0, "reset calls" : 5658, "restarted searches" : 0, "search calls" : 6051, "search near calls" : 0, "truncate calls" : 0, "update calls" : 0 },
"reconciliation" : { "dictionary matches" : 0, "fast-path pages deleted" : 0, "internal page key bytes discarded using suffix compression" : 21, "internal page multi-block writes" : 0, "internal-page overflow keys" : 0, "leaf page key bytes discarded using prefix compression" : 0, "leaf page multi-block writes" : 10, "leaf-page overflow keys" : 0, "maximum blocks required for a page" : 0, "overflow values written" : 0, "page checksum matches" : 1, "page reconciliation calls" : 86, "page reconciliation calls for eviction" : 0, "pages deleted" : 0 },
"session" : { "object compaction" : 0, "open cursor count" : 2 },
"transaction" : { "update conflicts" : 0 }
},
"nindexes" : 4,
"totalIndexSize" : 147456,
"indexSizes" : { "_id_" : 36864, "ns_1_min_1" : 36864, "ns_1_shard_1_min_1" : 36864, "ns_1_lastmod_1" : 36864 },
"ok" : 1
}
Choosing a shard key:
When data is written to a sharded cluster through mongos, it first lands on the cluster's primary shard, and chunk migration then moves chunks so the data ends up as evenly distributed as possible across the other shards. MongoDB migrates data in chunks built from ranges of shard-key values, and ultimately spreads the data evenly across the shards. Once chosen, the shard key cannot be changed for that collection.
MongoDB uses ranges of the shard key to split the collection's data into chunks. The choice of shard key therefore affects how the balancer creates and distributes chunks, and with it the overall performance and efficiency of the sharded cluster.
When choosing a shard key, avoid the following problems:
- Hotspots. Some shard keys cause all reads and writes to land on a single chunk or a single shard, which can leave that one shard badly overloaded while the other shards sit idle.
- Unsplittable chunks. An overly coarse-grained shard key can leave many documents sharing the same shard-key value. Because chunks are ranges of shard-key values, those documents cannot be split into multiple chunks, which ultimately limits MongoDB's ability to distribute data evenly.
- Poor targeting. Even if writes are distributed evenly, a shard key that is unrelated to your queries will still give poor query performance.
In the example above we chose the auto-incrementing id field as the shard key. The consequence is that all newly inserted documents belong to the same chunk, so every new document is written to the same shard; the write load is not spread at all. On top of that, while data is being written MongoDB also migrates chunks to keep the shards balanced, which puts even more pressure on that single shard and makes the performance problem worse. The advantage of an ascending key is that range queries are comparatively fast.
Sharding on a hashed key tells MongoDB to use the output of a hash function as the shard key, rather than the key's value directly. Because the hash output is effectively random, inserts are spread much more evenly across the cluster; the trade-off is that range scans have to touch multiple shards.
mongos> use foo
switched to db foo
mongos> db.fs.chunks.ensureIndex({"files_id":"hashed"})
mongos> sh.enableSharding("foo")
{ "ok" : 1 }
mongos> sh.shardCollection("foo.fs.chunks",{"files_id":"hashed"})
{ "collectionsharded" : "foo.fs.chunks", "ok" : 1 }
Compound shard keys:
When choosing a shard key you need to think about how reads will be targeted, how the data will be distributed, and how efficiently chunks can be split and migrated in your application.
Sometimes no single field makes a good shard key on its own; in that case a compound key can be used, typically a coarse-grained field combined with a fine-grained one (see the sketch below).
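A minimal sketch of sharding on a compound key, using a hypothetical mytest.logs collection that combines a coarse-grained customer_id with the fine-grained _id (the collection is assumed to be empty, so the required index is created automatically):

mongos> sh.shardCollection("mytest.logs", {customer_id: 1, _id: 1})
#Documents for one customer stay together, which keeps common queries targeted,
#while _id keeps the chunks of a busy customer splittable.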
Tag-aware sharding:
Tag each shard, then assign ranges of shard-key values to the tags, so that data in a given range is placed only on the shards carrying the matching tag:
mongos> sh.addShardTag("shard0000", "T")
mongos> sh.addShardTag("shard0001", "Q")
mongos> sh.addShardTag("shard0002", "Q")
mongos> sh.addTagRange("foo.ips", { "ip": "010.000.000.000" }, { "ip": "011.000.000.000" }, "T")
mongos> sh.addTagRange("foo.ips", { "ip": "011.000.000.000" }, { "ip": "012.000.000.000" }, "Q")
[Material excerpted from: Shard Key Selection]
Backing up a sharded cluster
Points to keep in mind when backing up:
- The first thing to be aware of when backing up a sharded cluster is that chunk migrations may be happening. That means that unless the backups of every shard are taken at the same point in time, some data is bound to be missed.
- When backing up a sharded cluster, the config server metadata must be backed up as well. A single config server is enough for this, because all config servers hold the same data.
- In particular, both of the backups above should only be taken after chunk migration has been disabled, as shown below.
mongos> use config
switched to db config
mongos> sh.stopBalancer()       #disable the balancer before taking the backups
mongos> use config
switched to db config
mongos> sh.startBalancer()      #re-enable the balancer once the backups are done
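Before taking the dumps it is worth confirming that no migration round is still in flight. A small sketch, run from mongos:

mongos> sh.getBalancerState()        #false: the balancer is disabled
mongos> sh.isBalancerRunning()       #false: no migration round is currently running, so it is safe to start the backups

The backups themselves can then be taken, for example with mongodump against one config server and against each shard's replica set, before re-enabling the balancer.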
[The problem with the stuck shard removal described above is still unresolved.]
mongos has no failover mechanism of its own, but the official recommendation is to deploy mongos alongside the application servers, and you can run several mongos instances (see the sketch below).
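Since every mongos is stateless, the usual way to avoid a single point of failure is simply to give the driver more than one of them to connect to. A minimal sketch, assuming the two mongos instances test1:27017 and test3:27017 that appeared in config.mongos above:

mongodb://test1:27017,test3:27017/mytest
#With several mongos hosts in the connection string, the driver can switch to another mongos if one becomes unreachable.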
High availability for mongos [to be written]