This article records the process of building a MongoDB sharded cluster of replica sets from scratch.
We will build a distributed cluster with the following layout: two shards, each shard being a replica set with two members (in a real production deployment you should also add an arbiter that only votes); three config servers; and one mongos. The steps are as follows (prerequisites: you have already installed MongoDB, and it is assumed you are familiar with the general architecture of distributed systems):
1. replica set
Start the two replica sets:
replica set A
mkdir -p ./replset_shard1/node1
mkdir -p ./replset_shard1/node2
numactl --interleave=all mongod --port 20001 --dbpath ./replset_shard1/node1 --replSet set_a --oplogSize 1024 --logpath ./replset_shard1/node1/rs20001.log --fork
numactl --interleave=all mongod --port 20002 --dbpath ./replset_shard1/node2 --replSet set_a --oplogSize 1024 --logpath ./replset_shard1/node2/rs20002.log --fork
Initialize: connect to one of the members and run:
rs.initiate({"_id" : "set_a", "members" : [{_id: 0, host: "xxxhost:20001"}, {_id: 1, host: "xxxhost: 20002"}]})
replica set B
mkdir -p ./replset_shard2/node1
mkdir -p ./replset_shard2/node2
numactl --interleave=all mongod --port 30001 --dbpath ./replset_shard2/node1 --replSet set_b --oplogSize 1024 --logpath ./replset_shard2/node1/rs30001.log --fork
numactl --interleave=all mongod --port 30002 --dbpath ./replset_shard2/node2 --replSet set_b --oplogSize 1024 --logpath ./replset_shard2/node2/rs30002.log --fork
Initialize (again from one of the members):
rs.initiate({"_id" : "set_a", "members" : [{_id: 0, host: "xxxhost:30001"}, {_id: 1, host: "xxxhost: 30002"}]})
Note 1: --replSet specifies the replica set name; every member of the same replica set must use the same name. --oplogSize specifies the oplog size in MB; if omitted, it defaults to 5% of the free disk space on the volume holding the DB, larger than 1 GB and no more than 50 GB.
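To see the oplog size a member actually ended up with, one option is the shell helper below, run against any member after initialization (output wording varies by version):
db.printReplicationInfo()
// prints the configured oplog size and the time window the oplog currently covers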
Note 2: this example is only for testing. In production the replica set must tolerate single points of failure: spread the members across different machines / data centers.
2. config server
mkdir -p ./data/configdb1;mkdir -p ./data/configdb2;mkdir -p ./data/configdb3;
Start the mongo config servers:
mongod --configsvr --fork --logpath ./data/configdb1/mongo17019.log --dbpath ./data/configdb1 --port 17019
mongod --configsvr --fork --logpath ./data/configdb2/mongo27019.log --dbpath ./data/configdb2 --port 27019
mongod --configsvr --fork --logpath ./data/configdb3/mongo37019.log --dbpath ./data/configdb3 --port 37019
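A quick sanity check that each config server is up, assuming you run it on the machine where they were started (the ports match the commands above; adjust as needed):
mongo --port 17019 --eval 'printjson(db.adminCommand({ping: 1}))'
mongo --port 27019 --eval 'printjson(db.adminCommand({ping: 1}))'
mongo --port 37019 --eval 'printjson(db.adminCommand({ping: 1}))'
# each should print { "ok" : 1 }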
3. mongos
mkdir -p ./mongosdb
Start mongos:
mongos --configdb xxxhost:17019,xxxhost:27019,xxxhost:37019 --logpath ./mongosdb/mongos.log --fork --port 8100
When starting mongos, do not use localhost or 127.0.0.1 in the config server addresses; otherwise adding a shard that lives on another machine fails with the error:
"can’t use localhost as a shard since all shards need to communicate. either use all shards and configdbs in localhost or all in actual IPs host: xxxxx isLocalHost"
4. Add the replica sets to the sharded cluster
Log in to mongos:
test> use admin
switched to db admin
admin> db.runCommand({addShard: "set_a/xxxhost:20001"})
{ "shardAdded" : "set_a", "ok" : 1 }
admin> db.runCommand({addShard: "set_b/xxxhost:30001"})
{ "shardAdded" : "set_b", "ok" : 1 }
Check config.databases (switch to the config database with use config first):
config> db.databases.find()
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "cswuyg", "partitioned" : false, "primary" : "set_a" }
Check the shards:
config> db.shards.find()
{ "_id" : "set_a", "host" : "set_a/xxxhost:20001,xxxhost:20002" }
{ "_id" : "set_b", "host" : "set_b/xxxhost:30001,xxxhost:30002" }
5. Enable sharding on the database and collection
Log in to mongos:
cswuyg> use admin
switched to db admin
admin> db.runCommand({"enablesharding": "cswuyg"})
{ "ok" : 1 }
admin> db.runCommand({"shardcollection": "cswuyg.a", "key": {"_id": 1}})
{ "collectionsharded" : "cswuyg.a", "ok" : 1 }
6. Insert test data
Log in to mongos, switch to the test DB, and run the test js code:
// insert one million small documents so that cswuyg.a splits into several chunks
for (var i = 0; i < 1000000; ++i) {
    db.a.save({"b": i})
}
After the collection has been balanced automatically (or after starting the balancer manually with sh.startBalancer()), the chunks are distributed like this:
config> db.chunks.find()
{ "_id" : "cswuyg.a-_id_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("54f54f0a59b0d8e1cbf0784e"), "ns" : "cswuyg.a", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : ObjectId("54f477859a27767875b03801") }, "shard" : "set_b" }
{ "_id" : "cswuyg.a-_id_ObjectId('54f477859a27767875b03801')", "lastmod" : Timestamp(3, 0), "lastmodEpoch" : ObjectId("54f54f0a59b0d8e1cbf0784e"), "ns" : "cswuyg.a", "min" : { "_id" : ObjectId("54f477859a27767875b03801") }, "max" : { "_id" : ObjectId("54f5507a86d364ad1c3f125f") }, "shard" : "set_b" }
{ "_id" : "cswuyg.a-_id_ObjectId('54f5507a86d364ad1c3f125f')", "lastmod" : Timestamp(4, 1), "lastmodEpoch" : ObjectId("54f54f0a59b0d8e1cbf0784e"), "ns" : "cswuyg.a", "min" : { "_id" : ObjectId("54f5507a86d364ad1c3f125f") }, "max" : { "_id" : ObjectId("54f551fe86d364ad1c44a844") }, "shard" : "set_a" }
{ "_id" : "cswuyg.a-_id_ObjectId('54f551fe86d364ad1c44a844')", "lastmod" : Timestamp(3, 2), "lastmodEpoch" : ObjectId("54f54f0a59b0d8e1cbf0784e"), "ns" : "cswuyg.a", "min" : { "_id" : ObjectId("54f551fe86d364ad1c44a844") }, "max" : { "_id" : ObjectId("54f552f086d364ad1c4aee1f") }, "shard" : "set_a" }
{ "_id" : "cswuyg.a-_id_ObjectId('54f552f086d364ad1c4aee1f')", "lastmod" : Timestamp(4, 0), "lastmodEpoch" : ObjectId("54f54f0a59b0d8e1cbf0784e"), "ns" : "cswuyg.a", "min" : { "_id" : ObjectId("54f552f086d364ad1c4aee1f") }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "set_b" }
7. Add a new member to replica set set_a
Start the new member instance:
mkdir -p ./replset_shard1/node3
numactl --interleave=all mongod --port 20003 --dbpath ./replset_shard1/node3 --replSet set_a --oplogSize 1024 --logpath ./replset_shard1/node3/rs20003.log --fork
Add the new instance to the replica set
Connect to the primary and run:
test> rs.add("xxxhost:20003")
{ "ok" : 1 }
After joining, the new member needs time for an initial sync of the data. With a large dataset the initial sync can take a very long time and has a noticeable impact on the service. Worse, if the initial sync takes so long that the oplog wraps around a full cycle before it finishes, the initial sync has to start over again. In that situation an alternative way to add a member is to copy the primary's data files to a new directory, start a mongod on that copy, and then add it to the replica set; this avoids the initial sync entirely.
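A rough sketch of that file-copy approach (the directory node4, port 20004 and host below are placeholders, and the copy must be taken while the source files are not being written to, e.g. from a cleanly shut-down member or one frozen with db.fsyncLock()):
cp -a ./replset_shard1/node1/. ./replset_shard1/node4/
numactl --interleave=all mongod --port 20004 --dbpath ./replset_shard1/node4 --replSet set_a --oplogSize 1024 --logpath ./replset_shard1/node4/rs20004.log --fork
# then, on the primary: rs.add("xxxhost:20004")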
8. Miscellaneous
To convert a sharded cluster into a plain replica set cluster, you need to dump the data and restore it into the target;
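A sketch of that dump/restore path (host names, ports and the dump directory are placeholders; run the dump against mongos and the restore against the target replica set's primary):
mongodump --host xxxhost --port 8100 --db cswuyg --out ./dump_cswuyg
mongorestore --host yyyhost:40001 --db cswuyg ./dump_cswuyg/cswuyg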
To convert a sharded cluster into a sharded cluster whose shards are replica sets, see:
http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
For a cluster that spans multiple machines, do not use localhost or 127.0.0.1 in the replica configuration; otherwise the deployment cannot span multiple machines.
Addendum:
Adding tags to replica set members:
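The post does not expand on this, but a minimal sketch of tagging members with rs.reconfig(), run on the primary (the tag names are purely illustrative):
cfg = rs.conf()
cfg.members[0].tags = { "dc": "bj" }
cfg.members[1].tags = { "dc": "sh" }
rs.reconfig(cfg)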
Original post: http://www.cnblogs.com/cswuyg/p/4356637.html