MongoDB is a NoSQL (non-relational) database. NoSQL databases arose to address large data volumes, horizontal scalability, high performance, flexible data models, and high availability. MongoDB has officially deprecated the old master-slave mode in favor of replica sets. Master-slave is effectively a single-copy deployment with poor scalability and fault tolerance, whereas a replica set keeps multiple copies of the data for fault tolerance: if one copy is lost, others survive, and when the primary goes down the cluster fails over to a new primary automatically.
How a MongoDB replica set works
Clients connect to the replica set as a whole and do not need to care whether any individual node is down. The primary handles the set's reads and writes, and the secondaries replicate its data on an ongoing basis. When the primary fails, the secondaries detect it through the heartbeat mechanism and hold an election within the cluster to promote a new primary automatically; none of this requires any involvement from the application servers.
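The election just described follows a majority rule: a new primary can be chosen only while a strict majority of the set's voting members is still reachable, which is one reason at least three members are recommended. The toy Java sketch below illustrates only this majority rule, not MongoDB's actual election protocol (the class and method names are made up for illustration):

```java
import java.util.List;

// Toy illustration only -- NOT MongoDB's real election protocol. It shows the
// majority rule: a primary can be elected only while a strict majority of the
// set's voting members is reachable.
public class MajorityCheck {

    // true if the reachable members form a strict majority of all voting members
    static boolean canElectPrimary(List<Boolean> reachable) {
        long up = reachable.stream().filter(r -> r).count();
        return up > reachable.size() / 2;
    }

    public static void main(String[] args) {
        // 3-node set, one node down: 2 of 3 is still a majority, election succeeds
        System.out.println(canElectPrimary(List.of(false, true, true)));   // true
        // 3-node set, two nodes down: the lone survivor cannot elect a primary
        System.out.println(canElectPrimary(List.of(false, false, true)));  // false
    }
}
```

This is also why an even number of data nodes gains nothing over the next-lower odd number: 2 of 4 reachable members is not a strict majority.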
Replica sets look quite powerful, so the rest of this article demonstrates how to deploy one. MongoDB officially recommends at least three members per replica set, so this demo uses three nodes: one primary and two secondaries, with no arbiter for now.
I. Environment preparation
IP address       Hostname            Role
172.16.60.205    mongodb-master01    replica set primary
172.16.60.206    mongodb-slave01     replica set secondary
172.16.60.207    mongodb-slave02     replica set secondary

Set the proper hostname on each of the three nodes, then add the following hosts entries on all of them:
[root@mongodb-master01 ~]# cat /etc/hosts
............
172.16.60.205 mongodb-master01
172.16.60.206 mongodb-slave01
172.16.60.207 mongodb-slave02

Disable SELinux on all three nodes; to keep the test simple, stop iptables as well:
[root@mongodb-master01 ~]# setenforce 0
[root@mongodb-master01 ~]# cat /etc/sysconfig/selinux
...........
SELINUX=disabled
[root@mongodb-master01 ~]# iptables -F
[root@mongodb-master01 ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
II. Installing MongoDB and configuring the replica set
1) On all three nodes, create the test directory that will hold the replica set's files
[root@mongodb-master01 ~]# mkdir -p /data/mongodb/data/replset/
2) Install MongoDB on all three nodes
Download page: https://www.mongodb.org/dl/linux/x86_64-rhel62
[root@mongodb-master01 ~]# wget http://downloads.mongodb.org/linux/mongodb-linux-x86_64-rhel62-v3.6-latest.tgz
[root@mongodb-master01 ~]# tar -zvxf mongodb-linux-x86_64-rhel62-v3.6-latest.tgz
3) Start mongod on each node (pass --bind_ip explicitly: it defaults to 127.0.0.1, so remote connections would fail; bind each node's own IP instead)
[root@mongodb-master01 ~]# mv mongodb-linux-x86_64-rhel62-3.6.11-rc0-2-g2151d1d219 /usr/local/mongodb
[root@mongodb-master01 ~]# nohup /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset --bind_ip=172.16.60.205 --port=27017 &
[root@mongodb-master01 ~]# ps -ef|grep mongodb
root 7729 6977 1 15:10 pts/1 00:00:01 /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset
root 7780 6977 0 15:11 pts/1 00:00:00 grep mongodb
[root@mongodb-master01 ~]# lsof -i:27017
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
mongod 7729 root 10u IPv4 6554476 0t0 TCP localhost:27017 (LISTEN)
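As an aside, the long command line above can also be kept in a configuration file, with the server started as mongod -f /path/to/mongod.conf. A minimal sketch using this article's data path and replica set name (the log file location is my own choice for illustration; each node should bind its own IP):

```yaml
# Sketch of a mongod config file equivalent to the command-line flags above
storage:
  dbPath: /data/mongodb/data/replset
net:
  bindIp: 172.16.60.205            # use each node's own address here
  port: 27017
replication:
  replSetName: repset
systemLog:
  destination: file
  path: /data/mongodb/mongod.log   # illustrative log location
  logAppend: true
```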
4) Initialize the replica set
Run this on any one of the three nodes (here, on 172.16.60.205)
Log in to mongodb
[root@mongodb-master01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.205:27017
.........
#switch to the admin database
> use admin
switched to db admin
#Define the replica set configuration variable. The _id:"repset" here must match the "--replSet repset" option passed to mongod above.
> config = { _id:"repset", members:[{_id:0,host:"172.16.60.205:27017"},{_id:1,host:"172.16.60.206:27017"},{_id:2,host:"172.16.60.207:27017"}]}
{
"_id" : "repset",
"members" : [
{
"_id" : 0,
"host" : "172.16.60.205:27017"
},
{
"_id" : 1,
"host" : "172.16.60.206:27017"
},
{
"_id" : 2,
"host" : "172.16.60.207:27017"
}
]
}
#Initialize the replica set with this configuration
> rs.initiate(config);
{
"ok" : 1,
"operationTime" : Timestamp(1551166191, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1551166191, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
#Check the status of the cluster members
repset:SECONDARY> rs.status();
{
"set" : "repset",
"date" : ISODate("2019-02-26T07:31:07.766Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1551166263, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1551166263, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1551166263, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1551166263, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "172.16.60.205:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 270,
"optime" : {
"ts" : Timestamp(1551166263, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-02-26T07:31:03Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1551166202, 1),
"electionDate" : ISODate("2019-02-26T07:30:02Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "172.16.60.206:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 76,
"optime" : {
"ts" : Timestamp(1551166263, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1551166263, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-02-26T07:31:03Z"),
"optimeDurableDate" : ISODate("2019-02-26T07:31:03Z"),
"lastHeartbeat" : ISODate("2019-02-26T07:31:06.590Z"),
"lastHeartbeatRecv" : ISODate("2019-02-26T07:31:06.852Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "172.16.60.205:27017",
"syncSourceHost" : "172.16.60.205:27017",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "172.16.60.207:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 76,
"optime" : {
"ts" : Timestamp(1551166263, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1551166263, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-02-26T07:31:03Z"),
"optimeDurableDate" : ISODate("2019-02-26T07:31:03Z"),
"lastHeartbeat" : ISODate("2019-02-26T07:31:06.589Z"),
"lastHeartbeatRecv" : ISODate("2019-02-26T07:31:06.958Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "172.16.60.205:27017",
"syncSourceHost" : "172.16.60.205:27017",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1551166263, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1551166263, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
The output above shows:
After the replica set is configured, 172.16.60.205 is the PRIMARY and 172.16.60.206/207 are SECONDARY members.
health: 1 means the member is healthy, 0 means it is unreachable.
state: 1 means the member is the PRIMARY; 2 means it is a SECONDARY.
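For monitoring scripts it is often convenient to condense each member document of rs.status() into just the name, health, and stateStr fields interpreted above. A small illustrative Java helper (the class name and hard-coded sample data are mine; a real monitor would build these Maps by deserializing the live replSetGetStatus reply):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative helper: condenses rs.status() member documents into a
// host -> state summary, using the health/stateStr meanings described above.
public class RsStatusSummary {

    static Map<String, String> summarize(List<Map<String, Object>> members) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map<String, Object> m : members) {
            // health 1 = reachable; anything else is reported as UNHEALTHY
            boolean healthy = ((Number) m.get("health")).intValue() == 1;
            out.put((String) m.get("name"),
                    healthy ? (String) m.get("stateStr") : "UNHEALTHY");
        }
        return out;
    }

    public static void main(String[] args) {
        // Sample data mirroring this article's rs.status() output
        List<Map<String, Object>> members = List.of(
            Map.<String, Object>of("name", "172.16.60.205:27017", "health", 1, "stateStr", "PRIMARY"),
            Map.<String, Object>of("name", "172.16.60.206:27017", "health", 1, "stateStr", "SECONDARY"),
            Map.<String, Object>of("name", "172.16.60.207:27017", "health", 0, "stateStr", "(not reachable/healthy)"));
        System.out.println(summarize(members));  // host -> PRIMARY / SECONDARY / UNHEALTHY
    }
}
```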
III. Testing replica set data replication <by default MongoDB reads and writes on the primary; reads on a secondary are refused until the secondary is made readable>
1) On the primary 172.16.60.205, connect with the mongo shell
[root@mongodb-master01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.205:27017
................
#create the test database
repset:PRIMARY> use test;
switched to db test
#insert test data into the testdb collection
repset:PRIMARY> db.testdb.insert({"test1":"testval1"})
WriteResult({ "nInserted" : 1 })
2) Connect to mongodb on the secondaries 172.16.60.206 and 172.16.60.207 to check whether the data has been replicated.
Here we check on the secondary 172.16.60.206
[root@mongodb-slave01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.206:27017
................
repset:SECONDARY> use test;
switched to db test
repset:SECONDARY> show tables;
2019-02-26T15:37:46.446+0800 E QUERY [thread1] Error: listCollections failed: {
"operationTime" : Timestamp(1551166663, 1),
"ok" : 0,
"errmsg" : "not master and slaveOk=false",
"code" : 13435,
"codeName" : "NotMasterNoSlaveOk",
"$clusterTime" : {
"clusterTime" : Timestamp(1551166663, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype._getCollectionInfosCommand@src/mongo/shell/db.js:941:1
DB.prototype.getCollectionInfos@src/mongo/shell/db.js:953:19
DB.prototype.getCollectionNames@src/mongo/shell/db.js:964:16
shellHelper.show@src/mongo/shell/utils.js:853:9
shellHelper@src/mongo/shell/utils.js:750:15
@(shellhelp2):1:1
The command above failed!
That is because MongoDB by default reads and writes on the primary only; reads on a secondary are refused until it is explicitly made readable:
repset:SECONDARY> db.getMongo().setSlaveOk();
repset:SECONDARY> db.testdb.find();
{ "_id" : ObjectId("5c74ec9267d8c3d06506449b"), "test1" : "testval1" }
repset:SECONDARY> show tables;
testdb
As shown above, the test data is now visible on the secondary, i.e. it has been replicated over from the primary.
(Run the same steps on the other secondary, 172.16.60.207.)
IV. Testing replica set failover
First stop the primary 172.16.60.205, then check the replica set status: after a round of voting, 172.16.60.206 is elected the new primary, and 172.16.60.207 starts syncing data from 172.16.60.206.
1) Kill mongod on the original primary 172.16.60.205 to simulate a failure
[root@mongodb-master01 ~]# ps -ef|grep mongodb|grep -v grep|awk '{print $2}'|xargs kill -9
[root@mongodb-master01 ~]# lsof -i:27017
[root@mongodb-master01 ~]#
2) Then log in to mongodb on either of the two surviving secondaries (172.16.60.206 or 172.16.60.207) and check the replica set status
[root@mongodb-slave01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.206:27017
.................
repset:PRIMARY> rs.status();
{
"set" : "repset",
"date" : ISODate("2019-02-26T08:06:02.996Z"),
"myState" : 1,
"term" : NumberLong(2),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1551168359, 1),
"t" : NumberLong(2)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1551168359, 1),
"t" : NumberLong(2)
},
"appliedOpTime" : {
"ts" : Timestamp(1551168359, 1),
"t" : NumberLong(2)
},
"durableOpTime" : {
"ts" : Timestamp(1551168359, 1),
"t" : NumberLong(2)
}
},
"members" : [
{
"_id" : 0,
"name" : "172.16.60.205:27017",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDurable" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2019-02-26T08:06:02.917Z"),
"lastHeartbeatRecv" : ISODate("2019-02-26T08:03:37.492Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "Connection refused",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : -1
},
{
"_id" : 1,
"name" : "172.16.60.206:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 2246,
"optime" : {
"ts" : Timestamp(1551168359, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2019-02-26T08:05:59Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1551168228, 1),
"electionDate" : ISODate("2019-02-26T08:03:48Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 2,
"name" : "172.16.60.207:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2169,
"optime" : {
"ts" : Timestamp(1551168359, 1),
"t" : NumberLong(2)
},
"optimeDurable" : {
"ts" : Timestamp(1551168359, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2019-02-26T08:05:59Z"),
"optimeDurableDate" : ISODate("2019-02-26T08:05:59Z"),
"lastHeartbeat" : ISODate("2019-02-26T08:06:02.861Z"),
"lastHeartbeatRecv" : ISODate("2019-02-26T08:06:02.991Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "172.16.60.206:27017",
"syncSourceHost" : "172.16.60.206:27017",
"syncSourceId" : 1,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1551168359, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1551168359, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
After the original primary 172.16.60.205 went down, the election promoted the former secondary 172.16.60.206 to new primary.
3) Now create test data on the new primary 172.16.60.206
repset:PRIMARY> use kevin;
switched to db kevin
repset:PRIMARY> db.kevin.insert({"shibo":"hahaha"})
WriteResult({ "nInserted" : 1 })
4) Log in to mongodb on the other secondary, 172.16.60.207, and check
[root@mongodb-slave02 ~]# /usr/local/mongodb/bin/mongo 172.16.60.207:27017
................
repset:SECONDARY> use kevin;
switched to db kevin
repset:SECONDARY> db.getMongo().setSlaveOk();
repset:SECONDARY> show tables;
kevin
repset:SECONDARY> db.kevin.find();
{ "_id" : ObjectId("5c74f42bb0b339ed6eb68e9c"), "shibo" : "hahaha" }
The secondary 172.16.60.207 does replicate the data of the new primary 172.16.60.206.
5) Restart mongod on the original primary 172.16.60.205
[root@mongodb-master01 ~]# nohup /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset --bind_ip=172.16.60.205 --port=27017 &
[root@mongodb-master01 ~]# ps -ef|grep mongodb
root 9162 6977 4 16:14 pts/1 00:00:01 /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset --bind_ip=172.16.60.205 --port=27017
root 9244 6977 0 16:14 pts/1 00:00:00 grep mongodb
Log in to mongodb on any of the three nodes again and check the replica set status
[root@mongodb-master01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.205:27017
....................
repset:SECONDARY> rs.status();
{
"set" : "repset",
"date" : ISODate("2019-02-26T08:16:11.741Z"),
"myState" : 2,
"term" : NumberLong(2),
"syncingTo" : "172.16.60.206:27017",
"syncSourceHost" : "172.16.60.206:27017",
"syncSourceId" : 1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1551168969, 1),
"t" : NumberLong(2)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1551168969, 1),
"t" : NumberLong(2)
},
"appliedOpTime" : {
"ts" : Timestamp(1551168969, 1),
"t" : NumberLong(2)
},
"durableOpTime" : {
"ts" : Timestamp(1551168969, 1),
"t" : NumberLong(2)
}
},
"members" : [
{
"_id" : 0,
"name" : "172.16.60.205:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 129,
"optime" : {
"ts" : Timestamp(1551168969, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2019-02-26T08:16:09Z"),
"syncingTo" : "172.16.60.206:27017",
"syncSourceHost" : "172.16.60.206:27017",
"syncSourceId" : 1,
"infoMessage" : "",
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "172.16.60.206:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 127,
"optime" : {
"ts" : Timestamp(1551168969, 1),
"t" : NumberLong(2)
},
"optimeDurable" : {
"ts" : Timestamp(1551168969, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2019-02-26T08:16:09Z"),
"optimeDurableDate" : ISODate("2019-02-26T08:16:09Z"),
"lastHeartbeat" : ISODate("2019-02-26T08:16:10.990Z"),
"lastHeartbeatRecv" : ISODate("2019-02-26T08:16:11.518Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1551168228, 1),
"electionDate" : ISODate("2019-02-26T08:03:48Z"),
"configVersion" : 1
},
{
"_id" : 2,
"name" : "172.16.60.207:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 127,
"optime" : {
"ts" : Timestamp(1551168969, 1),
"t" : NumberLong(2)
},
"optimeDurable" : {
"ts" : Timestamp(1551168969, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2019-02-26T08:16:09Z"),
"optimeDurableDate" : ISODate("2019-02-26T08:16:09Z"),
"lastHeartbeat" : ISODate("2019-02-26T08:16:10.990Z"),
"lastHeartbeatRecv" : ISODate("2019-02-26T08:16:11.655Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "172.16.60.206:27017",
"syncSourceHost" : "172.16.60.206:27017",
"syncSourceId" : 1,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1551168969, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1551168969, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
After recovering from the failure, the original primary 172.16.60.205 has rejoined as a secondary of the new primary 172.16.60.206.
V. MongoDB read/write splitting
So far the replica set handles failover well. What about excessive read/write load on the primary? The common solution is read/write splitting.
Normally there are far more reads than writes, so in this replica set the one primary handles writes while the two secondaries serve reads.
1) To set up read/write splitting, first run setSlaveOk on the SECONDARY nodes (as shown earlier).
2) Then direct read operations to the secondaries in the application, for example:
import java.util.ArrayList;
import java.util.List;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.ReadPreference;
import com.mongodb.ServerAddress;

public class TestMongoDBReplSetReadSplit {
    public static void main(String[] args) {
        try {
            // connect to all three replica set members
            List<ServerAddress> addresses = new ArrayList<ServerAddress>();
            ServerAddress address1 = new ServerAddress("172.16.60.205", 27017);
            ServerAddress address2 = new ServerAddress("172.16.60.206", 27017);
            ServerAddress address3 = new ServerAddress("172.16.60.207", 27017);
            addresses.add(address1);
            addresses.add(address2);
            addresses.add(address3);

            MongoClient client = new MongoClient(addresses);
            DB db = client.getDB("test");
            DBCollection coll = db.getCollection("testdb");

            BasicDBObject object = new BasicDBObject();
            object.append("test2", "testval2");

            // direct the read operation to a secondary
            ReadPreference preference = ReadPreference.secondary();
            DBObject dbObject = coll.findOne(object, null, preference);
            System.out.println(dbObject);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Besides secondary, there are five read preference modes in total: primary, primaryPreferred, secondary, secondaryPreferred, and nearest.
primary: the default mode; reads go only to the primary.
primaryPreferred: reads go to the primary, falling back to a secondary only when the primary is unavailable.
secondary: reads go only to secondaries; the trade-off is that secondary data may be "stale" relative to the primary.
secondaryPreferred: reads prefer secondaries, falling back to the primary only when no secondary is available.
nearest: reads go to whichever member, primary or secondary, has the lowest network latency.
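A read preference is often configured once in the connection URI rather than per query in code. A sketch with this article's hosts, using the standard mongodb:// URI options replicaSet and readPreference:

```
mongodb://172.16.60.205:27017,172.16.60.206:27017,172.16.60.207:27017/test?replicaSet=repset&readPreference=secondaryPreferred
```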
With read/write splitting in place, traffic can be spread out and the primary's load reduced, which answers the question "what to do about excessive read/write load on the primary?". But as the number of secondaries grows, so does the replication load on the primary. MongoDB's answer to that problem is the arbiter node:
In a replica set, an arbiter stores no data; it only votes during failover elections, so it adds no replication load. And it is not just primary, secondary, and arbiter members: there are also Secondary-Only, Hidden, Delayed, and Non-Voting members, where:
Secondary-Only: can never become primary and only serves as a secondary; used to keep underpowered nodes from being elected primary.
Hidden: invisible to client applications and never electable as primary, but still votes; generally used for data backups.
Delayed: syncs from the primary with a configured time lag; mainly used for backups, because with real-time replication an accidental delete propagates to the secondaries immediately and cannot be recovered from them.
Non-Voting: a secondary with no vote in elections; a pure data-backup node.
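All of these roles are expressed as fields on the member entries of the replica set configuration document, applied from the mongo shell on the PRIMARY. A configuration sketch (the member index and the arbiter host here are hypothetical; slaveDelay is the field name used in this 3.x-era version):

```javascript
// Fetch the current config, adjust one member, then reapply it:
cfg = rs.conf()
cfg.members[2].priority = 0        // Secondary-Only: can never become primary
cfg.members[2].hidden = true       // Hidden: invisible to clients (requires priority 0)
cfg.members[2].slaveDelay = 3600   // Delayed: applies the oplog one hour late
// cfg.members[2].votes = 0        // Non-Voting: keeps data but has no election vote
rs.reconfig(cfg)

// An arbiter votes but stores no data, and is added with its own helper:
rs.addArb("172.16.60.208:27017")
```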
