MongoDB Cluster Mode: Sharding

Sharding distributes data across multiple machines. MongoDB uses sharding to support deployments with very large data sets and high-throughput operations.

Database systems with large data sets or high-throughput applications can strain the capacity of a single server. For example, a high query rate can exhaust the server's CPU, and a working set larger than the system's RAM stresses the I/O capacity of the disk drives.

There are two approaches to accommodating system growth: vertical scaling and horizontal scaling.

Vertical scaling means increasing the capacity of a single server, for example by using a more powerful CPU, adding more RAM, or increasing the amount of storage. Limitations of available technology may keep a single machine from being powerful enough for a given workload, and cloud providers impose hard ceilings based on the hardware configurations they offer. As a result, vertical scaling has a practical maximum.

Horizontal scaling means dividing the system's data set and load across multiple servers, adding servers as needed to increase capacity. While a single machine's overall speed or capacity may be modest, each machine handles only a subset of the overall workload, which can deliver better efficiency than a single high-speed, high-capacity server. Expanding capacity only requires adding servers as needed, which can cost less overall than high-end hardware for a single machine. The trade-off is increased complexity in infrastructure and deployment maintenance.

MongoDB supports horizontal scaling through sharding.

I. Components

  • shard: each shard holds a subset of the sharded data, and each shard can be deployed as a replica set. Collections can be sharded; unsharded data lives on the primary shard of its database. In this deployment, each shard is a 3-member replica set.
  • mongos: mongos acts as a query router, providing the interface between client applications and the sharded cluster. Multiple mongos routers can be deployed; this deployment uses one or more mongos instances.
  • config servers: the config servers store the cluster's metadata and configuration settings. As of MongoDB 3.4, config servers must be deployed as a 3-member replica set.

Note: applications and clients must connect to a mongos to interact with the cluster's data; never connect directly to an individual shard to perform reads or writes.

Architecture of a shard's replica set:

Architecture of the config servers' replica set:

 

Sharding Strategies

1. Hashed Sharding

  • Hashed sharding partitions data across the sharded cluster using a hashed index: the hash of a single field's value serves as the index value, and that field is the shard key.
  • When resolving queries with a hashed index, MongoDB computes the hash automatically; applications do not need to compute hashes themselves.
  • Distributing data by hashed values tends to spread it more evenly, especially for data sets whose shard key changes monotonically.

 

2. Ranged Sharding

  • Ranged sharding divides data into ranges based on the shard key values; each chunk is then assigned a range of shard key values.
  • mongos can route operations only to the shards that contain the required data.
  • Planning the shard key matters: a poorly chosen key can leave data unevenly distributed.

II. Deployment

1. Environment

| Server name | IP address | OS | MongoDB version | Config Server port | Shard Server 1 port | Shard Server 2 port | Shard Server 3 port | Role |
|---|---|---|---|---|---|---|---|---|
| mongo1.example.net | 10.10.18.10 | CentOS 7.5 | 4.0 | 27027 (Primary) | 27017 (Primary) | 27018 (Arbiter) | 27019 (Secondary) | config server + shard server |
| mongo2.example.net | 10.10.18.11 | CentOS 7.5 | 4.0 | 27027 (Secondary) | 27017 (Secondary) | 27018 (Primary) | 27019 (Arbiter) | config server + shard server |
| mongo3.example.net | 10.10.18.12 | CentOS 7.5 | 4.0 | 27027 (Secondary) | 27017 (Arbiter) | 27018 (Secondary) | 27019 (Primary) | config server + shard server |
| mongos.example.net | 192.168.11.10 | CentOS 7.5 | 4.0 | – | – | – | – | mongos (port 27017) |

The official documentation recommends using logical DNS names, so in this guide the hostname-to-IP mappings are written into /etc/hosts on every server.
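For example, each server's /etc/hosts would carry entries like the following (addresses as in the table above):

```text
10.10.18.10    mongo1.example.net
10.10.18.11    mongo2.example.net
10.10.18.12    mongo3.example.net
192.168.11.10  mongos.example.net
```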

2. Deploy MongoDB

For installing MongoDB on the four servers in this environment, see the separate MongoDB installation guide.

Create the directories the environment needs:

mkdir -p /data/mongodb/data/{configServer,shard1,shard2,shard3}
mkdir -p /data/mongodb/{log,pid}
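As a quick sanity check, the same layout can be created and verified under a scratch root first; `DATA_ROOT` below is a stand-in for /data/mongodb so the sketch can run anywhere:

```shell
# Stand-in root so the sketch does not touch /data.
DATA_ROOT=$(mktemp -d)

# Same layout as above: one data directory per mongod, plus log and pid dirs.
mkdir -p "$DATA_ROOT"/data/{configServer,shard1,shard2,shard3}
mkdir -p "$DATA_ROOT"/{log,pid}

# List the created directories relative to the root, sorted for stable output.
find "$DATA_ROOT" -mindepth 1 -type d | sed "s|$DATA_ROOT/||" | sort
```

Note that the `{a,b}` brace expansion requires bash (or another brace-expanding shell), not plain POSIX sh.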

3. Create the Config Server Replica Set

Configuration file on all three servers: /data/mongodb/configServer.conf

On mongo1.example.net:

systemLog:
   destination: file
   path: "/data/mongodb/log/configServer.log"
   logAppend: true
storage:
   dbPath: "/data/mongodb/data/configServer"
   journal:
      enabled: true
   wiredTiger:
      engineConfig:
        cacheSizeGB: 2
processManagement:
   fork: true
   pidFilePath: "/data/mongodb/pid/configServer.pid"
net:
   bindIp: mongo1.example.net
   port: 27027
replication:
   replSetName: cs0
sharding:
  clusterRole: configsvr

On mongo2.example.net:

systemLog:
   destination: file
   path: "/data/mongodb/log/configServer.log"
   logAppend: true
storage:
   dbPath: "/data/mongodb/data/configServer"
   journal:
      enabled: true
   wiredTiger:
      engineConfig:
        cacheSizeGB: 2
processManagement:
   fork: true
   pidFilePath: "/data/mongodb/pid/configServer.pid"
net:
   bindIp: mongo2.example.net
   port: 27027
replication:
   replSetName: cs0
sharding:
  clusterRole: configsvr

On mongo3.example.net:

systemLog:
   destination: file
   path: "/data/mongodb/log/configServer.log"
   logAppend: true
storage:
   dbPath: "/data/mongodb/data/configServer"
   journal:
      enabled: true
   wiredTiger:
      engineConfig:
        cacheSizeGB: 2
processManagement:
   fork: true
   pidFilePath: "/data/mongodb/pid/configServer.pid"
net:
   bindIp: mongo3.example.net
   port: 27027
replication:
   replSetName: cs0
sharding:
  clusterRole: configsvr

Start the Config Server on all three servers:

mongod -f /data/mongodb/configServer.conf

Connect to one of the Config Servers:

mongo --host mongo1.example.net --port 27027

Result:

MongoDB shell version v4.0.10
connecting to: mongodb://mongo1.example.net:27027/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("1a4d4252-11d0-40bb-90da-f144692be88d") }
MongoDB server version: 4.0.10
Server has startup warnings:
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten]
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten]
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten]
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-06-14T14:28:56.014+0800 I CONTROL  [initandlisten]
2019-06-14T14:28:56.014+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-06-14T14:28:56.014+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-06-14T14:28:56.014+0800 I CONTROL  [initandlisten]
>

Configure the replica set:

rs.initiate(
  {
    _id: "cs0",
    configsvr: true,
    members: [
      { _id : 0, host : "mongo1.example.net:27027" },
      { _id : 1, host : "mongo2.example.net:27027" },
      { _id : 2, host : "mongo3.example.net:27027" }
    ]
  }
)

Result:

{
        "ok" : 1,
        "operationTime" : Timestamp(1560493908, 1),
        "$gleStats" : {
                "lastOpTime" : Timestamp(1560493908, 1),
                "electionId" : ObjectId("000000000000000000000000")
        },
        "lastCommittedOpTime" : Timestamp(0, 0),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1560493908, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}

Check the replica set status:

cs0:PRIMARY> rs.status()

Result: the replica set has three members: 1 Primary and 2 Secondaries.

{
        "set" : "cs0",
        "date" : ISODate("2019-06-14T06:33:31.348Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "configsvr" : true,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1560494006, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1560494006, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1560494006, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1560494006, 1),
                        "t" : NumberLong(1)
                }
        },
        "lastStableCheckpointTimestamp" : Timestamp(1560493976, 1),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mongo1.example.net:27027",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 277,
                        "optime" : {
                                "ts" : Timestamp(1560494006, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-06-14T06:33:26Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1560493919, 1),
                        "electionDate" : ISODate("2019-06-14T06:31:59Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 1,
                        "name" : "mongo2.example.net:27027",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 102,
                        "optime" : {
                                "ts" : Timestamp(1560494006, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1560494006, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-06-14T06:33:26Z"),
                        "optimeDurableDate" : ISODate("2019-06-14T06:33:26Z"),
                        "lastHeartbeat" : ISODate("2019-06-14T06:33:29.385Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-14T06:33:29.988Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "mongo1.example.net:27027",
                        "syncSourceHost" : "mongo1.example.net:27027",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "mongo3.example.net:27027",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 102,
                        "optime" : {
                                "ts" : Timestamp(1560494006, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1560494006, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-06-14T06:33:26Z"),
                        "optimeDurableDate" : ISODate("2019-06-14T06:33:26Z"),
                        "lastHeartbeat" : ISODate("2019-06-14T06:33:29.384Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-14T06:33:29.868Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "mongo1.example.net:27027",
                        "syncSourceHost" : "mongo1.example.net:27027",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1560494006, 1),
        "$gleStats" : {
                "lastOpTime" : Timestamp(1560493908, 1),
                "electionId" : ObjectId("7fffffff0000000000000001")
        },
        "lastCommittedOpTime" : Timestamp(1560494006, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1560494006, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}

Create an administrative user:

use admin
db.createUser(
  {
    user: "myUserAdmin",
    pwd: "abc123",
    roles: [{ role: "userAdminAnyDatabase", db: "admin" },"readWriteAnyDatabase"]
  }
)

Enable login authentication and internal authentication for the Config Servers

Internal authentication uses a keyfile. Create the keyfile on one of the servers:

openssl rand -base64 756 > /data/mongodb/keyfile
chmod 400 /data/mongodb/keyfile

Distribute this keyfile to the other three servers, making sure its permissions remain 400.
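mongod rejects keyfiles that are group- or world-readable, so it is worth verifying the mode before copying the file around (e.g. with scp). The sketch below runs the whole sequence against a scratch path: `KEYFILE` is a stand-in for /data/mongodb/keyfile, and `stat -c` assumes GNU coreutils:

```shell
KEYFILE=$(mktemp)

# Generate the shared secret, exactly as above, then lock it down.
openssl rand -base64 756 > "$KEYFILE"
chmod 400 "$KEYFILE"

# Confirm the mode really is 400 (owner read-only) before distribution.
[ "$(stat -c %a "$KEYFILE")" = "400" ] && echo "keyfile permissions OK"
```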

Enable authentication in the /data/mongodb/configServer.conf configuration file:

security:
   keyFile: "/data/mongodb/keyfile"
   clusterAuthMode: "keyFile"
   authorization: "enabled"

Then shut down the two Secondaries first, followed by the Primary:

mongod -f /data/mongodb/configServer.conf --shutdown

Start the Primary, then the two Secondaries:

mongod -f /data/mongodb/configServer.conf 

Log in to mongo with the username and password:

mongo --host mongo1.example.net --port 27027 -u myUserAdmin --authenticationDatabase "admin" -p 'abc123'

Note: the user was created without cluster administration privileges, so after logging in it can list all databases but cannot view the cluster's status.

cs0:PRIMARY> rs.status()
{
        "operationTime" : Timestamp(1560495861, 1),
        "ok" : 0,
        "errmsg" : "not authorized on admin to execute command { replSetGetStatus: 1.0, lsid: { id: UUID(\"59dd4dc0-b34f-43b9-a341-a2f43ec1dcfa\") }, $clusterTime: { clusterTime: Timestamp(1560495849, 1), signature: { hash: BinData(0, A51371EC5AA54BB1B05ED9342BFBF03CBD87F2D9), keyId: 6702270356301807629 } }, $db: \"admin\" }",
        "code" : 13,
        "codeName" : "Unauthorized",
        "$gleStats" : {
                "lastOpTime" : Timestamp(0, 0),
                "electionId" : ObjectId("7fffffff0000000000000002")
        },
        "lastCommittedOpTime" : Timestamp(1560495861, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1560495861, 1),
                "signature" : {
                        "hash" : BinData(0,"3UkTpXxyU8WI1TyS+u5vgewueGA="),
                        "keyId" : NumberLong("6702270356301807629")
                }
        }
}
cs0:PRIMARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB

Grant the user cluster administration privileges:

use admin
db.system.users.find()   // view the current user information
db.grantRolesToUser("myUserAdmin", ["clusterAdmin"])

Check the cluster status:

cs0:PRIMARY> rs.status()
{
        "set" : "cs0",
        "date" : ISODate("2019-06-14T07:18:20.223Z"),
        "myState" : 1,
        "term" : NumberLong(2),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "configsvr" : true,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1560496690, 1),
                        "t" : NumberLong(2)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1560496690, 1),
                        "t" : NumberLong(2)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1560496690, 1),
                        "t" : NumberLong(2)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1560496690, 1),
                        "t" : NumberLong(2)
                }
        },
        "lastStableCheckpointTimestamp" : Timestamp(1560496652, 1),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mongo1.example.net:27027",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 1123,
                        "optime" : {
                                "ts" : Timestamp(1560496690, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2019-06-14T07:18:10Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "electionTime" : Timestamp(1560495590, 1),
                        "electionDate" : ISODate("2019-06-14T06:59:50Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 1,
                        "name" : "mongo2.example.net:27027",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1113,
                        "optime" : {
                                "ts" : Timestamp(1560496690, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1560496690, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2019-06-14T07:18:10Z"),
                        "optimeDurableDate" : ISODate("2019-06-14T07:18:10Z"),
                        "lastHeartbeat" : ISODate("2019-06-14T07:18:18.974Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-14T07:18:19.142Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "mongo1.example.net:27027",
                        "syncSourceHost" : "mongo1.example.net:27027",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "mongo3.example.net:27027",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1107,
                        "optime" : {
                                "ts" : Timestamp(1560496690, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1560496690, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2019-06-14T07:18:10Z"),
                        "optimeDurableDate" : ISODate("2019-06-14T07:18:10Z"),
                        "lastHeartbeat" : ISODate("2019-06-14T07:18:18.999Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-14T07:18:18.998Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "mongo2.example.net:27027",
                        "syncSourceHost" : "mongo2.example.net:27027",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1560496690, 1),
        "$gleStats" : {
                "lastOpTime" : {
                        "ts" : Timestamp(1560496631, 1),
                        "t" : NumberLong(2)
                },
                "electionId" : ObjectId("7fffffff0000000000000002")
        },
        "lastCommittedOpTime" : Timestamp(1560496690, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1560496690, 1),
                "signature" : {
                        "hash" : BinData(0,"lHiVw7WeO81npTi2IMW16reAN84="),
                        "keyId" : NumberLong("6702270356301807629")
                }
        }
}

4. Deploy Shard Server 1 (Shard1) and its Replica Set

Configuration file on all three servers: /data/mongodb/shard1.conf

On mongo1.example.net:

systemLog:
   destination: file
   path: "/data/mongodb/log/shard1.log"
   logAppend: true
storage:
   dbPath: "/data/mongodb/data/shard1"
   journal:
      enabled: true
   wiredTiger:
      engineConfig:
        cacheSizeGB: 2
processManagement:
   fork: true
   pidFilePath: "/data/mongodb/pid/shard1.pid"
net:
   bindIp: mongo1.example.net
   port: 27017
replication:
   replSetName: "shard1"
sharding:
   clusterRole: shardsvr

On mongo2.example.net:

systemLog:
   destination: file
   path: "/data/mongodb/log/shard1.log"
   logAppend: true
storage:
   dbPath: "/data/mongodb/data/shard1"
   journal:
      enabled: true
   wiredTiger:
      engineConfig:
        cacheSizeGB: 2
processManagement:
   fork: true
   pidFilePath: "/data/mongodb/pid/shard1.pid"
net:
   bindIp: mongo2.example.net
   port: 27017
replication:
   replSetName: "shard1"
sharding:
   clusterRole: shardsvr

On mongo3.example.net:

systemLog:
   destination: file
   path: "/data/mongodb/log/shard1.log"
   logAppend: true
storage:
   dbPath: "/data/mongodb/data/shard1"
   journal:
      enabled: true
   wiredTiger:
      engineConfig:
        cacheSizeGB: 2
processManagement:
   fork: true
   pidFilePath: "/data/mongodb/pid/shard1.pid"
net:
   bindIp: mongo3.example.net
   port: 27017
replication:
   replSetName: "shard1"
sharding:
   clusterRole: shardsvr

Start the shard on all three servers:

mongod -f /data/mongodb/shard1.conf

Connect to the shard replica set member that will become the Primary:

mongo --host mongo1.example.net --port 27017

Result:

MongoDB shell version v4.0.10
connecting to: mongodb://mongo1.example.net:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("91e76384-cdae-411f-ab88-b7a8bd4555d1") }
MongoDB server version: 4.0.10
Server has startup warnings:
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten]
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten]
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten]
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten]
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten]
>

Configure the replica set:

rs.initiate(
  {
    _id : "shard1",
    members: [
      { _id : 0, host : "mongo1.example.net:27017",priority:2 },
      { _id : 1, host : "mongo2.example.net:27017",priority:1 },
      { _id : 2, host : "mongo3.example.net:27017",arbiterOnly:true }
    ]
  }
)

Note: the higher a member's priority value, the more likely it is to be elected Primary.
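The effect of priority can be sketched with a simplified election model. This is only a conceptual sketch (real elections also weigh oplog recency, votes, and reachability); the member objects below mirror the shard1 configuration above:

```javascript
// Simplified view of the shard1 member configuration.
const members = [
  { host: "mongo1.example.net:27017", priority: 2, arbiterOnly: false },
  { host: "mongo2.example.net:27017", priority: 1, arbiterOnly: false },
  { host: "mongo3.example.net:27017", priority: 0, arbiterOnly: true },
];

// Arbiters hold no data and can never become Primary; among the
// electable members, the highest-priority one is the likeliest Primary.
function likeliestPrimary(members) {
  return members
    .filter((m) => !m.arbiterOnly && m.priority > 0)
    .reduce((best, m) => (m.priority > best.priority ? m : best)).host;
}

console.log(likeliestPrimary(members)); // mongo1.example.net:27017
```

This matches the status output below, where mongo1 (priority 2) holds the PRIMARY role.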

Check the replica set status:

shard1:PRIMARY> rs.status()
{
        "set" : "shard1",
        "date" : ISODate("2019-06-20T01:33:21.809Z"),
        "myState" : 1,
        "term" : NumberLong(2),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1560994393, 1),
                        "t" : NumberLong(2)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1560994393, 1),
                        "t" : NumberLong(2)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1560994393, 1),
                        "t" : NumberLong(2)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1560994393, 1),
                        "t" : NumberLong(2)
                }
        },
        "lastStableCheckpointTimestamp" : Timestamp(1560994373, 1),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mongo1.example.net:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 43,
                        "optime" : {
                                "ts" : Timestamp(1560994393, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2019-06-20T01:33:13Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1560994371, 1),
                        "electionDate" : ISODate("2019-06-20T01:32:51Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 1,
                        "name" : "mongo2.example.net:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 36,
                        "optime" : {
                                "ts" : Timestamp(1560994393, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1560994393, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2019-06-20T01:33:13Z"),
                        "optimeDurableDate" : ISODate("2019-06-20T01:33:13Z"),
                        "lastHeartbeat" : ISODate("2019-06-20T01:33:19.841Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-20T01:33:21.164Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "mongo1.example.net:27017",
                        "syncSourceHost" : "mongo1.example.net:27017",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "mongo3.example.net:27017",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 32,
                        "lastHeartbeat" : ISODate("2019-06-20T01:33:19.838Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-20T01:33:20.694Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}

Result: the replica set has three members: 1 Primary, 1 Secondary, and 1 Arbiter.

Create an administrative user:

use admin
db.createUser(
  {
    user: "myUserAdmin",
    pwd: "abc123",
    roles: [{ role: "userAdminAnyDatabase", db: "admin" },"readWriteAnyDatabase","clusterAdmin"]
  }
)

Enable login authentication and internal authentication for Shard1 by adding the following to /data/mongodb/shard1.conf:

security:
   keyFile: "/data/mongodb/keyfile"
   clusterAuthMode: "keyFile"
   authorization: "enabled"

Then shut down the Arbiter, then the Secondary, and finally the Primary:

mongod -f /data/mongodb/shard1.conf --shutdown

Start the Primary, then the Secondary, then the Arbiter:

mongod -f /data/mongodb/shard1.conf 

Log in to mongo with the username and password:

mongo --host mongo1.example.net --port 27017 -u myUserAdmin --authenticationDatabase "admin" -p 'abc123'

5. Deploy Shard Server 2 (Shard2) and its Replica Set

Configuration file on all three servers: /data/mongodb/shard2.conf

On mongo1.example.net:

systemLog:
   destination: file
   path: "/data/mongodb/log/shard2.log"
   logAppend: true
storage:
   dbPath: "/data/mongodb/data/shard2"
   journal:
      enabled: true
   wiredTiger:
      engineConfig:
        cacheSizeGB: 2
processManagement:
   fork: true
   pidFilePath: "/data/mongodb/pid/shard2.pid"
net:
   bindIp: mongo1.example.net
   port: 27018
replication:
   replSetName: "shard2"
sharding:
   clusterRole: shardsvr

On mongo2.example.net:

systemLog:
   destination: file
   path: "/data/mongodb/log/shard2.log"
   logAppend: true
storage:
   dbPath: "/data/mongodb/data/shard2"
   journal:
      enabled: true
   wiredTiger:
      engineConfig:
        cacheSizeGB: 2
processManagement:
   fork: true
   pidFilePath: "/data/mongodb/pid/shard2.pid"
net:
   bindIp: mongo2.example.net
   port: 27018
replication:
   replSetName: "shard2"
sharding:
   clusterRole: shardsvr

On mongo3.example.net:

systemLog:
   destination: file
   path: "/data/mongodb/log/shard2.log"
   logAppend: true
storage:
   dbPath: "/data/mongodb/data/shard2"
   journal:
      enabled: true
   wiredTiger:
      engineConfig:
        cacheSizeGB: 2
processManagement:
   fork: true
   pidFilePath: "/data/mongodb/pid/shard2.pid"
net:
   bindIp: mongo3.example.net
   port: 27018
replication:
   replSetName: "shard2"
sharding:
   clusterRole: shardsvr

 

Start the shard on all three servers:

mongod -f /data/mongodb/shard2.conf

Connect to the shard replica set member that will become the Primary:

mongo --host mongo2.example.net --port 27018

Configure the replica set (note: the three servers' roles differ from those in Shard1):

rs.initiate(
  {
    _id : "shard2",
    members: [
      { _id : 0, host : "mongo1.example.net:27018",arbiterOnly:true },
      { _id : 1, host : "mongo2.example.net:27018",priority:2 },
      { _id : 2, host : "mongo3.example.net:27018",priority:1 }
    ]
  }
)

Check the replica set status:

shard2:PRIMARY> rs.status()
{
        "set" : "shard2",
        "date" : ISODate("2019-06-20T01:59:08.996Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1560995943, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1560995943, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1560995943, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1560995943, 1),
                        "t" : NumberLong(1)
                }
        },
        "lastStableCheckpointTimestamp" : Timestamp(1560995913, 1),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mongo1.example.net:27018",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 107,
                        "lastHeartbeat" : ISODate("2019-06-20T01:59:08.221Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-20T01:59:07.496Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 1,
                        "name" : "mongo2.example.net:27018",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 412,
                        "optime" : {
                                "ts" : Timestamp(1560995943, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-06-20T01:59:03Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1560995852, 1),
                        "electionDate" : ISODate("2019-06-20T01:57:32Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 2,
                        "name" : "mongo3.example.net:27018",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 107,
                        "optime" : {
                                "ts" : Timestamp(1560995943, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1560995943, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-06-20T01:59:03Z"),
                        "optimeDurableDate" : ISODate("2019-06-20T01:59:03Z"),
                        "lastHeartbeat" : ISODate("2019-06-20T01:59:08.220Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-20T01:59:08.716Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "mongo2.example.net:27018",
                        "syncSourceHost" : "mongo2.example.net:27018",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1560995943, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1560995943, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}

Result: of the three servers, one is a Primary, one a Secondary, and one an Arbiter.

To configure authentication users, follow the same steps used for Shard1.

 

6. Deploy shard server 3 (Shard3) and its replica set

Configuration file on all three servers: /data/mongodb/shard3.conf

On mongo1.example.net:

systemLog:
  destination: file
  path: "/data/mongodb/log/shard3.log"
  logAppend: true
storage:
  dbPath: "/data/mongodb/data/shard3"
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
processManagement:
  fork: true
  pidFilePath: "/data/mongodb/pid/shard3.pid"
net:
  bindIp: mongo1.example.net
  port: 27019
replication:
  replSetName: "shard3"
sharding:
  clusterRole: shardsvr

On mongo2.example.net:

systemLog:
  destination: file
  path: "/data/mongodb/log/shard3.log"
  logAppend: true
storage:
  dbPath: "/data/mongodb/data/shard3"
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
processManagement:
  fork: true
  pidFilePath: "/data/mongodb/pid/shard3.pid"
net:
  bindIp: mongo2.example.net
  port: 27019
replication:
  replSetName: "shard3"
sharding:
  clusterRole: shardsvr

On mongo3.example.net:

systemLog:
  destination: file
  path: "/data/mongodb/log/shard3.log"
  logAppend: true
storage:
  dbPath: "/data/mongodb/data/shard3"
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
processManagement:
  fork: true
  pidFilePath: "/data/mongodb/pid/shard3.pid"
net:
  bindIp: mongo3.example.net
  port: 27019
replication:
  replSetName: "shard3"
sharding:
  clusterRole: shardsvr

Start the shard on all three servers:

mongod -f /data/mongodb/shard3.conf

Connect to the shard replica set on its Primary server:

mongo --host mongo3.example.net --port 27019

Configure the replica set (note: the member roles differ from the previous shards):

rs.initiate(
  {
    _id : "shard3",
    members: [
      { _id : 0, host : "mongo1.example.net:27019",priority:1 },
      { _id : 1, host : "mongo2.example.net:27019",arbiterOnly:true },
      { _id : 2, host : "mongo3.example.net:27019",priority:2 }
    ]
  }
)

Check the replica set status:

shard3:PRIMARY> rs.status()
{
        "set" : "shard3",
        "date" : ISODate("2019-06-20T02:21:56.990Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1560997312, 2),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1560997312, 2),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1560997312, 2),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1560997312, 2),
                        "t" : NumberLong(1)
                }
        },
        "lastStableCheckpointTimestamp" : Timestamp(1560997312, 1),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mongo1.example.net:27019",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 17,
                        "optime" : {
                                "ts" : Timestamp(1560997312, 2),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1560997312, 2),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-06-20T02:21:52Z"),
                        "optimeDurableDate" : ISODate("2019-06-20T02:21:52Z"),
                        "lastHeartbeat" : ISODate("2019-06-20T02:21:56.160Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-20T02:21:55.155Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "mongo3.example.net:27019",
                        "syncSourceHost" : "mongo3.example.net:27019",
                        "syncSourceId" : 2,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 1,
                        "name" : "mongo2.example.net:27019",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 17,
                        "lastHeartbeat" : ISODate("2019-06-20T02:21:56.159Z"),
                        "lastHeartbeatRecv" : ISODate("2019-06-20T02:21:55.021Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "mongo3.example.net:27019",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 45,
                        "optime" : {
                                "ts" : Timestamp(1560997312, 2),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-06-20T02:21:52Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1560997310, 1),
                        "electionDate" : ISODate("2019-06-20T02:21:50Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1560997312, 2),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1560997312, 2),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}

Result: of the three servers, one is a Primary, one a Secondary, and one an Arbiter.

To configure authentication users, follow the same steps used for Shard1.

 

7. Configure the mongos server to connect to the sharded cluster

On mongos.example.net, the mongos configuration file /data/mongodb/mongos.conf:

systemLog:
  destination: file
  path: "/data/mongodb/log/mongos.log"
  logAppend: true
processManagement:
  fork: true
net:
  port: 27017
  bindIp: mongos.example.net
sharding:
  configDB: "cs0/mongo1.example.net:27027,mongo2.example.net:27027,mongo3.example.net:27027"
security:
  keyFile: "/data/mongodb/keyfile"
  clusterAuthMode: "keyFile"

Start the mongos service:

mongos -f /data/mongodb/mongos.conf

Connect to mongos:

mongo --host mongos.example.net --port 27017 -u myUserAdmin --authenticationDatabase "admin" -p 'abc123'

Check the current cluster status:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d0af6ed4fa51757cd032108")
  }
  shards:
  active mongoses:
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }

First add Shard1 and Shard2 to the cluster; Shard3 will be added later, after data has been inserted, to simulate scaling out.

sh.addShard("shard1/mongo1.example.net:27017,mongo2.example.net:27017,mongo3.example.net:27017")
sh.addShard("shard2/mongo1.example.net:27018,mongo2.example.net:27018,mongo3.example.net:27018")

Result:

mongos> sh.addShard("shard1/mongo1.example.net:27017,mongo2.example.net:27017,mongo3.example.net:27017")
{
        "shardAdded" : "shard1",
        "ok" : 1,
        "operationTime" : Timestamp(1561009140, 7),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1561009140, 7),
                "signature" : {
                        "hash" : BinData(0,"2je9FsNfMfBMHp+X/6d98B5tLH8="),
                        "keyId" : NumberLong("6704442493062086684")
                }
        }
}
mongos> sh.addShard("shard2/mongo1.example.net:27018,mongo2.example.net:27018,mongo3.example.net:27018")
{
        "shardAdded" : "shard2",
        "ok" : 1,
        "operationTime" : Timestamp(1561009148, 5),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1561009148, 6),
                "signature" : {
                        "hash" : BinData(0,"8FvJuCy8kCrMu5nB9PYILj0bzLk="),
                        "keyId" : NumberLong("6704442493062086684")
                }
        }
}

Check the cluster status:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d0af6ed4fa51757cd032108")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/mongo1.example.net:27017,mongo2.example.net:27017",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/mongo2.example.net:27018,mongo3.example.net:27018",  "state" : 1 }
  active mongoses:
        "4.0.10" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }

 

8. Testing

To make the test easier to observe, set the chunk size to 1 MB (the default is 64 MB):

use config
db.settings.save({"_id":"chunksize","value":1})

While connected to mongos, create the database and enable sharded storage for it:

sh.enableSharding("user_center")

This creates the "user_center" database with sharding enabled. Check the result:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d0af6ed4fa51757cd032108")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/mongo1.example.net:27017,mongo2.example.net:27017",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/mongo2.example.net:27018,mongo3.example.net:27018",  "state" : 1 }
  active mongoses:
        "4.0.10" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) 
        {  "_id" : "user_center",  "primary" : "shard1",  "partitioned" : true,  "version" : {  "uuid" : UUID("3b05ccb5-796a-4e9e-a36e-99b860b6bee0"),  "lastMod" : 1 } }

Shard the "users" collection:

sh.shardCollection("user_center.users",{"name":1})   # The users collection in the user_center database uses the shard key {"name":1}; data is distributed by the value of the name field

Now check the cluster status:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d0af6ed4fa51757cd032108")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/mongo1.example.net:27017,mongo2.example.net:27017",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/mongo2.example.net:27018,mongo3.example.net:27018",  "state" : 1 }
  active mongoses:
        "4.0.10" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) 
        {  "_id" : "user_center",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("33c79b3f-aa18-4755-a5e8-b8f7f3d05893"),  "lastMod" : 1 } }
                user_center.users
                        shard key: { "name" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard2  1
                        { "name" : { "$minKey" : 1 } } -->> { "name" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) 

Write a Python script to insert data:

# encoding: utf-8
import random
import string

import pymongo


def random_name():
    """Return a random 5-letter name."""
    return ''.join(random.sample(string.ascii_letters, 5))


def random_age():
    """Return a random two-digit age."""
    return int(''.join(random.sample(string.digits, 2)))


def insert_data_to_mongo(url, dbname, collection_name):
    print(url)
    client = pymongo.MongoClient(url)
    collection = client[dbname][collection_name]
    for _ in range(1, 100000):   # 99,999 documents
        name = random_name()
        collection.insert_one({"name": name, "age": random_age(), "status": "pending"})
        print("insert", name)


if __name__ == "__main__":
    mongo_url = "mongodb://myUserAdmin:abc123@192.168.11.10:27017/?maxPoolSize=100&minPoolSize=10&maxIdleTimeMS=600000"
    mongo_db = "user_center"
    mongo_collection = "users"
    insert_data_to_mongo(mongo_url, mongo_db, mongo_collection)

After inserting the data, check the cluster status:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d0af6ed4fa51757cd032108")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/mongo1.example.net:27017,mongo2.example.net:27017",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/mongo2.example.net:27018,mongo3.example.net:27018",  "state" : 1 }
  active mongoses:
        "4.0.10" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                3 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) 
        {  "_id" : "user_center",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("33c79b3f-aa18-4755-a5e8-b8f7f3d05893"),  "lastMod" : 1 } }
                user_center.users
                        shard key: { "name" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  9
                                shard2  8
                        { "name" : { "$minKey" : 1 } } -->> { "name" : "ABXEw" } on : shard1 Timestamp(2, 0) 
                        { "name" : "ABXEw" } -->> { "name" : "EKdCt" } on : shard1 Timestamp(3, 11) 
                        { "name" : "EKdCt" } -->> { "name" : "ITgcx" } on : shard1 Timestamp(3, 12) 
                        { "name" : "ITgcx" } -->> { "name" : "JKoOz" } on : shard1 Timestamp(3, 13) 
                        { "name" : "JKoOz" } -->> { "name" : "NSlcY" } on : shard1 Timestamp(4, 2) 
                        { "name" : "NSlcY" } -->> { "name" : "RbrAy" } on : shard1 Timestamp(4, 3) 
                        { "name" : "RbrAy" } -->> { "name" : "SQvZq" } on : shard1 Timestamp(4, 4) 
                        { "name" : "SQvZq" } -->> { "name" : "TxpPM" } on : shard1 Timestamp(3, 4) 
                        { "name" : "TxpPM" } -->> { "name" : "YEujn" } on : shard1 Timestamp(4, 0) 
                        { "name" : "YEujn" } -->> { "name" : "cOlra" } on : shard2 Timestamp(3, 9) 
                        { "name" : "cOlra" } -->> { "name" : "dFTNS" } on : shard2 Timestamp(3, 10) 
                        { "name" : "dFTNS" } -->> { "name" : "hLwFZ" } on : shard2 Timestamp(3, 14) 
                        { "name" : "hLwFZ" } -->> { "name" : "lVQzu" } on : shard2 Timestamp(3, 15) 
                        { "name" : "lVQzu" } -->> { "name" : "mNLGP" } on : shard2 Timestamp(3, 16) 
                        { "name" : "mNLGP" } -->> { "name" : "oILav" } on : shard2 Timestamp(3, 7) 
                        { "name" : "oILav" } -->> { "name" : "wJWQI" } on : shard2 Timestamp(4, 1) 
                        { "name" : "wJWQI" } -->> { "name" : { "$maxKey" : 1 } } on : shard2 Timestamp(3, 1) 

As shown above, the data is spread across the Shard1 and Shard2 shards.
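The chunk table above doubles as a routing map: each shard-key value falls into exactly one range, and mongos sends the operation to the shard that owns that range. As a rough illustration only (MongoDB's real routing consults config-server metadata and uses BSON string ordering; plain ASCII comparison happens to match for these values), the lookup can be sketched in Python with the boundaries from the output above:

```python
import bisect

# Upper bounds of the 17 chunks shown above, in ascending order; the
# final chunk runs from "wJWQI" to $maxKey and has no finite upper bound.
boundaries = ["ABXEw", "EKdCt", "ITgcx", "JKoOz", "NSlcY", "RbrAy",
              "SQvZq", "TxpPM", "YEujn", "cOlra", "dFTNS", "hLwFZ",
              "lVQzu", "mNLGP", "oILav", "wJWQI"]
# Owning shard per chunk: 9 chunks on shard1, then 8 on shard2.
shards = ["shard1"] * 9 + ["shard2"] * 8

def route(name):
    """Return the shard owning the chunk that covers this shard-key value.

    A chunk covers [lower, upper), so a value equal to a boundary belongs
    to the next chunk; bisect_right gives exactly that index.
    """
    return shards[bisect.bisect_right(boundaries, name)]
```

For example, `route("aaaaa")` lands in the `"YEujn" -->> "cOlra"` chunk on shard2, since lowercase letters sort after uppercase ones in ASCII.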

Now add the Shard3 shard to the cluster as well:

mongos> sh.addShard("shard3/mongo1.example.net:27019,mongo2.example.net:27019,mongo3.example.net:27019")

Check the cluster status again:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d0af6ed4fa51757cd032108")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/mongo1.example.net:27017,mongo2.example.net:27017",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/mongo2.example.net:27018,mongo3.example.net:27018",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/mongo1.example.net:27019,mongo3.example.net:27019",  "state" : 1 }
  active mongoses:
        "4.0.10" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                8 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) 
        {  "_id" : "user_center",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("33c79b3f-aa18-4755-a5e8-b8f7f3d05893"),  "lastMod" : 1 } }
                user_center.users
                        shard key: { "name" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  6
                                shard2  6
                                shard3  5
                        { "name" : { "$minKey" : 1 } } -->> { "name" : "ABXEw" } on : shard3 Timestamp(5, 0) 
                        { "name" : "ABXEw" } -->> { "name" : "EKdCt" } on : shard3 Timestamp(7, 0) 
                        { "name" : "EKdCt" } -->> { "name" : "ITgcx" } on : shard3 Timestamp(9, 0) 
                        { "name" : "ITgcx" } -->> { "name" : "JKoOz" } on : shard1 Timestamp(9, 1) 
                        { "name" : "JKoOz" } -->> { "name" : "NSlcY" } on : shard1 Timestamp(4, 2) 
                        { "name" : "NSlcY" } -->> { "name" : "RbrAy" } on : shard1 Timestamp(4, 3) 
                        { "name" : "RbrAy" } -->> { "name" : "SQvZq" } on : shard1 Timestamp(4, 4) 
                        { "name" : "SQvZq" } -->> { "name" : "TxpPM" } on : shard1 Timestamp(5, 1) 
                        { "name" : "TxpPM" } -->> { "name" : "YEujn" } on : shard1 Timestamp(4, 0) 
                        { "name" : "YEujn" } -->> { "name" : "cOlra" } on : shard3 Timestamp(6, 0) 
                        { "name" : "cOlra" } -->> { "name" : "dFTNS" } on : shard3 Timestamp(8, 0) 
                        { "name" : "dFTNS" } -->> { "name" : "hLwFZ" } on : shard2 Timestamp(3, 14) 
                        { "name" : "hLwFZ" } -->> { "name" : "lVQzu" } on : shard2 Timestamp(3, 15) 
                        { "name" : "lVQzu" } -->> { "name" : "mNLGP" } on : shard2 Timestamp(3, 16) 
                        { "name" : "mNLGP" } -->> { "name" : "oILav" } on : shard2 Timestamp(8, 1) 
                        { "name" : "oILav" } -->> { "name" : "wJWQI" } on : shard2 Timestamp(4, 1) 
                        { "name" : "wJWQI" } -->> { "name" : { "$maxKey" : 1 } } on : shard2 Timestamp(6, 1) 

After Shard3 joins, the balancer redistributes the cluster's chunks, and part of the data moves onto Shard3.
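The resulting 6/6/5 split is what you get by repeatedly migrating one chunk from the most-loaded shard to the least-loaded one until the counts differ by at most one. A simplified sketch of that idea (the real balancer works in rounds with migration thresholds and zone constraints; this is not its actual algorithm):

```python
def rebalance(chunks):
    """Greedily move chunks from the fullest to the emptiest shard
    until no two shards differ by more than one chunk.
    Returns the final distribution and the number of migrations."""
    chunks = dict(chunks)
    moves = 0
    while max(chunks.values()) - min(chunks.values()) > 1:
        src = max(chunks, key=chunks.get)   # most-loaded shard
        dst = min(chunks, key=chunks.get)   # least-loaded shard
        chunks[src] -= 1
        chunks[dst] += 1
        moves += 1
    return chunks, moves

# Starting from the pre-Shard3 distribution shown earlier (9 and 8 chunks):
final, moves = rebalance({"shard1": 9, "shard2": 8, "shard3": 0})
```

This yields 6/6/5 after five migrations, which lines up with the status output above: the successful-migration count grew from 3 to 8 once Shard3 joined.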

9. Backup and restore

Backup

During a backup, the config servers and shard servers need to be locked.

Before backing up, check the current total number of documents:

mongos> db.users.find().count()
99999

Then start the earlier Python script again; you can add time.sleep calls in the script to throttle the insert rate.

Stop the balancer on the mongos server:

mongos> sh.stopBalancer()

Lock the config server and each shard server: log in to a Secondary of the config server replica set and of each shard replica set, and run:

db.fsyncLock()

Start backing up the databases:

mongodump  -h mongo2.example.net --port 27027 --authenticationDatabase admin -u myUserAdmin -p abc123 -o /data/backup/config
mongodump  -h mongo2.example.net --port 27017 --authenticationDatabase admin -u myUserAdmin -p abc123 -o /data/backup/shard1
mongodump  -h mongo3.example.net --port 27018 --authenticationDatabase admin -u myUserAdmin -p abc123 -o /data/backup/shard2
mongodump  -h mongo1.example.net --port 27019 --authenticationDatabase admin -u myUserAdmin -p abc123 -o /data/backup/shard3

Unlock the config server and each shard server:

db.fsyncUnlock()

Re-enable the balancer on mongos:

sh.setBalancerState(true);

Writes are not blocked during the backup, since only the Secondaries were locked. Check the document count afterwards:

mongos> db.users.find().count()
107874

 

Restore

Drop the database on shard server Shard1:

shard1:PRIMARY> use user_center
switched to db user_center
shard1:PRIMARY>  db.dropDatabase()
{
        "dropped" : "user_center",
        "ok" : 1,
        "operationTime" : Timestamp(1561022404, 2),
        "$gleStats" : {
                "lastOpTime" : {
                        "ts" : Timestamp(1561022404, 2),
                        "t" : NumberLong(2)
                },
                "electionId" : ObjectId("7fffffff0000000000000002")
        },
        "lastCommittedOpTime" : Timestamp(1561022404, 1),
        "$configServerState" : {
                "opTime" : {
                        "ts" : Timestamp(1561022395, 1),
                        "t" : NumberLong(2)
                }
        },
        "$clusterTime" : {
                "clusterTime" : Timestamp(1561022404, 2),
                "signature" : {
                        "hash" : BinData(0,"GO1yQDvdZ6oJBXdvM94noPNnJTM="),
                        "keyId" : NumberLong("6704442493062086684")
                }
        }
}

Then restore from the backup just taken:

mongorestore  -h mongo1.example.net --port 27017 --authenticationDatabase admin -u myUserAdmin -p abc123 -d user_center /data/backup/shard1/user_center

2019-06-20T17:20:34.325+0800 the --db and --collection args should only be used when restoring from a BSON file. Other uses are deprecated and will not exist in the future; use --nsInclude instead
2019-06-20T17:20:34.326+0800 building a list of collections to restore from /data/backup/shard1/user_center dir
2019-06-20T17:20:34.356+0800 reading metadata for user_center.users from /data/backup/shard1/user_center/users.metadata.json
2019-06-20T17:20:34.410+0800 restoring user_center.users from /data/backup/shard1/user_center/users.bson
2019-06-20T17:20:36.836+0800 restoring indexes for collection user_center.users from metadata
2019-06-20T17:20:37.093+0800 finished restoring user_center.users (30273 documents)
2019-06-20T17:20:37.093+0800 done

Restore the data for Shard2 and Shard3 following the same steps.

Final result after the restore:

mongos> db.users.find().count()
100013

The extra documents are presumably the ones inserted while the lock was held.

 

