Building a MongoDB Distributed Cluster (3-Host Sharded Cluster)


Building a MongoDB distributed cluster (sharded cluster + keyfile authentication and user permissions)

2020-01-02 12:56:37

Introduction:

    Sharding is the process of splitting a database and distributing it across multiple machines. By spreading data over different machines, you can store more data and handle a larger load without needing a single powerful server. The basic idea is to split collections into small chunks and distribute those chunks across several shards, each shard holding only a portion of the total data; a balancer then keeps the shards even by migrating chunks between them. All operations go through a routing process called mongos, which knows (via the config servers) which data lives on which shard. Most use cases are about solving disk-space problems; write performance can degrade, and cross-shard queries should be avoided where possible. A sharded cluster consists of the following components:

       Shard  Server: a mongod instance that stores the actual data chunks. In production, each shard role is typically served by a replica set spanning several machines, to avoid a single point of failure.
       Config Server: a mongod instance that stores the metadata for the entire cluster, including chunk information.
       Route  Server: a mongos instance that acts as the front-end router. Clients connect here; it makes the whole cluster look like a single database, so front-end applications can use it transparently.
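
To make the routing idea concrete, here is a minimal, hypothetical sketch (in Python, not MongoDB code) of how a mongos-style router uses chunk metadata cached from the config servers to map a shard-key value to a shard. The chunk boundaries and shard assignments below are invented for illustration only.

```python
import bisect

# Hypothetical chunk metadata, as a mongos might cache it from the config
# servers: each chunk covers [lower, upper) of the shard key and lives on
# exactly one shard. These boundaries are invented for illustration.
chunks = [
    (float("-inf"), 2,            "shard1"),
    (2,             500002,       "shard2"),
    (500002,        float("inf"), "shard3"),
]

def route(shard_key_value):
    """Return the shard that owns the chunk containing this key value."""
    uppers = [upper for (_, upper, _) in chunks]
    i = bisect.bisect_right(uppers, shard_key_value)
    return chunks[i][2]

print(route(1))       # falls in the first chunk -> shard1
print(route(123456))  # falls in the middle chunk -> shard2
print(route(900000))  # falls in the last chunk -> shard3
```

The real mongos keeps this mapping in sync with the config servers and refreshes it when chunks migrate, but the lookup itself is this kind of range search.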

Host plan:

IP address   Instance (port)  Instance (port)   Instance (port)  Instance (port)  Instance (port)
192.168.2.3  mongos(27017)    configsvr(20000)  shard1(27018)    shard2(27019)    shard3(27020)
192.168.2.4  mongos(27017)    configsvr(20000)  shard1(27018)    shard2(27019)    shard3(27020)
192.168.2.5  mongos(27017)    configsvr(20000)  shard1(27018)    shard2(27019)    shard3(27020)

Create directories:

Create the corresponding directories on every server:

mkdir -p /data/mongodb/{shard1,shard2,shard3}/db
mkdir -p /data/mongodb/mongos/db
mkdir -p /data/mongodb/configsvr/db
mkdir -p /data/mongodb/{conf,logs}

Create the configuration files:

touch /data/mongodb/conf/configsvr.conf
touch /data/mongodb/conf/mongos.conf
touch /data/mongodb/conf/shard1.conf
touch /data/mongodb/conf/shard2.conf
touch /data/mongodb/conf/shard3.conf

Configuration file details:

    Official documentation: https://docs.mongodb.com/manual/reference/configuration-options/

configsvr.conf

systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/configsvr.log
storage:
  dbPath: /data/mongodb/configsvr/db
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/configsvr/configsvr.pid
net:
  port: 20000
  bindIp: 0.0.0.0
replication:
  replSetName: config
sharding:
  clusterRole: configsvr

shard1.conf

systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/shard1.log
storage:
  dbPath: /data/mongodb/shard1/db
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/shard1/shard1.pid
net:
  port: 27018
  bindIp: 0.0.0.0
replication:
  replSetName: shard1
sharding:
  clusterRole: shardsvr

shard2.conf

systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/shard2.log
storage:
  dbPath: /data/mongodb/shard2/db
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/shard2/shard2.pid
net:
  port: 27019
  bindIp: 0.0.0.0
replication:
  replSetName: shard2
sharding:
  clusterRole: shardsvr

shard3.conf

systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/shard3.log
storage:
  dbPath: /data/mongodb/shard3/db
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/shard3/shard3.pid
net:
  port: 27020
  bindIp: 0.0.0.0
replication:
  replSetName: shard3
sharding:
  clusterRole: shardsvr

mongos.conf

systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/mongos.log
processManagement:
  fork: true
  pidFilePath: /data/mongodb/mongos/mongos.pid
net:
  port: 27017
  bindIp: 0.0.0.0
sharding:
  configDB: config/192.168.2.3:20000,192.168.2.4:20000,192.168.2.5:20000

Startup commands (note that the router is started with the mongos binary, not mongod):

/data/mongodb/bin/mongod -f /data/mongodb/conf/configsvr.conf
/data/mongodb/bin/mongod -f /data/mongodb/conf/shard1.conf
/data/mongodb/bin/mongod -f /data/mongodb/conf/shard2.conf
/data/mongodb/bin/mongod -f /data/mongodb/conf/shard3.conf
/data/mongodb/bin/mongos -f /data/mongodb/conf/mongos.conf

Stop commands:

killall mongod  # stops the config servers and shards
killall mongos  # stops mongos

Connect to shard1:

/data/mongodb/bin/mongo 192.168.2.3:27018

Run:

rs.initiate({
  "_id": "shard1",
  "members": [
    { "_id": 0, "host": "192.168.2.3:27018" },
    { "_id": 1, "host": "192.168.2.4:27018" },
    { "_id": 2, "host": "192.168.2.5:27018" }
  ]
})

Connect to shard2:

/data/mongodb/bin/mongo 192.168.2.3:27019

Run:

rs.initiate({
  "_id": "shard2",
  "members": [
    { "_id": 0, "host": "192.168.2.3:27019" },
    { "_id": 1, "host": "192.168.2.4:27019" },
    { "_id": 2, "host": "192.168.2.5:27019" }
  ]
})

Connect to shard3:

/data/mongodb/bin/mongo 192.168.2.3:27020

Run:

rs.initiate({
  "_id": "shard3",
  "members": [
    { "_id": 0, "host": "192.168.2.3:27020" },
    { "_id": 1, "host": "192.168.2.4:27020" },
    { "_id": 2, "host": "192.168.2.5:27020" }
  ]
})

Connect to the config server:

/data/mongodb/bin/mongo 192.168.2.3:20000

rs.initiate({
  "_id": "config",
  "members": [
    { "_id": 0, "host": "192.168.2.3:20000" },
    { "_id": 1, "host": "192.168.2.4:20000" },
    { "_id": 2, "host": "192.168.2.5:20000" }
  ]
})

Connect to mongos and register the shards with the router:

/data/mongodb/bin/mongo 192.168.2.3:27017

sh.addShard("shard1/192.168.2.3:27018,192.168.2.4:27018,192.168.2.5:27018")
sh.addShard("shard2/192.168.2.3:27019,192.168.2.4:27019,192.168.2.5:27019")
sh.addShard("shard3/192.168.2.3:27020,192.168.2.4:27020,192.168.2.5:27020")

Check the status:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5e0995613452582938deff4f")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.2.3:27018,192.168.2.4:27018,192.168.2.5:27018",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.2.3:27019,192.168.2.4:27019,192.168.2.5:27019",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.2.3:27020,192.168.2.4:27020,192.168.2.5:27020",  "state" : 1 }
  active mongoses:
        "3.6.3" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  5
        Last reported error:  Could not find host matching read preference { mode: "primary" } for set shard1
        Time of Reported error:  Mon Dec 30 2019 14:30:41 GMT+0800 (CST)
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1    1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) 
        {  "_id" : "dfcx_test",  "primary" : "shard2",  "partitioned" : false }
        {  "_id" : "test",  "primary" : "shard1",  "partitioned" : false }

 

Initialize users

Connect to one of the mongos instances and add an administrator user:

use admin
db.createUser({user:'admin',pwd:'admin',roles:['clusterAdmin','dbAdminAnyDatabase','userAdminAnyDatabase','readWriteAnyDatabase']})

# List users (they are stored in the admin database)
db.system.users.find().pretty()

# Create a per-database account
use df_test
db.createUser({user:'df_test',pwd:'admin',roles:['readWrite']})

# Modify permissions
db.updateUser("usertest",{roles:[ {role:"read",db:"testDB"} ]})
  Note: updateUser completely replaces the previous values; to add roles instead of replacing them, use db.grantRolesToUser.

# Change a password
db.updateUser("usertest",{pwd:"changepass1"});
Roles:
    Database user roles: read, readWrite
    Database administration roles: dbAdmin, dbOwner, userAdmin
    Cluster administration roles: clusterAdmin, clusterManager, clusterMonitor, hostManager
    Backup/restore roles: backup, restore
    All-database roles: readAnyDatabase, readWriteAnyDatabase, userAdminAnyDatabase, dbAdminAnyDatabase
    Superuser role: root
    Internal role: __system
Role descriptions:
    read: allows the user to read the given database
    readWrite: allows the user to read and write the given database
    dbAdmin: allows the user to perform administrative functions in the given database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
    userAdmin: allows the user to write to the system.users collection, and to create, delete, and manage users in the given database
    clusterAdmin: available only in the admin database; grants administrative rights over all sharding- and replica-set-related functions
    readAnyDatabase: available only in the admin database; grants read access to all databases
    readWriteAnyDatabase: available only in the admin database; grants read and write access to all databases
    userAdminAnyDatabase: available only in the admin database; grants userAdmin rights on all databases
    dbAdminAnyDatabase: available only in the admin database; grants dbAdmin rights on all databases
    root: available only in the admin database; the superuser account with full privileges
    dbOwner: readWrite + dbAdmin + userAdmin
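
As a rough mental model only (a sketch, not MongoDB's actual privilege engine, whose real model is action-based and far more fine-grained), dbOwner can be thought of as the union of the other three database-level roles. The capability names below are invented for illustration:

```python
# Illustrative only: model each role as a set of coarse capability names.
read      = {"find"}
readWrite = read | {"insert", "update", "remove"}
dbAdmin   = {"createIndex", "dropIndex", "viewStats", "profile"}
userAdmin = {"createUser", "dropUser", "grantRole"}

# dbOwner combines readWrite, dbAdmin, and userAdmin on its database.
dbOwner = readWrite | dbAdmin | userAdmin

print(sorted(dbOwner))
```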

 

Data operations

In this example, create an appuser user and enable sharding for the dfcx_test database (the database shown in the sh.status() output):

use dfcx_test
db.createUser({user:'appuser',pwd:'AppUser@01',roles:[{role:'dbOwner',db:'dfcx_test'}]})
sh.enableSharding("dfcx_test")  # enable sharding

 

Create the users collection and initialize sharding for it:

use dfcx_test
db.createCollection("users")
db.users.ensureIndex({userid:1})  # create an index on the shard key
sh.shardCollection("dfcx_test.users",{userid:1})  # ranged sharding on the userid key
sh.shardCollection("dfcx_test.users", {userid:"hashed"}, false, { numInitialChunks: 4 })  # hashed sharding with pre-split chunks; run either this line or the previous one, not both
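
The practical difference between the ranged key and the hashed key can be sketched in a few lines (Python; the hash function here is a stand-in, since MongoDB uses its own 64-bit shard-key hash): with a monotonically increasing userid, ranged sharding sends every new insert to the chunk owning the current maximum range, while hashing spreads the same values roughly evenly across chunks.

```python
import hashlib
from collections import Counter

def mongo_like_hash(value, num_chunks=4):
    # Stand-in for MongoDB's shard-key hash; md5 is used here purely to get
    # a deterministic, well-spread illustration.
    digest = hashlib.md5(str(value).encode()).hexdigest()
    return int(digest, 16) % num_chunks

# Hash 10,000 monotonically increasing userids into 4 chunks and count
# how many land in each chunk.
counts = Counter(mongo_like_hash(userid) for userid in range(1, 10001))
print(dict(counts))  # roughly 2500 per chunk
```

Ranged sharding keeps range queries on userid targeted to few shards; hashed sharding trades that away for even write distribution.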

 

Insert test data:

mongos> for(var i=1;i<1000000;i++) db.users.insert({userid:i,username:"HSJ"+i,city:"beijing"})
mongos> for(var i=1;i<1000000;i++) db.users.insert({userid:i,username:"HSJ"+i,city:"tianjing"})

 

Check the status:

sh.status()
--- Sharding Status --- 
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5e0995613452582938deff4f")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.2.3:27018,192.168.2.4:27018,192.168.2.5:27018",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.2.3:27019,192.168.2.4:27019,192.168.2.5:27019",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.2.3:27020,192.168.2.4:27020,192.168.2.5:27020",  "state" : 1 }
  active mongoses:
        "3.6.3" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  yes
        Collections with active migrations: 
                dfcx_test.users started at Thu Jan 02 2020 14:35:34 GMT+0800 (CST)
        Failed balancer rounds in last 5 attempts:  4
        Last reported error:  Could not find host matching read preference { mode: "primary" } for set shard1
        Time of Reported error:  Mon Dec 30 2019 14:32:41 GMT+0800 (CST)
        Migration Results for the last 24 hours: 
                1 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1    1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) 
        {  "_id" : "dfcx_test",  "primary" : "shard2",  "partitioned" : true }
                dfcx_test.users
                        shard key: { "userid" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1    1
                                shard2    3
                        { "userid" : { "$minKey" : 1 } } -->> { "userid" : 2 } on : shard1 Timestamp(2, 0) 
                        { "userid" : 2 } -->> { "userid" : 500002 } on : shard2 Timestamp(2, 1) 
                        { "userid" : 500002 } -->> { "userid" : 750003 } on : shard2 Timestamp(1, 3) 
                        { "userid" : 750003 } -->> { "userid" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 4) 
        {  "_id" : "test",  "primary" : "shard1",  "partitioned" : false }

 

Enabling keyfile authentication and user permissions in production

Create the replica-set authentication key file

1. Create the key file. Note: all three nodes must use the same keyfile. Generate it on one machine, copy it to the other two, and set its file mode to 600:

openssl rand -base64 90 -out ./keyfile
cp keyfile /data/mongodb/conf/
chmod 600 /data/mongodb/conf/keyfile
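
For reference, the openssl command above just writes base64-encoded random bytes; an equivalent sketch in Python (filename and byte count matching the example above):

```python
import base64
import os
import stat

# Generate 90 random bytes and base64-encode them, like
# `openssl rand -base64 90 -out ./keyfile`.
raw = os.urandom(90)
encoded = base64.encodebytes(raw)  # line-wrapped base64, like openssl's output

with open("keyfile", "wb") as f:
    f.write(encoded)

# Restrict permissions to owner read/write only (chmod 600); mongod refuses
# to start with a keyfile that is readable by group or others.
os.chmod("keyfile", stat.S_IRUSR | stat.S_IWUSR)

print(len(raw), "random bytes written as base64")
```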

 

2. Add the following to each configuration file.

For the config servers and shards, add:

security:
  keyFile: /data/mongodb/conf/keyfile
  authorization: enabled

 

For mongos, add:

security:
  keyFile: /data/mongodb/conf/keyfile

 

3. Restart the cluster.

Hidden and delayed replica-set members

I have not deployed or tested these myself; this writeup covers them well and is worth bookmarking:

Documentation: https://www.cnblogs.com/kevingrace/p/8178549.html

 

Reference articles:

     https://www.jianshu.com/p/f021f1f3c60b

     https://www.cnblogs.com/littleatp/p/8563273.html

     https://www.cnblogs.com/woxingwoxue/p/9875878.html

