MongoDB is the most widely used NoSQL database and has climbed into the top five of the database popularity rankings. This article shows how to build a highly available MongoDB cluster (sharding + replica sets).
Before building the cluster, a few concepts need to be understood first: the router, shards, replica sets, and config servers.
Key concepts
MongoDB cluster architecture diagram:
As the diagram shows, the cluster has four components: mongos, config server, shard, and replica set.
mongos is the entry point for all requests to the cluster. Every request is coordinated through mongos, so the application does not need its own routing layer; mongos acts as a request dispatcher and forwards each data request to the appropriate shard server. In production there are usually multiple mongos instances serving as entry points, so that the loss of one does not make the whole cluster unreachable.
config server, as the name suggests, stores the cluster's configuration metadata (routing and sharding information). mongos does not persist the shard and routing information itself, it only caches it in memory; the config servers are where it is actually stored. mongos loads this configuration from the config servers on first start or after a restart, and whenever the configuration changes, the config servers notify all mongos instances to update their state, so routing stays accurate. In production there are multiple config servers, because they hold the sharding metadata and it must not be lost.
shard: sharding is the process of splitting a database and spreading the pieces across different machines. By distributing data over several machines, you can store more data and handle more load without needing one extremely powerful server. The basic idea is to cut a collection into small chunks, spread those chunks across a number of shards so that each shard is responsible for only part of the total data, and let a balancer keep the shards even by migrating chunks.
replica set: a replica set is essentially a backup of a shard, protecting against data loss if a shard member goes down. Replication provides redundancy by keeping copies of the data on multiple servers, which improves availability and helps keep the data safe.
An arbiter is a MongoDB instance in a replica set that holds no data. Arbiters use minimal resources and have no special hardware requirements. An arbiter should not run on the same machine as a data-bearing member of its own set; it can be deployed on another application server, a monitoring host, or a separate VM. To keep an odd number of voting members (including the primary), an arbiter is added as a voter; otherwise, when the primary fails, a new primary cannot be elected automatically.
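Why an odd number of voters matters comes down to majority math: a primary can be elected only while a strict majority of voting members is reachable. A small illustrative sketch (toy Python, not MongoDB code):

```python
# Toy illustration: a replica set can elect a primary only while a strict
# majority of its voting members is reachable.
def can_elect_primary(voting_members, reachable):
    """True if the reachable members form a strict majority of all voters."""
    return reachable > voting_members // 2

# Two data nodes alone: lose one and 1 of 2 is not a majority -> no primary.
print(can_elect_primary(2, 1))  # False
# Add an arbiter (3 voters): 2 of 3 survive one failure -> primary elected.
print(can_elect_primary(3, 2))  # True
```

Note that four voters tolerate no more failures than three (2 of 4 is not a strict majority), which is why an arbiter is added to reach an odd count rather than a fourth data-bearing node.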
To summarize: the application sends its CRUD operations to mongos; the config servers store the cluster metadata and keep mongos in sync; the data itself ends up on the shards, with redundant copies kept in each shard's replica set to guard against data loss; and the arbiter takes part in primary elections within a replica set (it stores no data and plays no role in where data is placed).
Environment
OS: CentOS 7.4
Three servers: node1 (192.168.56.101), node2 (192.168.56.102), node3 (192.168.56.103)
Installation: via yum
Server plan

| node1 (192.168.56.101) | node2 (192.168.56.102) | node3 (192.168.56.103) |
| --- | --- | --- |
| mongos | mongos | mongos |
| config server | config server | config server |
| shard server1 primary | shard server1 secondary | shard server1 arbiter |
| shard server2 arbiter | shard server2 primary | shard server2 secondary |
| shard server3 secondary | shard server3 arbiter | shard server3 primary |
Port assignments:
mongos:20000
config:21000
shard1:27001
shard2:27002
shard3:27003
Open the firewall ports. Each port is added twice: once with --permanent so the rule survives reboots, and once without so it takes effect immediately.
firewall-cmd --add-port=21000/tcp --permanent
firewall-cmd --add-port=21000/tcp
firewall-cmd --add-port=27001/tcp --permanent
firewall-cmd --add-port=27001/tcp
firewall-cmd --add-port=27002/tcp --permanent
firewall-cmd --add-port=27002/tcp
firewall-cmd --add-port=27003/tcp --permanent
firewall-cmd --add-port=27003/tcp
firewall-cmd --add-port=20000/tcp --permanent
firewall-cmd --add-port=20000/tcp
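The repeated firewall-cmd invocations follow one pattern per port, so they can also be generated from the port map above (an illustrative Python sketch that only prints the commands; run its output as root):

```python
# Generate the firewalld commands for the cluster ports assigned above.
# This only prints the commands; it does not run them.
ports = {"mongos": 20000, "config": 21000,
         "shard1": 27001, "shard2": 27002, "shard3": 27003}

def firewall_cmds(port):
    # --permanent persists the rule across reboots; the call without it
    # opens the port in the running firewall immediately.
    return ["firewall-cmd --add-port=%d/tcp --permanent" % port,
            "firewall-cmd --add-port=%d/tcp" % port]

for name, port in sorted(ports.items()):
    for cmd in firewall_cmds(port):
        print(cmd)
```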
Disable SELinux
vi /etc/selinux/config
SELINUX=disabled
Reboot the system afterwards.
Building the cluster
1. Install MongoDB
#install MongoDB via yum
vi /etc/yum.repos.d/mongodb.repo
[mongodb-org]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
yum install mongodb-org
Disable the stock mongod service (the cluster uses its own units):
systemctl disable mongod
systemctl stop mongod
Plan and create the directories
Create the configuration, data, and log directories on each of the three machines. mongos stores no data, so it only needs a log directory.
Configuration file directory:
mkdir -p /etc/mongod/conf.d
pid file directory
/var/run/mongodb
Data directories
#config server data directory
mkdir -p /var/lib/mongo/config/data
#shard server data directories
mkdir -p /var/lib/mongo/shard1/data
mkdir -p /var/lib/mongo/shard2/data
mkdir -p /var/lib/mongo/shard3/data
chown -R mongod:mongod /var/lib/mongo
Log file directory
/var/log/mongodb
2. Config servers
Since MongoDB 3.4 the config servers must themselves form a replica set, otherwise the cluster cannot be set up.
Add the configuration file
vi /etc/mongod/conf.d/config.conf
## configuration file contents
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/configsvr.log
storage:
  dbPath: /var/lib/mongo/config/data
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/configsvr.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 21000
  bindIp: 0.0.0.0
  maxIncomingConnections: 20000
replication:
  replSetName: csReplSet
sharding:
  clusterRole: configsvr
Create the systemd unit file mongod-configsvr.service (e.g. /etc/systemd/system/mongod-configsvr.service)
[Unit]
Description=Mongodb Config Server
After=network.target
Documentation=https://docs.mongodb.org/manual
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod/conf.d/config.conf"
ExecStart=/usr/bin/mongod $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/configsvr.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings
[Install]
WantedBy=multi-user.target
Start the config server on all three machines
systemctl daemon-reload
systemctl enable mongod-configsvr
systemctl start mongod-configsvr
Log in to any one of the config servers and initiate the config server replica set
#connect
mongo 127.0.0.1:21000
#the replica set config document
config = {
  _id : "csReplSet",
  members : [
    {_id : 1, host : "192.168.56.101:21000" },
    {_id : 2, host : "192.168.56.102:21000" },
    {_id : 3, host : "192.168.56.103:21000" }
  ]
}
#initiate the replica set
rs.initiate(config)
Here "_id" : "csReplSet" must match the replication.replSetName value in the configuration file, and each "host" in "members" is the ip:port of one of the three nodes.
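The document handed to rs.initiate() is a plain JSON object, so it can also be generated from the host list. A sketch (the helper name replset_config is made up for illustration):

```python
import json

# Build the document passed to rs.initiate(); _id must equal the
# replication.replSetName value from the config file (here csReplSet).
def replset_config(name, hosts):
    return {
        "_id": name,
        "members": [{"_id": i + 1, "host": h} for i, h in enumerate(hosts)],
    }

cfg = replset_config("csReplSet",
                     ["192.168.56.%d:21000" % n for n in (101, 102, 103)])
print(json.dumps(cfg, indent=2))
```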
3. Shard replica sets (on all three machines)
1) The first shard replica set
Add the configuration file
vi /etc/mongod/conf.d/shard1.conf
## configuration file contents
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/shard1.log
storage:
  dbPath: /var/lib/mongo/shard1/data
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/shard1.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 27001
  bindIp: 0.0.0.0
  maxIncomingConnections: 20000
replication:
  replSetName: shard1
sharding:
  clusterRole: shardsvr
Create the systemd unit file mongod-shard1.service (e.g. /etc/systemd/system/mongod-shard1.service)
[Unit]
Description=Mongodb Shard1 Server
After=network.target
Documentation=https://docs.mongodb.org/manual
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod/conf.d/shard1.conf"
ExecStart=/usr/bin/mongod $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/shard1.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings
[Install]
WantedBy=multi-user.target
Start the shard1 server on all three machines
systemctl daemon-reload
systemctl enable mongod-shard1
systemctl start mongod-shard1
Log in to any server other than the arbiter node and initiate the replica set
mongo 127.0.0.1:27001
#switch to the admin database
use admin
#define the replica set configuration; "arbiterOnly": true on the third member makes it the arbiter
config = {
  _id : "shard1",
  members : [
    {_id : 1, host : "192.168.56.101:27001" , priority : 2 },
    {_id : 2, host : "192.168.56.102:27001" , priority : 1 },
    {_id : 3, host : "192.168.56.103:27001" , arbiterOnly : true }
  ]
}
#initiate the replica set configuration
rs.initiate(config);
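What priority and arbiterOnly mean in that member list: an arbiterOnly member votes but holds no data, and among the data-bearing members the one with the highest priority is preferred as primary while it is healthy. A toy sketch of that rule (plain Python, for illustration only):

```python
# Toy model of the member options used above (not MongoDB code).
members = [
    {"_id": 1, "host": "192.168.56.101:27001", "priority": 2},
    {"_id": 2, "host": "192.168.56.102:27001", "priority": 1},
    {"_id": 3, "host": "192.168.56.103:27001", "arbiterOnly": True},
]

def data_bearing(members):
    # Arbiters never hold data.
    return [m["host"] for m in members if not m.get("arbiterOnly")]

def preferred_primary(members):
    # Among data-bearing members, the highest priority wins while healthy;
    # MongoDB's default member priority is 1.
    candidates = [m for m in members if not m.get("arbiterOnly")]
    return max(candidates, key=lambda m: m.get("priority", 1))["host"]

print(data_bearing(members))       # the two data nodes
print(preferred_primary(members))  # node1 is preferred while healthy
```

So for shard1, node1 is the preferred primary, node2 the secondary, and node3 only votes.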
2) The second shard replica set
Add the configuration file
vi /etc/mongod/conf.d/shard2.conf
## configuration file contents
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/shard2.log
storage:
  dbPath: /var/lib/mongo/shard2/data
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/shard2.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 27002
  bindIp: 0.0.0.0
  maxIncomingConnections: 20000
replication:
  replSetName: shard2
sharding:
  clusterRole: shardsvr
Create the systemd unit file mongod-shard2.service (e.g. /etc/systemd/system/mongod-shard2.service)
[Unit]
Description=Mongodb Shard2 Server
After=network.target
Documentation=https://docs.mongodb.org/manual
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod/conf.d/shard2.conf"
ExecStart=/usr/bin/mongod $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/shard2.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings
[Install]
WantedBy=multi-user.target
Start the shard2 server on all three machines
systemctl daemon-reload
systemctl enable mongod-shard2
systemctl start mongod-shard2
Log in to any server other than the arbiter node and initiate the replica set
mongo 127.0.0.1:27002
#switch to the admin database
use admin
#define the replica set configuration; "arbiterOnly": true on the first member makes it the arbiter
config = {
  _id : "shard2",
  members : [
    {_id : 1, host : "192.168.56.101:27002" , arbiterOnly : true },
    {_id : 2, host : "192.168.56.102:27002" , priority : 2 },
    {_id : 3, host : "192.168.56.103:27002" , priority : 1 }
  ]
}
#initiate the replica set configuration
rs.initiate(config);
3) The third shard replica set
Add the configuration file
vi /etc/mongod/conf.d/shard3.conf
## configuration file contents
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/shard3.log
storage:
  dbPath: /var/lib/mongo/shard3/data
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/shard3.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 27003
  bindIp: 0.0.0.0
  maxIncomingConnections: 20000
replication:
  replSetName: shard3
sharding:
  clusterRole: shardsvr
Create the systemd unit file mongod-shard3.service (e.g. /etc/systemd/system/mongod-shard3.service)
[Unit]
Description=Mongodb Shard3 Server
After=network.target
Documentation=https://docs.mongodb.org/manual
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod/conf.d/shard3.conf"
ExecStart=/usr/bin/mongod $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/shard3.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings
[Install]
WantedBy=multi-user.target
Start the shard3 server on all three machines
systemctl daemon-reload
systemctl enable mongod-shard3
systemctl start mongod-shard3
Log in to any server other than the arbiter node and initiate the replica set
mongo 127.0.0.1:27003
#switch to the admin database
use admin
#define the replica set configuration; "arbiterOnly": true on the second member makes it the arbiter
config = {
  _id : "shard3",
  members : [
    {_id : 1, host : "192.168.56.101:27003" , priority : 1 },
    {_id : 2, host : "192.168.56.102:27003" , arbiterOnly : true },
    {_id : 3, host : "192.168.56.103:27003" , priority : 2 }
  ]
}
#initiate the replica set configuration
rs.initiate(config);
4. Configure the mongos routers
Start the config servers and shard servers first, then start the router instances (on all three machines):
vi /etc/mongod/conf.d/mongos.conf
## configuration file contents
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongos.log
processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongos.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 20000
  bindIp: 0.0.0.0
  maxIncomingConnections: 20000
sharding:
  configDB: csReplSet/192.168.56.101:21000,192.168.56.102:21000,192.168.56.103:21000
#configDB must list either 1 or 3 config servers; csReplSet is the config server replica set name
Create the systemd unit file mongod-mongos.service (e.g. /etc/systemd/system/mongod-mongos.service)
[Unit]
Description=Mongodb Mongos Server
After=network.target mongod-configsvr.service mongod-shard1.service mongod-shard2.service mongod-shard3.service
Documentation=https://docs.mongodb.org/manual
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod/conf.d/mongos.conf"
ExecStart=/usr/bin/mongos $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/mongos.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings
[Install]
WantedBy=multi-user.target
Start the mongos server on all three machines
systemctl daemon-reload
systemctl enable mongod-mongos
systemctl start mongod-mongos
5. Enable sharding
At this point the config servers, routers, and shard servers are all running, but an application connecting to mongos cannot use sharding yet; the shards still have to be registered and sharding enabled from the mongos shell.
Log in to any mongos
mongo 127.0.0.1:20000
#switch to the admin database
use admin
#register the shard replica sets with the router
sh.addShard("shard1/192.168.56.101:27001,192.168.56.102:27001,192.168.56.103:27001")
sh.addShard("shard2/192.168.56.101:27002,192.168.56.102:27002,192.168.56.103:27002")
sh.addShard("shard3/192.168.56.101:27003,192.168.56.102:27003,192.168.56.103:27003")
#check the cluster status
sh.status()
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5a86b6255d128f35cb22de20")
  }
  shards:
    { "_id" : "shard1", "host" : "shard1/192.168.56.101:27001,192.168.56.102:27001", "state" : 1 }
    { "_id" : "shard2", "host" : "shard2/192.168.56.102:27002,192.168.56.103:27002", "state" : 1 }
    { "_id" : "shard3", "host" : "shard3/192.168.56.101:27003,192.168.56.103:27003", "state" : 1 }
  active mongoses:
    "3.6.2" : 3
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "config", "primary" : "config", "partitioned" : true }
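The argument passed to sh.addShard() is a seed string of the form replSetName/host:port,host:port,...; since each shard in this setup spans the same three nodes on one port, the strings can be derived from the server plan (an illustrative Python sketch; addshard_arg is a made-up helper name):

```python
# Build the "replSetName/host:port,..." seed strings passed to sh.addShard(),
# following the server plan above.
nodes = ["192.168.56.101", "192.168.56.102", "192.168.56.103"]
shard_ports = {"shard1": 27001, "shard2": 27002, "shard3": 27003}

def addshard_arg(name, port, nodes):
    # Seed string format: <replSetName>/<host:port>,<host:port>,...
    return name + "/" + ",".join("%s:%d" % (n, port) for n in nodes)

for name, port in sorted(shard_ports.items()):
    print('sh.addShard("%s")' % addshard_arg(name, port, nodes))
```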
6. Test
The config servers, routers, shards, and replica sets are now all wired together, but the goal is for inserted data to be sharded automatically. Connect to mongos and enable sharding for a specific database and collection.
mongo 127.0.0.1:20000
#set the shard chunk size
use config
db.settings.save({ "_id" : "chunksize", "value" : 1 })
The chunk size is set to 1 MB only for this test; with the default size, a large amount of data would have to be inserted before any chunk splitting happens.
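Rough back-of-the-envelope arithmetic on why 1 MB helps for testing (the 100-byte average document size is an assumption, roughly matching the tiny test documents inserted below):

```python
# How many documents fit in one chunk before it is eligible to split,
# assuming an average document size (here 100 bytes, an assumption).
def docs_per_chunk(chunk_mb, avg_doc_bytes):
    return chunk_mb * 1024 * 1024 // avg_doc_bytes

print(docs_per_chunk(1, 100))   # 1 MB test setting: splits after ~10k docs
print(docs_per_chunk(64, 100))  # default 64 MB: ~670k docs before a split
```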
#enable sharding for the test database
sh.enableSharding("test")
#create the shard key index on the collection to be sharded, then shard it on that key
use test
db.users.createIndex({user_id : 1})
use admin
sh.shardCollection("test.users", {user_id: 1})
This shards the users collection of the test database on user_id, so its documents are distributed automatically across shard1, shard2, and shard3. Sharding has to be enabled explicitly like this because not every MongoDB database or collection needs to be sharded.
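Conceptually, range sharding on user_id partitions the key space into chunks, each owned by one shard, and mongos routes each operation by its key. A toy model (the split points and chunk owners below are invented for illustration; the real boundaries are chosen by the autosplitter and the balancer):

```python
import bisect

# Toy model of range sharding on user_id: split points partition the key
# space into chunks, and each chunk lives on exactly one shard.
split_points = [250000, 500000, 750000]                 # assumed boundaries
chunk_owner = ["shard1", "shard1", "shard2", "shard3"]  # one owner per chunk

def route(user_id):
    # bisect_right finds which chunk interval the key falls into.
    return chunk_owner[bisect.bisect_right(split_points, user_id)]

print(route(100))     # shard1
print(route(600000))  # shard2
print(route(999999))  # shard3
```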
Check the sharding result
mongo 127.0.0.1:20000
use test;
for (var i = 1; i <=1000000; i++){
db.users.save({user_id: i, username: "user"+i});
}
#the shard status looks as follows; unrelated output has been omitted
sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5a8919101e5f8edd5feddf36")
  }
  shards:
    { "_id" : "shard1", "host" : "shard1/192.168.56.101:27001,192.168.56.102:27001", "state" : 1 }
    { "_id" : "shard2", "host" : "shard2/192.168.56.102:27002,192.168.56.103:27002", "state" : 1 }
    { "_id" : "shard3", "host" : "shard3/192.168.56.101:27003,192.168.56.103:27003", "state" : 1 }
  active mongoses:
    "3.6.2" : 3
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled: yes
    Currently running: no
    Collections with active migrations:
      test.users started at Sun Feb 18 2018 21:14:30 GMT+0800 (CST)
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      43 : Success
      1 : Failed with error 'aborted', from shard1 to shard3
      139 : Failed with error 'aborted', from shard1 to shard2
  databases:
    …
    { "_id" : "test", "primary" : "shard3", "partitioned" : true }
      test.users
        shard key: { "user_id" : 1 }
        unique: false
        balancing: true
        chunks:
          shard1 34
          shard2 31
          shard3 31
        too many chunks to print, use verbose if you want to force print
The data is spread across the three shards, with chunk counts of 34 on shard1, 31 on shard2, and 31 on shard3. Sharding works as intended.
Day-to-day operations
Startup
The startup order is: config servers first, then the shards, and finally mongos.
systemctl start mongod-configsvr
systemctl start mongod-shard1
systemctl start mongod-shard2
systemctl start mongod-shard3
systemctl start mongod-mongos
Shutdown, in the reverse order:
systemctl stop mongod-mongos
systemctl stop mongod-shard3
systemctl stop mongod-shard2
systemctl stop mongod-shard1
systemctl stop mongod-configsvr