Setting up a Ceph cluster on CentOS 7


 

1. Server planning

 

Hostname  Host IP                                           Disks                          Roles
node1     public-ip 10.0.0.130 / cluster-ip 192.168.2.130   sda (system), sdb+sdc (data)   ceph-deploy, monitor, mgr, osd
node2     public-ip 10.0.0.131 / cluster-ip 192.168.2.131   sda (system), sdb+sdc (data)   monitor, mgr, osd
node3     public-ip 10.0.0.132 / cluster-ip 192.168.2.132   sda (system), sdb+sdc (data)   monitor, mgr, osd

On each host, sda is the system disk; sdb and sdc are the data disks that will back the OSDs.

 

2. Set the hostnames

Set the hostname on each of the three hosts by running the commands for that host:

node1

[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# hostname node1

 

node2

[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# hostname node2

 

node3

[root@localhost ~]# hostnamectl set-hostname node3
[root@localhost ~]# hostname node3

 

To see the effect after running the commands, close the current terminal window and open a new one; the new hostname will then appear in the prompt.
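If you would rather not reopen the window, a quick alternative (a sketch, assuming bash is the login shell) is to replace the current shell and confirm the result:

exec bash            # restart the shell so the prompt picks up the new hostname
hostnamectl status   # should report the new static hostname (node1/node2/node3)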

 

3. Configure the hosts file

 

Run the following commands on all three machines to add the hostname mappings:

echo "10.0.0.130 node1 " >> /etc/hosts
echo "10.0.0.131 node2 " >> /etc/hosts
echo "10.0.0.132 node3 " >> /etc/hosts

 

 

4. Create a user and set up passwordless SSH login

Create the user (run on all three machines):

useradd -d /home/admin -m admin
echo "123456" | passwd admin --stdin 
# grant sudo privileges
echo "admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/admin
sudo chmod 0440 /etc/sudoers.d/admin

Set up passwordless login (run only on node1):

[root@node1 ~]# su - admin
[admin@node1 ~]$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/admin/.ssh/id_rsa): 
Created directory '/home/admin/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/admin/.ssh/id_rsa.
Your public key has been saved in /home/admin/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:qfWhuboKeoHQOOMLOIB5tjK1RPjgw/Csl4r6A1FiJYA admin@admin.ops5.bbdops.com
The key's randomart image is:
+---[RSA 2048]----+
|+o..             |
|E.+              |
|*%               |
|X+X      .       |
|=@.+    S .      |
|X.*    o + .     |
|oBo.  . o .      |
|ooo.     .       |
|+o....oo.        |
+----[SHA256]-----+
[admin@node1 ~]$ ssh-copy-id admin@node1
[admin@node1 ~]$ ssh-copy-id admin@node2
[admin@node1 ~]$ ssh-copy-id admin@node3
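Before continuing, it is worth confirming that key-based login works; each of the following should print the remote hostname without prompting for a password:

[admin@node1 ~]$ ssh admin@node2 hostname
[admin@node1 ~]$ ssh admin@node3 hostname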

 

5. Configure time synchronization

Run on all three machines:

yum -y install ntpdate
ntpdate -u  cn.ntp.org.cn

crontab -e
*/20 * * * * ntpdate -u  cn.ntp.org.cn > /dev/null 2>&1

systemctl reload crond.service
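crontab -e is interactive; if you are scripting all three nodes, the same entry can be appended non-interactively (a sketch using the identical schedule shown above):

(crontab -l 2>/dev/null; echo "*/20 * * * * ntpdate -u cn.ntp.org.cn > /dev/null 2>&1") | crontab -
systemctl reload crond.service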

 

 

 

6. Install ceph-deploy and the Ceph packages

Configure the Tsinghua (TUNA) mirror as the Ceph yum repository (on all three nodes, since the later install step uses the locally configured repos):

cat > /etc/yum.repos.d/ceph.repo<<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
EOF
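After writing the repo file, an optional check is to refresh the yum metadata and confirm the three Ceph repositories are visible:

yum clean all
yum makecache
yum repolist | grep -i ceph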

 

Install ceph-deploy:

[root@node1 ~]# sudo yum install ceph-deploy

 

Initialize the mon nodes

Ceph depends on packages from the EPEL repository, so run yum install epel-release on every node where Ceph will be installed.

[admin@node1 ~]$ mkdir my-cluster
[admin@node1 ~]$ cd my-cluster
# create a new cluster definition with node1, node2 and node3 as the initial monitor nodes
[admin@node1 my-cluster]$ ceph-deploy new node1 node2 node3
Traceback (most recent call last):
  File "/bin/ceph-deploy", line 18, in <module>
    from ceph_deploy.cli import main
  File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
    import pkg_resources
ImportError: No module named pkg_resources
#The error above occurs because the pkg_resources module (part of python setuptools) is missing; installing python-pip from EPEL pulls it in
[admin@node1 my-cluster]$ sudo yum install epel-release
[admin@node1 my-cluster]$ sudo yum install python-pip
#re-run the initialization
[admin@node1 my-cluster]$ ceph-deploy new node1 node2 node3
[admin@node1 my-cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
[admin@node1 my-cluster]$ cat ceph.conf 
[global]
fsid = a1132f78-cdc5-43d0-9ead-5b590c60c53d
mon_initial_members = node1, node2, node3
mon_host = 10.0.0.130,10.0.0.131,10.0.0.132
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

 

Edit ceph.conf and append the following configuration (the public and cluster networks match the addresses from the server plan in section 1):

public network = 10.0.0.0/24
cluster network = 192.168.2.0/24
osd pool default size       = 3
osd pool default min size   = 2
osd pool default pg num     = 128
osd pool default pgp num    = 128
osd pool default crush rule = 0
osd crush chooseleaf type   = 1
max open files              = 131072
ms bind ipv6                = false
[mon]
mon clock drift allowed      = 10
mon clock drift warn backoff = 30
mon osd full ratio           = .95
mon osd nearfull ratio       = .85
mon osd down out interval    = 600
mon osd report timeout       = 300
mon allow pool delete      = true
[osd]
osd recovery max active      = 3    
osd max backfills            = 5
osd max scrubs               = 2
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=1024
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog
filestore max sync interval  = 5
osd op threads               = 2

 

Install the Ceph packages on the specified nodes:

[admin@node1 my-cluster]$ ceph-deploy install --no-adjust-repos node1 node2 node3

The --no-adjust-repos flag tells ceph-deploy to use the repositories already configured on each node instead of writing the official upstream repo files.

 

Deploy the initial monitors and collect the keys:

[admin@node1 my-cluster]$ ceph-deploy mon create-initial

 

After this step, the following keyrings appear in the current directory:

[admin@node1 my-cluster]$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph.mon.keyring

 

Copy the configuration file and admin key to each cluster node

The configuration file is the generated ceph.conf, and the key is ceph.client.admin.keyring, the default key a Ceph client uses when connecting to the cluster. Here we copy them to all nodes with the following command.

 

[admin@node1 my-cluster]$ ceph-deploy admin node1 node2 node3
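Optionally, verify from node1 that the files actually landed on the other nodes (paths are the ceph-deploy defaults):

[admin@node1 my-cluster]$ ssh admin@node2 ls /etc/ceph
[admin@node1 my-cluster]$ ssh admin@node3 ls /etc/ceph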

 

7. Deploy ceph-mgr

#Since the Luminous (L) release, Ceph includes a manager daemon; the following command deploys one Manager daemon
[admin@node1 my-cluster]$ ceph-deploy mgr create node1 
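The command above creates a single active mgr on node1. If you also want standby managers for failover (optional, not required for this walkthrough), ceph-deploy can create them on the other nodes in the same way:

[admin@node1 my-cluster]$ ceph-deploy mgr create node2 node3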

8. Create the OSDs

Run the following commands on node1:

#Usage: ceph-deploy osd create --data {device} {ceph-node}
ceph-deploy osd create --data /dev/sdb node1
ceph-deploy osd create --data /dev/sdb node2
ceph-deploy osd create --data /dev/sdb node3
ceph-deploy osd create --data /dev/sdc node1
ceph-deploy osd create --data /dev/sdc node2
ceph-deploy osd create --data /dev/sdc node3

If the commands fail with a permission error, re-run them as root (or with sudo). If a data disk was used before and is rejected, it can be wiped first, as sketched below.
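A hedged example of wiping a previously used data disk before creating its OSD (this destroys all data on the device; adjust the host and device to your case):

ceph-deploy disk zap node1 /dev/sdb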

 

Check the cluster health and OSD status:

[admin@node1 ~]$ sudo ceph health
HEALTH_OK
[admin@node1 ~]$ sudo ceph -s
  cluster:
    id:     af6bf549-45be-419c-92a4-8797c9a36ee8
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active)
    osd: 6 osds: 6 up, 6 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   6.0 GiB used, 108 GiB / 114 GiB avail
    pgs:     
 

 

By default the ceph.client.admin.keyring file has mode 600 and is owned by root:root. If you run the ceph command directly as the admin user on a cluster node, it will report that /etc/ceph/ceph.client.admin.keyring cannot be found, because the user is not allowed to read it.

There is no such problem when using sudo ceph. For convenience, the permissions can be set to 644 so the ceph command works directly. Run the following on node1 as the admin user.

[admin@node1 my-cluster]$ ceph -s
2020-03-08 07:59:36.062 7f52d08e0700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2020-03-08 07:59:36.062 7f52d08e0700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
[errno 2] error connecting to the cluster
[admin@node1 my-cluster]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring 
[admin@node1 my-cluster]$ ceph -s
  cluster:
    id:     af6bf549-45be-419c-92a4-8797c9a36ee8
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active)
    osd: 6 osds: 6 up, 6 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   6.1 GiB used, 108 GiB / 114 GiB avail
    pgs:     
 
[admin@node1 my-cluster]$ 

 

 

View the OSD tree:

[admin@node1 ~]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF 
-1       0.11151 root default                           
-3       0.03717     host node1                         
 0   hdd 0.01859         osd.0      up  1.00000 1.00000 
 3   hdd 0.01859         osd.3      up  1.00000 1.00000 
-5       0.03717     host node2                         
 1   hdd 0.01859         osd.1      up  1.00000 1.00000 
 4   hdd 0.01859         osd.4      up  1.00000 1.00000 
-7       0.03717     host node3                         
 2   hdd 0.01859         osd.2      up  1.00000 1.00000 
 5   hdd 0.01859         osd.5      up  1.00000 1.00000 

 

 

9. Enable the MGR dashboard module

Option 1: via the command line

ceph mgr module enable dashboard

If the command above fails with the following error:

Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement

it means ceph-mgr-dashboard is not installed; install it on the mgr node(s):

yum install ceph-mgr-dashboard
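After the package is installed, run the enable command again; it should now succeed:

ceph mgr module enable dashboard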

Option 2: via the configuration file

# edit the ceph.conf file and add the module under [mon]
vi ceph.conf
[mon]
mgr initial modules = dashboard
# push the updated configuration to all nodes
[admin@node1 my-cluster]$ ceph-deploy --overwrite-conf config push node1 node2 node3
# restart the mgr daemon so the module is loaded
sudo systemctl restart ceph-mgr@node1

 

Web login configuration

By default, all HTTP connections to the dashboard are secured with SSL/TLS.

#To get the dashboard up and running quickly, generate and install a self-signed certificate with this built-in command:
[root@node1 my-cluster]# ceph dashboard create-self-signed-cert
Self-signed certificate created
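If certificate warnings are an issue in a test environment, SSL for the dashboard can be switched off instead (an optional Mimic-era setting; restart the mgr afterwards for it to take effect):

[root@node1 my-cluster]# ceph config set mgr mgr/dashboard/ssl false
[root@node1 my-cluster]# systemctl restart ceph-mgr@node1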

#Create a user with the administrator role:
[root@node1 my-cluster]# ceph dashboard set-login-credentials admin admin
Username and password updated
#Check which services ceph-mgr is exposing:
[root@node1 my-cluster]# ceph mgr services
{
    "dashboard": "https://node1:8443/"
}

 

Once the configuration above is complete, open https://node1:8443 in a browser and log in with username admin and password admin to view the dashboard.
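A quick reachability check from the command line (the -k flag skips verification of the self-signed certificate):

curl -k https://node1:8443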

 

 

 

 

 


 

 

 

 

