1. Deployment Environment Planning
1.1 The planned version of each component is shown in the table below
The current support status of each software release is documented at https://docs.ceph.com/docs/master/releases/general/
1.2 Check disk information
# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  50G  0 disk
└─vda1 253:1    0  50G  0 part /
vdb    253:16   0  20G  0 disk
vdc    253:32   0  20G  0 disk
vdd    253:48   0  20G  0 disk
2. Environment Initialization (unless otherwise noted, perform these steps on all three nodes)
2.1 Configure time synchronization
# crontab -e
*/5 * * * * /usr/sbin/ntpdate time1.aliyun.com
Or alternatively:
yum install ntpdate ntp -y
ntpdate cn.ntp.org.cn
systemctl restart ntpd && systemctl enable ntpd
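Either way, it is worth confirming the clock is actually in sync before continuing, since Ceph monitors are sensitive to clock skew. For example:
ntpq -p                        # with ntpd running: peers should show a nonzero "reach"
ntpdate -q time1.aliyun.com    # query-only offset check (works for the cron variant too)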
2.2 Disable SELinux
sed -i "/^SELINUX/s/enforcing/disabled/" /etc/selinux/config
setenforce 0
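A quick check that both changes took effect (the config edit applies at the next boot; setenforce handles the running system):
getenforce                            # should print Permissive (Disabled after a reboot)
grep ^SELINUX= /etc/selinux/config    # should print SELINUX=disabled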
2.3 Disable firewalld
systemctl stop firewalld
systemctl disable firewalld
Or add firewall rules for Ceph instead:
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-all
2.4 Add local hosts entries
# vim /etc/hosts
192.168.5.91 ceph01
192.168.5.92 ceph02
192.168.5.93 ceph03
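To confirm the entries resolve from every node, a quick loop such as:
for h in ceph01 ceph02 ceph03; do ping -c 1 $h; done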
2.5 Configure sudo not to require a TTY (this should be unnecessary on CentOS 7.6)
sed -i 's/^Defaults[[:space:]]*requiretty/#Defaults requiretty/' /etc/sudoers
2.6 Create ceph.repo to set up a package source for the Ceph Mimic (M) release
Aliyun package sources for other releases can all be found here: http://mirrors.aliyun.com/ceph/
# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
gpgcheck=0
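After writing the repo file, refreshing the yum cache confirms the mirror is reachable and shows which Ceph build will be installed (Mimic is the 13.2.x series):
yum clean all && yum makecache
yum info ceph    # version should show 13.2.x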
2.7 Create the cephadmin deployment account
useradd cephadmin
echo 'cephadmin' | passwd --stdin cephadmin
echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
chmod 0440 /etc/sudoers.d/cephadmin
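A quick sanity check that passwordless sudo works for the new account:
su - cephadmin -c 'sudo whoami'    # should print "root" without prompting for a password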
2.8 Configure passwordless SSH login (on the deployment node ceph01 only)
[root@ceph01 ~]# su - cephadmin
[cephadmin@ceph01 ~]$ ssh-keygen
[cephadmin@ceph01 ~]$ ssh-copy-id cephadmin@ceph01
[cephadmin@ceph01 ~]$ ssh-copy-id cephadmin@ceph02
[cephadmin@ceph01 ~]$ ssh-copy-id cephadmin@ceph03
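Before running ceph-deploy, verify that each node is reachable without a password, e.g.:
[cephadmin@ceph01 ~]$ for h in ceph01 ceph02 ceph03; do ssh $h hostname; done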
3. Deploying the Cluster with ceph-deploy
3.1 Install ceph-deploy and python-pip on the ceph01 node
[cephadmin@ceph01 ~]$ sudo yum install -y ceph-deploy python-pip
3.2 Install the Ceph packages on all three nodes
sudo yum install -y ceph ceph-radosgw
3.3 Create the cluster on ceph01
[cephadmin@ceph01 ~]$ mkdir my-cluster
[cephadmin@ceph01 ~]$ cd my-cluster/
[cephadmin@ceph01 my-cluster]$ ceph-deploy new ceph01 ceph02 ceph03
# If no custom cluster name is specified, the default is "ceph"; specify one when deploying multiple clusters:
ceph-deploy --cluster {cluster-name} new node1 node2
# Once this completes, the configuration files are generated:
[cephadmin@ceph01 my-cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
You can also separate the public network from the internal cluster network by adding two lines to ceph.conf.
[cephadmin@ceph01 my-cluster]$ cat ceph.conf
[global]
fsid = 4d02981a-cd20-4cc9-8390-7013da54b161
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.5.91,192.168.5.92,192.168.5.93
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# With dual NICs, set a gateway on only one of them, then add the two lines below. Do not add them if you are not separating the networks.
public network = 192.168.5.0/24
cluster network = 192.168.10.0/24
3.4 Initialize the cluster monitors (mon)
[cephadmin@ceph01 my-cluster]$ ceph-deploy mon create-initial
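Besides creating the monitors, create-initial gathers the cluster keyrings into the working directory; afterwards the listing should look something like:
[cephadmin@ceph01 my-cluster]$ ls *.keyring
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.mon.keyring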
3.5 Copy the configuration to every node
[cephadmin@ceph01 my-cluster]$ ceph-deploy admin ceph01 ceph02 ceph03
# Run on all nodes: grant cephadmin ownership of /etc/ceph
sudo chown -R cephadmin.cephadmin /etc/ceph
[cephadmin@ceph01 my-cluster]$ ceph -s    # check cluster status
3.6 Configure the OSDs that will store the data
From the my-cluster directory on ceph01, run the following against all three nodes: wipe each disk, create the OSD, and add it to the cluster. If you do not want to dedicate a whole disk as the data device (a plain directory cannot be used), you can use an LVM volume instead. To create an OSD on an LVM volume, the --data argument must be volume_group/lv_name, not the block-device path of the volume (a minimal sketch follows the loop below).
for dev in /dev/vdb /dev/vdc /dev/vdd
do
  ceph-deploy disk zap ceph01 $dev
  ceph-deploy osd create ceph01 --data $dev
  ceph-deploy disk zap ceph02 $dev
  ceph-deploy osd create ceph02 --data $dev
  ceph-deploy disk zap ceph03 $dev
  ceph-deploy osd create ceph03 --data $dev
done
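For the LVM variant mentioned above, a minimal hypothetical sketch; the volume group and LV names (ceph-vg, osd-lv) are invented for illustration, and the VG/LV must be created on the target node itself:
sudo vgcreate ceph-vg /dev/vdb                         # on the target node: volume group on the raw disk
sudo lvcreate -n osd-lv -l 100%FREE ceph-vg            # one LV spanning the whole VG
ceph-deploy osd create ceph01 --data ceph-vg/osd-lv    # note: vg/lv name, not /dev/ceph-vg/osd-lv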
One OSD daemon corresponds to one physical disk.
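Once the loop finishes, all nine OSDs (three per node) should be up and in; one way to confirm:
[cephadmin@ceph01 my-cluster]$ ceph osd tree    # expect 9 OSDs, all up / in
[cephadmin@ceph01 my-cluster]$ ceph -s          # health should be HEALTH_OK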
3.7 Deploy MGR. From the Luminous (L) release onward the mgr daemon is mandatory; among other things, it hosts the dashboard module.
[cephadmin@ceph01 my-cluster]$ ceph-deploy mgr create ceph01 ceph02 ceph03
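If the managers deployed cleanly, ceph -s should now report one active mgr and two standbys, along the lines of:
[cephadmin@ceph01 my-cluster]$ ceph -s | grep mgr
    mgr: ceph01(active), standbys: ceph02, ceph03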
4. Enabling the Dashboard
4.1 Enable the dashboard module
[cephadmin@ceph01 my-cluster]$ ceph mgr module enable dashboard
[cephadmin@ceph01 my-cluster]$ ceph mgr module ls    # "dashboard" should now appear under enabled_modules
4.2 Generate a self-signed certificate
[cephadmin@ceph01 my-cluster]$ ceph dashboard create-self-signed-cert
Self-signed certificate created
4.3 Generate a key pair
[cephadmin@ceph01 mgr-dashboard]$ openssl req -new -nodes -x509 -subj "/O=IT/CN=ceph-mgr-dashboard" -days 3650 -keyout dashboard.key -out dashboard.crt -extensions v3_ca
Generating a 2048 bit RSA private key
...................................+++
...........+++
writing new private key to 'dashboard.key'
-----
[cephadmin@ceph01 mgr-dashboard]$ ls
dashboard.crt  dashboard.key
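The original stops here without loading the key pair into the dashboard; on Mimic-era releases this was typically done through the mgr config-key store. The exact key names below are an assumption; verify them against the dashboard documentation for your release:
# Assumed Mimic config-key paths; check your release's docs
ceph config-key set mgr/dashboard/crt -i dashboard.crt
ceph config-key set mgr/dashboard/key -i dashboard.key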
4.4 Set the IP and port (if not set, the port defaults to 8443)
[cephadmin@ceph01 my-cluster]$ ceph config set mgr mgr/dashboard/server_addr 192.168.5.91
[cephadmin@ceph01 my-cluster]$ ceph config set mgr mgr/dashboard/server_port 8080
[cephadmin@ceph01 my-cluster]$ ceph mgr services
{
    "dashboard": "https://ceph01:8443/"
}
# The old address is still reported; the new settings only take effect after the module is restarted (next step).
4.5 Restart the dashboard
[cephadmin@ceph01 my-cluster]$ ceph mgr module disable dashboard
[cephadmin@ceph01 my-cluster]$ ceph mgr module enable dashboard
[cephadmin@ceph01 my-cluster]$ ceph mgr services
{
    "dashboard": "https://192.168.5.91:8080/"
}
4.6 Set the login username and password
[cephadmin@ceph01 my-cluster]$ ceph dashboard set-login-credentials admin admin
4.7 The dashboard can now be accessed directly at
https://192.168.5.91:8080/