Setting Up a Ceph Cluster and Implementing Dynamic Storage (StorageClass) on Kubernetes


Cluster Preparation

Ceph cluster layout

 
Node name     IP address    OS / Spec                         Role
ceph-moni-0   10.10.3.150   CentOS 7.5, 4C, 16G, 200G disk    Admin node, monitor
ceph-moni-1   10.10.3.151   CentOS 7.5, 4C, 16G, 200G disk    Monitor
ceph-moni-2   10.10.3.152   CentOS 7.5, 4C, 16G, 200G disk    Monitor
ceph-osd-0    10.10.3.153   CentOS 7.5, 4C, 16G, 200G disk    Storage node (osd)
ceph-osd-1    10.10.3.154   CentOS 7.5, 4C, 16G, 200G disk    Storage node (osd)
ceph-osd-2    10.10.3.155   CentOS 7.5, 4C, 16G, 200G disk    Storage node (osd)

This article uses ceph-deploy to install and configure a 6-node cluster: 3 monitor nodes and 3 osd nodes.

Ceph Cluster Installation and Configuration

Install dependency packages (all nodes)

sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

Configure the Ceph yum repo (all nodes)

vim /etc/yum.repos.d/ceph.repo

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

 

Update packages (all nodes)

sudo yum update 

Add cluster hosts entries (all nodes)

cat >> /etc/hosts <<EOF
10.10.3.150 ceph-moni-0
10.10.3.151 ceph-moni-1
10.10.3.152 ceph-moni-2
10.10.3.153 ceph-osd-0
10.10.3.154 ceph-osd-1
10.10.3.155 ceph-osd-2
EOF

Create a ceph user, grant it root privileges, and enable passwordless sudo (all nodes)

 useradd -d /home/ceph -m ceph  && echo 123456 | passwd --stdin ceph && echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph

Install ceph-deploy (admin node)

sudo yum install ceph-deploy

Configure passwordless SSH login (admin node)

su - ceph
ssh-keygen -t rsa
# copy the public key to each node
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-moni-1
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-moni-2
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-osd-0
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-osd-1
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-osd-2
# on the admin node, edit ~/.ssh/config
Host ceph-moni-1
Hostname ceph-moni-1
User ceph
Host ceph-moni-2
Hostname ceph-moni-2
User ceph
Host ceph-osd-0
Hostname ceph-osd-0
User ceph
Host ceph-osd-1
Hostname ceph-osd-1
User ceph
Host ceph-osd-2
Hostname ceph-osd-2
User ceph
# fix permissions
sudo chmod 600 ~/.ssh/config
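As a quick sanity check before continuing, you can confirm passwordless login works from the ceph user on the admin node (node names taken from the table above); each command should print the remote hostname without asking for a password:

# verify passwordless SSH to every node
for node in ceph-moni-1 ceph-moni-2 ceph-osd-0 ceph-osd-1 ceph-osd-2; do
  ssh $node hostname
done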

Create the cluster (admin node)

su - ceph 
mkdir ceph-cluster
cd ceph-cluster
ceph-deploy new {initial-monitor-node(s)}
For example:
ceph-deploy new ceph-moni-0 ceph-moni-1 ceph-moni-2

On the admin node, edit the generated ceph configuration file and add the following:

vim ceph.conf 
# default number of replicas per pool (one per osd node)
osd pool default size = 3
# allow pools to be deleted from the cluster
[mon]
mon_allow_pool_delete = true
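If ceph.conf needs to change again after the cluster is running, ceph-deploy can redistribute it; a minimal sketch, assuming it is run from the ceph-cluster directory created above (restart the affected daemons afterwards for the change to take effect):

# push the updated ceph.conf to every node, overwriting the existing copy
ceph-deploy --overwrite-conf config push ceph-moni-0 ceph-moni-1 ceph-moni-2 ceph-osd-0 ceph-osd-1 ceph-osd-2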

Install Ceph on every cluster node from the admin node

ceph-deploy install {ceph-node} [{ceph-node} ...]
For example:
ceph-deploy install ceph-moni-0 ceph-moni-1 ceph-moni-2 ceph-osd-0 ceph-osd-1 ceph-osd-2

Configure the initial monitor(s) and gather all keys

ceph-deploy mon create-initial

From the admin node, log in to each osd node and create its data directory (all osd nodes)

ssh ceph-osd-0
sudo mkdir /var/local/osd0
sudo chmod 777 -R /var/local/osd0
exit
ssh ceph-osd-1
sudo mkdir /var/local/osd1
sudo chmod 777 -R /var/local/osd1
exit
ssh ceph-osd-2
sudo mkdir /var/local/osd2
sudo chmod 777 -R /var/local/osd2
exit

Prepare each osd (run on the admin node)

ceph-deploy osd prepare ceph-osd-0:/var/local/osd0 ceph-osd-1:/var/local/osd1 ceph-osd-2:/var/local/osd2

Activate each osd (run on the admin node)

ceph-deploy osd activate ceph-osd-0:/var/local/osd0 ceph-osd-1:/var/local/osd1 ceph-osd-2:/var/local/osd2

From the admin node, copy the configuration file and admin key to the admin node and the Ceph nodes, then make ceph.client.admin.keyring readable (all nodes)

ceph-deploy admin {manage-node} {ceph-node}
For example:
ceph-deploy admin ceph-moni-0 ceph-moni-1 ceph-moni-2 ceph-osd-0 ceph-osd-1 ceph-osd-2
# run on every node
sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Deployment is complete. Check the cluster status:

$ ceph health
HEALTH_OK
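For more detail than the single-line health check, ceph -s summarizes the overall cluster state:

$ ceph -s
# prints the cluster id, monitor quorum, number of osds up/in, and pool/pg usage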

Client Configuration

Because my Kubernetes nodes run Ubuntu, and every Kubernetes node that wants to use Ceph must have the Ceph client installed and configured, I cover the installation for both operating systems here.

CentOS

Add the ceph repo

vim /etc/yum.repos.d/ceph.repo

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

Install the ceph client

yum update && yum install -y ceph

Add cluster hosts entries

cat >> /etc/hosts <<EOF
10.10.3.30 ceph-client01
10.10.3.150 ceph-moni-0
10.10.3.151 ceph-moni-1
10.10.3.152 ceph-moni-2
10.10.3.153 ceph-osd-0
10.10.3.154 ceph-osd-1
10.10.3.155 ceph-osd-2
EOF

Copy the cluster configuration and admin key

scp -r root@ceph-moni-0:/etc/ceph/\{ceph.conf,ceph.client.admin.keyring\} /etc/ceph/

Ubuntu

Configure the repo

wget -q -O- https://mirrors.aliyun.com/ceph/keys/release.asc | sudo apt-key add -; echo deb https://mirrors.aliyun.com/ceph/debian-kraken xenial main | sudo tee /etc/apt/sources.list.d/ceph.list

Update packages

apt update  && apt -y dist-upgrade && apt -y autoremove

Install the ceph client

apt-get install ceph 

Add cluster hosts entries

cat >> /etc/hosts <<EOF
10.10.3.30 ceph-client01
10.10.3.150 ceph-moni-0
10.10.3.151 ceph-moni-1
10.10.3.152 ceph-moni-2
10.10.3.153 ceph-osd-0
10.10.3.154 ceph-osd-1
10.10.3.155 ceph-osd-2
EOF

Copy the cluster configuration and admin key

scp -r root@ceph-moni-0:/etc/ceph/\{ceph.conf,ceph.client.admin.keyring\} /etc/ceph/

Configure the StorageClass

Every Kubernetes worker node must be able to reach the Ceph cluster, so every node needs the Ceph client (ceph-common) installed. Installing the full ceph package, as I did above, also works.
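If you only want the client tools rather than the full ceph package, a minimal sketch using the repos configured above:

# Ubuntu nodes
apt-get install -y ceph-common
# CentOS nodes
yum install -y ceph-common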

Generate the key

$ grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64
QVFCWXB0RmIzK2dqTEJBQUtsYm4vaHU2NWZ2eHlaaGRnM2hwc1E9PQ==
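The same value can also be produced without parsing the keyring file; ceph auth get-key client.admin prints the raw admin key, which is then base64-encoded for the Secret:

$ ceph auth get-key client.admin | base64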

Configure the secret for accessing Ceph

The secret below lives in the default namespace, so it can only be used there. To use it from another namespace, create the same secret in that namespace and simply change the namespace field, as shown in the sketch after the example below.

$ vim ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: default
type: "kubernetes.io/rbd"
data:
  key: QVFCWXB0RmIzK2dqTEJBQUtsYm4vaHU2NWZ2eHlaaGRnM2hwc1E9PQ==
$ kubectl apply -f ceph-secret.yaml
secret/ceph-secret created
$ kubectl get secret
NAME                  TYPE                                  DATA   AGE
ceph-secret           kubernetes.io/rbd                     1      4s
default-token-lplp6   kubernetes.io/service-account-token   3      50d
mysql-root-password   Opaque                                1      2d

 

Configure the Ceph storage class

$ vim ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: jax-ceph
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.10.3.150:6789,10.10.3.151:6789,10.10.3.152:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: rbd
  userId: admin
  userSecretName: ceph-secret
$ kubectl apply -f ceph-storageclass.yaml 
storageclass.storage.k8s.io/jax-ceph created
$ kubectl get storageclass
NAME              PROVISIONER          AGE
jax-ceph          kubernetes.io/rbd    1

This completes the dynamic storage setup.
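To verify provisioning works before wiring it into a workload, a minimal standalone PVC sketch (the claim name is hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-test        # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: jax-ceph
  resources:
    requests:
      storage: 1Gi

Once applied, the claim should reach the Bound state and a matching image should appear in the rbd pool.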

Below is an example of using it in a StatefulSet; a sketch of the headless Service it references follows the manifest.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-sts-svc
  replicas: 2
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "jax-ceph"
      resources:
        requests:
          storage: 5Gi
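The StatefulSet above points at serviceName: myapp-sts-svc, so it needs a governing headless Service with that name and a matching selector; a minimal sketch:

apiVersion: v1
kind: Service
metadata:
  name: myapp-sts-svc
spec:
  clusterIP: None        # headless Service: gives each pod a stable DNS record
  selector:
    app: myapp-pod
  ports:
  - port: 80
    name: web

Each replica then gets its own PVC from the volumeClaimTemplates, provisioned dynamically through the jax-ceph storage class.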

 

