Prerequisite: a Ceph cluster has already been deployed.
Because lab resources are limited, the Ceph cluster in this walkthrough runs on the k8s master node.
I. Create a Ceph storage pool
Run the following command on a mon node of the Ceph cluster:
ceph osd pool create k8s-volumes 64 64
Check the replica count:
[root@master ceph]# ceph osd pool get k8s-volumes size
size: 3
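The PG count set at creation time (64 above) can be confirmed the same way; an optional quick check against the same pool:
ceph osd pool get k8s-volumes pg_num
ceph osd pool ls detail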
Set the number of PGs according to the following formula:
Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count
Round the result to the nearest power of 2. For example, with 2 OSDs in total, a replication count of 3, and a single pool, the formula gives 66.66; the closest power of 2 is 64, so each pool is assigned 64 PGs.
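As a quick sanity check of that arithmetic, a minimal sketch (assuming bc is installed; the numbers 2, 3 and 1 come from the example above):
# (2 OSDs * 100 / 3 replicas) / 1 pool = 66.66 -> closest power of 2 is 64
echo "scale=2; (2 * 100 / 3) / 1" | bc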
II. Install ceph-common on all k8s nodes
1. Configure Aliyun mirror yum repos and the Ceph repo
cp -r /etc/yum.repos.d/ /etc/yum-repos-d-bak
yum install -y wget
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache
cat <<EOF > /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
EOF
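If you want to confirm the new repos are visible before installing anything, an optional check:
yum repolist | grep -i ceph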
2. Install ceph-common
yum -y install ceph-common
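A simple way to confirm the install succeeded on each node (the version should correspond to the Luminous packages configured above):
ceph --version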
3. Copy the config file /etc/ceph/ceph.conf from the Ceph mon node to /etc/ceph on every k8s node.
4. Copy the keyring file /etc/ceph/ceph.client.admin.keyring from the Ceph mon node to /etc/ceph on every k8s node.
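A minimal sketch of the copy, run from the mon node; the hostnames master/node1/node2 are placeholders for your actual k8s nodes, and /etc/ceph is assumed to already exist on each of them:
# copy both files to every k8s node in one pass
for node in master node1 node2; do
  scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring ${node}:/etc/ceph/
done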
III. Connect k8s to Ceph using RBAC
Because the k8s cluster was deployed with kubeadm, kube-controller-manager runs as a container that does not include ceph-common, so we use an external storage provisioner plugin instead.
Briefly, the plugin offers both rbac and no-rbac variants; since this k8s cluster was built with RBAC authentication enabled, we use the rbac variant to create the deployment.
1. Pull the plugin image (I have already pushed it to an Aliyun image registry):
docker pull registry.cn-hangzhou.aliyuncs.com/boshen-ns/rbd-provisioner:v1.0
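Note that the Deployment in step 7 uses imagePullPolicy: Never, so the image must already exist locally on whichever node the provisioner pod lands on; pull it on every k8s node to be safe, and confirm with something like:
docker images | grep rbd-provisioner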
2. Create /root/k8s-ceph-rbac/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
3. Create /root/k8s-ceph-rbac/clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete"]
4. Create /root/k8s-ceph-rbac/clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
5. Create /root/k8s-ceph-rbac/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
6. Create /root/k8s-ceph-rbac/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
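Once these RBAC manifests have been applied (step 11 below), you can spot-check the permissions granted to the rbd-provisioner service account in the default namespace; a sketch using kubectl impersonation, both of which should print yes:
kubectl auth can-i create persistentvolumes --as=system:serviceaccount:default:rbd-provisioner
kubectl auth can-i update persistentvolumeclaims --as=system:serviceaccount:default:rbd-provisioner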
7. Create /root/k8s-ceph-rbac/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/boshen-ns/rbd-provisioner:v1.0
          imagePullPolicy: Never
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: rbd-provisioner
8. Create /root/k8s-ceph-rbac/ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFDQmRvbGNxSHlaQmhBQW45WllIbCtVd2JrTnlPV0xseGQ4RUE9PQ==
The value of key above is obtained as follows:
[root@master ~]# grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
QVFDQmRvbGNxSHlaQmhBQW45WllIbCtVd2JrTnlPV0xseGQ4RUE9PQ==
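An equivalent way to produce the same value, assuming client.admin is the user in that keyring, is to ask Ceph for the key directly:
ceph auth get-key client.admin | base64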
9. Create /root/k8s-ceph-rbac/ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage-class
provisioner: ceph.com/rbd
parameters:
  #monitors: 192.168.137.10:6789
  monitors: ceph-mon-1.default.svc.cluster.local.:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: k8s-volumes
  userId: admin
  userSecretName: ceph-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: layering
Note: do not put a raw IP in monitors above, or later PVC creation will fail with "missing Ceph monitors". In the provisioner's source code the monitors value is resolved through k8s DNS; since I am using an external Ceph cluster, no such record exists, so the resolution is added manually as in step 10.
10. Create /root/k8s-ceph-rbac/rbd-monitors-dns.yaml
kind: Service
apiVersion: v1
metadata:
  name: ceph-mon-1
spec:
  type: ExternalName
  externalName: 192.168.137.10.xip.io
The Ceph mon address is 192.168.137.10:6789; the xip.io wildcard DNS service resolves 192.168.137.10.xip.io back to 192.168.137.10, so the ExternalName Service gives the provisioner a resolvable name that points at the mon.
11. Run the following command to apply all of the YAML files created in the steps above:
kubectl apply -f k8s-ceph-rbac/
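A few optional checks that everything came up as expected (the label and resource names below come from the manifests in steps 2-10); the provisioner pod should be Running before you move on to the PVC test:
kubectl get pods -l app=rbd-provisioner
kubectl get storageclass ceph-storage-class
kubectl get svc ceph-mon-1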
12. Test that dynamic provisioning works
1) Create test-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-storage-class
  resources:
    requests:
      storage: 1Gi
kubectl apply -f test-pvc.yaml
If the status shows Bound, the PVC was provisioned correctly.
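A minimal way to check, using the PVC name from test-pvc.yaml and the pool created in step I (run the rbd command on a node where ceph-common is configured):
kubectl get pvc test-pvc     # STATUS should be Bound
kubectl get pv               # the dynamically created PV should be listed
rbd ls k8s-volumes           # the backing RBD image should appear in the pool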
Further reading: https://github.com/kubernetes-incubator/external-storage