Ceph 14.2.5: Using Ceph Storage with K8S in Practice -- <6>


Using Ceph Storage with K8S

Overview of PV and PVC

Managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. To this end, two API resources were introduced: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plugins like Volumes, but they have a lifecycle independent of any individual pod that uses the PV. This API object captures the implementation details of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod: pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access mode.

While PersistentVolumeClaims let users consume abstract storage resources, users commonly need PersistentVolumes with differing properties, such as performance, for different problems. Administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access mode, without exposing users to the details of how those volumes are implemented. For these needs there is the StorageClass resource.

A StorageClass gives administrators a way to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, backup policies, or arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what the classes represent. This concept is sometimes called a "profile" in other storage systems.

Dynamic provisioning for Pods

Dynamic provisioning automatically creates PVs for you: request however much space you need, and a PV of that size is created. Kubernetes creates the PV itself; when you create a PVC, the API calls out to the storage class, which provisions a matching PV.

With static provisioning, by contrast, we have to create PVs by hand; if resources are insufficient and no suitable PV can be found, the pod sits in a Pending state. Dynamic provisioning is implemented mainly by the StorageClass object: it declares which storage backend to use, handles the connection, and creates the PV for you automatically. The sketch below shows the manual, static alternative.
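A minimal sketch of a statically provisioned RBD volume, for contrast. The PV name and image name here are hypothetical; the kube pool, kube user, and ceph-user-secret are the ones created in the RBD section that follows, and the image must be created by hand first (rbd create kube/static-image --size 1024). A PVC requesting a matching size and access mode will then bind to it:

cat >static-pv-example.yaml<<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-rbd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  rbd:
    monitors:
    - 192.168.25.224:6789
    pool: kube
    image: static-image
    user: kube
    secretRef:
      name: ceph-user-secret
    fsType: ext4
EOF
kubectl apply -f static-pv-example.yaml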

Using RBD as a persistent data volume for Pods

Installation and configuration

RBD supports the ReadWriteOnce and ReadOnlyMany access modes.
1. Deploy the rbd-provisioner

cat >external-storage-rbd-provisioner.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: rbd-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:v2.0.0-k8s1.11"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner
EOF
kubectl apply -f external-storage-rbd-provisioner.yaml

2. Prepare Ceph access for Kubernetes (client, pool, user, secrets)

1. When a pod is created, the kubelet uses the rbd command to detect and map the Ceph image backing the PV, so the Ceph client (ceph-common) must be installed on every worker node.
Also copy ceph.client.admin.keyring and ceph.conf from the Ceph cluster into /etc/ceph on the master:
yum -y install ceph-common
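A quick sanity check that the client can actually reach the cluster (assuming the keyring and ceph.conf were copied as described above):

ceph -s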

2. Create an OSD pool (on a Ceph mon or admin node)
ceph osd pool create kube 128 128 
ceph osd pool ls
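On Nautilus (14.x), a pool should also be tagged with the application that will use it, otherwise the cluster reports an "application not enabled on pool" health warning. An extra step worth running, using the pool name above:

ceph osd pool application enable kube rbd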

3. Create a user for k8s to access Ceph (on a Ceph mon or admin node)
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring

4. View the keys (on a Ceph mon or admin node)
ceph auth get-key client.admin
AQCrBwteAI7TOhAAzFgRZO0MK/da2AFn5EddqA==
ceph auth get-key client.kube
AQDlPwxeT1MfBhAAB66MV550XcNcVfMq9dsnZQ==

5. Create the admin secret
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
--from-literal=key=AQCrBwteAI7TOhAAzFgRZO0MK/da2AFn5EddqA== \
--namespace=kube-system

6. Create the user secret in the default namespace so PVCs there can access Ceph
kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
--from-literal=key=AQDlPwxeT1MfBhAAB66MV550XcNcVfMq9dsnZQ== \
--namespace=default
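If kubectl runs on a host that also has Ceph admin access, the keys can be piped straight from Ceph instead of copy-pasted, for example:

kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
--from-literal=key=$(ceph auth get-key client.kube) \
--namespace=default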

3. Configure the StorageClass

cat >storageclass-ceph-rdb.yaml<<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-ceph-rdb
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.25.224:6789,192.168.25.227:6789,192.168.25.228:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
EOF

4. Apply the manifest

kubectl apply -f storageclass-ceph-rdb.yaml

5. Check the StorageClass

kubectl get storageclasses
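Optionally, annotate it as the cluster's default StorageClass so that PVCs without an explicit storageClassName use it (not required for the tests below):

kubectl patch storageclass dynamic-ceph-rdb -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'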

Testing

1. Create a test PVC

cat >ceph-rdb-pvc-test.yaml<<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-rdb-claim
spec:
  accessModes:     
    - ReadWriteOnce
  storageClassName: dynamic-ceph-rdb
  resources:
    requests:
      storage: 2Gi
EOF
kubectl apply -f ceph-rdb-pvc-test.yaml

2. Verify

kubectl get pvc
kubectl get pv
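The dynamically provisioned image can also be verified on the Ceph side; its name will match the kubernetes-dynamic-pvc-... entry seen in the provisioner logs later:

rbd ls kube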

3. Create an nginx pod to mount the PVC

cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod1
  labels:
    name: nginx-pod1
spec:
  containers:
  - name: nginx-pod1
    image: nginx:alpine
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: ceph-rdb
      mountPath: /usr/share/nginx/html
  volumes:
  - name: ceph-rdb
    persistentVolumeClaim:
      claimName: ceph-rdb-claim
EOF
kubectl apply -f nginx-pod.yaml

4. Verify

kubectl get pods -o wide
Check the logs of the rbd-provisioner pod. The transient dns lookup errors at the start are retried automatically; the ProvisioningSucceeded event at the end confirms the volume was created.
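One way to fetch the logs, assuming the Deployment name from the manifest above:

kubectl -n kube-system logs deploy/rbd-provisioner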

E0101 09:29:53.509202       1 provision.go:232] dns lookup of "192.168.25.224" failed: err read udp 10.243.169.134:57984->10.0.0.2:53: i/o timeout
E0101 09:29:55.509858       1 provision.go:232] dns lookup of "192.168.25.227" failed: err read udp 10.243.169.134:38583->10.0.0.2:53: i/o timeout
I0101 09:29:57.252279       1 provision.go:132] successfully created rbd image "kubernetes-dynamic-pvc-4568768d-2c79-11ea-b8d3-367aebbc365a"
I0101 09:29:57.252307       1 controller.go:1043] volume "pvc-aa54a7a6-599b-4057-b9d3-eedc148c2604" for claim "default/ceph-rdb-claim" created
I0101 09:29:57.260381       1 controller.go:1060] volume "pvc-aa54a7a6-599b-4057-b9d3-eedc148c2604" for claim "default/ceph-rdb-claim" saved
I0101 09:29:57.260408       1 controller.go:1096] volume "pvc-aa54a7a6-599b-4057-b9d3-eedc148c2604" provisioned for claim "default/ceph-rdb-claim"
I0101 09:29:57.260793       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"ceph-rdb-claim", UID:"aa54a7a6-599b-4057-b9d3-eedc148c2604", APIVersion:"v1", ResourceVersion:"2729557", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-aa54a7a6-599b-4057-b9d3-eedc148c2604

5. Write content into the mounted volume

kubectl exec -ti nginx-pod1 -- /bin/sh -c 'echo this is from Ceph RBD!!! > /usr/share/nginx/html/index.html'

6. Access test

curl http://$podip
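Here $podip is the pod IP reported by kubectl get pods -o wide; it can also be captured programmatically (a small convenience, using the pod name above):

podip=$(kubectl get pod nginx-pod1 -o jsonpath='{.status.podIP}')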

7. Clean up

kubectl delete -f nginx-pod.yaml
kubectl delete -f ceph-rdb-pvc-test.yaml

Using CephFS as a persistent data volume for Pods

CephFS supports all three k8s PV access modes: ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.

Create the CephFS pools on the Ceph side

1. Run the following on a Ceph mon or admin node.
CephFS needs two pools, one for data and one for metadata:

ceph osd pool create fs_data 128
ceph osd pool create fs_metadata 128
ceph osd lspools

2. Create a CephFS filesystem

ceph fs new cephfs fs_metadata fs_data

3. Verify

ceph fs ls
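The filesystem also needs an active MDS before clients can mount it; a quick check, assuming an MDS daemon has already been deployed in this cluster:

ceph mds stat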

Deploy cephfs-provisioner

1. Use the community-provided cephfs-provisioner

cat >external-storage-cephfs-provisioner.yaml<<EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin 
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.admin | base64
  key: QVFDckJ3dGVBSTdUT2hBQXpGZ1JaTzBNSy9kYTJBRm41RWRkcUE9PQ== 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
- kind: ServiceAccount
  name: cephfs-provisioner
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["policy"]
    resourceNames: ["cephfs-provisioner"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1 
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: kube-system
spec:
  selector: 
    matchLabels:
      app: cephfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        image: "quay.io/external_storage/cephfs-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        - name: PROVISIONER_SECRET_NAMESPACE
          value: kube-system
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
        - "-disable-ceph-namespace-isolation=true"
      serviceAccount: cephfs-provisioner
EOF
kubectl apply -f external-storage-cephfs-provisioner.yaml

2. Check the status; wait until the pod is Running before moving on

kubectl get pod -n kube-system
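To narrow the listing down to the provisioner itself, filter on the app label from the Deployment above:

kubectl get pod -n kube-system -l app=cephfs-provisioner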

Configure the StorageClass

1. View the key (on a Ceph mon or admin node)

ceph auth get-key client.admin

2. Create the admin secret (this Secret is already included in external-storage-cephfs-provisioner.yaml above, so applying that manifest created it; the definition is repeated here for reference):

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin 
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.admin | base64
  key: QVFDckJ3dGVBSTdUT2hBQXpGZ1JaTzBNSy9kYTJBRm41RWRkcUE9PQ== 
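Equivalently, the secret can be created imperatively, piping the key straight from Ceph (kubectl base64-encodes --from-literal values automatically; this assumes kubectl and Ceph admin access on the same host):

kubectl create secret generic ceph-secret-admin --type="kubernetes.io/rbd" \
--from-literal=key=$(ceph auth get-key client.admin) \
--namespace=kube-system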

3. View the secret

kubectl get secret ceph-secret-admin -n kube-system -o yaml

4. Configure the StorageClass

cat >storageclass-cephfs.yaml<<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
reclaimPolicy: Retain # reclaim policy
parameters:
    monitors: 192.168.25.224:6789,192.168.25.227:6789,192.168.25.228:6789 # Ceph mon addresses
    adminId: admin
    adminSecretName: ceph-secret-admin
    adminSecretNamespace: "kube-system"
    claimRoot: /pvc-volumes
EOF

5. Apply the manifest

kubectl apply -f storageclass-cephfs.yaml

6. Verify

kubectl get sc

Testing

1. Create a test PVC

cat >cephfs-pvc-test.yaml<<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim1
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
EOF
kubectl apply -f cephfs-pvc-test.yaml

2. Verify

kubectl get pvc
kubectl get pv
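On the Ceph side, the provisioner creates one directory per volume under the claimRoot configured above (/pvc-volumes). One way to eyeball this is to mount CephFS with the kernel client; a sketch, assuming the admin key and the first mon address (passing the secret on the command line is fine for a quick test, though a secretfile is preferable in practice):

mkdir -p /mnt/cephfs
mount -t ceph 192.168.25.224:6789:/ /mnt/cephfs -o name=admin,secret=$(ceph auth get-key client.admin)
ls /mnt/cephfs/pvc-volumes
umount /mnt/cephfs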

3. Create an nginx pod to mount the PVC

cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod1
  labels:
    name: nginx-pod1
spec:
  containers:
  - name: nginx-pod1
    image: nginx:alpine
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: cephfs
      mountPath: /usr/share/nginx/html
  volumes:
  - name: cephfs
    persistentVolumeClaim:
      claimName: cephfs-claim1
EOF
kubectl apply -f nginx-pod.yaml

4. Verify

kubectl get pods -o wide

5. Write content into the mounted volume

kubectl exec -ti nginx-pod1 -- /bin/sh -c 'echo This is from CephFS!!! > /usr/share/nginx/html/index.html'

6. Access the pod to test

curl http://$podip
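Since CephFS supports ReadWriteMany, a second pod can mount the same claim concurrently and serve the same content; a quick sketch (the pod name nginx-pod2 is hypothetical; delete it with kubectl delete -f nginx-pod2.yaml during cleanup):

cat >nginx-pod2.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod2
spec:
  containers:
  - name: nginx-pod2
    image: nginx:alpine
    volumeMounts:
    - name: cephfs
      mountPath: /usr/share/nginx/html
  volumes:
  - name: cephfs
    persistentVolumeClaim:
      claimName: cephfs-claim1
EOF
kubectl apply -f nginx-pod2.yaml
curl http://$(kubectl get pod nginx-pod2 -o jsonpath='{.status.podIP}')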

7. Clean up

kubectl delete -f nginx-pod.yaml
kubectl delete -f cephfs-pvc-test.yaml

Troubleshooting:

Reference 1: https://blog.51cto.com/ygqygq2/2163656

Reference 2: https://blog.51cto.com/juestnow/2408267 (recommended)

