Using Ceph RBD as Backend Storage for Kubernetes



There are three main kinds of storage in Kubernetes: volumes, persistent volumes, and dynamic volume provisioning.

  • Volume: a component mounted directly into a pod; every other storage component in Kubernetes connects to pods through a volume. A volume has a type attribute that determines what storage gets mounted, e.g. emptyDir, hostPath, nfs, rbd, and the persistentVolumeClaim type discussed below. Unlike Docker, where a volume's lifecycle is tied tightly to the container, the lifecycle here depends on the type: an emptyDir volume behaves like Docker's and disappears when the pod dies, while the other types provide persistent storage. See the Volumes documentation for details.
  • Persistent Volumes: as the name suggests, this component supports persistent storage. It abstracts both the backend storage provider (the volume type above) and the consumer (the specific pod that uses it) through two concepts: PersistentVolume and PersistentVolumeClaim. A PersistentVolume (PV) is a piece of storage provided by the backend; for Ceph RBD, that is an image. A PersistentVolumeClaim (PVC) can be seen as a user's request for a PV: the PVC binds to some PV, and a pod then mounts the PVC in its volumes, thereby mounting the bound PV. For more details, such as the PV and PVC lifecycle, see the Persistent Volumes documentation.
  • Dynamic Volume Provisioning: with plain Persistent Volumes we must first create a block of storage (for Ceph, an image), bind that image to a PV, and only then use it. This static binding is rigid: every storage request means asking the storage provider for another block by hand. Dynamic Volume Provisioning solves this by introducing the StorageClass concept: a StorageClass abstracts the storage provider, so a PVC only needs to name a StorageClass and a size, and the provider creates the block on demand. You can even designate a default StorageClass, in which case creating a PVC alone is enough (a minimal sketch follows this list).
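As a minimal sketch of dynamic provisioning (not part of this post's setup; it assumes the in-tree kubernetes.io/rbd provisioner plus the ceph-secret, monitor, and pool used later in this post; on older clusters the API group is storage.k8s.io/v1beta1 and the PVC selects the class via the volume.beta.kubernetes.io/storage-class annotation instead of storageClassName):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd-dynamic
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.16.143.121:6789
  pool: rbd_data
  adminId: admin
  adminSecretName: ceph-secret
  userId: admin
  userSecretName: ceph-secret
---
# A PVC then only names the class and a size; the provisioner creates the image:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamic-pvc
spec:
  storageClassName: rbd-dynamic
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi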

PV access modes

  • ReadWriteOnce: read-write, but mountable by only a single node.
  • ReadOnlyMany: read-only; can be mounted read-only by many nodes.
  • ReadWriteMany: read-write; can be shared read-write by many nodes.

PV reclaim policies

  • Retain – manual reclamation
  • Recycle – basic scrub (“rm -rf /thevolume/*”)
  • Delete – delete the associated backend storage volume as well, e.g. AWS EBS, GCE PD, or OpenStack Cinder

On the CLI, the access modes are abbreviated as:

  • RWO – ReadWriteOnce
  • ROX – ReadOnlyMany
  • RWX – ReadWriteMany

As currently defined, all three modes operate at the node level. For a PV with RWO, the volume can be mounted on only one Kubernetes worker node (hereafter: node); trying to mount it on a second node fails with a Multi-Attach error. (With only one schedulable node, even an RWO volume can be used by several pods at once, but outside development and testing, who would run that way?) With RWX, the volume can be mounted on several nodes simultaneously and used by different pods.

The official support matrix is here:

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes

(Figure: the access-mode support table from the Kubernetes documentation.)

So in our setup, an RBD volume cannot be used across multiple nodes.

Workarounds (both are compromises):

Method 1: use a nodeSelector label to pin the pod to a designated node (which makes it single-node; a sketch follows).
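
A minimal sketch of method 1 (the disktype=rbd label is made up for illustration): label the target node, then pin the pod template to it.

kubectl label node k8s-node1 disktype=rbd

Then add to the Deployment's pod template spec:

      nodeSelector:
        disktype: rbd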

Method 2: skip Kubernetes and map the RBD image directly on a single Linux host (also single-node).

Getting started with RBD

Every k8s node needs the Ceph client package ceph-common installed before it can use Ceph.

[root@k8s-node1 yum.repos.d]# cd /etc/yum.repos.d
[root@k8s-node1 yum.repos.d]# pwd
/etc/yum.repos.d
# The Ceph repo here should match the one used by the Ceph cluster, so the client version stays consistent
[root@k8s-node1 yum.repos.d]# vim ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
# Install ceph-common
[root@k8s-node1 yum.repos.d]# yum -y install ceph-common

Note: dependency errors

# If yum reports these errors while installing ceph-common, install epel-release first
[root@k8s-node1 yum.repos.d]# yum -y install ceph-common
...
---> Package python-six.noarch 0:1.9.0-2.el7 will be installed
--> Finished Dependency Resolution
Error: Package: 2:ceph-common-12.2.12-0.el7.x86_64 (Ceph)
           Requires: libleveldb.so.1()(64bit)
Error: Package: 2:librados2-12.2.12-0.el7.x86_64 (Ceph)
           Requires: liblttng-ust.so.0()(64bit)
Error: Package: 2:ceph-common-12.2.12-0.el7.x86_64 (Ceph)
           Requires: libbabeltrace-ctf.so.1()(64bit)
Error: Package: 2:ceph-common-12.2.12-0.el7.x86_64 (Ceph)
           Requires: libbabeltrace.so.1()(64bit)
Error: Package: 2:librbd1-12.2.12-0.el7.x86_64 (Ceph)
           Requires: liblttng-ust.so.0()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
# Install epel-release
[root@k8s-node1 yum.repos.d]# yum install epel-release
# Now ceph-common installs cleanly
[root@k8s-node1 yum.repos.d]# yum -y install ceph-common
[root@k8s-node1 yum.repos.d]# ceph --version
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)

Every k8s node must have the Ceph client installed, otherwise mounting will fail.

Install it on k8s-master as well:

[root@k8s-master yum.repos.d]# cd /etc/yum.repos.d
[root@k8s-master yum.repos.d]# yum -y install ceph-common
[root@k8s-master yum.repos.d]# ceph --version
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)

Ceph configuration (performed on the ceph-admin node)

# Create the storage pool rbd_data (our test environment already has it, so this step is skipped)
[cephfsd@ceph-admin ceph]$ ceph osd pool create rbd_data 64 64
# Create the RBD image (the image format was already set in our earlier config, so it is not specified explicitly here)
[cephfsd@ceph-admin ceph]$ rbd create rbd_data/filecenter_image --size=10G
# Map the RBD image
[cephfsd@ceph-admin ceph]$ sudo rbd map rbd_data/filecenter_image
/dev/rbd4
# Inspect the image
[cephfsd@ceph-admin ceph]$ rbd info rbd_data/filecenter_image
rbd image 'filecenter_image':
    size 10GiB in 2560 objects
    order 22 (4MiB objects)
    block_name_prefix: rbd_data.376b6b8b4567
    format: 2
    features: layering
    flags:
    create_timestamp: Sat Dec  7 17:37:41 2019
[cephfsd@ceph-admin ceph]$
# Create the secret:
# Since cephx authentication is enabled on the cluster, a Secret resource must be created before the PV; Kubernetes stores Secret data base64-encoded (encoded, not encrypted)
# Extract the key on a Ceph monitor:
# Generate the base64-encoded key
[cephfsd@ceph-admin ceph]$ cat ceph.client.admin.keyring
[client.admin]
    key = AQBIH+ld1okAJhAAmULVJM4zCCVAK/Vdi3Tz5Q==
[cephfsd@ceph-admin ceph]$ ceph auth get-key client.admin | base64
QVFCSUgrbGQxb2tBSmhBQW1VTFZKTTR6Q0NWQUsvVmRpM1R6NVE9PQ==

Create the Ceph secret on k8s-master

# Create a directory to hold the yaml files; the path is up to you
[root@k8s-master yum.repos.d]# mkdir /root/k8s/nmp/k1/ceph
[root@k8s-master yum.repos.d]# cd /root/k8s/nmp/k1/ceph/
[root@k8s-master ceph]# pwd
/root/k8s/nmp/k1/ceph
# Create the secret
[root@k8s-master ceph]# vim ceph-secret.yaml
[root@k8s-master ceph]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFDTTlXOWFOMk9IR3hBQXZyUjFjdGJDSFpoZUtmckY0N2tZOUE9PQ==
[root@k8s-master ceph]# kubectl create -f ceph-secret.yaml
secret "ceph-secret" created
[root@k8s-master ceph]# kubectl get secret
NAME          TYPE                DATA      AGE
ceph-secret   kubernetes.io/rbd   1         7s
[root@k8s-master ceph]#
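
As an aside (a sketch, not one of the original steps): reasonably recent kubectl versions can build the Secret directly from the raw key and handle the base64 encoding themselves:

# Paste the raw key from `ceph auth get-key client.admin`, not the base64 output;
# the API server stores Secret data base64-encoded automatically.
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key='AQBIH+ld1okAJhAAmULVJM4zCCVAK/Vdi3Tz5Q=='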

Create the PV

[root@k8s-master ceph]# vim filecenter-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filecenter-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 172.16.143.121:6789
    pool: rbd_data
    image: filecenter_image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: xfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
[root@k8s-master ceph]# kubectl create -f filecenter-pv.yaml
persistentvolume "filecenter-pv" created
[root@k8s-master ceph]# kubectl get pv -o wide
NAME            CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
filecenter-pv   1Gi        RWO           Recycle         Available                       18m

Create the PVC

[root@k8s-master ceph]# vim filecenter-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: filecenter-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
[root@k8s-master ceph]# kubectl create -f filecenter-pvc.yaml
persistentvolumeclaim "filecenter-pvc" created
[root@k8s-master ceph]# kubectl get pvc -o wide
NAME             STATUS    VOLUME          CAPACITY   ACCESSMODES   AGE
filecenter-pvc   Bound     filecenter-pv   1Gi        RWO           6s
[root@k8s-master ceph]#

Create a Deployment that mounts the PVC

Here we modify the php-filecenter-deployment.yaml that is already in use:

[root@k8s-master ceph]# vim ../php/file-center/php-filecenter-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
    name: php-filecenter-deployment
spec:
    replicas: 1
    selector:
        matchLabels:
            app: php-filecenter
    template:
        metadata:
            labels:
                app: php-filecenter
        spec:
            containers:
                - name: php-filecenter
                  image: 172.16.143.107:5000/php-fpm:v2019120205
                  volumeMounts:
                    - mountPath: "/mnt"
                      name: filedata
            volumes:
              - name: filedata
                persistentVolumeClaim:
                  claimName: filecenter-pvc
[root@k8s-master ceph]# kubectl apply -f ../php/file-center/php-filecenter-deployment.yaml
deployment "php-filecenter-deployment" configured
[root@k8s-master ceph]#

Error: the PVC failed to bind, so pod creation failed

# The PVC did not bind, which made pod creation fail
[root@k8s-master ceph]# kubectl exec -it php-filecenter-deployment-3316474311-g1jmg bash
Error from server (BadRequest): pod php-filecenter-deployment-3316474311-g1jmg does not have a host assigned
[root@k8s-master ceph]# kubectl describe pod php-filecenter-deployment-3316474311-g1jmg
Name:        php-filecenter-deployment-3316474311-g1jmg
Namespace:    default
Node:        /
Labels:        app=php-filecenter
        pod-template-hash=3316474311
Status:        Pending
IP:       
Controllers:    ReplicaSet/php-filecenter-deployment-3316474311
Containers:
  php-filecenter:
    Image:    172.16.143.107:5000/php-fpm:v2019120205
    Port:   
    Volume Mounts:
      /mnt from filedata (rw)
    Environment Variables:    <none>
Conditions:
  Type        Status
  PodScheduled     False
Volumes:
  filedata:
    Type:    PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:    filecenter-pvc
    ReadOnly:    false
QoS Class:    BestEffort
Tolerations:    <none>
Events:
  FirstSeen    LastSeen    Count    From            SubObjectPath    Type        Reason            Message
  ---------    --------    -----    ----            -------------    --------    ------            -------
  8m        1m        29    {default-scheduler }            Warning        FailedScheduling    [SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "filecenter-pvc", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "filecenter-pvc", which is unexpected.]
# Check the PV: it is Available
[root@k8s-master ceph]# kubectl get pv -o wide
NAME            CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
filecenter-pv   1Gi        RWO           Recycle         Available                       39m
# Check the PVC: it is stuck in Pending, so something is wrong
[root@k8s-master ceph]# kubectl get pvc -o wide
NAME             STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
filecenter-pvc   Pending                                      35m
[root@k8s-master ceph]# kubectl get pod php-filecenter-deployment-3316474311-g1jmg
NAME                                         READY     STATUS    RESTARTS   AGE
php-filecenter-deployment-3316474311-g1jmg   0/1       Pending   0          9m
[root@k8s-master ceph]#
[root@k8s-master ceph]# kubectl describe pv filecenter-pv
Name:        filecenter-pv
Labels:        <none>
StorageClass:   
Status:        Available
Claim:       
Reclaim Policy:    Recycle
Access Modes:    RWO
Capacity:    1Gi
Message:   
Source:
    Type:        RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:    [172.16.143.121:6789]
    RBDImage:        filecenter_image
    FSType:        xfs
    RBDPool:        rbd_data
    RadosUser:        admin
    Keyring:        /etc/ceph/keyring
    SecretRef:        &{ceph-secret}
    ReadOnly:        false
No events.
[root@k8s-master ceph]# kubectl describe pvc filecenter-pvc
Name:        filecenter-pvc
Namespace:    default
StorageClass:   
Status:        Pending
Volume:       
Labels:        <none>
Capacity:   
Access Modes:   
Events:
  FirstSeen    LastSeen    Count    From                SubObjectPath    Type        Reason        Message
  ---------    --------    -----    ----                -------------    --------    ------        -------
  48m        5s        196    {persistentvolume-controller }            Normal        FailedBinding    no persistent volumes available for this claim and no storage class is set
[root@k8s-master ceph]#
# This means the PV and PVC failed to bind. If no match labels or similar selectors are involved, then either the capacities do not match or the access modes do not match.
# Checking here: the PV was defined with 1G while the PVC requested 10G, so the sizes did not match and the PVC could not bind.
# Edit the PVC config
[root@k8s-master ceph]# vim filecenter-pvc.yaml
# Re-apply the config
[root@k8s-master ceph]# kubectl apply -f filecenter-pvc.yaml
The PersistentVolumeClaim "filecenter-pvc" is invalid: spec: Forbidden: field is immutable after creation
# Delete the PVC and recreate it
[root@k8s-master ceph]# kubectl delete -f filecenter-pvc.yaml
persistentvolumeclaim "filecenter-pvc" deleted
# Double-check that the two configs now match
[root@k8s-master ceph]# cat filecenter-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: filecenter-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
[root@k8s-master ceph]# cat filecenter-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filecenter-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 172.16.143.121:6789
    pool: rbd_data
    image: filecenter_image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: xfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
# Recreate the PVC
[root@k8s-master ceph]# kubectl create -f filecenter-pvc.yaml
persistentvolumeclaim "filecenter-pvc" created
[root@k8s-master ceph]# kubectl get pvc -o wide
NAME             STATUS    VOLUME          CAPACITY   ACCESSMODES   AGE
filecenter-pvc   Bound     filecenter-pv   1Gi        RWO           6s
[root@k8s-master ceph]#
# The PVC is now Bound: binding succeeded.

rbd map failure: the pod cannot attach the RBD volume

[root@k8s-master ceph]# kubectl apply -f ../php/file-center/php-filecenter-deployment.yaml
service "php-filecenter-service" configured
deployment "php-filecenter-deployment" configured
[root@k8s-master ceph]# kubectl get pod php-filecenter-deployment-3316474311-g1jmg
NAME                                         READY     STATUS              RESTARTS   AGE
php-filecenter-deployment-3316474311-g1jmg   0/1       ContainerCreating   0          41m
[root@k8s-master ceph]# kubectl logs -f php-filecenter-deployment-3316474311-g1jmg
Error from server (BadRequest): container "php-filecenter" in pod "php-filecenter-deployment-3316474311-g1jmg" is waiting to start: ContainerCreating
[root@k8s-master ceph]# kubectl describe pod php-filecenter-deployment-3316474311-g1jmg
Name:        php-filecenter-deployment-3316474311-g1jmg
Namespace:    default
Node:        k8s-node1/172.16.143.108
Start Time:    Sat, 07 Dec 2019 18:52:30 +0800
Labels:        app=php-filecenter
        pod-template-hash=3316474311
Status:        Pending
IP:       
Controllers:    ReplicaSet/php-filecenter-deployment-3316474311
Containers:
  php-filecenter:
    Container ID:   
    Image:        172.16.143.107:5000/php-fpm:v2019120205
    Image ID:       
    Port:       
    State:        Waiting
      Reason:        ContainerCreating
    Ready:        False
    Restart Count:    0
    Volume Mounts:
      /mnt from filedata (rw)
    Environment Variables:    <none>
Conditions:
  Type        Status
  Initialized     True
  Ready     False
  PodScheduled     True
Volumes:
  filedata:
    Type:    PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:    filecenter-pvc
    ReadOnly:    false
QoS Class:    BestEffort
Tolerations:    <none>
Events:
  FirstSeen    LastSeen    Count    From            SubObjectPath    Type        Reason            Message
  ---------    --------    -----    ----            -------------    --------    ------            -------
  41m        4m        133    {default-scheduler }            Warning        FailedScheduling    [SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "filecenter-pvc", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "filecenter-pvc", which is unexpected.]
  3m        3m        1    {default-scheduler }            Normal        Scheduled        Successfully assigned php-filecenter-deployment-3316474311-g1jmg to k8s-node1
  1m        1m        1    {kubelet k8s-node1}            Warning        FailedMount        Unable to mount volumes for pod "php-filecenter-deployment-3316474311-g1jmg_default(5d5d48d9-18da-11ea-8c36-000c29fc3a73)": timeout expired waiting for volumes to attach/mount for pod "default"/"php-filecenter-deployment-3316474311-g1jmg". list of unattached/unmounted volumes=[filedata]
  1m        1m        1    {kubelet k8s-node1}            Warning        FailedSync        Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"php-filecenter-deployment-3316474311-g1jmg". list of unattached/unmounted volumes=[filedata]
  1m        1m        1    {kubelet k8s-node1}            Warning        FailedMount        MountVolume.SetUp failed for volume "kubernetes.io/rbd/5d5d48d9-18da-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "5d5d48d9-18da-11ea-8c36-000c29fc3a73" (UID: "5d5d48d9-18da-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 2019-12-07 18:54:44.268690 7f2f99d00d40 -1 did not load config file, using default settings.
2019-12-07 18:54:44.271606 7f2f99d00d40 -1 Errors while parsing config file!
2019-12-07 18:54:44.271610 7f2f99d00d40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-12-07 18:54:44.271610 7f2f99d00d40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2019-12-07 18:54:44.271610 7f2f99d00d40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-12-07 18:54:44.272599 7f2f99d00d40 -1 Errors while parsing config file!
2019-12-07 18:54:44.272603 7f2f99d00d40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-12-07 18:54:44.272603 7f2f99d00d40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2019-12-07 18:54:44.272604 7f2f99d00d40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-12-07 18:54:44.291155 7f2f99d00d40 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
rbd: sysfs write failed
2019-12-07 18:54:44.297026 7f2f99d00d40 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-12-07 18:54:44.298627 7f2f99d00d40  0 librados: client.admin authentication error (1) Operation not permitted
rbd: couldn't connect to the cluster!
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (1) Operation not permitted
# The errors above complain that ceph.conf and ceph.client.admin.keyring are missing, so copy them over from the ceph-admin node
[root@k8s-master ceph]# rz
 
[root@k8s-master ceph]# ls
ceph.client.admin.keyring  ceph.conf  rbdmap
# Copy them to every k8s node; the worker nodes too
[root@k8s-node1 ceph]# rz
 
[root@k8s-node1 ceph]# ls
ceph.client.admin.keyring  ceph.conf  rbdmap
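# Sketch (not from the original session): instead of rz, the two files can be
# copied straight from the ceph-admin node, assuming SSH access and these paths:
#   scp cephfsd@ceph-admin:/etc/ceph/ceph.conf /etc/ceph/
#   scp cephfsd@ceph-admin:/etc/ceph/ceph.client.admin.keyring /etc/ceph/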
# Delete the pod so the volume gets remounted
[root@k8s-master ceph]# kubectl delete pod php-filecenter-deployment-3316474311-g1jmg
pod "php-filecenter-deployment-3316474311-g1jmg" deleted
[root@k8s-master ceph]# kubectl describe pod php-filecenter-deployment-3316474311-jr48g
Name:        php-filecenter-deployment-3316474311-jr48g
Namespace:    default
Node:        k8s-master/172.16.143.107
Start Time:    Mon, 09 Dec 2019 10:01:29 +0800
Labels:        app=php-filecenter
        pod-template-hash=3316474311
Status:        Pending
IP:       
Controllers:    ReplicaSet/php-filecenter-deployment-3316474311
Containers:
  php-filecenter:
    Container ID:   
    Image:        172.16.143.107:5000/php-fpm:v2019120205
    Image ID:       
    Port:       
    State:        Waiting
      Reason:        ContainerCreating
    Ready:        False
    Restart Count:    0
    Volume Mounts:
      /mnt from filedata (rw)
    Environment Variables:    <none>
Conditions:
  Type        Status
  Initialized     True
  Ready     False
  PodScheduled     True
Volumes:
  filedata:
    Type:    PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:    filecenter-pvc
    ReadOnly:    false
QoS Class:    BestEffort
Tolerations:    <none>
Events:
  FirstSeen    LastSeen    Count    From            SubObjectPath    Type        Reason        Message
  ---------    --------    -----    ----            -------------    --------    ------        -------
  10s        10s        1    {default-scheduler }            Normal        Scheduled    Successfully assigned php-filecenter-deployment-3316474311-jr48g to k8s-master
  8s        8s        1    {kubelet k8s-master}            Warning        FailedMount    MountVolume.SetUp failed for volume "kubernetes.io/rbd/d05aa080-1a27-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "d05aa080-1a27-11ea-8c36-000c29fc3a73" (UID: "d05aa080-1a27-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 rbd: sysfs write failed
2019-12-09 10:01:30.443054 7f96b803fd40  0 librados: client.admin authentication error (1) Operation not permitted
rbd: couldn't connect to the cluster!
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (1) Operation not permitted
 
  6s    6s    1    {kubelet k8s-master}        Warning    FailedMount    MountVolume.SetUp failed for volume "kubernetes.io/rbd/d05aa080-1a27-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "d05aa080-1a27-11ea-8c36-000c29fc3a73" (UID: "d05aa080-1a27-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 rbd: sysfs write failed
2019-12-09 10:01:32.022514 7fb376cb0d40  0 librados: client.admin authentication error (1) Operation not permitted
rbd: couldn't connect to the cluster!
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (1) Operation not permitted
 
  4s    4s    1    {kubelet k8s-master}        Warning    FailedMount    MountVolume.SetUp failed for volume "kubernetes.io/rbd/d05aa080-1a27-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "d05aa080-1a27-11ea-8c36-000c29fc3a73" (UID: "d05aa080-1a27-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 rbd: sysfs write failed
2019-12-09 10:01:34.197942 7f0282d5fd40  0 librados: client.admin authentication error (1) Operation not permitted
rbd: couldn't connect to the cluster!
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (1) Operation not permitted
 
  1s    1s    1    {kubelet k8s-master}        Warning    FailedMount    MountVolume.SetUp failed for volume "kubernetes.io/rbd/d05aa080-1a27-11ea-8c36-000c29fc3a73-filecenter-pv" (spec.Name: "filecenter-pv") pod "d05aa080-1a27-11ea-8c36-000c29fc3a73" (UID: "d05aa080-1a27-11ea-8c36-000c29fc3a73") with: rbd: map failed exit status 1 rbd: sysfs write failed
2019-12-09 10:01:37.602709 7f18facc1d40  0 librados: client.admin authentication error (1) Operation not permitted
rbd: couldn't connect to the cluster!
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (1) Operation not permitted
 
[root@k8s-master ceph]#
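
Even with the config files in place, the authentication error persists. One plausible cause is visible in the transcript itself: the key in ceph-secret.yaml (QVFDTTlX...) is not the base64 key generated on ceph-admin (QVFCSUgr...). A quick check (a sketch; run each command where indicated):

# On ceph-admin: the cluster's actual key, base64-encoded
ceph auth get-key client.admin | base64
# On k8s-master: the key stored in the Secret; the two values must match
kubectl get secret ceph-secret -o yaml | grep 'key:'
# If they differ, recreate the Secret with the matching key, then delete the pod again.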

