Using vSphere Storage with Kubernetes - vSphere-CSI


Official reference: Introduction · vSphere CSI Driver (k8s.io)

Prerequisites

1. The VMs are created on vSphere 6.7U3 or later

2. The Kubernetes node network can reach the vCenter domain name or IP

3. Official example OS image: http://partnerweb.vmware.com/GOSIG/Ubuntu_18_04_LTS.html

4. This walkthrough uses a CentOS 7.6 OS image

5. VM hardware version 15 or higher

6. Install VMware Tools in each VM; on CentOS 7: yum install -y open-vm-tools && systemctl enable vmtoolsd --now

7. Set the VM advanced parameter disk.EnableUUID=1 (this can be done with govc; see the sketch below)
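
A minimal govc sketch for step 7, assuming GOVC_URL, GOVC_USERNAME and GOVC_PASSWORD already point at the vCenter and that the VM is named k8s-node1 (both are assumptions):

# Skip TLS verification for a self-signed vCenter certificate (assumption)
export GOVC_INSECURE=1
# disk.EnableUUID only takes effect at power-on, so cycle the VM around the change
govc vm.power -off k8s-node1
govc vm.change -vm k8s-node1 -e disk.enableUUID=1
govc vm.power -on k8s-node1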

 

Deploying the Cluster with kubeadm

1. Environment

Node         IP               Roles
k8s-master   192.168.190.30   etcd, apiserver, controller-manager, scheduler
k8s-node1    192.168.190.31   kubelet, kube-proxy
k8s-node2    192.168.190.32   kubelet, kube-proxy

2. Install docker-ce-19.03, kubeadm-1.19.11, kubelet-1.19.11 and kubectl-1.19.11, and apply the usual system parameter configuration (a sketch follows below)
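
A rough sketch of step 2, run on every node; the yum repos are assumed to be configured already, and the sysctl settings below are the common kubeadm prerequisites rather than the author's exact (omitted) configuration:

# Disable swap, as required by the kubelet
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
# Allow bridged pod traffic to traverse iptables
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
# Pin the versions used in this walkthrough (assumes docker-ce and kubernetes yum repos are present)
yum install -y docker-ce-19.03.15 kubeadm-1.19.11 kubelet-1.19.11 kubectl-1.19.11
systemctl enable docker kubelet --now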

3. Generate the kubeadm configuration file. vSphere-CSI depends on vSphere-CPI as the cloud provider, and here we configure it in the external (out-of-tree) mode

# Generate the default configuration
kubeadm config print init-defaults > /etc/kubernetes/kubeadminit.yaml

# The modified configuration
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.190.30
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  kubeletExtraArgs:                                           # added: run the cloud provider in external mode
    cloud-provider: external                                  # out-of-tree cloud provider
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers      # switched to the Alibaba Cloud mirror registry
kind: ClusterConfiguration
kubernetesVersion: v1.19.11                                   # version changed to v1.19.11
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16                                    # added: pod network matching the flannel deployment below
scheduler: {}

4. Run the installation on the k8s-master node

[root@k8s-master ~]# kubeadm init --config /etc/kubernetes/kubeadminit.yaml

[init] Using Kubernetes version: v1.19.11
... ...
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
... ...
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.190.30:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:[sha sum from above output]
    
# Configure client credentials for kubectl
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp /etc/kubernetes/admin.conf $HOME/.kube/config

5. Deploy the flannel network plugin

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

6. Create the join configuration file for the worker nodes. This differs from the usual kubeadm join command line: we generate a join configuration so the cloud-provider setting can be added

# Export the cluster-info from the master and copy it to /etc/kubernetes on every worker node
[root@k8s-master ~]# kubectl -n kube-public get configmap cluster-info -o jsonpath='{.data.kubeconfig}' > discovery.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0l --- snip --- RVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.190.30:6443
  name: ""
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

# Generate the worker join configuration file and copy it to /etc/kubernetes on every worker node
[root@k8s-master ~]# tee /etc/kubernetes/kubeadminitworker.yaml >/dev/null <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  file:
    kubeConfigPath: /etc/kubernetes/discovery.yaml
  timeout: 5m0s
  tlsBootstrapToken: abcdef.0123456789abcdef
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:                       # set cloud-provider to external
    cloud-provider: external
EOF

7. Join the worker nodes; the join configuration file must be copied to each worker beforehand

# Make sure discovery.yaml and kubeadminitworker.yaml have been copied from the master into /etc/kubernetes on the worker
# Join the node
[root@k8s-node1 ~]# kubeadm join --config /etc/kubernetes/kubeadminitworker.yaml
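
To confirm the join picked up the external cloud provider, check the kubelet flags kubeadm writes on the worker (a verification sketch; /var/lib/kubelet/kubeadm-flags.env is where kubeadm records kubeletExtraArgs):

[root@k8s-node1 ~]# grep cloud-provider /var/lib/kubelet/kubeadm-flags.env
# Expect --cloud-provider=external among the KUBELET_KUBEADM_ARGS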

8. Cluster status

[root@k8s-master ~]# kubectl get node -o wide
NAME         STATUS   ROLES    AGE     VERSION    INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master   Ready    master   7h28m   v1.19.11   192.168.190.30   192.168.190.30   CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://19.3.15
k8s-node1    Ready    <none>   7h12m   v1.19.11   192.168.190.31   192.168.190.31   CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://19.3.15
k8s-node2    Ready    <none>   7h11m   v1.19.11   192.168.190.32   192.168.190.32   CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://19.3.15

# Worker node taints: once vSphere-CPI is installed successfully, node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule is removed automatically
[root@k8s-master ~]# kubectl describe nodes | egrep "Taints:|Name:"
Name:               k8s-master
Taints:             node-role.kubernetes.io/master:NoSchedule
Name:               k8s-node1
Taints:             node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
Name:               k8s-node2
Taints:             node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule

 

Deploying the vSphere Cloud Provider Interface (CPI)

1. Create the configuration file with the vCenter credentials and the datacenter to use

[root@k8s-master kubernetes]# cat vsphere.conf
[Global]
port = "443"
insecure-flag = "true"
secret-name = "cpi-engineering-secret"
secret-namespace = "kube-system"

[VirtualCenter "192.168.1.100"]
datacenters = "Datacenter"

# insecure-flag    whether to skip SSL certificate verification
# secret-name      secret holding the vCenter credentials
# secret-namespace namespace that secret lives in
# VirtualCenter    vCenter address
# datacenters      datacenter name(s)

[root@k8s-master kubernetes]# cat cpi-engineering-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: cpi-engineering-secret
  namespace: kube-system
stringData:
  192.168.1.100.username: "administrator@vsphere.local"
  192.168.1.100.password: "xxxxx"

# "vCenter地址".username vc用戶
# "vCenter地址".password vc用戶密碼
 
 
# Create a configmap from the vsphere.conf file
[root@k8s-master kubernetes]# kubectl create configmap cloud-config --from-file=vsphere.conf --namespace=kube-system
 
# Create the secret from cpi-engineering-secret.yaml
[root@k8s-master kubernetes]# kubectl create -f cpi-engineering-secret.yaml
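
A quick sanity check that both objects landed in kube-system (names as defined above):

[root@k8s-master kubernetes]# kubectl -n kube-system get configmap cloud-config
[root@k8s-master kubernetes]# kubectl -n kube-system get secret cpi-engineering-secret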

2. Install CPI

# Pre-install check: the worker nodes must still carry the taint below
[root@k8s-master ~]# kubectl describe nodes | egrep "Taints:|Name:"
Name:               k8s-master
Taints:             node-role.kubernetes.io/master:NoSchedule
Name:               k8s-node1
Taints:             node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
Name:               k8s-node2
Taints:             node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule

# Install CPI by applying the manifests below
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
clusterrole.rbac.authorization.k8s.io/system:cloud-controller-manager created

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
clusterrolebinding.rbac.authorization.k8s.io/system:cloud-controller-manager created

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/vsphere-cloud-controller-manager-ds.yaml
serviceaccount/cloud-controller-manager created
daemonset.extensions/vsphere-cloud-controller-manager created
service/vsphere-cloud-controller-manager created


# By default CPI pulls its image from gcr.io, which reliably fails from mainland China; it is recommended to proxy the image through Alibaba Cloud Container Registry's image-build feature, for example:
    image: registry.cn-hangzhou.aliyuncs.com/gisuni/manager:v1.0.1
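
One way to apply the daemonset with the mirrored image is to rewrite the image reference on the fly. A sketch, assuming the upstream manifest references gcr.io/cloud-provider-vsphere/cpi/release/manager (verify against the manifest you actually download):

[root@k8s-master ~]# curl -sL https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/vsphere-cloud-controller-manager-ds.yaml \
    | sed 's|gcr.io/cloud-provider-vsphere/cpi/release/manager|registry.cn-hangzhou.aliyuncs.com/gisuni/manager|' \
    | kubectl apply -f -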


# Status after installing CPI: the worker node taints are cleared automatically
[root@k8s-master ~]# kubectl describe nodes | egrep "Taints:|Name:"
Name:               k8s-master
Taints:             node-role.kubernetes.io/master:NoSchedule
Name:               k8s-node1
Taints:             <none>
Name:               k8s-node2
Taints:             <none>

 

Installing the vSphere Container Storage Interface (CSI)

1. Create the CSI configuration file (why can't it share the CPI configuration?)

# The CSI configuration file is stored as a secret object
[root@k8s-master kubernetes]# cat csi-vsphere.conf
[Global]
insecure-flag = "true"
cluster-id = "k8s-20210603"

[VirtualCenter "192.168.1.100"]
datacenters = "Datacenter"
user = "administrator@vsphere.local"
password = "xxxxx"
port = "443"

# cluster-id     a custom ID for this Kubernetes cluster, at most 64 characters
# VirtualCenter  vCenter address
# user           vCenter user
# password       vCenter user's password

# Create the secret from csi-vsphere.conf
[root@k8s-master kubernetes]# kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf --namespace=kube-system

2. Install CSI. For the gcr.io image problem, apply the same workaround as for CPI

# Install the RBAC rules
[root@k8s-master kubernetes]# kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.2.0/manifests/v2.2.0/rbac/vsphere-csi-controller-rbac.yaml
[root@k8s-master kubernetes]# kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.2.0/manifests/v2.2.0/rbac/vsphere-csi-node-rbac.yaml

# Install the controller and the node driver
[root@k8s-master kubernetes]# kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.2.0/manifests/v2.2.0/deploy/vsphere-csi-controller-deployment.yaml
[root@k8s-master kubernetes]# kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.2.0/manifests/v2.2.0/deploy/vsphere-csi-node-ds.yaml


# The gcr.io images used there can be replaced with the Alibaba Cloud proxy images below
          image: registry.cn-hangzhou.aliyuncs.com/gisuni/driver:v2.2.0
          image: registry.cn-hangzhou.aliyuncs.com/gisuni/syncer:v2.2.0
          image: registry.cn-hangzhou.aliyuncs.com/gisuni/driver:v2.2.0

3. Pod status after the CSI installation completes

[root@k8s-master ~]# kubectl get po -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-bj2mw                  1/1     Running   0          7h24m
etcd-k8s-master                           1/1     Running   0          8h
kube-apiserver-k8s-master                 1/1     Running   0          8h
kube-controller-manager-k8s-master        1/1     Running   0          8h
kube-flannel-ds-7rmg8                     1/1     Running   0          7h19m
kube-flannel-ds-8pm67                     1/1     Running   0          7h19m
kube-flannel-ds-dxlwg                     1/1     Running   0          7h19m
kube-proxy-7dzp5                          1/1     Running   0          8h
kube-proxy-8hzmk                          1/1     Running   0          7h46m
kube-proxy-crf9p                          1/1     Running   0          7h47m
kube-scheduler-k8s-master                 1/1     Running   0          8h
vsphere-cloud-controller-manager-fstg2    1/1     Running   0          7h4m
vsphere-csi-controller-5b68fbc4b6-xpxb5   6/6     Running   0          5h17m
vsphere-csi-node-5nml4                    3/3     Running   0          5h17m
vsphere-csi-node-jv8kh                    3/3     Running   0          5h17m
vsphere-csi-node-z6xj5                    3/3     Running   0          5h17m
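
The driver registration can also be confirmed through the CSIDriver and CSINode API objects:

[root@k8s-master ~]# kubectl get csidriver
# Expect csi.vsphere.vmware.com in the list
[root@k8s-master ~]# kubectl get csinode
# Every worker should report a driver entry for csi.vsphere.vmware.com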

  

Creating a Container Volume

1. Create a StorageClass that selects a vSAN-based VM storage policy

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-vsan-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "storage_for_k8s"  # name of the VM storage policy in vCenter
  fstype: ext4
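
After applying the manifest, the class should show up as the default, provisioned by csi.vsphere.vmware.com:

[root@k8s-master ~]# kubectl get sc
# Expect vsphere-vsan-sc (default) with PROVISIONER csi.vsphere.vmware.com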

2. Create a PVC

[root@k8s-master ~]# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
[root@k8s-master ~]# kubectl create -f pvc.yaml
persistentvolumeclaim/test created
[root@k8s-master ~]# kubectl get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
test   Bound    pvc-9408ac89-93ee-41d1-ba71-67957265b8df   4Gi        RWO            vsphere-vsan-sc   4s

3. The tasks shown in vCenter after the PVC is created

4. View the container volume under Cluster -> Monitor -> Cloud Native Storage -> Container Volumes

 

 

Mounting a Container Volume

The flow for mounting a container volume is:

1. The pod is scheduled onto a worker node, e.g. k8s-node1

2. The vSAN container volume is attached to the k8s-node1 node VM

3. On k8s-node1, the volume is formatted and mounted into the pod (see the pod sketch below)
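
A minimal pod sketch exercising this flow with the test PVC from the previous section (the busybox image and mount path are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: app
    image: busybox:1.33
    command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data            # the CSI driver formats the volume as ext4 and mounts it here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test             # the PVC created in the previous section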

Summary

1. vSphere-CSI lets a self-managed Kubernetes cluster running on vCenter use vSphere-native container volumes

2. Native vSphere container volumes perform roughly on par with virtual machine disks

3. It avoids the extra data replication overhead incurred by layering Longhorn, Rook, or OpenEBS on top

 

