K8s - Installing and Deploying a Kubernetes Cluster on CentOS 7


This article demonstrates how to build a three-node Kubernetes cluster (one master node and two worker nodes), with all three servers running CentOS 7.
 

I. Preparation (required on all three nodes)

1. Install Docker

Docker must be installed on every node; the detailed steps are covered in an earlier article of mine.

2. Install kubelet, kubeadm, and kubectl

(1) kubelet, kubeadm, and kubectl must be installed on all nodes. Their roles are:
  • kubeadm: initializes the cluster.
  • kubelet: runs on every node in the cluster and is responsible for starting Pods and containers.
  • kubectl: the Kubernetes command-line tool. With kubectl you can deploy and manage applications, inspect resources, and create, delete, and update components.
 
(2) Run the commands below to install the three tools (Google's repository is replaced with a domestic mirror here to avoid "network unreachable" errors):
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
 
# install kubelet, kubeadm, and kubectl; a specific version can be pinned
$ yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
 
 
 
# start kubelet and enable it at boot
$ systemctl enable kubelet && systemctl start kubelet
 

3. Adjust the sysctl Configuration

On RHEL/CentOS 7 systems, network requests can be routed incorrectly because iptables is bypassed. Run the following to make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration.
(1) Edit the configuration file with vi:
vi /etc/sysctl.conf

(2) Add the following lines to the file, then save and exit.
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

(3) Finally, apply the settings:
sysctl --system
 
 
 

4. Disable swap

(1) First, turn swap off for the current session:
swapoff -a
 
(2) Then edit the /etc/fstab file:
vi /etc/fstab

(3) Comment out the line /dev/mapper/centos-swap swap swap defaults 0 0 by adding a # at the start of it.
(4) Save and exit. Swap will no longer be re-enabled automatically after the machine reboots.
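The manual edit above can also be scripted. A minimal sketch, demonstrated here against a sample copy rather than the real /etc/fstab (the device name /dev/mapper/centos-swap is the CentOS default and may differ on your machine):

```shell
# Demo on a sample fstab; substitute /etc/fstab on the real host.
cat > /tmp/sample.fstab <<'EOF'
/dev/mapper/centos-root /       xfs   defaults 0 0
/dev/mapper/centos-swap swap    swap  defaults 0 0
EOF

# Comment out every active swap entry (any uncommented line whose
# mount point column is "swap").
sed -i 's|^\([^#].*[[:space:]]swap[[:space:]].*\)$|#\1|' /tmp/sample.fstab

grep swap /tmp/sample.fstab
```

After running this against the real file, `swapoff -a` handles the current session while the commented fstab entry keeps swap off across reboots.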

5. Change the Cgroup Driver

5.1 Edit daemon.json

Edit daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"] so that Docker uses the systemd cgroup driver:

[root@master ~]# more /etc/docker/daemon.json

{
"registry-mirrors": ["http://hub-mirror.c.163.com/"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
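A stray comma or unbalanced quote in daemon.json will prevent Docker from starting, so it is worth validating the JSON before restarting the daemon. A quick check, assuming python3 is available (shown against a sample file with the same contents):

```shell
# Recreate the daemon.json contents in a sample file.
cat > /tmp/daemon.json <<'EOF'
{
"registry-mirrors": ["http://hub-mirror.c.163.com/"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# json.tool exits non-zero on invalid JSON, so this only prints on success.
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```

On the real host, run the same check against /etc/docker/daemon.json before `systemctl restart docker`.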

 

5.2 Reload and restart Docker

[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
6. Configure SELinux
 
# set SELinux to permissive mode (effectively disabling it)
$ setenforce 0
$ sed -i  's/^SELINUX=enforcing$/SELINUX=permissive/'  /etc/selinux/config
 

II. Master Node Installation and Configuration

1. Initialize the Master

(1) Run the following command on the Master to initialize it:
Note: --pod-network-cidr=10.244.0.0/16 is configuration the network plug-in needs in order to allocate subnets to the nodes. The plug-in used here is flannel, and this is the value its default manifest expects; if you choose a different CIDR, adjust the plug-in's net-conf.json to match.

$ kubeadm init --pod-network-cidr=10.244.0.0/16

# or, specifying the version and advertise address explicitly:
$ kubeadm init --kubernetes-version=v1.21.0 --apiserver-advertise-address 192.168.37.101 --pod-network-cidr=10.10.0.0/16


(2) During initialization kubeadm runs a series of preflight checks to verify that the server meets the requirements for installing Kubernetes. Findings are reported as either [WARNING] or [ERROR]; every [ERROR] must be resolved before proceeding.

(3) For example, my run reported three errors:
  • The Master node needs at least two CPU cores: since I am using a virtual machine, shutting it down and changing its configuration fixes this.
  • The bridge-nf-call-iptables parameter must be set to 1: step 3 of the preparation above already takes care of this.
  • swap must be disabled: run swapoff -a to turn it off.
 
(4) With all errors resolved, rerun the original init command and kubeadm begins the installation. It will usually still fail at this point, though: kubeadm init pulls its images from the k8s.gcr.io registry by default, which is not directly reachable from mainland China, so a workaround is needed.

(5)我們可以通過國內廠商提供的 kubernetes 的鏡像服務來下載,執行以下命令比如第一個 k8s.gcr.io/kube-apiserver:v1.14.1 鏡像,可以執行如下命令從阿里雲下載:
#創建腳本文件 images.sh,
$:vi  images.sh

#腳本內容如下:
#!/bin/bash
#鏡像倉庫地址
#阿里 registry.aliyuncs.com/google_containers
url=registry.aliyuncs.com/google_containers
version=v1.21.0
images=(`kubeadm config images list --kubernetes-version=$version|awk -F 'io/' '{print $2}'`)
for imagename in ${images[@]} ; do
  if [[ $imagename = coredns* ]] ;
     then
      docker pull $imagename
      docker tag  $imagename k8s.gcr.io/$imagename
      docker rmi -f $imagename
     else
      docker pull $url/$imagename
      docker tag  $url/$imagename k8s.gcr.io/$imagename
      docker rmi -f $url/$imagename
     fi
done
#然后授予執行權限
$:chmod +x ./images.sh
#執行:
$:./images.sh

 

(6) Of these, coredns/coredns:v1.8.0 fails to download because that version cannot be found on Docker Hub. Download it manually instead: visit the CoreDNS website, find the release for the matching version, and download it.

 

 

 

Import the downloaded archive as a container image with the following command:

$ cat coredns_1.8.0_linux_amd64.tgz | docker import - coredns:v1.8.0

(7) Once the image is downloaded, rename it with docker tag to the name kubeadm expects during installation:

$ docker tag coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
 
(8) Once all the images are downloaded, rerun the original init command and kubeadm installs successfully. In the last line of its output, kubeadm prints the exact command that other nodes must run to join the cluster, including the required token (make a note of it).
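If you forget to record it, `kubeadm token create --print-join-command` on the master prints a fresh join command at any time. The token can also be pulled out of a saved join command with ordinary shell tools; a small sketch using the join command from this article:

```shell
# A recorded kubeadm join command (the one printed by kubeadm init).
join_cmd='kubeadm join 192.168.37.101:6443 --token ethqh8.nmtfwcg88gnfwvsu --discovery-token-ca-cert-hash sha256:1319e8da4d083b5b2f40161045845674bdbe7823c93c6767326c39cf719cb0f1'

# Extract the value following --token.
token=$(echo "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
echo "$token"   # ethqh8.nmtfwcg88gnfwvsu
```

Note that bootstrap tokens expire after 24 hours by default, which is another reason to regenerate one with `kubeadm token create` rather than reuse an old value.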

 

2. Configure kubectl (root also works)

kubectl is the command-line tool for managing a Kubernetes cluster, and we already installed it on every node. After the Master finishes initializing, a little configuration is needed before kubectl can be used.
(1) The steps follow the first highlighted block of the kubeadm init output. It is recommended to run kubectl as a regular user (root can run into some issues), so first create a regular user (called app below); the details are covered in an earlier article of mine.
 
(2) Switch to the app user:
su - app

(3) Run the following commands (again, from the first highlighted block of the kubeadm init output) to configure kubectl for the app user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

(4) For convenience, enable auto-completion for kubectl:
echo "source <(kubectl completion bash)" >> ~/.bash_profile
source ~/.bash_profile

3. Install a Pod Network

A Pod network must be installed for the Kubernetes cluster to work; without it, Pods cannot communicate with each other. (This corresponds to the second highlighted block of the kubeadm init output.)
Kubernetes supports multiple network solutions, such as flannel and calico. Deploy flannel or calico with the corresponding command below:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

4. Open Firewall Ports (or disable the firewall)

Run the following commands to configure firewalld and open the relevant ports:
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379/tcp
firewall-cmd --permanent --add-port=2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --reload
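The repeated firewall-cmd invocations above can also be generated from a single port list; a minimal sketch (the commands are echoed here rather than executed, since firewall-cmd requires root and a running firewalld):

```shell
# Control-plane ports: API server, etcd client/peer, kubelet,
# kube-scheduler, kube-controller-manager.
ports="6443 2379 2380 10250 10251 10252"
for p in $ports; do
  echo "firewall-cmd --permanent --add-port=${p}/tcp"
done
echo "firewall-cmd --reload"
```

Dropping the leading `echo` on a real host runs the commands instead of printing them.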

III. Node Installation and Configuration

1. Join the Nodes

(1) On each of the two node machines, run the following command (the last highlighted block of the kubeadm init output) to register the node with the cluster:

kubeadm join 192.168.37.101:6443 --token ethqh8.nmtfwcg88gnfwvsu --discovery-token-ca-cert-hash sha256:1319e8da4d083b5b2f40161045845674bdbe7823c93c6767326c39cf719cb0f1


(2) Output like the following indicates the node was added successfully:

(3) If an error occurs while joining, running kubeadm reset on the node and retrying resolves it.

 

 

2. Pull the Required Images

(1) Each node also needs three images: quay.io/coreos/flannel:v0.11.0-amd64, k8s.gcr.io/pause, and k8s.gcr.io/kube-proxy. The exact versions of the latter two can be checked with kubeadm config images list:

(2) Because of network restrictions, the last two images may fail to download automatically (the first can be pulled directly). Pull them from a domestic mirror instead, then rename them with docker tag to the names kubeadm expects:
docker pull quay.io/coreos/flannel:v0.11.0-amd64
 
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
 
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
 

3. Open Firewall Ports

Port 10250 must always be open. If the control node will also run containers, open the corresponding NodePort range (30000-32767) as well.
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload
 

IV. Checking Node Status

(1) On the master node, run kubectl get nodes to view node status:
(2) The node machines are still NotReady at this point. Each node has to start a number of components that run as Pods, and the images for those Pods must first be downloaded (by default from Google's registry).
 
(3) Check Pod status with the command below. States such as CrashLoopBackOff, ContainerCreating, and Init:0/1 all mean a Pod is not ready yet; only Running indicates readiness.

kubectl get pod --all-namespaces
 
(4) You can also inspect a specific Pod with kubectl describe pod <Pod Name>, for example to find out why the kube-proxy-96bz6 Pod is not ready yet:

kubectl describe pod kube-proxy-96bz6 --namespace=kube-system

(5) The result shows the Pod is not ready because an image download failed. This may just be a transient network problem; you can keep waiting, since Kubernetes retries automatically, or run docker pull yourself to fetch the image.
Note: the failed download is not necessarily on the Master node; it may have happened on a worker node. The earlier part of the describe output tells you where. Here, for instance, k8s.gcr.io/pause:3.1 was missing on a node.

(6) Once all Pods are in the Running state, all the nodes become Ready as well. At this point the Kubernetes cluster has been created successfully.
 
 

kubectl get cs

 

 

If kubectl get cs reports the controller-manager and scheduler as unhealthy, comment out the --port=0 line in kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests, and they will start successfully.
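That edit can be done with sed; a sketch against a sample manifest fragment (on the real host the files live in /etc/kubernetes/manifests/, and the kubelet restarts the static Pods automatically once the files change):

```shell
# Sample fragment standing in for kube-scheduler.yaml.
cat > /tmp/kube-scheduler.yaml <<'EOF'
  - command:
    - kube-scheduler
    - --port=0
EOF

# Comment out the --port=0 argument line, preserving its indentation.
sed -i 's|^\([[:space:]]*\)- --port=0|\1#- --port=0|' /tmp/kube-scheduler.yaml

cat /tmp/kube-scheduler.yaml
```

Run the same sed against both kube-controller-manager.yaml and kube-scheduler.yaml, then re-check with kubectl get cs.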

 
 
 
 
 
The full kube-flannel.yml manifest is reproduced below; apply it with:

kubectl apply -f kube-flannel.yml
 
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

 

 
 
 
 
 

