Pod


1 Basic Concepts of a Pod

  • The smallest deployable unit in Kubernetes
  • A collection of one or more containers
  • Containers in a Pod share the same network namespace
  • Pods are ephemeral: whenever a Pod is recreated (for example after an update), its IP address changes

 

2 Why Pods Exist

           Pods exist to solve the problem of application affinity (tightly coupled processes). Typical scenarios:

  • Two applications exchange files with each other
  • Two applications communicate over 127.0.0.1 or a local socket (for example nginx + php; see the sketch after this list)
  • Two applications call each other frequently
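A minimal sketch of the nginx + php scenario above, assuming generic nginx and php:fpm images (both illustrative); because the two containers share the Pod's network namespace, nginx can reach php-fpm on 127.0.0.1:9000:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-php
spec:
  containers:
  - name: nginx
    image: nginx          # serves HTTP and proxies *.php requests to 127.0.0.1:9000
    ports:
    - containerPort: 80
  - name: php
    image: php:fpm        # php-fpm listens on port 9000 inside the shared network namespace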
              

3 Pod Implementation Mechanism and Design Patterns

  • Shared network
  • Shared storage

 

 

Running docker ps on a worker node shows the infra container. Kubernetes starts one infra container for every Pod, using the image registry.aliyuncs.com/google_containers/pause:3.2.

[root@node1 ~]# docker ps
CONTAINER ID  IMAGE                                               COMMAND                  NAMES
407128176ee2  registry.aliyuncs.com/google_containers/pause:3.2   "/pause"  k8s_POD_java-demo-574598985c-h9nhq_test_0a75cb99-4f2c-41c0-b215-4aff31fce63a_50
58647e3edc65  registry.aliyuncs.com/google_containers/pause:3.2   "/pause"  k8s_POD_k8s-logs-ssqc5_ops_2906b90d-8741-4216-845a-811008cd6761_46
3af7774d708a  registry.aliyuncs.com/google_containers/pause:3.2   "/pause"  k8s_POD_elasticsearch-0_ops_172d4f4d-b4f7-4d55-8f4e-d61b4740048f_86
e625bf497da4  registry.aliyuncs.com/google_containers/pause:3.2   "/pause"  k8s_POD_filebeat-6r2hg_ops_8cefc479-cb85-4c20-9609-4b16db58c7ee_57
daf713bc7722  registry.aliyuncs.com/google_containers/pause:3.2   "/pause"  k8s_POD_dashboard-metrics-scraper-694557449d-pj5kk_kubernetes-dashboard_d69dd18e-3423-4e94-bf14-5fdeb42acaba_1131
bbad63177bf2  registry.aliyuncs.com/google_containers/pause:3.2   "/pause"  k8s_POD_kubernetes-dashboard-5ff49d5845-l5l78_kubernetes-dashboard_ec5f7aa8-813e-4447-b119-11a0c60f7ac3_1144
dfc694ceff78  registry.aliyuncs.com/google_containers/pause:3.2   "/pause"  k8s_POD_nfs-client-provisioner-7676dc9cfc-x5hv7_default_d8036fe1-226f-4304-9bf8-1bb0ba86e500_88
d8315d725f5c  registry.aliyuncs.com/google_containers/pause:3.2   "/pause"  k8s_POD_kube-flannel-ds-amd64-8gk95_kube-system_192f630a-4278-4c09-88fa-28a4da8930e0_60
5a1f4cd0518f  registry.aliyuncs.com/google_containers/pause:3.2   "/pause"  k8s_POD_nginx-ingress-controller-7ct4s_ingress-nginx_b3645238-9acd-4046-b2c5-252d90f7c482_20
7c1397d6b085  registry.aliyuncs.com/google_containers/pause:3.2   "/pause"  k8s_POD_kube-proxy-97sv9_kube-system_37fa295f-94fb-4783-8ff7-634d0e6e1f7e_60
[root@node1 ~]# 
  • Container types in a Pod
    • Infrastructure container (infra container): maintains the Pod's network namespace
    • Init containers (initContainers): run to completion before the application containers start
    • Application containers (containers): started in parallel

4 Image Pull Policy (imagePullPolicy)

  • Always:       pull the image every time a Pod is created (the default when the image tag is :latest or omitted)
  • IfNotPresent: pull only when the image does not already exist on the node (the default for other tags)
  • Never:        the kubelet never pulls the image; it must already be present on the node
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: java
      image: java-demo
      imagePullPolicy: IfNotPresent    #imagePullPolicy sits at the same level as image


apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:                  #credentials are required when pulling a privately hosted image
    - name: myregistrykey
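A hedged sketch of how the myregistrykey secret referenced above might be created; the registry address and credentials are placeholders:

kubectl create secret docker-registry myregistrykey \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password> \
  -n awesomeapps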

 

5 Resource Limits

       A Pod can declare two kinds of resource quotas:

  • requests (requested quota): the resources the container asks for. The scheduler places the Pod on a node that can satisfy the request; if no node can, the Pod stays Pending and is not started.

                                 spec.containers[ ].resources.requests.cpu

                                 spec.containers[ ].resources.requests.memory

  • limits (limit quota): the maximum resources the Pod is allowed to use, preventing a single Pod from over-consuming a node's resources

                                 spec.containers[ ].resources.limits.cpu

                                 spec.containers[ ].resources.limits.memory

     Example:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: java
    image: java-demo
    resources:
      requests:                  #resources the Pod expects; the scheduler tries to place it on a node that can satisfy them
        memory: "500M"
        cpu: "900m"
      limits:                    #the Pod must not use more than this; limits are typically ~20% higher than requests
        memory: "512M"           #CPU values are expressed in millicores:
        cpu: "1000m"             #1 core = 1000m; 1.5 cores = 1500m

 

6 Restart Policy (restartPolicy)

       If a Pod uses more resources than its limits, its containers can be killed by Kubernetes, so a restart policy is needed.

  • Always:    default; always restart the container after it terminates. Used for long-running daemons such as nginx or mysql
  • OnFailure: restart the container only when it exits abnormally (non-zero exit code)
  • Never:     never restart the container after it exits. Used for batch tasks

Example (note that restartPolicy sits at the same level as containers):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: java
      image: java-demo
  restartPolicy: Always    
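A minimal sketch of the batch-task case, assuming a busybox image and a one-off command (both illustrative); with restartPolicy: Never the container is simply left in a terminated state after it exits:

apiVersion: v1
kind: Pod
metadata:
  name: batch-pod
spec:
  containers:
    - name: task
      image: busybox
      command: ["sh", "-c", "echo done"]   #one-off task: prints and exits
  restartPolicy: Never                     #never restart once the container exits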

 

7 Health Checks (Probes)

Health checks combined with the restart policy make applications self-healing.

       When you run kubectl get pods, the kubelet reports health based on container state, but it cannot see the state of the application inside the container; a hung (deadlocked) process still looks healthy, keeps receiving traffic, and drops it. Health checks (probes) are therefore introduced to make sure the container is genuinely alive.

 

There are two types of probes:

  • livenessProbe (liveness check): if the check fails, the container is killed and then restarted according to the Pod's restartPolicy.
  • readinessProbe (readiness check): if the check fails, Kubernetes removes the Pod from the Service endpoints.

      

 Each probe type supports three check methods:

  • httpGet:    send an HTTP request; a status code in the 200–400 range counts as success
  • exec:       run a shell command; an exit code of 0 counts as success
  • tcpSocket:  succeed if a TCP connection to the given port can be established (a sketch follows this list)
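A minimal tcpSocket sketch (image and port are illustrative); the probe passes if a TCP connection to the port can be established:

spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      tcpSocket:
        port: 80                  #probe succeeds if port 80 accepts a TCP connection
      initialDelaySeconds: 10
      periodSeconds: 5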

livenessProbe exec example

 Start a busybox container that creates /tmp/healthy, sleeps for 30 s, removes the file, then sleeps for 600 s:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness                  
    image: k8s.gcr.io/busybox       
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:                  #livenessProbe sits at the same level as resources
      exec:                         
        command:                    
        - cat
        - /tmp/healthy              #whether /tmp/healthy exists decides whether the container is restarted
      initialDelaySeconds: 10       #seconds after container start before the first probe
      periodSeconds: 5              #interval between probes    

[root@master Pod]# kubectl apply -f livenessProbe.yaml 
pod/liveness-exec created
[root@master Pod]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
liveness-exec                             1/1     Running   1          2m6s
[root@master Pod]# 
[root@master Pod]# kubectl describe pod liveness-exec
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  2m40s                default-scheduler  Successfully assigned default/liveness-exec to node2
  Normal   Pulling    65s (x2 over 2m40s)  kubelet, node2     Pulling image "busybox"
  Normal   Pulled     64s (x2 over 2m18s)  kubelet, node2     Successfully pulled image "busybox"
  Normal   Created    64s (x2 over 2m18s)  kubelet, node2     Created container liveness
  Normal   Started    64s (x2 over 2m18s)  kubelet, node2     Started container liveness    #container started; the liveness probe runs from here
  Warning  Unhealthy  20s (x6 over 105s)   kubelet, node2     Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Normal   Killing    20s (x2 over 95s)    kubelet, node2     Container liveness failed liveness probe, will be restarted
[root@master Pod]# 

 

livenessProbe httpGet example

      nginx serves its page from /usr/share/nginx/html/index.html; the probe checks the page served by the web2 container:
spec:
  containers:
  - name: web
    image: lizhenliang/java-demo
    ports:
    - containerPort: 80
  - name: web2
    image: nginx            
    livenessProbe:
      httpGet:
        path: /index.html    
        port: 80
      initialDelaySeconds: 10                 
      periodSeconds: 5

 

readinessProbe httpGet example

         The configuration is the same as for livenessProbe. When a readiness check fails, the failing Pod's IP is removed from the Service endpoints (visible with kubectl get ep).

         livenessProbe and readinessProbe are usually configured together.

spec:
  containers:
  - name: web
    image: lizhenliang/java-demo
    ports:
    - containerPort: 80
  - name: web2
    image: nginx             
    readinessProbe:
      httpGet:
        path: /index.html    
        port: 80
      initialDelaySeconds: 10                 
      periodSeconds: 5
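After applying the manifest, a hedged way to observe the readiness behaviour (the Service name is a placeholder) is to compare the Pod list with the Service endpoints; a Pod that fails its readiness check stays Running but disappears from the endpoints list:

kubectl get pods -o wide
kubectl get endpoints <service-name>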

 

8 Scheduling

 

 

Workflow for creating a Pod:

Step A

1  The user creates a Pod with kubectl, which sends the request to the API Server

2  The API Server writes the object to etcd

3  etcd acknowledges the write to the API Server

4  The API Server returns the result to kubectl

           Step A only means the command succeeded; the Pod has not actually been created yet

 

Step B: the scheduler assigns the Pod to a node

1  The scheduler watches the API Server; when it sees a new Pod, it picks a suitable node using its scheduling algorithm

2  The scheduler binds the Pod to the chosen node (records which node it was assigned to) and reports this back to the API Server

3  The API Server writes the binding to etcd

4  etcd acknowledges the write to the API Server

5  The API Server returns the result to the scheduler

         Step B completes the decision of which node the Pod should run on

 

Step C:

1  The kubelet on the assigned node learns that a Pod has been scheduled to it

2  The kubelet calls the Docker API (/var/run/docker.sock) to create the containers

3  Docker reports the creation result back to the kubelet

4  The kubelet reports the container status to the API Server

5  The API Server writes the status to etcd

6  etcd acknowledges the write to the API Server

7  The API Server returns the result to the kubelet

     When the user runs kubectl get pod again, the Pod information is visible
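A hedged way to observe this workflow from the outside (the Pod name is a placeholder): the events recorded by the scheduler and the kubelet mirror steps B and C:

kubectl get pods -w                        #watch the Pod move from Pending to ContainerCreating to Running
kubectl describe pod <pod-name>            #the Events section shows Scheduled, Pulling, Created, Started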

 

Part 2  Which attributes influence the scheduling result

 

        Factors that determine which node a Pod finally lands on:

                1 Resource requests in the controller spec: resources

                        Kubernetes uses the requests values to find a node with enough free resources for the Pod

                2 Scheduling policies (see the sketch after this list):

                        schedulerName: default-scheduler

                        nodeName: " "

                        nodeSelector: { }   filter nodes by their labels

                        affinity: { }

                        tolerations: [ ]
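A hedged sketch of where these fields sit in a Pod spec (the values are placeholders; each field is covered in detail below):

spec:
  schedulerName: default-scheduler     #which scheduler handles this Pod
  nodeName: ""                         #bind directly to a named node, bypassing the scheduler
  nodeSelector:
    disktype: ssd                      #only nodes carrying this label are eligible
  affinity: {}                         #nodeAffinity / podAffinity rules
  tolerations: []                      #allow scheduling onto tainted nodes
  containers:
  - name: web
    image: nginx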

 

 

 

 

 

  • How resource limits affect Pod scheduling

 

 

 

  • How scheduling policies affect Pods

nodeSelector example:

 

[root@master ~]# kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   6d16h   v1.18.0
node1    Ready    <none>   6d14h   v1.18.0
node2    Ready    <none>   6d14h   v1.18.0
[root@master ~]#                                          
[root@master ~]# kubectl label nodes node1 disktype=ssd   #add the label disktype=ssd to node1
node/node1 labeled
[root@master ~]# kubectl get node --show-labels           #view the labels on node1
NAME     STATUS   ROLES    AGE     VERSION   LABELS
node1    Ready    <none>   6d14h   v1.18.0                  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,
kubernetes.io/hostname=node1,kubernetes.io/os=linux
[root@master ~]#                                          #create a Deployment that uses nodeSelector to match nodes labeled ssd
[root@master ~]# vim deploy-nodeSelector.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeselecto
spec:
  replicas: 1
  selector:
    matchLabels:
      project: blog
      app: java-demo
  template:
    metadata:
      labels:
        project: blog
        app: java-demo
    spec:
      nodeSelector:
        disktype: "ssd"
      containers:
      - name: web
        image: java-demo
        ports:
        - containerPort: 80
[root@master ~]# kubectl apply -f deploy-nodeSelector.yaml 
deployment.apps/nodeselecto created
                                                          #confirm the new Pod landed on node1
[root@master ~]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP            NODE     
nodeselecto-6f587b749-q45qr      1/1     Running   0          32s     10.244.1.37   node1    
[root@master ~]# 

 

 

nodeAffinity example:

       nodeAffinity works like nodeSelector: it constrains which nodes a Pod can be scheduled onto based on node labels. nodeSelector is an exact match; if no node satisfies it, the Pod stays Pending and is never scheduled. nodeAffinity supports richer logical operators and more flexible matching: requirements can be mandatory (must be satisfied) or preferred (best effort).

Hard (required): must be satisfied

Soft (preferred): best effort, not guaranteed

Operators: In, NotIn, Exists, DoesNotExist, Gt, Lt

 

 

 

 

Example 1

         Hard requirement: the Pod can only be scheduled onto a node carrying the label gpu=nvidia-tesla.

1) Add the gpu label to node2

[root@master ~]# kubectl label nodes node2 gpu=nvidia-tesla
node/node2 labeled
[root@master ~]# kubectl get node --show-labels | grep gpu
node2    Ready    <none>   296d   v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,gpu=nvidia-tesla,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
[root@master ~]#

 

2) Create a Pod from YAML

[root@master Pod]# cat pod-nodeaffinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:                  
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values:
            - nvidia-tesla
  containers:
  - name: web
    image: nginx
[root@master Pod]# 
[root@master Pod]# kubectl apply -f pod-nodeaffinity.yaml 
pod/with-node-affinity created
[root@master Pod]#

 

3) Confirm the Pod was created and landed on node2

[root@master Pod]# kubectl get pod -o wide
NAME                   READY   STATUS       RESTARTS   AGE    IP             NODE    NOMINATED NODE   READINESS GATES
with-node-affinity     1/1     Running      0          43s    10.244.2.122   node2   <none>           <none>
[root@master Pod]#
[root@master Pod]# kubectl delete -f pod-nodeaffinity.yaml 
pod "with-node-affinity" deleted
[root@master Pod]#

 

4) Remove the label from node2 and create the Pod again

[root@master Pod]# kubectl label nodes  node2 gpu-   
node/node2 labeled
[root@master Pod]# kubectl get node --show-labels | grep gpu
[root@master Pod]# 
[root@master Pod]# kubectl apply -f pod-nodeaffinity.yaml 
pod/with-node-affinity created

 

5) Because the hard requirement cannot be satisfied, the Pod stays Pending

[root@master Pod]#
[root@master Pod]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
with-node-affinity                        0/1     Pending   0          8s
[root@master Pod]# kubectl describe pod with-node-affinity
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.
[root@master Pod]#

 

Example 2

         Soft requirement: the scheduler tries to place the Pod on a node with the label gpu=nvidia-tesla, but the Pod is still scheduled even if no node has it.

[root@master Pod]# cat pod-nodeaffinity2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:                                                         
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: gpu
            operator: In
            values:
            - nvidia-tesla
  containers:
  - name: web
    image: nginx
[root@master Pod]# 
node2 no longer has the gpu label, yet the Pod is still created successfully:
[root@master Pod]# kubectl apply -f pod-nodeaffinity2.yaml 
pod/with-node-affinity created
[root@master Pod]# kubectl get pod -o wide
NAME                       READY   STATUS         RESTARTS   AGE    IP             NODE    NOMINATED NODE   READINESS GATES
with-node-affinity         1/1     Running        0          24s    10.244.2.123   node2   <none>           <none>
[root@master Pod]# 

 

 

How taints affect Pod scheduling

        Taints: keep Pods away from particular nodes

        Tolerations: allow Pods to be scheduled onto nodes that carry matching taints

         Use cases

            • Dedicated nodes: group nodes by business line; by default nothing is scheduled onto them, and only Pods that tolerate the taint are allowed

            • Special hardware: some nodes have SSDs or GPUs; by default nothing is scheduled onto them, and only Pods that tolerate the taint are allowed

            • Taint-based eviction

           

           Configuration

           Step 1: add a taint to a node. Format:

                      kubectl taint node [node] key=value:[effect]

                      Example: kubectl taint node k8s-node1 gpu=yes:NoSchedule

                      Verify:  kubectl describe node k8s-node1 |grep Taint

             [effect] can take one of these values:

                    • NoSchedule: Pods without a matching toleration will never be scheduled onto this node

            • PreferNoSchedule: the scheduler tries to avoid the node, but Pods without a toleration may still be placed on it

            • NoExecute: new Pods are not scheduled, and existing Pods without a toleration are evicted from the node

   

             Step 2: add a tolerations field to the Pod spec

                  

              Remove a taint:

              kubectl taint node [node] key:[effect]-

 

Example

1) Taint node1 with gpu=yes and node2 with gpu=no, then check the taints.

[root@master ~]# kubectl taint node node1 gpu=yes:NoSchedule
node/node1 tainted
[root@master ~]# 
[root@master ~]# kubectl taint node node2 gpu=no:NoSchedule
node/node2 tainted
[root@master ~]# 
[root@master ~]# kubectl describe node | grep -i taint
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             gpu=yes:NoSchedule
Taints:             gpu=no:NoSchedule
[root@master ~]# 

 

2) Start a Pod via a Deployment. Because the Pod has no toleration, it cannot be scheduled.

[root@master Pod]# kubectl create deployment web666  --image=nginx
deployment.apps/web666 created
[root@master Pod]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
web666-79d4bf78c9-5s9wt                   0/1     Pending   0          6s
[root@master Pod]#
[root@master Pod]# kubectl describe pod  web666-79d4bf78c9-5s9wt
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  52s (x2 over 52s)  default-scheduler  0/3 nodes are available: 1 node(s) had taint {gpu: no}, that the pod didn't tolerate, 1 node(s) had taint {gpu: yes}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
[root@master Pod]# 

 

3) Create a Pod from YAML with a toleration so that it can land on node2.

[root@master Pod]# cat tolerations.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:                          #tolerations sits at the same level as containers
      tolerations:                 #tolerate the taint gpu=no:NoSchedule
      - key: "gpu"                 #taint key
        operator: "Equal"          #Equal: key and value must both match
        value: "no"
        effect: "NoSchedule"
      containers:
      - image: lizhenliang/java-demo
        name: java-demo
[root@master Pod]# 

[root@master Pod]# kubectl  apply -f tolerations.yaml 
deployment.apps/web created
[root@master Pod]# 
[root@master Pod]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
web-69c49b7845-vtkx5                      1/1     Running   0          29s
[root@master Pod]# 
[root@master Pod]# kubectl get pods -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
web-69c49b7845-vtkx5                      1/1     Running   0          43s   10.244.2.131   node2   <none>           <none>
[root@master Pod]# 

 

The nodeName scheduling policy

        Places the Pod directly onto the named node, bypassing the scheduler

 

[root@master Pod]# cat nodename.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeName: "node2"
      containers:
      - image: lizhenliang/java-demo
        name: java-demo
[root@master Pod]# 
[root@master Pod]# kubectl apply -f nodename.yaml 
deployment.apps/web configured
[root@master Pod]#
[root@master Pod]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
web-859db5f7df-6k5ll                      1/1     Running   0          83s   10.244.2.138   node2   <none>           <none>
[root@master Pod]# 
node1 and node2 are still tainted at this point, but because nodeName bypasses the scheduler, all scheduler-level policies are skipped, so the Pod can still land on node2.

Remove the taints

[root@master Pod]# kubectl taint node node1 gpu-
node/node1 untainted
[root@master Pod]# kubectl taint node node2 gpu-
node/node2 untainted
[root@master Pod]# 
[root@master Pod]# kubectl  describe node | grep -i taint
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             <none>
[root@master Pod]# 

 

 

9 Troubleshooting

        How to troubleshoot a Pod:

              kubectl describe TYPE/NAME

                                inspect problems during Pod creation and container startup

                        

             kubectl logs TYPE/NAME [-c CONTAINER]

                                view the container's logs and look for error entries

                                        

             kubectl exec POD [-c CONTAINER] -- COMMAND [args...]

                                once the Pod is running, exec into the container to debug the application
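A hedged usage sketch of the three commands (Pod and container names are placeholders):

kubectl describe pod my-pod                   #Events: scheduling failures, image pull errors, probe failures
kubectl logs my-pod -c web                    #stdout/stderr of the web container
kubectl logs my-pod -c web --previous         #logs of the previous instance after a restart
kubectl exec -it my-pod -c web -- sh          #open a shell inside the running container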

 

 

Template for running two containers in one Pod

[root@master pod]# vim deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-demo2
spec:
  replicas: 1
  selector:
    matchLabels:
      project: blog
      app: java-demo
  template:
    metadata:
      labels:
        project: blog
        app: java-demo
    spec:
      containers:
      - name: web
        image: lizhenliang/java-demo
        ports:
        - containerPort: 80
      - name: web2
        image: nginx
[root@master pod]# 
[root@master pod]# kubectl  apply -f deployment.yaml 
deployment.apps/java-demo2 configured
[root@master pod]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
java-demo2-c579f88db-kf6fw   2/2     Running   0          63s
[root@master pod]# 

 

 

 

 

Init Container

        Used for initialization work; it runs to completion and then exits, like a one-off task. It supports most application-container settings but not health probes, and it runs before the application containers start.

Use cases:

       Environment checks: for example, make sure a service the application depends on is up before starting the application container

       Initial configuration: for example, prepare configuration files for the application container

 

Example: deploy a web site whose home page changes frequently and is therefore not baked into the application image; instead it is fetched dynamically from a repository and placed into the application container.

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:      #the init container wgets the home page from www.ctnrs.com and saves it under /opt/
  - name: download
    image: busybox
    command:
    - wget
    - "-O"
    - "/opt/index.html"
    - http://www.ctnrs.com
    volumeMounts:    #mount an emptyDir volume at /opt inside the init container
    - name: wwwroot
      mountPath: "/opt"
  containers:        
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:    #the application container mounts the same volume, so it shares the downloaded file with the init container
    - name: wwwroot
      mountPath: /usr/share/nginx/html
  volumes:
  - name: wwwroot
    emptyDir: {}
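A hedged way to verify the result once the Pod is Running (container name and path follow the manifest above): the downloaded page should be visible inside the nginx container through the shared emptyDir volume:

kubectl get pod init-demo                                      #shows Init:0/1 until the download finishes, then Running
kubectl exec init-demo -c nginx -- ls /usr/share/nginx/html    #the shared volume should now contain index.html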

 

Example: a Pod mounting a hostPath volume

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: busybox
    image: busybox
#   command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
    args:
    - /bin/sh
    - -c 
    - sleep 3600
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    hostPath: 
      path: /tmp
      type: Directory

 

