Self-managed (autonomous) Pod resources
Resource manifest format
First-level fields: apiVersion (group/version), kind, metadata (name, namespace, labels, annotations, ...), spec, status (read-only)
Pod resources:
spec.containers <[]object>
kubectl explain pods.spec.containers
- name <string>
image <string>
imagePullPolicy <string>
Always: always pull the image; Never: never pull, use only the local image; IfNotPresent: pull only if the image is not present locally
Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
In other words: if the image tag is :latest the default policy is Always; for any other tag the default is IfNotPresent.
Overriding the default application in the image:
command, args — see the official documentation for details:
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
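As a quick illustration based on that page (the pod name and the echoed message are placeholders): command overrides the image's ENTRYPOINT and args overrides its CMD:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  containers:
  - name: command-demo-container
    image: busybox:latest
    imagePullPolicy: IfNotPresent            # explicit policy; :latest would otherwise default to Always
    command: ["/bin/sh"]                     # overrides the image ENTRYPOINT
    args: ["-c", "echo hello; sleep 3600"]   # overrides the image CMD
  restartPolicy: Never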

Labels:
A label is simply a key/value pair. A valid label key has two parts: an optional prefix and a name, separated by "/". The name part is required, may be at most 63 characters, must begin and end with an alphanumeric character, and may contain alphanumerics, "-", "_" and "." in between. The prefix is optional; if specified it must be a DNS subdomain: a series of DNS labels separated by ".", no longer than 253 characters in total, followed by "/". If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (for example kube-scheduler, kube-controller-manager, kube-apiserver, kubectl) that add labels to end-user objects must specify a prefix. The kubernetes.io/ prefix is reserved for Kubernetes core components.
A valid label value must be 63 characters or less. It may be empty; otherwise it must begin and end with an alphanumeric character and may contain alphanumerics, "-", "_" and "." in between.
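For example (the example.com prefix is purely illustrative), both of these are valid label keys:
apiVersion: v1
kind: Pod
metadata:
  name: label-demo
  labels:
    app: myapp                    # name-only key, presumed private to the user
    example.com/release: canary   # prefixed key: DNS-subdomain prefix + "/" + name
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1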
kubectl get pods --show-labels
[root@k8s-master k8s]# kubectl get pods -L app        # show the value of the app label in its own column; empty if the pod does not have it
[root@k8s-master k8s]# kubectl get pods -L app,run    # multiple keys can be given
Show only pods that carry the app label:
[root@k8s-master k8s]# kubectl get pods -l app --show-labels
NAME READY STATUS RESTARTS AGE LABELS
pod-demo 2/2 Running 0 3m33s app=myapp,tier=frontend
Adding a label:
[root@k8s-master k8s]# kubectl label pods pod-demo release=canary
pod/pod-demo labeled
[root@k8s-master k8s]# kubectl get pods -l app --show-labels
NAME READY STATUS RESTARTS AGE LABELS
pod-demo 2/2 Running 0 6m37s app=myapp,release=canary,tier=frontend
If the label already exists, this reports an error:
[root@k8s-master k8s]# kubectl label pods pod-demo release=canary
error: 'release' already has a value (canary), and --overwrite is false
To change an existing value, add the --overwrite flag:
[root@k8s-master k8s]# kubectl label pods pod-demo release=canary --overwrite
pod/pod-demo not labeled
Label selectors:
Equality-based: =, ==, !=
[root@k8s-master k8s]# kubectl get pods -l release=canary --show-labels
NAME READY STATUS RESTARTS AGE LABELS
pod-demo 2/2 Running 0 10m app=myapp,release=canary,tier=frontend
[root@k8s-master k8s]# kubectl get pods -l release=canary,app=myapp --show-labels
NAME READY STATUS RESTARTS AGE LABELS
pod-demo 2/2 Running 0 12m app=myapp,release=canary,tier=frontend
[root@k8s-master k8s]# kubectl get pods -l release!=canary
NAME READY STATUS RESTARTS AGE
myapp-9b4987d5-244dl 1/1 Running 0 7h16m
myapp-9b4987d5-4rn4q 1/1 Running 0 7h16m
myapp-9b4987d5-7xc9z 1/1 Running 0 7h16m
myapp-9b4987d5-8hppt 1/1 Running 0 7h16m
myapp-9b4987d5-clvtg 1/1 Running 0 7h16m
nginx-deploy-5b66f76f68-lv66h 1/1 Running 2 34d
pod-client 1/1
Set-based:
KEY in (VALUE1,VALUE2,....)
KEY notin(VALUE1,VALUE2,...)
KEY
!KEY
[root@k8s-master k8s]# kubectl get pods -l "release in (canary,beta,alpha)"
NAME READY STATUS RESTARTS AGE
pod-demo 2/2 Running 0 15m
[root@k8s-master k8s]# kubectl get pods -l "release notin (canary,beta,alpha)"
NAME READY STATUS RESTARTS AGE
myapp-9b4987d5-244dl 1/1 Running 0 7h18m
myapp-9b4987d5-4rn4q 1/1 Running 0 7h18m
myapp-9b4987d5-7xc9z 1/1 Running 0 7h18m
myapp-9b4987d5-8hppt 1/1 Running 0 7h18m
myapp-9b4987d5-clvtg 1/1 Running 0 7h18m
nginx-deploy-5b66f76f68-lv66h 1/1 Running 2 4d
pod-client 1/1 Running 0 6h31m
Many resources support embedded fields that define the label selectors they use:
matchLabels: directly specify key/value pairs
matchExpressions: define the label selector with expressions of the form {key: "KEY", operator: "OPERATOR", values: [VAL1,VAL2,...]}
Reference:
https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#set-based-requirement
Operators:
In, NotIn: the values field must be a non-empty list;
Exists, DoesNotExist: the values field must be an empty list (or omitted); see the sketch below.
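A minimal sketch of both forms inside a Deployment selector (the deployment name and label values are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selector-demo
spec:
  replicas: 2
  selector:
    matchLabels:                     # equality-based: all pairs must match
      app: myapp
    matchExpressions:                # set-based: ANDed with matchLabels
    - {key: release, operator: In, values: [canary, beta]}
    - {key: environment, operator: Exists}
  template:
    metadata:
      labels:
        app: myapp
        release: canary
        environment: test
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1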
nodeSelector <map[string]string>: node label selector
[root@k8s-master k8s]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master Ready master 37d v1.13.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-master,node-role.kubernetes.io/master=
k8s-node1 Ready <none> 37d v1.13.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-node1
k8s-node2 Ready <none> 37d v1.13.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-node2
[root@k8s-master k8s]# kubectl label nodes k8s-node1 disktype=ssd
node/k8s-node1 labeled
[root@k8s-master k8s]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master Ready master 37d v1.13.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-master,node-role.kubernetes.io/master=
k8s-node1 Ready <none> 37d v1.13.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=k8s-node1
k8s-node2 Ready <none> 37d v1.13.4 beta.kubernetes.io/arch=amd64,beta
vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  nodeSelector:
    disktype: ssd
[root@k8s-master k8s]# kubectl delete -f pod-demo.yaml
pod "pod-demo" deleted
[root@k8s-master k8s]# kubectl create -f pod-demo.yaml
pod/pod-demo created
[root@k8s-master k8s]# kubectl get pods pod-demo -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-demo 2/2 Running 0 38s 10.244.2.22 k8s-node1 <none> <none>
kubectl describe pods pod-demo
The events show:
Normal Scheduled 6m24s default-scheduler Successfully assigned default/pod-demo to k8s-node1
nodeName <string>: assign the Pod directly to the named node, bypassing the scheduler
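A minimal sketch (reusing the node name from this cluster) of pinning a Pod with nodeName instead of nodeSelector:
apiVersion: v1
kind: Pod
metadata:
  name: nodename-demo
spec:
  nodeName: k8s-node1        # placed directly onto this node, no scheduler involved
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1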
annotations:
Unlike labels, annotations cannot be used to select resource objects; they only attach arbitrary "metadata" to an object.
vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    doudou/create-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  nodeSelector:
    disktype: ssd
Delete the pod and recreate it:
kubectl create -f pod-demo.yaml
[root@k8s-master k8s]# kubectl describe pods pod-demo
Name: pod-demo
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: k8s-node1/10.211.55.12
Start Time: Tue, 23 Apr 2019 00:28:33 +0800
Labels: app=myapp
tier=frontend
Annotations: doudou/create-by: cluster admin
Status: Running
Pod lifecycle:
Because a pod represents processes running on a node in the cluster, it is important that those processes are allowed to terminate gracefully when they are no longer needed (rather than being killed abruptly with a KILL signal, giving the application no chance to clean up). The user should be able to request deletion, know when the processes have terminated, and be assured that the deletion eventually completes.
When a user requests deletion of a pod, the system records the intended grace period; the pod may not be force-killed before it expires, and the TERM signal is sent to the main process of each container. Once the grace period has expired, the KILL signal is sent to those processes and the pod is deleted from the API server. If the kubelet or the container manager restarts while waiting for the processes to terminate, the termination is retried with the full grace period.
An example flow:
- The user sends a command to delete the pod; the default grace period is 30 seconds;
- Once the pod exceeds the grace period, the API server updates the pod's status to "dead";
- The pod shows up as "Terminating" in client (kubectl) output;
- At the same time as step 3, when the kubelet sees that the pod has been marked "Terminating", it starts shutting down the pod's processes:
- If a preStop hook is defined for the pod, it is invoked before the pod is stopped. If the preStop hook is still running when the grace period expires, step 2 is re-entered with a small (2 second) extension of the grace period;
- The TERM signal is sent to the processes in the pod;
- At the same time as step 3, the pod is removed from the endpoints lists of its services and is no longer considered part of its replication controller. Pods that shut down slowly may keep handling traffic already forwarded by the load balancer;
- When the grace period expires, any processes still running in the pod are killed with SIGKILL.
- The kubelet then finishes the deletion on the API server by setting the grace period to 0 (immediate deletion). The pod disappears from the API and is no longer visible from the client.
The default deletion grace period is 30 seconds. The kubectl delete command supports a --grace-period=<seconds> option that lets the user set their own grace period. Setting it to 0 force-deletes the pod; with kubectl >= 1.5 you must combine --force with --grace-period=0 to force-delete a pod (see the commands below).
Force deletion removes the pod from the cluster state and etcd immediately. The API server does not wait for confirmation from the kubelet on the node where the pod runs; it removes the pod from the API server right away, so a new pod with the same name can be created immediately. On the node, the pod is still set to the Terminating state and gets a short grace period before it is actually killed.
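For example (using the pod-demo name from above), a custom grace period and a force delete look like this:
kubectl delete pod pod-demo --grace-period=60          # give the pod 60 seconds to shut down
kubectl delete pod pod-demo --force --grace-period=0   # force delete (kubectl >= 1.5)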


Important behaviors in the pod lifecycle:

Init containers
Init containers run to completion before any of the regular containers start (run-to-completion) and are commonly used for initialization and configuration.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
Container probes
To make sure a container is actually healthy after deployment, Kubernetes provides two kinds of probes (Probe, supporting exec, tcpSocket and httpGet handlers) to check container state:
A probe is a diagnostic performed periodically by the kubelet on a container. The kubelet performs the diagnostic by invoking a handler implemented by the container. There are three types of handlers:
ExecAction: executes a specified command inside the container; the diagnostic succeeds if the command exits with status code 0.
TCPSocketAction: performs a TCP check against the container on the specified IP and port; the diagnostic succeeds if the port is open.
HTTPGetAction: performs an HTTP GET request against the container on the specified IP, port and path; the diagnostic succeeds if the response status code is greater than or equal to 200 and less than 400.
LivenessProbe: checks whether the application is healthy; if not, the container is killed and recreated according to its restartPolicy.
ReadinessProbe: checks whether the application has finished starting and can serve requests; if not, the container is marked not Ready (a tcpSocket variant is sketched below).
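The examples below use the exec and httpGet handlers; for completeness, a minimal tcpSocket sketch (pod and container names are placeholders) might look like this:
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-pod
  namespace: default
spec:
  containers:
  - name: liveness-tcp-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      tcpSocket:
        port: 80          # probe succeeds if this port accepts a TCP connection
      initialDelaySeconds: 1
      periodSeconds: 3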

kubectl explain pods.spec.containers.livenessProbe
vim liveness-exec-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","touch /tmp/healthy;sleep 30;rm -rf /tmp/healthy;sleep 3600"]
    livenessProbe:
      exec:
        command: ["test","-e","/tmp/healthy"]
      initialDelaySeconds: 1
      periodSeconds: 3
kubectl create -f liveness-exec-pod.yaml
[root@k8s-master k8s]# kubectl get pods liveness-exec-pod
NAME READY STATUS RESTARTS AGE
liveness-exec-pod 1/1 Running 3 4m4s
[root@k8s-master k8s]#kubectl describe pods liveness-exec-pod
Name: liveness-exec-pod
Containers:
liveness-exec-container:
Container ID: docker://8d09c9e59b7ab18cc777f7408b7ae889c4b8439d2481a3bb85b0b5dcde166e44
Command:
/bin/sh
-c
touch /tmp/healthy;sleep 30;rm -rf /tmp/healthy;sleep 3600
State: Running
Started: Tue, 23 Apr 2019 18:05:46 +0800
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Tue, 23 Apr 2019 18:04:37 +0800
Finished: Tue, 23 Apr 2019 18:05:46 +0800
Ready: True
Restart Count: 2
Liveness: exec [test -e /tmp/healthy] delay=1s timeout=1s period=3s #success=1 #failure=3
vim liveness-httpget-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
kubectl exec -it liveness-httpget-pod -- /bin/sh
/ # rm -rf /usr/share/nginx/html/index.html
[root@k8s-master k8s]# kubectl get pod liveness-httpget-pod
NAME READY STATUS RESTARTS AGE
liveness-httpget-pod 1/1 Running 1 93s
It restarts only once: after the restart the image's index.html is back in place, so the probe passes again.
View the details:
kubectl describe pod liveness-httpget-pod
Name: liveness-httpget-pod
Containers:
liveness-httpget-container:
Last State: Terminated
Ready: True
Restart Count: 1
Liveness: http-get http://:http/index.html delay=1s timeout=1s period=3s #success=1 #failure=3
Normal Killing 93s kubelet, k8s-node1 Killing container with id
docker://liveness-httpget-container:Container failed liveness probe. Container will be killed and recreated.
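The delay/timeout/period/#success/#failure values shown by kubectl describe correspond to probe fields in the spec. A minimal sketch of setting them explicitly (pod name and values are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tuning-demo
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    livenessProbe:
      httpGet:
        port: 80
        path: /index.html
      initialDelaySeconds: 1    # delay
      timeoutSeconds: 1         # timeout
      periodSeconds: 3          # period
      successThreshold: 1       # #success (must be 1 for liveness probes)
      failureThreshold: 3       # #failure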
kubectl explain pods.spec.containers.readinessProbe
vim readiness-httpget-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: http
        path: /index.html
[root@k8s-master k8s]# kubectl get pods readiness-httpget-pod
NAME READY STATUS RESTARTS AGE
readiness-httpget-pod 1/1 Running 0 59s
[root@k8s-master ~]# kubectl exec -it readiness-httpget-pod -- /bin/sh
/ # rm -rf /usr/share/nginx/html/index.html
[root@k8s-master k8s]# kubectl get pods readiness-httpget-pod
NAME READY STATUS RESTARTS AGE
readiness-httpget-pod 0/1 Running 0 60s
[root@k8s-master k8s]# kubectl describe pods readiness-httpget-pod
Name: readiness-httpget-pod
readiness-httpget-container:
Ready: False
Restart Count: 0
Readiness: http-get http://:http/index.html delay=0s timeout=1s period=10s #success=1 #failure=3
Warning Unhealthy 0s (x15 over 10m) kubelet, k8s-node1 Readiness probe failed: HTTP probe failed with statuscode: 404
[root@k8s-master ~]# kubectl exec -it readiness-httpget-pod -- /bin/sh
/ # echo "test" >> /usr/share/nginx/html/index.html
[root@k8s-master k8s]# kubectl get pods readiness-httpget-pod
NAME READY STATUS RESTARTS AGE
readiness-httpget-pod 1/1 Running 0 2m41s
The pod is back in the Ready state.
kubectl explain pods.spec.containers.lifecycle
kubectl explain pods.spec.containers.lifecycle.postStart
kubectl explain pods.spec.containers.lifecycle.preStop
Container Lifecycle Hooks listen for specific events in a container's lifecycle and run registered handlers when those events occur. Two hooks are supported:
postStart: runs after the container starts. Note that it runs asynchronously, so there is no guarantee that it runs after the ENTRYPOINT. If it fails, the container is killed and the restartPolicy decides whether it is restarted.
preStop: runs before the container is stopped and is commonly used for resource cleanup. If it fails, the container is likewise killed.
Hook handlers can be implemented in two ways:
exec: run a command inside the container
httpGet: send an HTTP GET request to a specified URL
Example of postStart and preStop hooks:
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/usr/sbin/nginx","-s","quit"]
vim poststart-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: poststart-pod
  namespace: default
spec:
  containers:
  - name: busybox-httpd
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:
        exec:
          command: ['mkdir','-p','/data/web/html']
    command: ["/bin/httpd"]
    args: ["-f","-h /data/web/html"]
[root@k8s-master k8s]# kubectl get pods poststart-pod
NAME READY STATUS RESTARTS AGE
poststart-pod 0/1 PostStartHookError: command 'mkdir -p /data/web/html' exited with 126: 2 18s
[root@k8s-master k8s]# kubectl get pods poststart-pod
NAME READY STATUS RESTARTS AGE
poststart-pod 0/1 PostStartHookError: comman exited with 126: 4 103s
kubectl describe pod poststart-pod
Warning FailedPostStartHook 6s (x4 over 53s)
kubelet, k8s-node1 Exec lifecycle hook ([mkdir -p /data/web/html]) for Container "busybox-httpd" in Pod
"poststart-pod_default(6c37ff00-65bd-11e9-b018-001c42baaf43)" failed - error:
command 'mkdir -p /data/web/html' exited with 126: , message: "cannot exec in a stopped state: unknown\r\n"
vim poststart-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: poststart-pod
  namespace: default
spec:
  containers:
  - name: busybox-httpd
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:
        exec:
          command: ['mkdir','-p','/data/web/html']
    command: ['/bin/sh','-c','sleep 3600']
Note: the command in a postStart hook must not have a hard dependency on the pod's own command having run. The error above is misleading; what actually happened is that the pod's command ran before the postStart hook: /bin/httpd exited immediately (since -h /data/web/html did not exist yet), so by the time the hook tried to exec, the container was already stopped.
restartPolicy <string>
Always: restart the container whenever it exits
OnFailure: restart only when the container exits with a failure (exit code != 0)
Never: never restart once the container exits (note: "restart" here means the kubelet restarts the container locally on the Pod's node; the Pod is not rescheduled to another node). A minimal example follows below.
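A minimal sketch (pod name and command are placeholders) showing where restartPolicy sits in a manifest:
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo
spec:
  restartPolicy: OnFailure      # pod-level field; applies to all containers in the pod
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","sleep 10; exit 1"]   # non-zero exit triggers a restart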
References:
https://www.jianshu.com/p/91625e7a8259?utm_source=oschina-app
https://blog.csdn.net/horsefoot/article/details/52324830
https://www.cnblogs.com/linuxk/p/9569618.html
