k8s Containers: Operations and Management


II. Operations and Management

Maintenance reference:

https://jimmysong.io/kubernetes-handbook/practice/install-kubernetes-on-centos.html

1. Node management

Mark a node as unschedulable (no new pods will be scheduled onto it)
kubectl cordon <node>

Evict all pods from the node
kubectl drain <node>

Allow new pods to be scheduled onto the node again
kubectl uncordon <node>

Note: `drain` deletes all Pods on the node (except those managed by a DaemonSet) and restarts them on other nodes; it is typically used when the node needs maintenance. Running `drain` directly invokes `cordon` automatically.
     Once maintenance is finished and the kubelet is running again, `kubectl uncordon` returns the node to the Kubernetes cluster.
-------------------------------------------------------------------------------------------------------------------------
eg: 
    List all nodes in the cluster:
    kubectl get nodes

    Tell Kubernetes to drain the node:
    kubectl drain <node name>

    Once this finishes without errors, you can safely power down the node (on a cloud platform, you can delete the virtual machine backing it). If you want to keep the node in the cluster during maintenance,
    run the following afterwards:
    kubectl uncordon <node name>
    This tells Kubernetes to resume scheduling new pods onto the node.

2. Creating a Deployment controller

  • A simple nginx application can be defined as:
  • cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
  • Scale the number of pods
kubectl scale deployment nginx-deployment --replicas 6
  • If the cluster supports horizontal pod autoscaling, you can also set up autoscaling for the Deployment:
kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
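The autoscale command above is equivalent to creating a HorizontalPodAutoscaler object declaratively; a minimal sketch using the `autoscaling/v1` API (this assumes cluster metrics are available to the autoscaler):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:              # the Deployment this HPA scales
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 10              # same bounds as the autoscale command above
  maxReplicas: 15
  targetCPUUtilizationPercentage: 80
```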
  • Update the image
kubectl set image deployment/nginx-deployment  nginx=nginx:1.14.2

Note: "deployment" here is the controller type;
      "nginx-deployment" is the name of the controller;
      the part before "=" is the container name;
      the part after "=" is the image to update to.
  • Roll back the image
kubectl rollout undo deployment/nginx-deployment
  • Check the rollout status
kubectl rollout status deployment/nginx-deployment
kubectl get deployments
  • Running kubectl get rs shows that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, while scaling the old ReplicaSet down to 0 replicas.
# kubectl get rs
NAME               DESIRED   CURRENT   READY   AGE
nginx-68ccc6f75f   0         0         0       34m
nginx-755464dd6c   3         3         3       2d3h
  • The next time you update these pods, you only need to update the pod template in the Deployment.
During an upgrade, a Deployment guarantees that only a certain number of Pods are down. By default it ensures that at most one Pod fewer than the desired count is available (at most one unavailable).

A Deployment also ensures that only a certain number of Pods above the desired count are created. By default it ensures that at most one Pod more than the desired count is up (at most one surge).

Note: during an upgrade, pods are replaced one by one, so you never see a large batch of pods unavailable at once.
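These limits can be made explicit in the Deployment spec; a sketch of the relevant fields (the values shown match the default behavior described above):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod below the desired count during the update
      maxSurge: 1         # at most one pod above the desired count during the update
```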

3. Using the kubectl tool

  • 3.1 Create
    kubectl run nginx --replicas=3 --labels="app=nginx-example" --image=nginx:1.17.4 --port=80

  • 3.2 View
    kubectl get deploy
    kubectl get pods --show-labels
    kubectl get pods -l app=example
    kubectl get pods -o wide

  • 3.3 Expose
    kubectl expose deployment nginx --port=88 --type=NodePort --target-port=80 --name=nginx-service 
    kubectl describe service nginx-service

  • 3.4 Troubleshooting
    kubectl describe TYPE NAME_PREFIX
    kubectl logs nginx-xxx
    kubectl exec -it nginx-xxx bash

  • 3.5 Update
    kubectl set image deployment/nginx nginx=nginx:1.17.4 
     
    kubectl edit deployment/nginx

  • 3.6 Release management
    kubectl rollout status deployment/nginx
    kubectl rollout history deployment/nginx
    kubectl rollout history deployment/nginx --revision=3
    kubectl scale deployment nginx --replicas=10

  • 3.7 Roll back
    kubectl rollout undo deployment/nginx-deployment
    kubectl rollout undo deployment/nginx-deployment --to-revision=3

  • 3.8 Delete
    kubectl delete deploy/nginx
    kubectl delete svc/nginx-service

  • 3.9 APIs used when writing YAML files
    When defining configuration, use the latest stable API version (currently v1):
    kubectl api-versions

4. Composing Deployment and Service files for a web service

  • 4.1 The nginx Deployment file
cat > nginx-deployment.yaml << EOF 
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
EOF
  • 4.2 The nginx Service file
cat > nginx-service.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  ports:
  - port: 88
    targetPort: 80
  selector:
    app: nginx
EOF
  • 4.3 Create the resources
kubectl  create  -f  nginx-deployment.yaml
kubectl  create  -f  nginx-service.yaml

5. Basic pod management

  • Create / query / update / delete
  • Resource limits
  • Scheduling constraints
  • Restart policy
  • Health checks
  • Troubleshooting
The Deployment controller manages pod creation, updates, and so on.
  • 5.1 Define a pod object:
cat > pod.yaml  << EOF  
apiVersion: v1  
kind: Pod  
metadata:  
  name: nginx-pod  
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14
EOF
  • 5.2 Create the pod resource
    kubectl create -f pod.yaml

  • 5.3 View the pod
    kubectl get pod [nginx-pod]

  • 5.4 View the pod's detailed description
    kubectl describe pod nginx-pod

  • 5.5 Update the resource
    kubectl apply -f pod.yaml

  • 5.6 Delete the resource
    Note: deleting via the file has the same effect as deleting by type and name directly.
    kubectl delete -f pod.yaml

6. Pod resource limits

  • cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

7. Pod scheduling constraints and restart policy

7.1 Scheduling constraints

Pod.spec.nodeName: forces the Pod to be scheduled onto the specified Node.
Pod.spec.nodeSelector: selects nodes via the label-selector mechanism.

  • Verification steps:
  • 1) On the master, assign a label to a node.
  • 2) Edit pod.yaml and add the label to the pod spec that will be created.
  • 3) pod.yaml then matches nodes carrying the specified label when allocating resources; if no selector is specified, the scheduler spreads pods across all nodes.
  • 4) As a result, the pod is created on the specified node.
  • Label the target node
kubectl  label node  192.168.10.22 env_role=dev
  • Verify the label
kubectl  describe  node  192.168.10.22
  • Configure pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod2
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  nodeSelector:
    env_role: dev
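The nodeSelector above picks nodes by label; to pin a pod to one specific node by name instead, Pod.spec.nodeName can be set directly. A sketch (the pod name is illustrative; the node name matches this cluster's IP-based node names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod3            # hypothetical name for illustration
spec:
  nodeName: 192.168.10.23     # placed directly onto this node, bypassing the scheduler
  containers:
  - name: nginx
    image: nginx:1.14
```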
  • Create the new pod
kubectl  create -f  pod.yaml
  • Check which node the new pod landed on
# kubectl   get pod  -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
nginx-deployment-5694557fbc-5jhxh   1/1     Running   1          26h   172.50.32.4   192.168.10.24   <none>           <none>
nginx-deployment-5694557fbc-bdtd4   1/1     Running   1          26h   172.50.36.2   192.168.10.23   <none>           <none>
nginx-deployment-5694557fbc-gkr9x   1/1     Running   1          26h   172.50.26.2   192.168.10.22   <none>           <none>
nginx-pod                           1/1     Running   0          26m   172.50.26.3   192.168.10.22   <none>           <none>
nginx-pod2                          1/1     Running   0          3s    172.50.26.4   192.168.10.22   <none>           <none>

As shown above, the label constraint took effect.

7.2 Restart policy

  • Three restart policies
    Always: always recreate the container when it stops; this is the default policy.
    OnFailure: restart the container only when it exits abnormally (non-zero exit code).
    Never: never restart the container after it terminates.

eg:

  • cat pod.yaml
apiVersion: v1 
kind: Pod
metadata:
  name: nginx-pod2
  labels: 
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  restartPolicy: OnFailure

8. Health checks

Kubernetes provides a Probe mechanism with two types:

  • livenessProbe
    If the check fails, the container is killed and handled according to the Pod's restartPolicy.

  • readinessProbe
    If the check fails, Kubernetes removes the Pod from the service endpoints.

A Probe supports three check methods:

  • httpGet
    Sends an HTTP request; a status code in the 200–399 range counts as success.

  • exec: runs a shell command; an exit code of 0 counts as success.

  • tcpSocket: attempts to open a TCP socket; success if the connection is established.
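The pod example below uses httpGet; the other two methods are declared similarly. Sketches with illustrative values (the file path is hypothetical):

```yaml
# exec probe: runs a command inside the container; exit code 0 = healthy
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]

# tcpSocket probe: healthy if a TCP connection to the port can be opened
livenessProbe:
  tcpSocket:
    port: 80
```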

eg:

  • cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  restartPolicy: OnFailure
  containers:
  - name: nginx
    image: nginx:1.14
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /index.html
        port: 80


With the configuration above, as soon as a check of the site's home page returns a status code outside the 200–399 range, the container is killed and a new one is started.

9. Service proxy modes and load balancing

9.1 Service

Proxy modes: iptables-based forwarding is currently the most widely used; kernel-level IPVS-based forwarding, available from version 1.8 onward, is expected to take over.

9.2 Load-balancing proxy

Service proxying

  • cat service.yaml
apiVersion: v1                      
kind: Service                       
metadata:                           
  name: my-service                  
spec:                               
  selector:                         
    app: MyApp                      
  ports:                            
  - name: http                      
    protocol: TCP                   
    port: 80                        
    targetPort: 80                  
  - name: https                     
    protocol: TCP                   
    port: 443                       
    targetPort: 443                 
  • Create and view the service
[root@k8s-master pod]# kubectl  create -f  service.yaml
[root@k8s-master pod]# kubectl  get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.10.10.1     <none>        443/TCP          31d
my-service      ClusterIP   10.10.10.234   <none>        80/TCP,443/TCP   14h
nginx-service   ClusterIP   10.10.10.61    <none>        88/TCP           42h
  • Edit the my-service service
    Modify the selector label inside it to change what the service proxies.

  • kubectl edit svc/my-service
    The content below is loaded automatically from the existing service; nothing needs to be added by hand, just modify it.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-10-24T10:50:44Z"
  name: my-service
  namespace: default
  resourceVersion: "270677"
  selfLink: /api/v1/namespaces/default/services/my-service
  uid: 21553546-f64c-11e9-b55f-000c2960f61c
spec:
  clusterIP: 10.10.10.234
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: nginx
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
  • View the backend endpoints the service proxies to
[root@k8s-master pod]# kubectl   get  endpoints  my-service
NAME         ENDPOINTS                                                  AGE
my-service   172.50.26.2:80,172.50.26.3:80,172.50.32.4:80 + 5 more...   15h
[root@k8s-master pod]# kubectl   get  ep  my-service
NAME         ENDPOINTS                                                  AGE
my-service   172.50.26.2:80,172.50.26.3:80,172.50.32.4:80 + 5 more...   15h
  • Access:
From any node, run:  curl -I 10.10.10.234:80

10. Service discovery and DNS

10.1 Service discovery

  • Service discovery supports two modes: Service environment variables and DNS.
  • Environment variables
When a pod is scheduled onto a Node, the kubelet adds a set of environment variables for each container, and programs inside the Pod's containers can use them to discover services.
The variable names follow this format:
{SVCNAME}_SERVICE_HOST
{SVCNAME}_SERVICE_PORT
The service name is converted to upper case and hyphens are converted to underscores.

Limitations:
1) Creation order matters: the Service must be created before the Pod, otherwise the environment variables are not injected into the Pod.
2) A Pod can only obtain environment variables for Services in its own Namespace.
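The upper-casing and hyphen-to-underscore conversion can be sketched in plain shell (the service name `my-service` is just an example):

```shell
# Derive the env var names Kubernetes would inject for a Service
# named "my-service": upper-case, hyphens become underscores.
svc_name="my-service"                                 # example service name
prefix=$(printf '%s' "$svc_name" | tr 'a-z-' 'A-Z_')
echo "${prefix}_SERVICE_HOST"   # MY_SERVICE_SERVICE_HOST
echo "${prefix}_SERVICE_PORT"   # MY_SERVICE_SERVICE_PORT
```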
  • DNS
The DNS service watches the Kubernetes API and creates a DNS record for every Service, so Pods can resolve a Service's address by its DNS name.

11. Publishing services with Service

11.1 Access method

Access the IP and port exposed by the Service directly.

11.2 Three service types

1) ClusterIP
Allocates an internal cluster IP address; the service is only reachable from inside the cluster. This is the default ServiceType.

2) NodePort
Allocates an internal cluster IP address and additionally opens a port on every node to expose the service, so it can be reached from outside the cluster.
Access address: <NodeIP>:<NodePort>

3) LoadBalancer
Allocates an internal cluster IP address and opens a port on every node to expose the service.
In addition, Kubernetes asks the underlying cloud platform for a load balancer and adds every Node ([NodeIP]:[NodePort]) to it as a backend.
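On a supported cloud platform, a LoadBalancer Service differs from the NodePort example below only in its type; a sketch (the service name is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb            # hypothetical name
spec:
  type: LoadBalancer        # the cloud provider provisions an external load balancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```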
  • eg:
Note the use of the type value.
# cat  nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

12. Publishing services with Ingress

  • Traffic first hits the Ingress, which then routes to a Service.
  • Namespace and ConfigMap configuration for the controller
# cat  configmap.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    myapp: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx

---
  • Role and permission (RBAC) configuration for the controller
# cat  rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    myapp: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-rolebinding
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrolebinding
  labels:
    myapp: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx    
---
  • Ingress controller configuration
  • Note: adjust the image as needed
# cat  ingress-controller.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      myapp: ingress-nginx
  template:
    metadata:
      labels:
        myapp: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          # image for the ingress controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---
  • The pod that the Ingress publishes and its routing rules, combined in one file:
# cat  ingress-pod.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-nginx
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp-nginx
          image: nginx:1.7.9
          ports:
            - name: http
              containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: myapp-nginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30000

---

At this point, once these files have been created, the application can be reached via any <NodeIP>:<NodePort>.
  • Configure Ingress domain-based access
# cat  ingress-nginx.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: nginx.k8s.com
    http:
      paths:
      - path:
        backend:
          serviceName: myapp-nginx
          servicePort: 80
---

This configuration enables access via the Ingress ---> Service path.
  • Apply all the ingress configuration files
cd  ingress ; kubectl  apply  -f .
  • Any of these addresses can be used
http://192.168.10.22:30000
http://192.168.10.23:30000
http://192.168.10.24:30000
  • Optionally, run a standalone nginx as a front-end proxy that calls the port exposed on each node.
  • That way, each node can be reached over the internal network.
# cat  /etc/nginx/conf.d/test.k8s.com.conf 
upstream k8s_nginx {
    server 192.168.10.22:30000 weight=2;
    server 192.168.10.23:30000 weight=2;
    server 192.168.10.24:30000 weight=2;
}

server {
    listen 80;
    server_name test.k8s.com;
    index       index.html index.htm index.php;
    access_log  /var/log/nginx/access.log main;
    error_log   /var/log/nginx/error.log; 
    location / {
        proxy_pass http://k8s_nginx;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
  • Then simply access the front-end domain
http://test.k8s.com

To be continued ^_^

