k8s core concepts: Controller, Service, Secret, ConfigMap
1. Controller
1. What is a Controller
A Controller is an object that manages and runs containers (Pods) on the cluster.
2. Relationship between Controller and Pod
Pods rely on controllers for operational tasks such as scaling and rolling upgrades. Pods and controllers are associated through labels: the controller's selector matches the labels carried by the Pods, as the sketch below shows.
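A minimal sketch of that association (the name web and the label app: web are illustrative): the Deployment's spec.selector must match the labels in its Pod template.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web        # the controller manages Pods carrying this label
  template:
    metadata:
      labels:
        app: web      # Pods created from this template get the label
    spec:
      containers:
      - name: nginx
        image: nginx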
3. ReplicationController and ReplicaSet
1. ReplicationController
After we define an RC and submit it to the k8s cluster, the ControllerManager component on the master node is notified; it periodically checks the Pods currently alive in the system and scales up or down whenever the count drops below or rises above the desired value.
2. ReplicaSet
Essentially no different from an RC. Because the RC resource shared its name with the ReplicationController module in the k8s code base, RC was upgraded to RS (an upgraded RC) after k8s v1.2. The difference from RC: RS supports set-based label selectors, while RC only supports equality-based label selectors (a comparison sketch follows the next paragraph).
The official recommendation is not to bypass RC or RS and create Pods directly; even for a single Pod replica it is strongly recommended to define it through an RC. Likewise, do not use RS directly; create the RS and Pods through a Deployment instead.
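A quick comparison of the two selector styles (schematic fragments, not complete manifests; the key tier and its values are illustrative):

# Equality-based selector (the only style RC supports)
selector:
  app: web

# Set-based selector (supported by ReplicaSet / Deployment)
selector:
  matchLabels:
    app: web
  matchExpressions:
  - {key: tier, operator: In, values: [frontend, backend]}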
4. Deployment controller
Deployment is a concept introduced in k8s 1.2 to better solve the Pod orchestration problem; internally a Deployment uses ReplicaSets.
1. Deploys stateless applications: all Pods are considered identical, there is no ordering requirement, it does not matter which node they run on, and they can be scaled out and in at will
2. Manages Pods and ReplicaSets
3. Deployment, rolling upgrades, etc.
4. Typical examples are web services, distributed services, etc.
5. Deploying a stateless application with a Deployment
1. Export the yml file and view its contents

[root@k8smaster1 ~]# cat web2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
2. Create the resource
[root@k8smaster1 ~]# kubectl apply -f web2.yml
deployment.apps/web created
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
web-5dcb957ccc-mjvjn   1/1     Running   0          9s    10.244.2.18   k8snode2   <none>           <none>
3. Expose the port
[root@k8smaster1 ~]# kubectl expose deployment web --port=80 --type=NodePort --target-port=80 --name=web3 -o yaml > web3.yml
[root@k8smaster1 ~]# cat web3.yml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-01-16T02:52:51Z"
  labels:
    app: web
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:externalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl
    operation: Update
    time: "2022-01-16T02:52:51Z"
  name: web3
  namespace: default
  resourceVersion: "1114657"
  selfLink: /api/v1/namespaces/default/services/web3
  uid: f7cc35a4-4a4e-403a-b890-bf13673792c9
spec:
  clusterIP: 10.101.240.157
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30445
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
[root@k8smaster1 ~]# kubectl apply -f web3.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/web3 configured
[root@k8smaster1 ~]# kubectl get pods,svc -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
pod/web-5dcb957ccc-mjvjn   1/1     Running   0          3m53s   10.244.2.18   k8snode2   <none>           <none>

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        7d    <none>
service/web3         NodePort    10.101.240.157   <none>        80:30445/TCP   35s   app=web
6. Upgrade, rollback, and dynamic scaling
(1) Check the current version
[root@k8smaster1 ~]# kubectl exec -it web-5dcb957ccc-mjvjn bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@web-5dcb957ccc-mjvjn:/# nginx -v
nginx version: nginx/1.21.5
(2) Upgrade the application
[root@k8smaster1 ~]# kubectl set image deployment web nginx=nginx:1.15
deployment.apps/web image updated
(3) Check the upgrade status: a new Pod was started for the upgrade, so the Pod's unique ID changed. This is essentially a rolling release; with multiple replicas, part of the Pods are stopped first and then upgraded, which will be tested later.
[root@k8smaster1 ~]# kubectl rollout status deployment web
deployment "web" successfully rolled out
[root@k8smaster1 ~]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
web-bbcf684cb-p9j2t   1/1     Running   0          114s
[root@k8smaster1 ~]# kubectl exec -it web-bbcf684cb-p9j2t bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@web-bbcf684cb-p9j2t:/# nginx -v
nginx version: nginx/1.15.12
There are two upgrade strategies: Recreate stops all old-version Pods first and only deploys the new version once they are gone; RollingUpdate is a rolling release — it stops part of the Pods, starts some new ones, removes the stopped ones, and then repeats this for the remainder.
[root@k8smaster01 ~]# kubectl explain deploy.spec.strategy.type
KIND:     Deployment
VERSION:  apps/v1

FIELD:    type <string>

DESCRIPTION:
     Type of deployment. Can be "Recreate" or "RollingUpdate". Default is RollingUpdate.
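The rolling behaviour can be tuned on the Deployment spec; a minimal sketch (the maxSurge/maxUnavailable values are illustrative):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra Pod above the desired replica count during the update
      maxUnavailable: 1    # at most 1 Pod may be unavailable during the update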
(4) View the rollout history and roll back to a specific revision
[root@k8smaster1 ~]# kubectl rollout history deployment web
deployment.apps/web
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
[root@k8smaster1 ~]# kubectl rollout undo deployment web --to-revision=1
deployment.apps/web rolled back
[root@k8smaster1 ~]# kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
web-5dcb957ccc-7hns9   1/1     Running   0          25s
[root@k8smaster1 ~]# kubectl exec -it web-5dcb957ccc-7hns9 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@web-5dcb957ccc-7hns9:/# nginx -v
nginx version: nginx/1.21.5
(5) Roll back to the previous revision: you can see that a new Pod is started first and the old one is then stopped, again like a rolling update
[root@k8smaster1 ~]# kubectl rollout undo deployment web
deployment.apps/web rolled back
[root@k8smaster1 ~]# kubectl get pods
NAME                   READY   STATUS        RESTARTS   AGE
web-5dcb957ccc-7hns9   1/1     Terminating   0          110s
web-bbcf684cb-hxhf7    1/1     Running       0          17s
[root@k8smaster1 ~]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
web-bbcf684cb-hxhf7   1/1     Running   0          23s
(6) Dynamic scaling
[root@k8smaster1 ~]# kubectl scale deployment web --replicas=10
deployment.apps/web scaled
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                  READY   STATUS              RESTARTS   AGE    IP            NODE       NOMINATED NODE   READINESS GATES
web-bbcf684cb-7lkcf   1/1     Running             0          8s     10.244.1.23   k8snode1   <none>           <none>
web-bbcf684cb-bdpck   0/1     ContainerCreating   0          8s     <none>        k8snode1   <none>           <none>
web-bbcf684cb-blqn8   0/1     ContainerCreating   0          8s     <none>        k8snode2   <none>           <none>
web-bbcf684cb-d22w9   0/1     ContainerCreating   0          8s     <none>        k8snode2   <none>           <none>
web-bbcf684cb-hxhf7   1/1     Running             0          119s   10.244.1.21   k8snode1   <none>           <none>
web-bbcf684cb-ls88v   0/1     ContainerCreating   0          8s     <none>        k8snode1   <none>           <none>
web-bbcf684cb-qnm98   0/1     ContainerCreating   0          8s     <none>        k8snode2   <none>           <none>
web-bbcf684cb-rswzl   0/1     ContainerCreating   0          8s     <none>        k8snode2   <none>           <none>
web-bbcf684cb-w9ctz   0/1     ContainerCreating   0          8s     <none>        k8snode2   <none>           <none>
web-bbcf684cb-wgwd5   1/1     Running             0          8s     10.244.1.22   k8snode1   <none>           <none>
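Besides manual scaling with kubectl scale, the replica count can also be adjusted automatically through a HorizontalPodAutoscaler. A minimal sketch (the thresholds are illustrative, and it assumes the metrics-server add-on is installed in the cluster):

kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80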
6. StatefulSet: deploying stateful applications
In a stateful application every Pod runs independently, and the start order and uniqueness of the Pods are preserved; each Pod has a unique network identifier and persistent storage; ordering matters, e.g. a MySQL master/slave setup; hostnames are fixed. Scaling and upgrades are likewise performed in order.
Prerequisite: a headless Service, i.e. a Service whose clusterIP is None.
1. View the description
[root@k8smaster1 ~]# kubectl explain statefulsets
KIND:     StatefulSet
VERSION:  apps/v1

DESCRIPTION:
     StatefulSet represents a set of pods with consistent identities.
     Identities are defined as:
      - Network: A single stable DNS and hostname.
      - Storage: As many VolumeClaims as requested.
     The StatefulSet guarantees that a given network identity will always map
     to the same storage identity.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>

   spec <Object>
     Spec defines the desired identities of pods in this set.

   status       <Object>
     Status is the current status of Pods in this StatefulSet. This data may be
     out of date by some window of time.
2. Write sts.yml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
  namespace: default
spec:
  serviceName: nginx
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
3. Create the resources:
[root@k8smaster1 ~]# kubectl apply -f sts.yml
service/nginx created
statefulset.apps/nginx-statefulset created
4. Inspect
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
nginx-statefulset-0   1/1     Running   0          68s   10.244.1.26   k8snode1   <none>           <none>
nginx-statefulset-1   1/1     Running   0          65s   10.244.2.25   k8snode2   <none>           <none>
nginx-statefulset-2   1/1     Running   0          59s   10.244.2.26   k8snode2   <none>           <none>
[root@k8smaster1 ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE    SELECTOR
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d2h   <none>
nginx        ClusterIP   None         <none>        80/TCP    74s    app=nginx
We can see:
Each Pod has a unique name, generated as the StatefulSet's own name + "-" + index, and there is a headless Service with a clusterIP of None.
Check the StatefulSet again and the hostnames (the hostnames are also fixed):
[root@k8smaster1 ~]# kubectl get statefulsets -o wide
NAME                READY   AGE     CONTAINERS   IMAGES
nginx-statefulset   3/3     9m35s   nginx        nginx:latest
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
nginx-statefulset-0   1/1     Running   0          9m48s   10.244.1.26   k8snode1   <none>           <none>
nginx-statefulset-1   1/1     Running   0          9m45s   10.244.2.25   k8snode2   <none>           <none>
nginx-statefulset-2   1/1     Running   0          9m39s   10.244.2.26   k8snode2   <none>           <none>
[root@k8smaster1 ~]# kubectl exec nginx-statefulset-0 -- hostname
nginx-statefulset-0
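The stable network identity comes from the headless Service: each Pod gets a DNS record of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local. A quick check, assuming a busybox image is available (the temporary Pod name dns-test is illustrative):

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup nginx-statefulset-0.nginx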
7. DaemonSet: deploying daemon processes
A DaemonSet guarantees that one container replica runs on every node; it is commonly used to deploy cluster-wide logging, monitoring, or other system-management applications. A newly joined node likewise gets one of these Pods.
1. Create the ds.yaml file with the following content:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-test
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: logs
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: varlog
          mountPath: /tmp/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
2. Create the resource
[root@k8smaster1 ~]# kubectl apply -f ds.yaml
daemonset.apps/ds-test created
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
ds-test-hgpcn   1/1     Running   0          14s   10.244.2.27   k8snode2   <none>           <none>
ds-test-zphv6   1/1     Running   0          14s   10.244.1.27   k8snode1   <none>           <none>
You can see that each node runs exactly one Pod.
3. View detailed information

[root@k8smaster1 log]# kubectl describe pod ds-test-zphv6
Name:         ds-test-zphv6
Namespace:    default
Priority:     0
Node:         k8snode1/192.168.13.104
Start Time:   Sun, 16 Jan 2022 00:44:02 -0500
Labels:       app=filebeat
              controller-revision-hash=9fbd55487
              pod-template-generation=1
Annotations:  <none>
Status:       Running
IP:           10.244.1.27
IPs:
  IP:           10.244.1.27
Controlled By:  DaemonSet/ds-test
Containers:
  logs:
    Container ID:   docker://c5d7d5b970210a2c2a03fc264f8e1e77b95ef89d3095467fa54300c2305f663c
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 16 Jan 2022 00:44:04 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/log from varlog (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5r9hq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:
  default-token-5r9hq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5r9hq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m3s   default-scheduler  Successfully assigned default/ds-test-zphv6 to k8snode1
  Normal  Pulling    3m49s  kubelet, k8snode1  Pulling image "nginx"
  Normal  Pulled     3m48s  kubelet, k8snode1  Successfully pulled image "nginx"
  Normal  Created    3m48s  kubelet, k8snode1  Created container logs
  Normal  Started    3m48s  kubelet, k8snode1  Started container logs
4. Check the directory mount relationship:
[root@k8smaster1 log]# kubectl exec -it ds-test-zphv6 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@ds-test-zphv6:/# ls /tmp/log/
anaconda cron-20211222 maillog-20211222 sa tallylog vmware-vmsvc-root.3.log audit cron-20211226 maillog-20211226 samba tuned vmware-vmsvc-root.4.log boot.log cron-20220102 maillog-20220102 secure vmware-network.1.log vmware-vmsvc-root.5.log boot.log-20211228 cron-20220109 maillog-20220109 secure-20211222 vmware-network.2.log vmware-vmsvc-root.6.log boot.log-20211231 cups messages secure-20211226 vmware-network.3.log vmware-vmsvc-root.log boot.log-20220103 dmesg messages-20211222 secure-20220102 vmware-network.4.log vmware-vmtoolsd-root.log boot.log-20220105 dmesg.old messages-20211226 secure-20220109 vmware-network.5.log wtmp boot.log-20220107 firewalld messages-20220102 speech-dispatcher vmware-network.6.log yum.log boot.log-20220108 gdm messages-20220109 spooler vmware-network.7.log yum.log-20211224 boot.log-20220109 glusterfs ntpstats spooler-20211222 vmware-network.8.log yum.log-20220101 btmp grubby pluto spooler-20211226 vmware-network.9.log btmp-20220101 grubby_prune_debug pods spooler-20220102 vmware-network.log chrony lastlog ppp spooler-20220109 vmware-vgauthsvc.log.0 containers libvirt qemu-ga sssd vmware-vmsvc-root.1.log cron maillog rhsm swtpm vmware-vmsvc-root.2.log
root@ds-test-zphv6:/# exit
exit
[root@k8smaster1 log]# ls /var/log/
anaconda cron-20211222 maillog-20211222 sa tallylog vmware-vmsvc-root.3.log audit cron-20211226 maillog-20211226 samba tuned vmware-vmsvc-root.4.log boot.log cron-20220102 maillog-20220102 secure vmware-network.1.log vmware-vmsvc-root.5.log boot.log-20211222 cron-20220109 maillog-20220109 secure-20211222 vmware-network.2.log vmware-vmsvc-root.6.log boot.log-20211223 cups messages secure-20211226 vmware-network.3.log vmware-vmsvc-root.log boot.log-20211228 dmesg messages-20211222 secure-20220102 vmware-network.4.log vmware-vmtoolsd-root.log boot.log-20211231 dmesg.old messages-20211226 secure-20220109 vmware-network.5.log wtmp boot.log-20220103 firewalld messages-20220102 speech-dispatcher vmware-network.6.log yum.log boot.log-20220105 gdm messages-20220109 spooler vmware-network.7.log yum.log-20211224 boot.log-20220108 glusterfs ntpstats spooler-20211222 vmware-network.8.log yum.log-20220101 btmp grubby pluto spooler-20211226 vmware-network.9.log btmp-20220101 grubby_prune_debug pods spooler-20220102 vmware-network.log chrony lastlog ppp spooler-20220109 vmware-vgauthsvc.log.0 containers libvirt qemu-ga sssd vmware-vmsvc-root.1.log cron maillog rhsm swtpm vmware-vmsvc-root.2.log
8. Job: one-off tasks
A Job handles short-lived one-off batch tasks, i.e. tasks that run only once; it guarantees that one or more Pods of the batch task terminate successfully.
1. Create job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
2. Create the resource
[root@k8smaster1 ~]# kubectl create -f job.yaml
job.batch/pi created
[root@k8smaster1 ~]# kubectl get jobs
NAME   COMPLETIONS   DURATION   AGE
pi     1/1           96s        14m
[root@k8smaster1 ~]# kubectl get pods
NAME       READY   STATUS      RESTARTS   AGE
pi-ppc7n   0/1     Completed   0          44m
[root@k8smaster1 ~]# kubectl logs pi-ppc7n
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
9. CronJob: creating a periodic scheduled task
1. Create the yml, cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
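The schedule field uses standard cron syntax with five fields: minute, hour, day of month, month, day of week. A few illustrative values:

*/1 * * * *    # every minute (the value used above)
0 2 * * *      # every day at 02:00
0 */6 * * *    # every 6 hours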
2. Clean up the previous Job, then create the CronJob (e.g. with kubectl apply -f cronjob.yaml; only the cleanup output was captured here)
[root@k8smaster1 ~]# kubectl delete -f job.yaml
job.batch "pi" deleted
3. View the logs
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                     READY   STATUS      RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
hello-1642318260-c58sn   0/1     Completed   0          2m37s   10.244.1.43   k8snode1   <none>           <none>
hello-1642318320-5r8kq   0/1     Completed   0          97s     10.244.2.35   k8snode2   <none>           <none>
hello-1642318380-kxw7q   0/1     Completed   0          37s     10.244.1.45   k8snode1   <none>           <none>
[root@k8smaster1 ~]# kubectl logs hello-1642318320-5r8kq
Sun Jan 16 07:32:12 UTC 2022
Hello from the Kubernetes cluster
2. Service
A Service prevents Pods from "getting lost" by providing service discovery, similar to a registry in a microservice architecture. It defines an access policy for a group of Pods: it gives a group of containers providing the same functionality a single, unified entry address and load-balances requests across the backend containers.
A Service controls its Pods through a selector: the association is built from labels and the selector, and the Service then load-balances across those Pods.
1. Common Service types
ClusterIP: for access inside the cluster (this is also the default)
NodePort: exposes a port to the outside
LoadBalancer: for external access to the application, typically on public cloud
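ClusterIP and NodePort are both demonstrated in this note; for completeness, a minimal LoadBalancer sketch (the Service only gets an external IP when the cluster runs on a cloud provider that can provision load balancers; the names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80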
2. Tests:
(1) NodePort has already been tested above
(2) Test using ClusterIP
1》 Create service.yml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: web
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
status:
  loadBalancer: {}
2》 Create and view the svc
[root@k8smaster1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   7d6h
web          ClusterIP   10.108.161.219   <none>        80/TCP    6s
3》 Start an nginx
[root@k8smaster1 ~]# kubectl create deployment web --image=nginx
deployment.apps/web created
[root@k8smaster1 ~]# kubectl get pods
NAME                   READY   STATUS              RESTARTS   AGE
web-5dcb957ccc-wf2kt   0/1     ContainerCreating   0          8s
[root@k8smaster1 ~]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
web-5dcb957ccc-wf2kt   1/1     Running   0          2m18s   10.244.2.97   k8snode2   <none>           <none>
4》 From a cluster node, access port 80 through the Service's virtual IP (ClusterIP)
[root@k8snode1 ~]# curl 10.105.134.45
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
3. Secret
Stores encoded sensitive data in etcd and lets Pod containers access it, for example by mounting it as a volume. The typical use case is credentials.
Secrets solve the configuration problem for sensitive data such as passwords, tokens, and keys, without exposing that data in the image or in the Pod spec. A Secret can be consumed as a volume or as environment variables.
There are three types of Secret:
- Service Account: used to access the Kubernetes API; created automatically by Kubernetes and automatically mounted into Pods under /run/secrets/kubernetes.io/serviceaccount;
- Opaque: a base64-encoded Secret, used to store passwords, keys, etc.;
- kubernetes.io/dockerconfigjson: used to store authentication information for a private docker registry.
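The first two types are tested below. For the dockerconfigjson type, a minimal sketch of creating and using such a Secret (the registry address, credentials, and the Secret name regcred are illustrative):

kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword

# Reference it from a Pod spec so the kubelet can pull from the private registry:
#   spec:
#     imagePullSecrets:
#     - name: regcred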
Tests follow:
1. ServiceAccount
View the one created by default:
[root@k8smaster1 ~]# kubectl create deployment web --image=nginx
deployment.apps/web created
[root@k8smaster1 ~]# kubectl get pods
NAME                   READY   STATUS              RESTARTS   AGE
web-5dcb957ccc-sdhjk   0/1     ContainerCreating   0          5s
[root@k8smaster1 ~]# kubectl exec -it web-5dcb957ccc-sdhjk -- bash
root@web-5dcb957ccc-sdhjk:/# ls /run/secrets/kubernetes.io/serviceaccount
ca.crt  namespace  token
2. Opaque
1. Generate the base64-encoded strings
[root@k8snode1 ~]# echo -n "admin" | base64
YWRtaW4=
[root@k8snode1 ~]# echo -n "1f2d1e2e67df" | base64
MWYyZDFlMmU2N2Rm
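To verify a value (or read one back out of a Secret), the encoding can be reversed with base64 -d:

echo "YWRtaW4=" | base64 -d    # prints: admin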
2. Create the secrets.yml file
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: MWYyZDFlMmU2N2Rm
  username: YWRtaW4=
3. Create the Secret
[root@k8smaster1 ~]# kubectl create -f secrets.yml
secret/mysecret created
[root@k8smaster1 ~]# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-5r9hq   kubernetes.io/service-account-token   3      7d6h
mysecret              Opaque                                2      10s
4. Usage
1》 As environment variables
Create secret-var.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
Create it and check:
[root@k8smaster1 ~]# kubectl delete pods --all          # delete all pods
pod "web-5dcb957ccc-wf2kt" deleted
[root@k8smaster1 ~]# kubectl delete deployment --all
deployment.apps "web" deleted
[root@k8smaster1 ~]# kubectl apply -f secret-var.yaml   # create
pod/mypod created
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
mypod   1/1     Running   0          11s   10.244.2.98   k8snode2   <none>           <none>
[root@k8smaster1 ~]# kubectl exec -it mypod -- bash     # enter the pod's first container
root@mypod:/# echo $SECRET_USERNAME                     # check the environment variables
admin
root@mypod:/# echo $SECRET_PASSWORD
1f2d1e2e67df
root@mypod:/# exit
exit
2》 Consuming it as a volume: the Secret's keys are mounted as files under a chosen directory, where the decoded values can then be read
Create secret-vol.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
Create and check:
[root@k8smaster1 ~]# kubectl apply -f secret-vol.yaml
pod/mypod created
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
mypod   1/1     Running   0          6s    10.244.1.82   k8snode1   <none>           <none>
[root@k8smaster1 ~]# kubectl exec -it mypod -- bash
root@mypod:/# ls /etc/foo/
password  username
root@mypod:/# cd /etc/foo/
root@mypod:/etc/foo# cat username
admin
4. ConfigMap
A ConfigMap stores configuration data as key-value pairs; it can hold individual properties or whole configuration files. ConfigMaps are very similar to Secrets, but are more convenient for strings that do not contain sensitive information.
1. Creating a ConfigMap
You can create a ConfigMap with kubectl create configmap from files, directories, or key-value strings.
(1) Create a ConfigMap from key-value strings
[root@k8smaster1 ~]# kubectl create configmap special-config --from-literal=special.how=very
configmap/special-config created
[root@k8smaster1 ~]# kubectl get configmap special-config -o go-template='{{.data}}'
map[special.how:very]
Multiple key-value pairs can be supplied like this
kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
(2) Create from a configuration file
[root@k8smaster1 ~]# echo -e "a=b\nc=d" | tee config.env
a=b
c=d
[root@k8smaster1 ~]# kubectl create configmap special-config2 --from-env-file=config.env
configmap/special-config2 created
[root@k8smaster1 ~]# kubectl describe cm special-config2
Name:         special-config2
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
a:
----
b
c:
----
d
Events:  <none>
[root@k8smaster1 ~]# kubectl get configmap special-config2 -o go-template='{{.data}}'
map[a:b c:d]
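Creating from a file is similar; with --from-file the file name becomes the key and the file contents become the value. A minimal sketch, reusing the config.env file from above (the ConfigMap name special-config3 is illustrative):

kubectl create configmap special-config3 --from-file=config.env
# the resulting ConfigMap has one key, "config.env", whose value is the whole file content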
(3) Create from a yml file
1》 Create myconfig.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
  namespace: default
data:
  special.level: info
  special.type: hello
2》 Create
[root@k8smaster1 ~]# kubectl apply -f myconfig.yaml
configmap/myconfig created
[root@k8smaster1 ~]# kubectl get cm
NAME              DATA   AGE
myconfig          2      4s
special-config    1      5m27s
special-config2   2      3m50s
2. Using a ConfigMap
(1) Use as environment variables
Create config-var.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: busybox
    image: busybox
    command: [ "/bin/sh", "-c", "echo $(LEVEL) $(TYPE)" ]
    env:
    - name: LEVEL
      valueFrom:
        configMapKeyRef:
          name: myconfig
          key: special.level
    - name: TYPE
      valueFrom:
        configMapKeyRef:
          name: myconfig
          key: special.type
  restartPolicy: Never
Create and check:
[root@k8smaster1 ~]# kubectl apply -f config-var.yaml
pod/mypod created
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME    READY   STATUS      RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
mypod   0/1     Completed   0          7s    10.244.2.99   k8snode2   <none>           <none>
[root@k8smaster1 ~]# kubectl logs mypod
info hello
(2) Use via a volume mount
The ConfigMap is mounted directly under the Pod's /etc/config directory; each key-value pair becomes a file whose name is the key and whose content is the value
Create cm.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx
    command: [ "/bin/sh","-c","cat /etc/config/special.level" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: special-config2
  restartPolicy: Never
Create it and view the log
[root@k8smaster1 ~]# kubectl apply -f cm.yaml
pod/mypod created
[root@k8smaster1 ~]# kubectl logs mypod
b
Supplement: about Service ports
targetPort - the port the container listens on.
port - the port exposed by the Service, reachable inside the cluster.
nodePort - the port the cluster exposes to the outside world, allowing external clients to reach the Pod/Container.
1. An example svc, its yaml is as follows
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubesphere.io/alias-name: mynginx-svc
    kubesphere.io/creator: admin
    kubesphere.io/description: mynginx-svc
  creationTimestamp: "2022-02-11T07:38:59Z"
  labels:
    app: mynginx-svc
  name: mynginx-svc
  namespace: default
  resourceVersion: "710977"
  selfLink: /api/v1/namespaces/default/services/mynginx-svc
  uid: faf5c2bc-b358-43d5-b938-60085d672371
spec:
  clusterIP: 10.1.222.4
  clusterIPs:
  - 10.1.222.4
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: port-http
    nodePort: 31689
    port: 81
    protocol: TCP
    targetPort: 80
  selector:
    app: mynginx
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
2. The port rules are as follows:
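Based on the values in the yaml above, traffic flows like this (the node IP is a placeholder; a sketch, not captured output):

# external client -> NodeIP:31689 (nodePort) -> Service 10.1.222.4:81 (port) -> container port 80 (targetPort)
curl http://<NodeIP>:31689      # from outside the cluster
curl http://10.1.222.4:81       # from inside the cluster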