Kubernetes (1): Getting Started with kubectl


Following the lab-environment setup in the previous chapter, this chapter introduces the day-to-day k8s commands and how to use them.

First we will get familiar with the kubectl command set. Then we will create a Pod and a Deployment from the command line, and add a Service to the Deployment to provide access from inside and outside the cluster. Later we will cover scaling up, scaling down, upgrading, and rolling back, which shows how convenient it is to manage containerized services with k8s.

Note: the iptables section requires some networking background; as a developer a rough understanding is enough, so it is not explained in depth here.

1 Kubectl

kubectl is the command-line interface for running k8s commands against the apiserver. See: https://kubernetes.io/docs/reference/kubectl/overview/

This article covers the basic kubectl syntax and illustrates it with simple examples. To learn more about each command and its many options, consult the kubectl documentation.

Below is a summary of the most common kubectl commands:

Basic Commands (Beginner):
  create         Create a new k8s resource from a file or from stdin
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on a k8s object

Basic Commands (Intermediate):
  explain        Show documentation for a k8s resource
  get            Display one or many resources
  edit           Edit a resource on the server
  delete         Delete resources by filename, stdin, resource and name, or by resource and label selector

Deploy Commands:
  rollout        Manage the rollout of a k8s resource
  scale          Set a new size for a Deployment, ReplicaSet, ReplicationController, or Job
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate    Modify certificate resources
  cluster-info   Display cluster info
  top            Display resource (CPU/Memory/Storage) usage
  cordon         Mark a node as unschedulable
  uncordon       Mark a node as schedulable
  drain          Drain a node in preparation for maintenance
  taint          Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe       Show details of a specific resource or group of resources
  logs           Print the logs for a container in a Pod
  attach         Attach to a running container
  exec           Execute a command in a container
  port-forward   Forward one or more local ports to a Pod
  proxy          Run a proxy to the Kubernetes API server
  cp             Copy files and directories to and from containers
  auth           Inspect authorization

Advanced Commands:
  diff           Diff the live version against a would-be applied version
  apply          Apply a configuration to a resource by filename or stdin
  patch          Update fields of a resource by patching its attributes
  replace        Replace a resource by filename or stdin
  wait           Experimental: Wait for one or more resources to reach a specific condition
  convert        Convert config files between different API versions

Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  completion     Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-resources  Print the supported API resources on the server
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  plugin         Provides utilities for interacting with plugins
  version        Print the client and server version information

Usage: kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

 

Let's get a concrete feel for these commands through a few examples.

For example, use kubectl describe node to display resource information for the k8smaster node:

[root@k8smaster ~]# kubectl describe node k8smaster
Name:               k8smaster
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8smaster
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"76:80:68:34:94:6c"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 172.16.0.11
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 02 Jan 2019 13:30:57 +0100
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:30:51 +0100   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:30:51 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:30:51 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:30:51 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:57:40 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.16.0.11
  Hostname:    k8smaster
Capacity:
 cpu:                2
 ephemeral-storage:  17394Mi
 hugepages-2Mi:      0
 memory:             3861508Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  16415037823
 hugepages-2Mi:      0
 memory:             3759108Ki
 pods:               110
System Info:
 Machine ID:                 8d2b3fec09894a6eb6e69d45ce7a9996
 System UUID:                34014D56-A1A0-0F33-A35B-56A3947191DF
 Boot ID:                    e0019567-d852-4000-991c-51e2a1061863
 Kernel Version:             3.10.0-957.1.3.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.9.0
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.0.0/24
Non-terminated Pods:         (8 in total)
  Namespace                  Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                 ------------  ----------  ---------------  -------------  ---
  kube-system                coredns-78fcdf6894-5v9g9             100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     23h
  kube-system                coredns-78fcdf6894-lpwfw             100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     23h
  kube-system                etcd-k8smaster                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22h
  kube-system                kube-apiserver-k8smaster             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22h
  kube-system                kube-controller-manager-k8smaster    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22h
  kube-system                kube-flannel-ds-amd64-n5j7l          100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      22h
  kube-system                kube-proxy-rjssr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23h
  kube-system                kube-scheduler-k8smaster             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (42%)  100m (5%)
  memory             190Mi (5%)  390Mi (10%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From                   Message
  ----    ------                   ----                   ----                   -------
  Normal  Starting                 4m58s                  kubelet, k8smaster     Starting kubelet.
  Normal  NodeAllocatableEnforced  4m58s                  kubelet, k8smaster     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     4m57s (x5 over 4m58s)  kubelet, k8smaster     Node k8smaster status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientDisk    4m55s (x6 over 4m58s)  kubelet, k8smaster     Node k8smaster status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  4m55s (x6 over 4m58s)  kubelet, k8smaster     Node k8smaster status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m55s (x6 over 4m58s)  kubelet, k8smaster     Node k8smaster status is now: NodeHasNoDiskPressure
  Normal  Starting                 4m21s                  kube-proxy, k8smaster  Starting kube-proxy.

 

2 Deploying/Creating a New Pod

Deploy a new Pod and make it reachable inside the cluster.

######################################################
# Start a new image and give it highly available     #
# replicas; create a deployment or job to manage     #
# the created containers                             #
######################################################

You need a controller object such as a Deployment, ReplicationController, or ReplicaSet; these controllers replicate Pods so the service stays highly available.

The following example creates a Deployment that automatically pulls the nginx:1.14-alpine image from Docker Hub, exposes port 80 to the cluster, and creates 5 replicas:

[root@k8smaster ~]# kubectl run nginx --image=nginx:1.14-alpine --port 80 --replicas=5
[root@k8smaster ~]# kubectl get deployment
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   5         5         5            5           2h
[root@k8smaster ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE
nginx-79976cbb47-sg2t9   1/1     Running   1          2h    10.244.1.4   k8snode1
nginx-79976cbb47-tl5r7   1/1     Running   1          2h    10.244.2.7   k8snode2
nginx-79976cbb47-vkzww   1/1     Running   1          2h    10.244.2.6   k8snode2
nginx-79976cbb47-wvvtq   1/1     Running   1          2h    10.244.1.5   k8snode1
nginx-79976cbb47-x4wjt   1/1     Running   1          2h    10.244.2.5   k8snode2
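The same Deployment can also be created declaratively from a manifest instead of the imperative kubectl run command above. A sketch of the equivalent YAML (the filename is arbitrary; apply it with kubectl apply -f nginx-deployment.yaml):

```yaml
# nginx-deployment.yaml -- declarative equivalent of the kubectl run command above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  replicas: 5                  # same as --replicas=5
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine   # same as --image=nginx:1.14-alpine
        ports:
        - containerPort: 80        # same as --port 80
```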


[root@k8smaster ~]# curl 10.244.1.4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Use the ifconfig command to look at the node's network interfaces:

[root@k8smaster ~]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::3cd7:71ff:fee7:b4d  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:00:01  txqueuelen 1000  (Ethernet)
        RX packets 2778  bytes 180669 (176.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2841  bytes 1052175 (1.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:8b:97:c3:4b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.11  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::20c:29ff:fe71:91df  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:71:91:df  txqueuelen 1000  (Ethernet)
        RX packets 6403  bytes 688725 (672.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7824  bytes 7876155 (7.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::7480:68ff:fe34:946c  prefixlen 64  scopeid 0x20<link>
        ether 76:80:68:34:94:6c  txqueuelen 0  (Ethernet)
        RX packets 5  bytes 1118 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 446 (446.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0
...

From the flannel and cni0 interfaces we can see that Pod IPs on this node are allocated from the 10.244.0.0/24 subnet.

 

3 Exposing a Service for Internal Access: Creating a Service Object

You usually want a Service object in front of a Deployment, because the Pods under a Deployment may terminate, whether deliberately or through failure, and the replacement Pods are assigned new Pod IP addresses. With a Service in place, clients only need to talk to the Service's stable virtual IP (ClusterIP) to reach the application, without caring what the new Pods' IPs are.

kubectl expose gives the Deployment a stable IP to be reached at.

Possible resources include (case insensitive): 

pod (po), service (svc), replicationcontroller (rc), deployment (deploy), replicaset (rs)
Examples:
  # Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000.
  kubectl expose rc nginx --port=80 --target-port=8000

port -> the Service's own port, used by clients to access the service

target-port -> the container port that traffic is forwarded to
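The port/target-port distinction is easiest to see in a Service manifest. A sketch of roughly what kubectl expose generates under the hood (values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    run: nginx          # route traffic to Pods carrying this label
  ports:
  - protocol: TCP
    port: 80            # the Service's own port, used by clients
    targetPort: 80      # the container port traffic is forwarded to
```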

 

Next we do a dry run of creating a new Service named "nginx-service"; "--dry-run=true" simulates the operation without creating anything:

kubectl expose deployment nginx --name=nginx-service --port=80 --target-port=80 --protocol=TCP --dry-run=true

deployment nginx -> refers to the Deployment we created earlier (with: kubectl run nginx --image=nginx:1.14-alpine --port 80 --replicas=5)

If the target Deployment does not exist, you get the error below; here, a Deployment named nginx-deploy has never been created:

[root@k8smaster ~]# kubectl expose deployment nginx-deploy --name=mynginx --port=80 --target-port=80 --protocol=TCP --dry-run=true
Error from server (NotFound): deployments.extensions "nginx-deploy" not found

Once the nginx:1.14-alpine Pods have been created, you can see the image on the nodes with docker images:

[root@k8snode1 ~]# docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             SIZE
nginx                         1.14-alpine         c5b6f731fbc0        13 days ago         17.7MB
k8s.gcr.io/kube-proxy-amd64   v1.11.1             d5c25579d0ff        5 months ago        97.8MB
quay.io/coreos/flannel        v0.10.0-amd64       f0fad859c909        11 months ago       44.6MB
k8s.gcr.io/pause              3.1                 da86e6ba6ca1        12 months ago       742kB

Now we create a real Service for the Deployment (the same command without --dry-run). First run the commands below to check that the Deployment and its Pods are healthy, then expose the service:

[root@k8smaster ~]# kubectl get deployment -o wide --show-labels
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR    LABELS
nginx   5         5         5            5           1m    nginx        nginx:1.14-alpine   run=nginx   run=nginx

[root@k8smaster ~]# kubectl get pods -o wide --show-labels
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE       LABELS
nginx-79976cbb47-2xrhk   1/1     Running   0          1m    10.244.1.7    k8snode1   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-8dqnk   1/1     Running   0          1m    10.244.2.10   k8snode2   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-gprlc   1/1     Running   0          1m    10.244.2.9    k8snode2   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-p247g   1/1     Running   0          1m    10.244.2.8    k8snode2   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-ppbqv   1/1     Running   0          1m    10.244.1.6    k8snode1   pod-template-hash=3553276603,run=nginx

[root@k8smaster ~]# kubectl expose deployment nginx --name=nginx-service --port=80 --target-port=80 --protocol=TCP
service/nginx-service exposed

[root@k8smaster ~]# kubectl get svc -o wide --show-labels
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR    LABELS
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP   1d    <none>      component=apiserver,provider=kubernetes
nginx-service   ClusterIP   10.109.139.168   <none>        80/TCP    40s   run=nginx   run=nginx

[root@k8smaster ~]# kubectl describe svc nginx
Name:              nginx-service
Namespace:         default
Labels:            run=nginx
Annotations:       <none>
Selector:          run=nginx
Type:              ClusterIP
IP:                10.109.139.168
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.6:80,10.244.1.7:80,10.244.2.10:80 + 2 more...
Session Affinity:  None
Events:            <none>

[root@k8smaster ~]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::3cd7:71ff:fee7:b4d  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:00:01  txqueuelen 1000  (Ethernet)
        RX packets 22161  bytes 1423430 (1.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22547  bytes 8296119 (7.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:8b:97:c3:4b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.11  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::20c:29ff:fe71:91df  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:71:91:df  txqueuelen 1000  (Ethernet)
        RX packets 41975  bytes 5478270 (5.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 53008  bytes 54186252 (51.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::7480:68ff:fe34:946c  prefixlen 64  scopeid 0x20<link>
        ether 76:80:68:34:94:6c  txqueuelen 0  (Ethernet)
        RX packets 10  bytes 2236 (2.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 894 (894.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0
...

In the previous article we initialized the cluster with:

kubeadm init --ignore-preflight-errors all --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

We explicitly defined the Service IP allocation range with --service-cidr=10.96.0.0/12, and indeed the virtual IP of our nginx-service falls within that range:

[root@k8smaster ~]# kubectl get svc --show-labels
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   LABELS
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP   1d    component=apiserver,provider=kubernetes
nginx-service   ClusterIP   10.109.139.168   <none>        80/TCP    7m    run=nginx
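As a quick sanity check (not part of the original workflow), Python's standard ipaddress module can confirm that the assigned ClusterIP really lies inside the --service-cidr range:

```python
import ipaddress

# Values taken from the kubeadm init flag and the kubectl get svc output above
service_cidr = ipaddress.ip_network("10.96.0.0/12")   # --service-cidr
cluster_ip = ipaddress.ip_address("10.109.139.168")   # ClusterIP of nginx-service

# 10.96.0.0/12 spans 10.96.0.0 - 10.111.255.255, so this prints True
print(cluster_ip in service_cidr)
```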
 
        

* We did not define an external IP, so EXTERNAL-IP shows <none>.

 

4 Scaling a Deployment Up and Down

The point of using a Deployment is to take advantage of k8s's built-in automation for managing containers. You can also control scaling manually with the kubectl scale command:

[root@k8smaster ~]# kubectl get deployment -o wide --show-labels
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR    LABELS
nginx   5         5         5            5           14m   nginx        nginx:1.14-alpine   run=nginx   run=nginx

[root@k8smaster ~]# kubectl scale --replicas=3 deployment nginx
deployment.extensions/nginx scaled

[root@k8smaster ~]# kubectl get deployment -o wide --show-labels
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR    LABELS
nginx   3         3         3            3           15m   nginx        nginx:1.14-alpine   run=nginx   run=nginx

[root@k8smaster ~]# kubectl get pods -o wide --show-labels
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE       LABELS
nginx-79976cbb47-8dqnk   1/1     Running   0          18m   10.244.2.10   k8snode2   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-p247g   1/1     Running   0          18m   10.244.2.8    k8snode2   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-ppbqv   1/1     Running   0          18m   10.244.1.6    k8snode1   pod-template-hash=3553276603,run=nginx

That easily brings the number of replicas down from 5 to 3.
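If the Deployment is managed from a manifest, the same change can be made declaratively: edit the replica count in the file and re-apply it. A sketch of the relevant fragment:

```yaml
# fragment of a Deployment manifest; re-apply with: kubectl apply -f <file>
spec:
  replicas: 3   # was 5; kubectl apply reconciles the running count down to 3
```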

 

5 Rolling Update

This example manually updates the nginx image from 1.14-alpine to 1.15-alpine:

[root@k8smaster ~]# kubectl set image deployment nginx nginx=nginx:1.15-alpine
deployment.extensions/nginx image updated
[root@k8smaster ~]# kubectl rollout status deployment nginx
Waiting for deployment "nginx" rollout to finish: 1 out of 3 new replicas have been updated...

Use the describe command to inspect the details:

[root@k8smaster ~]# kubectl describe pod nginx-79976cbb47-8dqnk
Name:               nginx-79976cbb47-8dqnk
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8snode2/172.16.0.13
Start Time:         Thu, 03 Jan 2019 14:34:16 +0100
Labels:             pod-template-hash=3553276603
                    run=nginx
Annotations:        <none>
Status:             Running
IP:                 10.244.2.10
Controlled By:      ReplicaSet/nginx-79976cbb47
Containers:
  nginx:
    Container ID:   docker://151150f6350d891c6504f5edb17b03da6b213d6ad207188301ce3eab6ff5264a
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:e3f77f7f4a6bb5e7820e013fa60b96602b34f5704e796cfd94b561ae73adcf96
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 03 Jan 2019 14:34:17 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rxs5t (ro)
Conditions:
  Type              Status
  Initialized       True 
........

 

If you are unhappy with the upgrade, you can roll back to the previous version with kubectl rollout undo:

kubectl rollout undo deployment nginx
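How aggressively a rolling update replaces Pods is controlled by the update strategy in the Deployment spec. A sketch of the relevant fragment (these values are also the defaults when the block is omitted):

```yaml
# fragment of a Deployment manifest
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # how many extra Pods may exist above the desired count
      maxUnavailable: 25%  # how many Pods may be unavailable during the update
```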

 

6 Iptables dump

[root@k8smaster ~]# iptables -vnL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target            prot opt in      out      source           destination
   27  1748 KUBE-SERVICES     all  --  *       *        0.0.0.0/0        0.0.0.0/0        /* kubernetes service portals */
  167 10148 DOCKER            all  --  *       *        0.0.0.0/0        0.0.0.0/0        ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target            prot opt in      out      source           destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target            prot opt in      out      source           destination
 6328  383K KUBE-SERVICES     all  --  *       *        0.0.0.0/0        0.0.0.0/0        /* kubernetes service portals */
 1037 62220 DOCKER            all  --  *       *        0.0.0.0/0        !127.0.0.0/8     ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target            prot opt in      out      source           destination
 6529  395K KUBE-POSTROUTING  all  --  *       *        0.0.0.0/0        0.0.0.0/0        /* kubernetes postrouting rules */
    0     0 MASQUERADE        all  --  *       !docker0 172.17.0.0/16    0.0.0.0/0
 1716  103K RETURN            all  --  *       *        10.244.0.0/16    10.244.0.0/16
    0     0 MASQUERADE        all  --  *       *        10.244.0.0/16    !224.0.0.0/4
    0     0 RETURN            all  --  *       *        !10.244.0.0/16   10.244.0.0/24
    0     0 MASQUERADE        all  --  *       *        !10.244.0.0/16   10.244.0.0/16

Chain DOCKER (2 references)
 pkts bytes target            prot opt in      out      source           destination
    0     0 RETURN            all  --  docker0 *        0.0.0.0/0        0.0.0.0/0

Chain KUBE-MARK-DROP (0 references)
 pkts bytes target            prot opt in      out      source           destination
    0     0 MARK              all  --  *       *        0.0.0.0/0        0.0.0.0/0        MARK or 0x8000

Chain KUBE-MARK-MASQ (12 references)
 pkts bytes target            prot opt in      out      source           destination
    0     0 MARK              all  --  *       *        0.0.0.0/0        0.0.0.0/0        MARK or 0x4000

Chain KUBE-NODEPORTS (1 references)
 pkts bytes target            prot opt in      out      source           destination

Chain KUBE-POSTROUTING (1 references)
 pkts bytes target            prot opt in      out      source           destination
    0     0 MASQUERADE        all  --  *       *        0.0.0.0/0        0.0.0.0/0        /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000

Chain KUBE-SEP-23Y66C2VAJ3WDEMI (1 references)
 pkts bytes target            prot opt in      out      source           destination
    0     0 KUBE-MARK-MASQ    all  --  *       *        172.16.0.11      0.0.0.0/0        /* default/kubernetes:https */
    0     0 DNAT              tcp  --  *       *        0.0.0.0/0        0.0.0.0/0        /* default/kubernetes:https */ tcp to:172.16.0.11:6443

Chain KUBE-SEP-CGXZZGLWTRRVTMXB (1 references)
 pkts bytes target            prot opt in      out      source           destination
    0     0 KUBE-MARK-MASQ    all  --  *       *        10.244.1.6       0.0.0.0/0        /* default/nginx-service: */
    0     0 DNAT              tcp  --  *       *        0.0.0.0/0        0.0.0.0/0        /* default/nginx-service: */ tcp to:10.244.1.6:80

Chain KUBE-SEP-DA57TZEG5V5IUCZP (1 references)
 pkts bytes target            prot opt in      out      source           destination
    0     0 KUBE-MARK-MASQ    all  --  *       *        10.244.2.10      0.0.0.0/0        /* default/nginx-service: */
    0     0 DNAT              tcp  --  *       *        0.0.0.0/0        0.0.0.0/0        /* default/nginx-service: */ tcp to:10.244.2.10:80

Chain KUBE-SEP-L4GNRLZIRHIXQE24 (1 references)
 pkts bytes target            prot opt in      out      source           destination
    0     0 KUBE-MARK-MASQ    all  --  *       *        10.244.2.8       0.0.0.0/0        /* default/nginx-service: */
    0     0 DNAT              tcp  --  *       *        0.0.0.0/0        0.0.0.0/0        /* default/nginx-service: */ tcp to:10.244.2.8:80

Chain KUBE-SEP-LBMQNJ35ID4UIQ2A (1 references)
 pkts bytes target            prot opt in      out      source           destination
    0     0 KUBE-MARK-MASQ    all  --  *       *        10.244.0.9       0.0.0.0/0        /* kube-system/kube-dns:dns */
    0     0 DNAT              udp  --  *       *        0.0.0.0/0        0.0.0.0/0        /* kube-system/kube-dns:dns */ udp to:10.244.0.9:53

Chain KUBE-SEP-S7MPVVC7MGYVFSF3 (1 references)
 pkts bytes target            prot opt in      out      source           destination
    0     0 KUBE-MARK-MASQ    all  --  *       *        10.244.0.9       0.0.0.0/0        /* kube-system/kube-dns:dns-tcp */
    0     0 DNAT              tcp  --  *       *        0.0.0.0/0        0.0.0.0/0        /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.0.9:53

Chain KUBE-SEP-SISP6ORRA37L3ZYK (1 references)
 pkts bytes target            prot opt in      out      source           destination
    0     0 KUBE-MARK-MASQ    all  --  *       *        10.244.0.8       0.0.0.0/0        /* kube-system/kube-dns:dns */
    0     0 DNAT              udp  --  *       *        0.0.0.0/0        0.0.0.0/0        /* kube-system/kube-dns:dns */ udp to:10.244.0.8:53

Chain KUBE-SEP-XRFUWCXKVCLGWYQC (1 references)
 pkts bytes target            prot opt in      out      source           destination
    0     0 KUBE-MARK-MASQ    all  --  *       *        10.244.0.8       0.0.0.0/0        /* kube-system/kube-dns:dns-tcp */
    0     0 DNAT              tcp  --  *       *        0.0.0.0/0        0.0.0.0/0        /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.0.8:53

Chain KUBE-SERVICES (2 references)
 pkts bytes target                     prot opt in  out  source           destination
    0     0 KUBE-MARK-MASQ             tcp  --  *   *    !10.244.0.0/16   10.96.0.1        /* default/kubernetes:https cluster IP */ tcp dpt:443
    0     0 KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  *   *    0.0.0.0/0        10.96.0.1        /* default/kubernetes:https cluster IP */ tcp dpt:443
    0     0 KUBE-MARK-MASQ             udp  --  *   *    !10.244.0.0/16   10.96.0.10       /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
    0     0 KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  *   *    0.0.0.0/0        10.96.0.10       /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
    0     0 KUBE-MARK-MASQ             tcp  --  *   *    !10.244.0.0/16   10.96.0.10       /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
    0     0 KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  *   *    0.0.0.0/0        10.96.0.10       /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
    0     0 KUBE-MARK-MASQ             tcp  --  *   *    !10.244.0.0/16   10.109.139.168   /* default/nginx-service: cluster IP */ tcp dpt:80
    0     0 KUBE-SVC-GKN7Y2BSGW4NJTYL  tcp  --  *   *    0.0.0.0/0        10.109.139.168   /* default/nginx-service: cluster IP */ tcp dpt:80
   15   900 KUBE-NODEPORTS             all  --  *   *    0.0.0.0/0        0.0.0.0/0        /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
 pkts bytes target                     prot opt in  out  source           destination
    0     0 KUBE-SEP-XRFUWCXKVCLGWYQC  all  --  *   *    0.0.0.0/0        0.0.0.0/0        /* kube-system/kube-dns:dns-tcp */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-S7MPVVC7MGYVFSF3  all  --  *   *    0.0.0.0/0        0.0.0.0/0        /* kube-system/kube-dns:dns-tcp */

Chain KUBE-SVC-GKN7Y2BSGW4NJTYL (1 references)
 pkts bytes target                     prot opt in  out  source           destination
    0     0 KUBE-SEP-CGXZZGLWTRRVTMXB  all  --  *   *    0.0.0.0/0        0.0.0.0/0        /* default/nginx-service: */ statistic mode random probability 0.33332999982
    0     0 KUBE-SEP-DA57TZEG5V5IUCZP  all  --  *   *    0.0.0.0/0        0.0.0.0/0        /* default/nginx-service: */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-L4GNRLZIRHIXQE24  all  --  *   *    0.0.0.0/0        0.0.0.0/0        /* default/nginx-service: */

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
 pkts bytes target                     prot opt in  out  source           destination
    0     0 KUBE-SEP-23Y66C2VAJ3WDEMI  all  --  *   *    0.0.0.0/0        0.0.0.0/0        /* default/kubernetes:https */

Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
 pkts bytes target                     prot opt in  out  source           destination
    0     0 KUBE-SEP-SISP6ORRA37L3ZYK  all  --  *   *    0.0.0.0/0        0.0.0.0/0        /* kube-system/kube-dns:dns */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-LBMQNJ35ID4UIQ2A  all  --  *   *    0.0.0.0/0        0.0.0.0/0        /* kube-system/kube-dns:dns */

 

7 Accessing Cluster Resources from Outside

To let external applications reach a service inside the cluster, use the NodePort type: change the Service's type from ClusterIP to NodePort in its configuration.

Use the command kubectl edit svc nginx-service:

[root@k8smaster ~]# kubectl edit svc nginx-service
# change type: ClusterIP to type: NodePort -> save and quit the editor
service/nginx-service edited

[root@k8smaster ~]# kubectl get svc nginx-service
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.109.139.168   <none>        80:30200/TCP   39m

PORT(S) 80:30200/TCP is the newly changed part: port 30200 is now open on every node, and we can reach the Pods at <node-IP>:30200 (through the Service nginx-service).
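After the edit, the Service spec looks roughly like this (a sketch; the nodePort value is whatever k8s allocated, here 30200):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort        # changed from ClusterIP
  selector:
    run: nginx
  ports:
  - protocol: TCP
    port: 80            # the Service's port inside the cluster
    targetPort: 80      # the container port
    nodePort: 30200     # opened on every node; default range 30000-32767
```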

Now, from my host machine, curl the nginx service via the master node's IP:

bai@bai ~ $ curl http://172.16.0.11:30200
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

* A k8s Service automatically load-balances across its endpoint Pods.

End of the exercise!

 

 
