K8s Overview, Cluster Setup, and Basic kubectl Usage


  Having studied Docker earlier, I wanted to follow up with a systematic look at Kubernetes.

  Reference: https://www.kubernetes.org.cn/k8s

1. Kubernetes Overview

1. Introduction

  Kubernetes is abbreviated k8s because the eight letters 'ubernete' sit between the 'k' and the 's'. It is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, planning, updating, and maintenance. It is a container orchestration engine open-sourced by Google that supports automated deployment, large-scale elasticity, and containerized application management.

  The modern way to deploy applications is in containers. Containers are isolated from one another, each has its own filesystem, their processes do not interfere, and compute resources can be partitioned between them. Compared with virtual machines, containers deploy quickly, and because they are decoupled from the underlying infrastructure and the host's filesystem, they can be migrated across clouds and across operating-system versions.

  When deploying an application in production, you usually run multiple instances to load-balance incoming requests. In k8s you can create multiple containers, each running one instance, and then use the built-in load-balancing policies to manage, discover, and access this group of application instances.

2. k8s Features

1. Automatic bin packing: automatically deploys application containers based on the resource requirements of the application's runtime environment

2. Self-healing: restarts containers when they fail; when a node has problems, its containers are rescheduled and redeployed onto other nodes

3. Horizontal scaling: scales the number of application instances up or down as needed (Kubernetes also provides built-in service discovery and load balancing across them)

4. Rolling updates: as the application changes, updates the application running in its containers either all at once or in batches

5. Version rollback

6. Secret and configuration management: deploys and updates Secrets and application configuration without rebuilding images, similar to hot deployment
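As a sketch of this, a Secret can be created or updated declaratively without touching any image; the names demo-secret and db-password below are made-up examples:

```yaml
# Apply with: kubectl apply -f secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
type: Opaque
stringData:                # plain-text values; the API server stores them base64-encoded
  db-password: "s3cr3t"
```

Pods consume the Secret as environment variables or a mounted volume, so rotating the value never requires rebuilding the application image.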

7. Storage orchestration: automatically mounts storage systems for applications, which is especially important for persisting data in stateful applications. The storage can come from local directories, network storage, public cloud storage services, and so on

8. Batch processing: provides one-off and scheduled jobs to cover batch data processing and analysis scenarios
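A one-off task of this kind maps to the Job resource. A minimal sketch, adapted from the well-known pi-computation example in the Kubernetes documentation (the name pi-once is made up):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-once
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never   # the Job controller handles retries, not in-place restarts
  backoffLimit: 4            # give up after 4 failed attempts
```

Scheduled (recurring) tasks use the CronJob resource instead, which wraps a Job template in a cron-style schedule.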

3. k8s Cluster Architecture and Core Concepts

1. Cluster Architecture

(1) master (control-plane node): schedules and manages the cluster, and receives operation requests from users outside the cluster

apiserver: the single entry point for resource operations; provides authentication, authorization, access control, API registration, and discovery. It exposes resources in a RESTful way and hands state off to etcd for storage

scheduler: node scheduling; selects the node on which to place an application, scheduling Pods onto machines according to the configured scheduling policies

controller manager: maintains the cluster's state, handling fault detection, automatic scaling, rolling updates, and so on; each resource type has a corresponding controller

etcd: the storage system that holds cluster data, such as state data and Pod data

(2) worker node: runs the user's application containers

kubelet: in short, the master's agent on each node, managing container operations on that machine. It maintains the container lifecycle and also manages Volumes (CVI) and networking (CNI)

kube-proxy: the network proxy, handling load balancing and providing cluster-internal service discovery and load balancing for Services

2. Core Concepts

(1) Pod: the smallest unit k8s deploys. A Pod can contain multiple containers, i.e. it is a group of containers, and those containers share the Pod's network. A Pod's lifetime is ephemeral: after a redeployment it is a new Pod.

  Pods are the foundation of every workload type in a k8s cluster. You can think of a Pod as a small robot running in the cluster; different kinds of workloads need different kinds of robots to run them. Kubernetes workloads currently divide into long-running services, batch jobs, node daemons, and stateful applications, whose corresponding controllers are Deployment, Job, DaemonSet, and StatefulSet (formerly PetSet), respectively.
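A minimal Pod manifest illustrating the concept (the names and image tag are arbitrary examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:            # a Pod may list several containers; they share one network namespace
  - name: nginx
    image: nginx:1.19
    ports:
    - containerPort: 80
```

In practice you rarely create bare Pods; one of the controllers above (usually a Deployment) creates and replaces them for you.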

(2) Replication Controller:

  The RC is the earliest API object in k8s for keeping Pods highly available. It monitors running Pods to ensure the cluster runs the specified number of Pod replicas. The specified count can be one or many: if there are fewer than specified, the RC starts new replicas; if there are more, it kills the surplus. Even with a count of 1, running a Pod through an RC is wiser than running the Pod directly, because the RC's high-availability mechanism guarantees that one Pod is always running. The RC is a fairly early k8s concept and only suits long-running workloads, for example keeping a web service highly available.
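A sketch of an RC keeping three nginx replicas alive (names are illustrative; in current clusters you would normally use a ReplicaSet or Deployment instead):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3            # the RC starts or kills Pods to hold this count
  selector:
    app: nginx-rc        # Pods matching this label count as replicas
  template:              # Pod template used when new replicas are needed
    metadata:
      labels:
        app: nginx-rc
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
```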

(3) Service: defines access rules for a group of Pods. Each Service is assigned a virtual IP valid inside the cluster, and the service is accessed within the cluster through that virtual IP.

  The Service defines the rules for a single, unified entry point, while a controller creates the Pods and handles deployment.
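A sketch of a Service selecting a group of Pods by label (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort         # also reachable from outside on every node's IP
  selector:
    app: nginx           # traffic is routed to all Pods carrying this label
  ports:
  - port: 80             # the Service's cluster-internal port (on the virtual IP)
    targetPort: 80       # the container port on the selected Pods
```

Inside the cluster the Service is reached at its virtual (cluster) IP on port 80; with type NodePort it is additionally exposed on a high port of every node, as the test at the end of this post shows.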

3. Cluster Setup Options

There are currently two main ways to deploy k8s:

(1) kubeadm

kubeadm is a K8S deployment tool that provides kubeadm init and kubeadm join for quickly standing up a k8s cluster. Docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/

(2) Binary packages

Download the release binary packages from GitHub and deploy each component manually to assemble a k8s cluster.

kubeadm deployment is relatively simple, but it hides many details, so problems can be harder to troubleshoot. Deploying from binary packages teaches you a lot about how the pieces fit together and also helps with later maintenance.

2. k8s Cluster Setup

  We will build a simple cluster with one master and two nodes. The machines and IPs are listed below; each machine needs outbound Internet access to download dependencies.

k8smaster1    192.168.13.103
k8snode1    192.168.13.104
k8snode2    192.168.13.105

1. System Initialization (run on all three nodes unless a step says master only)

Do the following on all three machines. I used a virtual machine and cloned it.

1. Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld   # check firewall status

2. Disable SELinux

sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

3. Disable swap

free -g  # check swap status
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent

4. Set the hostname

hostnamectl set-hostname <hostname>

5. Configure a static IP (note that a static IP also needs DNS configured; see the earlier RocketMQ cluster post)

vim /etc/sysconfig/network-scripts/ifcfg-ens33

6. Synchronize the time

yum install ntpdate -y
ntpdate time.windows.com

7. On the master node, edit /etc/hosts so the hosts are reachable by name

cat >> /etc/hosts << EOF
192.168.13.103 k8smaster1
192.168.13.104 k8snode1
192.168.13.105 k8snode2
EOF

8. Pass bridged IPv4 traffic to the iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply

9. Install Docker/kubeadm/kubelet on all nodes

Kubernetes' default CRI (container runtime) is Docker, so install Docker first.

(1) Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version

(2) Add the Alibaba Cloud YUM repositories

Set the Docker registry mirror:

cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

Add the Kubernetes yum repo:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

(3) Install kubeadm, kubelet, and kubectl

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet

After installation you can verify the versions:

[root@k8smaster1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:56:30Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
[root@k8smaster1 ~]# kubelet --version
Kubernetes v1.18.0

2. Deploy the k8s master

 1. Run the following on the master node:

kubeadm init --apiserver-advertise-address=192.168.13.103 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

  Here apiserver-advertise-address must be changed to the master node's IP; the second flag points image pulls at the Alibaba Cloud registry; the third pins the Kubernetes version; and the last two set the cluster-internal service and Pod CIDRs, which only need to avoid clashing with your existing network. If the command fails, add --v=6 for more detailed logs; see the official docs for a full explanation of the kubeadm flags.

  After this command runs, it pulls a series of Docker images. You can open another terminal and use docker images / docker ps to watch the images and containers come up:

[root@k8smaster1 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.0             43940c34f24f        21 months ago       117MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.0             a31f78c7c8ce        21 months ago       95.3MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.0             74060cea7f70        21 months ago       173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.0             d3e55153f52f        21 months ago       162MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        23 months ago       683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        23 months ago       43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        2 years ago         288MB
[root@k8smaster1 ~]# docker ps
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS               NAMES
3877168ddb09        43940c34f24f                                        "/usr/local/bin/kube…"   3 minutes ago       Up 3 minutes                            k8s_kube-proxy_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
18a32d328d49        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
5f62d3184cd7        303ce5db0e90                                        "etcd --advertise-cl…"   3 minutes ago       Up 3 minutes                            k8s_etcd_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0
2af2a1b5d169        a31f78c7c8ce                                        "kube-scheduler --au…"   3 minutes ago       Up 3 minutes                            k8s_kube-scheduler_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
c77506ee4dd2        d3e55153f52f                                        "kube-controller-man…"   3 minutes ago       Up 3 minutes                            k8s_kube-controller-manager_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
303545e4eca9        74060cea7f70                                        "kube-apiserver --ad…"   3 minutes ago       Up 3 minutes                            k8s_kube-apiserver_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
f9da54e2bfae        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
007e2a0cd10b        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
0666c8b43c32        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
0ca472d7f2cd        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0

Once the downloads finish, the main terminal prints the following (seeing "successfully" means the init worked):

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.13.103:6443 --token tcpixp.g14hyo8skehh9kcp \
    --discovery-token-ca-cert-hash sha256:6129c85d48cbf0ca946ea2c65fdc1055c12363be07b18dd7994c3c0a242a286a 

Set up the kubectl tool by running the commands from the success message above:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check:

[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8smaster1   NotReady   master   7m17s   v1.18.0
[root@k8smaster1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

 3. Join the worker nodes

  Run the following on the k8snode1 and k8snode2 nodes. To add a new node to the cluster, execute the kubeadm join command (with the token) printed by kubeadm init:

kubeadm join 192.168.13.103:6443 --token tcpixp.g14hyo8skehh9kcp \
    --discovery-token-ca-cert-hash sha256:6129c85d48cbf0ca946ea2c65fdc1055c12363be07b18dd7994c3c0a242a286a

  Note that you must use the kubeadm join arguments printed on your own master's console, since the token differs per cluster. The default token is valid for 24 hours; once it expires it can no longer be used, and a new one has to be created, e.g. with the kubeadm token commands described in the official docs.

  When the join succeeds, the log output looks like this:

[root@k8snode1 ~]# kubeadm join 192.168.13.103:6443 --token tcpixp.g14hyo8skehh9kcp \
>     --discovery-token-ca-cert-hash sha256:6129c85d48cbf0ca946ea2c65fdc1055c12363be07b18dd7994c3c0a242a286a
W0108 21:20:24.380628   25524 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "k8snode1" could not be reached
        [WARNING Hostname]: hostname "k8snode1": lookup k8snode1 on 114.114.114.114:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Finally, list the nodes on the master (the final cluster state):

[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8smaster1   NotReady   master   69m    v1.18.0
k8snode1     NotReady   <none>   49m    v1.18.0
k8snode2     NotReady   <none>   4m7s   v1.18.0

  The status is NotReady; a network plugin still needs to be installed.

 4. Deploy the CNI network plugin

(If the default image registry is unreachable, you can use sed to rewrite the manifest's image repository to a Docker Hub mirror.) Run the following on the master node:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl get pods -n kube-system

 After the second command, wait until all the components reach the Running state, then check the cluster status again:

[root@k8smaster1 ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-stfqz             1/1     Running   0          106m
coredns-7ff77c879f-vhwr7             1/1     Running   0          106m
etcd-k8smaster1                      1/1     Running   0          107m
kube-apiserver-k8smaster1            1/1     Running   0          107m
kube-controller-manager-k8smaster1   1/1     Running   0          107m
kube-flannel-ds-9bx4w                1/1     Running   0          5m31s
kube-flannel-ds-qzqjq                1/1     Running   0          5m31s
kube-flannel-ds-tldt5                1/1     Running   0          5m31s
kube-proxy-6vcvj                     1/1     Running   1          86m
kube-proxy-hn4gx                     1/1     Running   0          106m
kube-proxy-qzwh6                     1/1     Running   0          41m
kube-scheduler-k8smaster1            1/1     Running   0          107m
[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8smaster1   Ready    master   107m   v1.18.0
k8snode1     Ready    <none>   86m    v1.18.0
k8snode2     Ready    <none>   41m    v1.18.0

View the cluster in detail:

[root@k8smaster1 ~]# kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8smaster1   Ready    master   9h    v1.18.0   192.168.13.103   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1
k8snode1     Ready    <none>   8h    v1.18.0   192.168.13.104   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1
k8snode2     Ready    <none>   8h    v1.18.0   192.168.13.105   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1

5. Test the Kubernetes cluster

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc

The resulting output:

[root@k8smaster1 ~]# kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-cnj62   1/1     Running   0          3m5s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        113m
service/nginx        NodePort    10.96.201.24   <none>        80:30951/TCP   2m40s

Test: access the service from any of the hosts; the NodePort is 30951:

curl http://192.168.13.103:30951/
curl http://192.168.13.104:30951/
curl http://192.168.13.105:30951/

Check the Docker processes on the three machines:

1. k8smaster

[root@k8smaster1 ~]# docker ps 
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS               NAMES
e71930a745f3        67da37a9a360                                        "/coredns -conf /etc…"   14 minutes ago      Up 14 minutes                           k8s_coredns_coredns-7ff77c879f-vhwr7_kube-system_9553d8cc-8efb-48d1-9790-4120e09869c7_0
5aaacb75700b        67da37a9a360                                        "/coredns -conf /etc…"   14 minutes ago      Up 14 minutes                           k8s_coredns_coredns-7ff77c879f-stfqz_kube-system_81bbe584-a3d1-413a-a785-d8edaca7b4c1_0
756d66c75a56        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 14 minutes ago      Up 14 minutes                           k8s_POD_coredns-7ff77c879f-vhwr7_kube-system_9553d8cc-8efb-48d1-9790-4120e09869c7_0
658b02e25f89        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 14 minutes ago      Up 14 minutes                           k8s_POD_coredns-7ff77c879f-stfqz_kube-system_81bbe584-a3d1-413a-a785-d8edaca7b4c1_0
8a6f86753098        404fc3ab6749                                        "/opt/bin/flanneld -…"   14 minutes ago      Up 14 minutes                           k8s_kube-flannel_kube-flannel-ds-qzqjq_kube-system_bf0155d2-0f27-492c-8029-2fe042869579_0
b047ca53a8fe        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 14 minutes ago      Up 14 minutes                           k8s_POD_kube-flannel-ds-qzqjq_kube-system_bf0155d2-0f27-492c-8029-2fe042869579_0
3877168ddb09        43940c34f24f                                        "/usr/local/bin/kube…"   2 hours ago         Up 2 hours                              k8s_kube-proxy_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
18a32d328d49        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago         Up 2 hours                              k8s_POD_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
5f62d3184cd7        303ce5db0e90                                        "etcd --advertise-cl…"   2 hours ago         Up 2 hours                              k8s_etcd_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0
2af2a1b5d169        a31f78c7c8ce                                        "kube-scheduler --au…"   2 hours ago         Up 2 hours                              k8s_kube-scheduler_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
c77506ee4dd2        d3e55153f52f                                        "kube-controller-man…"   2 hours ago         Up 2 hours                              k8s_kube-controller-manager_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
303545e4eca9        74060cea7f70                                        "kube-apiserver --ad…"   2 hours ago         Up 2 hours                              k8s_kube-apiserver_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
f9da54e2bfae        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago         Up 2 hours                              k8s_POD_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
007e2a0cd10b        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago         Up 2 hours                              k8s_POD_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
0666c8b43c32        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago         Up 2 hours                              k8s_POD_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
0ca472d7f2cd        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago         Up 2 hours                              k8s_POD_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0

2. k8snode1

[root@k8snode1 ~]# docker ps
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS               NAMES
8189b507fc4a        404fc3ab6749                                        "/opt/bin/flanneld -…"   10 minutes ago      Up 10 minutes                           k8s_kube-flannel_kube-flannel-ds-9bx4w_kube-system_aa9751c9-7dcf-494b-b720-606cb8950a6d_0
f8e8103639c1        43940c34f24f                                        "/usr/local/bin/kube…"   10 minutes ago      Up 10 minutes                           k8s_kube-proxy_kube-proxy-6vcvj_kube-system_55bc4b97-f479-4dd5-977a-2a61e0fce705_1
6675466fcc0e        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 11 minutes ago      Up 10 minutes                           k8s_POD_kube-flannel-ds-9bx4w_kube-system_aa9751c9-7dcf-494b-b720-606cb8950a6d_0
51d248df0e8c        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 11 minutes ago      Up 10 minutes                           k8s_POD_kube-proxy-6vcvj_kube-system_55bc4b97-f479-4dd5-977a-2a61e0fce705_1

3. k8snode2

[root@k8snode2 ~]# docker ps
CONTAINER ID        IMAGE                                                COMMAND                  CREATED             STATUS              PORTS               NAMES
d8bbbe754ebc        nginx                                                "/docker-entrypoint.…"   4 minutes ago       Up 4 minutes                            k8s_nginx_nginx-f89759699-cnj62_default_9e3e4d90-42ac-4f41-9af7-f7939a52cb01_0
04fbdd617724        registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_nginx-f89759699-cnj62_default_9e3e4d90-42ac-4f41-9af7-f7939a52cb01_0
e9dc459f9664        404fc3ab6749                                         "/opt/bin/flanneld -…"   15 minutes ago      Up 15 minutes                           k8s_kube-flannel_kube-flannel-ds-tldt5_kube-system_5b1ea760-a19e-4b5b-9612-6d9a6dda7f59_0
f1d0312d2308        registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                 15 minutes ago      Up 15 minutes                           k8s_POD_kube-flannel-ds-tldt5_kube-system_5b1ea760-a19e-4b5b-9612-6d9a6dda7f59_0
d6bae886cb61        registry.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube…"   About an hour ago   Up About an hour                        k8s_kube-proxy_kube-proxy-qzwh6_kube-system_7d72b4a8-c6ee-4982-9e88-df70d9745b2c_0
324507774c8e        registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-proxy-qzwh6_kube-system_7d72b4a8-c6ee-4982-9e88-df70d9745b2c_0

You can see that the nginx container runs on the k8snode2 node.

 

You can also use kubectl to see where it is running:

[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-f89759699-cnj62   1/1     Running   0          10m   10.244.2.2   k8snode2   <none>           <none>

Output in YAML format:

[root@k8smaster1 ~]# kubectl get pods -o yaml 
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: "2022-01-09T03:49:49Z"
    generateName: nginx-f89759699-
    labels:
      app: nginx
...

List the pods in all namespaces with detailed output:

[root@k8smaster1 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
default                nginx-f89759699-cnj62                        1/1     Running   0          51m    10.244.2.2       k8snode2     <none>           <none>
kube-system            coredns-7ff77c879f-stfqz                     1/1     Running   0          161m   10.244.0.3       k8smaster1   <none>           <none>
kube-system            coredns-7ff77c879f-vhwr7                     1/1     Running   0          161m   10.244.0.2       k8smaster1   <none>           <none>
kube-system            etcd-k8smaster1                              1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-apiserver-k8smaster1                    1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-controller-manager-k8smaster1           1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-flannel-ds-9bx4w                        1/1     Running   0          59m    192.168.13.104   k8snode1     <none>           <none>
kube-system            kube-flannel-ds-qzqjq                        1/1     Running   0          59m    192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-flannel-ds-tldt5                        1/1     Running   0          59m    192.168.13.105   k8snode2     <none>           <none>
kube-system            kube-proxy-6vcvj                             1/1     Running   1          140m   192.168.13.104   k8snode1     <none>           <none>
kube-system            kube-proxy-hn4gx                             1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-proxy-qzwh6                             1/1     Running   0          95m    192.168.13.105   k8snode2     <none>           <none>
kube-system            kube-scheduler-k8smaster1                    1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-78f5d9f487-sfjlr   1/1     Running   0          25m    10.244.2.3       k8snode2     <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-577bd97bc-f2v5g         1/1     Running   0          25m    10.244.1.2       k8snode1     <none>           <none>

3. The kubectl Command-Line Tool

   kubectl is the command-line tool for Kubernetes clusters. With kubectl you can manage the cluster itself, as well as install and deploy containerized applications on it.

Basic syntax:

kubectl [command] [type] [name] [flags]

command: the operation to perform on the resource, e.g. create, get, describe, delete

type: the resource type. Resource types are case-sensitive and may be written in singular, plural, or abbreviated form. You can list them with kubectl api-resources

name: the resource name, case-sensitive; if omitted, all resources of that type are listed

flags: optional flags, e.g. -s or --server to specify the address and port of the Kubernetes API server

[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8smaster1   Ready    master   8h      v1.18.0
k8snode1     Ready    <none>   7h59m   v1.18.0
k8snode2     Ready    <none>   7h14m   v1.18.0
[root@k8smaster1 ~]# kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
k8smaster1   Ready    master   8h      v1.18.0
k8snode1     Ready    <none>   7h59m   v1.18.0
k8snode2     Ready    <none>   7h14m   v1.18.0
[root@k8smaster1 ~]# kubectl get node k8snode1
NAME       STATUS   ROLES    AGE     VERSION
k8snode1   Ready    <none>   7h59m   v1.18.0

1. kubectl help

[root@k8smaster1 ~]# kubectl help
kubectl controls the Kubernetes cluster manager.

 Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/

Basic Commands (Beginner):
  create        Create a resource from a file or from stdin.
  expose        Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run           Run a particular image on the cluster
  set           Set specific features on objects

Basic Commands (Intermediate):
  explain       Documentation of resources
  get           Display one or many resources
  edit          Edit a resource on the server
  delete        Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout       Manage the rollout of a resource
  scale         Set a new size for a Deployment, ReplicaSet or Replication Controller
  autoscale     Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate   Modify certificate resources.
  cluster-info  Display cluster info
  top           Display Resource (CPU/Memory/Storage) usage.
  cordon        Mark node as unschedulable
  uncordon      Mark node as schedulable
  drain         Drain node in preparation for maintenance
  taint         Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe      Show details of a specific resource or group of resources
  logs          Print the logs for a container in a pod
  attach        Attach to a running container
  exec          Execute a command in a container
  port-forward  Forward one or more local ports to a pod
  proxy         Run a proxy to the Kubernetes API server
  cp            Copy files and directories to and from containers.
  auth          Inspect authorization

Advanced Commands:
  diff          Diff live version against would-be applied version
  apply         Apply a configuration to a resource by filename or stdin
  patch         Update field(s) of a resource using strategic merge patch
  replace       Replace a resource by filename or stdin
  wait          Experimental: Wait for a specific condition on one or many resources.
  convert       Convert config files between different API versions
  kustomize     Build a kustomization target from a directory or a remote url.

Settings Commands:
  label         Update the labels on a resource
  annotate      Update the annotations on a resource
  completion    Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  alpha         Commands for features in alpha
  api-resources Print the supported API resources on the server
  api-versions  Print the supported API versions on the server, in the form of "group/version"
  config        Modify kubeconfig files
  plugin        Provides utilities for interacting with plugins.
  version       Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

  The help output groups the commands into categories: basic commands, deploy and cluster-management commands, troubleshooting and debugging, advanced commands, settings commands, and other commands.

2. Basic usage

[root@k8smaster1 ~]# kubectl get nodes -o wide 
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8smaster1   Ready    master   25h   v1.18.0   192.168.13.103   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1
k8snode1     Ready    <none>   24h   v1.18.0   192.168.13.104   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1
k8snode2     Ready    <none>   23h   v1.18.0   192.168.13.105   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1
[root@k8smaster1 ~]# kubectl get nodes k8smaster1 -o wide 
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8smaster1   Ready    master   25h   v1.18.0   192.168.13.103   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-f89759699-cnj62   1/1     Running   0          23h   10.244.2.2   k8snode2   <none>           <none>
[root@k8smaster1 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
default                nginx-f89759699-cnj62                        1/1     Running   0          23h   10.244.2.2       k8snode2     <none>           <none>
kube-system            coredns-7ff77c879f-stfqz                     1/1     Running   0          25h   10.244.0.3       k8smaster1   <none>           <none>
kube-system            coredns-7ff77c879f-vhwr7                     1/1     Running   0          25h   10.244.0.2       k8smaster1   <none>           <none>
kube-system            etcd-k8smaster1                              1/1     Running   0          25h   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-apiserver-k8smaster1                    1/1     Running   0          25h   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-controller-manager-k8smaster1           1/1     Running   0          25h   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-flannel-ds-9bx4w                        1/1     Running   0          23h   192.168.13.104   k8snode1     <none>           <none>
kube-system            kube-flannel-ds-qzqjq                        1/1     Running   0          23h   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-flannel-ds-tldt5                        1/1     Running   0          23h   192.168.13.105   k8snode2     <none>           <none>
kube-system            kube-proxy-6vcvj                             1/1     Running   1          24h   192.168.13.104   k8snode1     <none>           <none>
kube-system            kube-proxy-hn4gx                             1/1     Running   0          25h   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-proxy-qzwh6                             1/1     Running   0          23h   192.168.13.105   k8snode2     <none>           <none>
kube-system            kube-scheduler-k8smaster1                    1/1     Running   0          25h   192.168.13.103   k8smaster1   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-78f5d9f487-sfjlr   1/1     Running   0          22h   10.244.2.3       k8snode2     <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-577bd97bc-f2v5g         1/1     Running   0          22h   10.244.1.2       k8snode1     <none>           <none>
[root@k8smaster1 ~]# kubectl cluster-info    # view cluster info
Kubernetes master is running at https://192.168.13.103:6443
KubeDNS is running at https://192.168.13.103:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8smaster1 ~]# kubectl logs nginx-f89759699-cnj62    # view pod logs
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/01/09 03:52:10 [notice] 1#1: using the "epoll" event method
2022/01/09 03:52:10 [notice] 1#1: nginx/1.21.5
2022/01/09 03:52:10 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2022/01/09 03:52:10 [notice] 1#1: OS: Linux 3.10.0-1160.49.1.el7.x86_64
2022/01/09 03:52:10 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/01/09 03:52:10 [notice] 1#1: start worker processes
2022/01/09 03:52:10 [notice] 1#1: start worker process 31
2022/01/09 03:52:10 [notice] 1#1: start worker process 32
2022/01/09 03:52:10 [notice] 1#1: start worker process 33
2022/01/09 03:52:10 [notice] 1#1: start worker process 34
10.244.0.0 - - [09/Jan/2022:03:52:11 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.29.0" "-"
10.244.0.0 - - [09/Jan/2022:03:52:24 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" "-"
2022/01/09 03:52:24 [error] 32#32: *2 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.244.0.0, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "192.168.13.103:30951", referrer: "http://192.168.13.103:30951/"
10.244.0.0 - - [09/Jan/2022:03:52:24 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://192.168.13.103:30951/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" "-"
10.244.0.0 - - [09/Jan/2022:03:55:20 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.67.0" "-"
10.244.1.0 - - [09/Jan/2022:03:55:24 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.67.0" "-"
10.244.2.1 - - [09/Jan/2022:03:55:28 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.67.0" "-"
127.0.0.1 - - [09/Jan/2022:04:48:39 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.74.0" "-"
[root@k8smaster1 ~]# kubectl exec -it nginx-f89759699-cnj62 bash    # enter the container (equivalent to docker exec -it <cid> bash)
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@nginx-f89759699-cnj62:/# exit
exit
[root@k8smaster1 ~]# kubectl exec -it nginx-f89759699-cnj62 -- bash    # enter the container with the -- separator, replacing the deprecated form above
root@nginx-f89759699-cnj62:/# exit
exit

  You can enter a container running on a worker node directly from the master. kubectl get pods -A is equivalent to kubectl get pods --all-namespaces. kubectl logs podName -f streams the log of the pod's first container in real time; to view a specific container's log, add -c containerName.
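As an offline sketch of reading that listing, the commands below tally pods per namespace from a few lines saved out of the kubectl get pods --all-namespaces output above (/tmp/pods.txt and the trimmed pod names are illustrative):

```shell
# Save a few lines of the listing above (names shortened for brevity).
cat > /tmp/pods.txt <<'EOF'
NAMESPACE              NAME                        READY   STATUS    RESTARTS   AGE
default                nginx-f89759699-cnj62       1/1     Running   0          23h
kube-system            etcd-k8smaster1             1/1     Running   0          25h
kube-system            kube-apiserver-k8smaster1   1/1     Running   0          25h
kubernetes-dashboard   kubernetes-dashboard-f2v5g  1/1     Running   0          22h
EOF
# Skip the header row, then tally column 1 (the namespace).
awk 'NR > 1 { count[$1]++ } END { for (ns in count) print ns, count[ns] }' /tmp/pods.txt | LC_ALL=C sort
```

This prints one line per namespace with its pod count, e.g. kube-system 2.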

You can also view a pod's details: its namespace, the node it runs on, labels, containers, container startup events, and so on:

[root@k8smaster1 ~]# kubectl describe pods nginx-statefulset-0
Name:         nginx-statefulset-0
Namespace:    default
Priority:     0
Node:         k8snode1/192.168.13.104
Start Time:   Sat, 15 Jan 2022 23:30:04 -0500
Labels:       app=nginx
              controller-revision-hash=nginx-statefulset-6df8f484ff
              statefulset.kubernetes.io/pod-name=nginx-statefulset-0
Annotations:  <none>
Status:       Running
IP:           10.244.1.26
IPs:
  IP:           10.244.1.26
Controlled By:  StatefulSet/nginx-statefulset
Containers:
  nginx:
    Container ID:   docker://b8d73855d62c401749f654a5f3876e96ba992b5f8a24a4fac8d4753e15ff0a5c
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 15 Jan 2022 23:30:06 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5r9hq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-5r9hq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5r9hq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  11m   default-scheduler  Successfully assigned default/nginx-statefulset-0 to k8snode1
  Normal  Pulling    11m   kubelet, k8snode1  Pulling image "nginx:latest"
  Normal  Pulled     11m   kubelet, k8snode1  Successfully pulled image "nginx:latest"
  Normal  Created    11m   kubelet, k8snode1  Created container nginx
  Normal  Started    11m   kubelet, k8snode1  Started container nginx

 List all resources:

[root@k8smaster1 ~]# kubectl get all -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
pod/nginx-f89759699-cnj62   1/1     Running   0          26h   10.244.2.2   k8snode2   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        28h   <none>
service/nginx        NodePort    10.96.201.24   <none>        80:30951/TCP   26h   app=nginx

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
deployment.apps/nginx   1/1     1            1           26h   nginx        nginx    app=nginx

NAME                              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES   SELECTOR
replicaset.apps/nginx-f89759699   1         1         1       26h   nginx        nginx    app=nginx,pod-template-hash=f89759699

Get Service information (svc, service, and services are interchangeable):

[root@k8smaster1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        28h
nginx        NodePort    10.96.201.24   <none>        80:30951/TCP   27h
[root@k8smaster1 ~]# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        28h
nginx        NodePort    10.96.201.24   <none>        80:30951/TCP   27h
[root@k8smaster1 ~]# kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        28h
nginx        NodePort    10.96.201.24   <none>        80:30951/TCP   27h
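In the nginx Service above, PORT(S) reads 80:30951/TCP: port 80 inside the cluster is exposed on node port 30951. A small sketch extracting that node port from a line copied out of the output above:

```shell
# PORT(S) for a NodePort service reads <servicePort>:<nodePort>/PROTO.
# Sample line copied from the `kubectl get svc` output above:
line='nginx        NodePort    10.96.201.24   <none>        80:30951/TCP   27h'
# Field 5 is PORT(S); the node port sits between ':' and '/'.
nodeport=$(echo "$line" | awk '{print $5}' | sed 's#.*:\([0-9]*\)/.*#\1#')
echo "$nodeport"    # 30951
```

The service is then reachable from outside at http://<nodeIP>:30951, matching the host seen in the nginx access log earlier.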

Delete all pods in the default namespace:

[root@k8smaster1 ~]# kubectl delete pods --all
pod "mytomcat" deleted
[root@k8smaster1 ~]# kubectl get pods
No resources found in default namespace.

To inspect a resource's details, you can export it as YAML, or edit it in place with kubectl edit:

[root@k8smaster1 ~]# kubectl edit pod nginx-f89759699-vkf7d

  The edit command opens the resource as YAML; changes take effect once saved.

  kubectl explain describes a resource's fields and their meaning:

[root@k8smaster02 /]# kubectl explain secret
KIND:     Secret
VERSION:  v1

DESCRIPTION:
     Secret holds secret data of a certain type. The total bytes of the values
     in the Data field must be less than MaxSecretSize bytes.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   data <map[string]string>
     Data contains the secret data. Each key must consist of alphanumeric
     characters, '-', '_' or '.'. The serialized form of the secret data is a
     base64 encoded string, representing the arbitrary (possibly non-string)
...
[root@k8smaster02 /]# kubectl explain secret.type
KIND:     Secret
VERSION:  v1

FIELD:    type <string>

DESCRIPTION:
     Used to facilitate programmatic handling of secret data. 
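As the explain output notes, Secret values are serialized as base64 strings. A quick local check of that encoding (admin/123456 are made-up example values):

```shell
# Secret values are stored base64-encoded (an encoding, not encryption).
echo -n 'admin'  | base64        # YWRtaW4=
echo -n '123456' | base64        # MTIzNDU2
# Decoding recovers the original value:
echo -n 'YWRtaW4=' | base64 -d   # admin
```

Anyone with access to the Secret object can decode it, so base64 here is about safely serializing arbitrary bytes, not secrecy.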

Addendum: viewing a container's processes from the Docker host

    As shown below, the host can see the application processes running inside the container: containers are isolated from one another, but the host and its containers share one process view.

[root@k8snode2 ~]# docker top d8b
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                13923               13905               0                   Jan09               ?                   00:00:00            nginx: master process nginx -g daemon off;
101                 13984               13923               0                   Jan09               ?                   00:00:00            nginx: worker process
101                 13985               13923               0                   Jan09               ?                   00:00:00            nginx: worker process
101                 13986               13923               0                   Jan09               ?                   00:00:00            nginx: worker process
101                 13987               13923               0                   Jan09               ?                   00:00:00            nginx: worker process
[root@k8snode2 ~]# ps -ef | grep nginx
root      13923  13905  0 Jan09 ?        00:00:00 nginx: master process nginx -g daemon off;
101       13984  13923  0 Jan09 ?        00:00:00 nginx: worker process
101       13985  13923  0 Jan09 ?        00:00:00 nginx: worker process
101       13986  13923  0 Jan09 ?        00:00:00 nginx: worker process
101       13987  13923  0 Jan09 ?        00:00:00 nginx: worker process
root      52391  12958  0 02:29 pts/0    00:00:00 grep --color=auto nginx
[root@k8snode2 ~]# ps -ef | grep 13905
root      13905   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d8bbbe754ebc3bc4c933862e0f98d0b5c15bfb6f14967791043b193b3c6be72b -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      13923  13905  0 Jan09 ?        00:00:00 nginx: master process nginx -g daemon off;
root      54452  12958  0 02:39 pts/0    00:00:00 grep --color=auto 13905
[root@k8snode2 ~]# ps -ef | grep 9575
root       9575   9567  0 Jan09 ?        00:02:00 docker-containerd --config /var/run/docker/containerd/containerd.toml
root       9982   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/324507774c8ea90c31d8e4f09ee6cc0e85a627f9f9669c66544af0a256eb0d45 -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      10088   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d6bae886cb61fc6a75a242f4ddf5caf1367bec4b643a09dd58347bfc4d402496 -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      11068   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/f1d0312d230818df533be053f358bde6cd33bb32396d933d210d13dcfc898a23 -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      11391   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/e9dc459f966475bdf6eada28ee4fb9799f0ba9295658c9143d3d78a96568c3da -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      13177   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/04fbdd61772412e3c903547f0b5f78df486daddae954a9f11841277754661c63 -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      13905   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d8bbbe754ebc3bc4c933862e0f98d0b5c15bfb6f14967791043b193b3c6be72b -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      19161   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/c6c8e41e9e5e975ad1471fd89a0c9b60e8bd1e5e36284bfb11f186c11d6267a3 -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      19406   9575  0 Jan09 ?        00:00:02 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d6930163d286c98f7b91b55e489653df2a534100031dd9b7fcb9957edcd57ca2 -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      54494  12958  0 02:39 pts/0    00:00:00 grep --color=auto 9575
[root@k8snode2 ~]# ps -ef | grep 9567
root       9567      1  1 Jan09 ?        00:08:47 /usr/bin/dockerd
root       9575   9567  0 Jan09 ?        00:02:00 docker-containerd --config /var/run/docker/containerd/containerd.toml
root      54805  12958  0 02:40 pts/0    00:00:00 grep --color=auto 9567
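The ps exploration above reveals the chain dockerd (9567) → docker-containerd (9575) → docker-containerd-shim (13905) → nginx master (13923). As a sketch, the loop below walks that PPID chain over a small pid/ppid/name table copied from the output (the file path and shortened process names are illustrative):

```shell
# Walk the parent chain for the nginx master process using the
# pid/ppid/name triples observed above.
cat > /tmp/ptab.txt <<'EOF'
9567 1 dockerd
9575 9567 docker-containerd
13905 9575 docker-containerd-shim
13923 13905 nginx-master
EOF
pid=13923
chain=""
while [ "$pid" != "1" ]; do
  # Prepend this process's name, then step up to its parent.
  chain="$(awk -v p="$pid" '$1 == p { print $3 }' /tmp/ptab.txt) $chain"
  pid=$(awk -v p="$pid" '$1 == p { print $2 }' /tmp/ptab.txt)
done
echo "$chain"    # dockerd docker-containerd docker-containerd-shim nginx-master
```

This makes the point concrete: from the host's perspective, the nginx master inside the container is just an ordinary child process of the containerd shim.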

 

Addendum: kubeadm init on the master failed locally with the following error

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Fix:

1. Create /etc/systemd/system/kubelet.service.d/10-kubeadm.conf with the following content:

Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"

2. Restart kubelet:

systemctl daemon-reload
systemctl restart kubelet

3. Re-run kubeadm init.

Addendum: running kubeadm init on the master a second time fails with

error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists

This happens because the previous kubeadm init was not cleaned up. Fix:

kubeadm reset

Addendum: kubeadm init on the master fails with

[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

Reports online attribute this to image pulls timing out. I did not find a working fix, so I deleted the VM, cloned a fresh one, and repeated the initialization and installation steps above.

Addendum: when the master deployment keeps hitting errors, try installing a different version of kubeadm, kubelet, and kubectl

1. List the available versions:

yum list kubeadm --showduplicates

2. Remove the current packages:

yum remove -y kubelet kubeadm kubectl

3. Reinstall a specific version (yum install packagename-version):

yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0

Addendum: after a node joins the k8s cluster, the network service fails to start with

Failed to start LSB 

Fix:

1. Stop and disable the NetworkManager service:

systemctl stop NetworkManager
systemctl disable NetworkManager

2. Restart the network service:

systemctl restart network

Addendum: after a node reboots, the network is unreachable; the symptom is that the port exposed by a Service can only be reached via the IP of the node where the pod runs

Fix: restarting the Docker service resolves it:

systemctl daemon-reload
systemctl restart docker

Addendum: a pod stuck in the Terminating state cannot be deleted

Symptom: a pod was running on node 1; when node 1 went down, the pod was rescheduled to node 2 (effectively a new pod was created there), and the original pod on node 1 was left in the Terminating state.

Fix: add the --force flag to delete it forcibly:

kubectl delete pods web1-f864c756b-zltz7 --force

 Addendum: after a reboot, a node is unschedulable and unreachable over the network

1. When the machine shut down, k8s automatically added a no-schedule taint to the node. Check the node's taints:

[root@k8smaster1 ~]# kubectl describe node k8snode2 | grep Tain
Taints:             node.kubernetes.io/unreachable:NoSchedule

Manually removing the taint had no effect either.

2. Fix

    A saying in the k8s community fits here: if the APIServer is the brain of the cluster, then kubelet is the cerebellum of each node. Its main job is to talk to the APIServer so the APIServer knows the node's state; with kubelet down, pods naturally cannot be scheduled onto the node.

(1) Check kubelet on the node

[root@k8snode2 ~]# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: inactive (dead)
     Docs: https://kubernetes.io/docs/

(2) Start kubelet and enable it at boot

[root@k8snode2 ~]# systemctl enable kubelet
[root@k8snode2 ~]# systemctl is-enabled kubelet
enabled
[root@k8snode2 ~]# systemctl start kubelet
[root@k8snode2 ~]# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2022-01-17 05:10:48 EST; 2min 9s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 1976 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─1976 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kube...

Jan 17 05:10:49 k8snode2 kubelet[1976]: W0117 05:10:49.642289    1976 kuberuntime_container.go:758] No ref for container {"...ce92d"}
Jan 17 05:10:49 k8snode2 kubelet[1976]: I0117 05:10:49.653407    1976 reconciler.go:319] Volume detached for volume "tmp-vo...Path ""
Jan 17 05:10:49 k8snode2 kubelet[1976]: I0117 05:10:49.653437    1976 reconciler.go:319] Volume detached for volume "defaul...Path ""
Jan 17 05:10:49 k8snode2 kubelet[1976]: I0117 05:10:49.653453    1976 reconciler.go:319] Volume detached for volume "kubern...Path ""
Jan 17 05:10:52 k8snode2 kubelet[1976]: I0117 05:10:52.239206    1976 topology_manager.go:219] [topologymanager] RemoveCont...568c3da
Jan 17 05:10:58 k8snode2 kubelet[1976]: I0117 05:10:58.899501    1976 kubelet_node_status.go:70] Attempting to register node k8snode2
Jan 17 05:10:58 k8snode2 kubelet[1976]: I0117 05:10:58.910103    1976 kubelet_node_status.go:112] Node k8snode2 was previou...istered
Jan 17 05:10:58 k8snode2 kubelet[1976]: I0117 05:10:58.910256    1976 kubelet_node_status.go:73] Successfully registered no...8snode2
Jan 17 05:11:48 k8snode2 kubelet[1976]: I0117 05:11:48.858077    1976 topology_manager.go:219] [topologymanager] RemoveCont...c4ee0a2
Jan 17 05:11:48 k8snode2 kubelet[1976]: W0117 05:11:48.942110    1976 cni.go:331] CNI failed to retrieve network namespace ...48afb9"
Hint: Some lines were ellipsized, use -l to show in full.

(3) Check the node's taints again from the master: the taint is gone

[root@k8smaster1 ~]# kubectl describe node k8snode2 | grep Tain
Taints:             <none>

(4) Test

  The node accepts scheduling again and its networking works: the Service's exposed port is reachable through this node.

 Addendum: gracefully stopping a pod in Kubernetes

Simply scale its replica count to 0:

kubectl scale --replicas=0 deployment/<your-deployment>
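Scaling to zero lets each pod exit gracefully: Kubernetes first sends SIGTERM and only sends SIGKILL after the grace period (30 seconds by default). The local sketch below mimics what an app inside the pod should do, trapping SIGTERM and cleaning up before exit (the file path and messages are illustrative):

```shell
# Simulate graceful shutdown: the background job traps SIGTERM,
# writes its cleanup marker, and exits 0 instead of dying abruptly.
rm -f /tmp/shutdown.log
(
  trap 'echo "cleaning up" > /tmp/shutdown.log; exit 0' TERM
  while true; do sleep 1; done
) &
job=$!
sleep 1             # give the job time to start
kill -TERM "$job"   # what Kubernetes sends on scale-down/delete
wait "$job"
cat /tmp/shutdown.log
```

To bring the workload back later, scale the same deployment up again with kubectl scale --replicas=<n>.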

 

