Kubernetes Part 4: Deploying a K8s Cluster with kubeadm


Overview of Kubernetes components:

(1) kube-apiserver: the Kubernetes API server validates and configures data for API objects, including Pods, Services, ReplicationControllers, and other API objects. The API server provides REST operations and the front end to the cluster's shared state through which all other components interact.

 

(2) kube-scheduler: a policy-rich, topology-aware, workload-specific component that has a major impact on cluster availability, performance, and capacity. The scheduler must take into account individual and collective resource requirements, quality-of-service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on. Workload-specific requirements can be exposed through the API when necessary.


(3) kube-controller-manager: the Controller Manager is the management and control center inside the cluster. It is responsible for managing Nodes, Pod replicas, service endpoints (Endpoints), namespaces (Namespaces), service accounts (ServiceAccounts), and resource quotas (ResourceQuotas). When a Node unexpectedly goes down, the Controller Manager detects it promptly and runs automated repair flows to keep the cluster in its desired state.

 

(4) kube-proxy: the Kubernetes network proxy runs on every node. It reflects the Services defined in the Kubernetes API on that node and can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of backends. A user must create a Service through the apiserver API to configure the proxy; in effect, kube-proxy implements access to Kubernetes Services by maintaining network rules on the host and performing connection forwarding.

Official documentation: https://k8smeetup.github.io/docs/admin/kube-proxy

(5) kubelet: the primary node agent. It watches the Pods assigned to its node; its main functions are:

Official documentation: https://k8smeetup.github.io/docs/admin/kubelet/

  • report the node's status to the master
  • accept instructions and create docker containers for Pods
  • prepare the data volumes a Pod needs
  • report the running status of Pods
  • run container health checks on the node

 

(6) etcd: the default storage system Kubernetes uses; it holds all cluster data, so a backup plan for the etcd data is required.

Official documentation: https://github.com/etcd-io/etcd
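
Since etcd holds all of the cluster state, it is worth sketching what such a backup could look like. A minimal example, assuming a kubeadm-style deployment where etcd listens on https://127.0.0.1:2379 and its certificates live under /etc/kubernetes/pki/etcd (the paths and the snapshot file name are illustrative; adjust them for your environment):

mkdir -p /var/backups
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key    # writes a point-in-time snapshot of the etcd keyspace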

Deploying a Kubernetes Cluster with kubeadm

Deployment architecture diagram

Environment preparation

Environment notes:

The test Kubernetes cluster consists of one master host and one or more (at least two recommended) node hosts. These hosts can be physical servers, virtual machines running on VMware, VirtualBox, KVM, or similar virtualization platforms, or even VPS instances on a public cloud.

This test environment consists of three separate hosts, master1, node1, and node2, each with 4 CPU cores and 4 GB of memory, all running CentOS 7.5 1804. In addition, each host needs the following system settings:

(1) Synchronize time precisely on every node via an NTP service;
(2) Resolve each node's hostname through DNS; with only a few hosts in a test environment a hosts file works as well;
(3) Stop the iptables or firewalld service on every node and make sure it is disabled at boot;
(4) Disable SELinux on every node;
(5) Disable all swap devices on every node;
(6) To use the ipvs proxy mode, load the ipvs-related kernel modules on every node (a loading sketch follows this list);
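For item (6), a minimal sketch of loading the ipvs-related modules and making the change persistent (the module names assume the stock CentOS 7 kernel; on newer kernels the conntrack module is called nf_conntrack instead of nf_conntrack_ipv4):

cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack    # verify the modules are loaded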

 

master:192.168.7.100

node1:192.168.7.101

node2:192.168.7.102

Components to install

 docker-ce
 kubeadm 
 kubelet 
 kubectl 

Note: disable swap, SELinux, and iptables

# setenforce 0   # better: edit the config file /etc/sysconfig/selinux, set it to disabled, and reboot
#systemctl stop firewalld
#systemctl disable firewalld
#swapoff -a 

Configure the hosts file on all three hosts

# vim /etc/hosts
192.168.7.100 master
192.168.7.101 node1
192.168.7.102 node2

1. If swap is enabled, kubelet will fail to start (this can be ignored by setting the kubelet flag --fail-swap-on to false), so swap must be turned off on every machine:

$ sudo swapoff -a

2. To prevent swap from being mounted automatically at boot, comment out the corresponding entry in /etc/fstab:

$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

3. Synchronize the time

$ yum -y install ntpdate
$ sudo ntpdate cn.pool.ntp.org
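
To keep the clocks from drifting again after this one-off sync, a periodic ntpdate run can be scheduled through cron. A simple sketch (the server is the same one used above; running chronyd or ntpd as a daemon would be the more robust choice):

echo '*/30 * * * * root /usr/sbin/ntpdate cn.pool.ntp.org >/dev/null 2>&1' > /etc/cron.d/ntpdate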

4. Enable the iptables bridge-related kernel parameters; they are enabled by default on many systems, otherwise enable them as follows:

[root@master ~]# vim /etc/sysctl.conf   # edit kernel parameters
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1


[root@master ~]# sysctl -p   # apply the kernel parameters
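
These two bridge parameters only exist while the br_netfilter module is loaded. If sysctl -p complains that the keys are missing, load the module and make it persistent first (a small sketch):

[root@master ~]# modprobe br_netfilter
[root@master ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load the module automatically at boot
[root@master ~]# sysctl -p   # re-apply the parameters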

Steps

1. Install kubelet, docker, and kubeadm on the master and the nodes
2. Run the kubeadm init initialization command on the master node
3. Verify the master
4. Join the nodes to the Kubernetes master with kubeadm
5. Verify the nodes
6. Start containers and test access

About kubeadm

Official documentation: https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm/

1. Install docker-ce on all three hosts

Repo file download address: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

1. Change to /etc/yum.repos.d and download the docker-ce yum repo file

[root@k8s-master ~]# cd /etc/yum.repos.d/
[root@k8s-master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2. Install docker-ce

[root@k8s-master yum.repos.d]# yum install docker-ce  -y

In addition, since version 1.13 docker sets the default policy of the iptables FORWARD chain to DROP, which can break the packet forwarding a Kubernetes cluster depends on. After the docker service starts, the FORWARD chain's default policy therefore needs to be set back to ACCEPT. Do this by editing /usr/lib/systemd/system/docker.service and adding the following line after the "ExecStart=/usr/bin/dockerd" line:

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker

ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT  # set the FORWARD chain's default policy to ACCEPT

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
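
Instead of editing the packaged unit file, which a later docker-ce update may overwrite, the same fix can be placed in a systemd drop-in. A sketch (the drop-in file name is arbitrary):

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/10-forward-accept.conf <<'EOF'
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload && systemctl restart docker    # reload unit files and apply the drop-in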

3. Start the docker service

# systemctl daemon-reload
# systemctl start docker
# systemctl enable docker

4. Configure a docker registry mirror (accelerator), reload docker, and enable it at boot

[root@master ~]# mkdir -p /etc/docker

sudo tee /etc/docker/daemon.json <<-'EOF' 
{
 "registry-mirrors": ["https://kganztio.mirror.aliyuncs.com"]
} 
EOF 


[root@master ~]# systemctl daemon-reload 
[root@master ~]# systemctl start docker 
[root@master ~]# systemctl enable docker.service
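
After the restart, the mirror configuration can be confirmed from the daemon itself; the mirror URL configured above should appear under "Registry Mirrors":

[root@master ~]# docker info | grep -A1 'Registry Mirrors'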

  

2. Configure the Kubernetes yum repository and install the components on all three hosts

Aliyun mirror: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

1. Configure the Kubernetes yum repository on all three hosts

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install the kubelet, kubeadm, and kubectl components on the master and on node1/node2. A specific version is pinned here so that a kubeadm upgrade can be practised later; in production the latest version is normally installed.

[root@master ~]# yum list all |grep "^kube"    # list the available Kubernetes packages and versions
kubeadm.x86_64                              1.18.6-0                   @kubernetes
kubectl.x86_64                              1.18.6-0                   @kubernetes
kubelet.x86_64                              1.18.6-0                   @kubernetes
kubernetes-cni.x86_64                       0.8.6-0                    @kubernetes
kubernetes.x86_64                           1.5.2-0.7.git269f928.el7   extras   
kubernetes-client.x86_64                    1.5.2-0.7.git269f928.el7   extras   
kubernetes-master.x86_64                    1.5.2-0.7.git269f928.el7   extras   
kubernetes-node.x86_64                      1.5.2-0.7.git269f928.el7   extras  

Install the pinned Kubernetes version

[root@k8s-master yum.repos.d]# yum -y install kubeadm-1.14.1 kubelet-1.14.1 kubectl-1.14.1

After installation, check the version that was installed

[root@master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:08:49Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

3. Edit the kubelet configuration file

[root@master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs

Enable kubelet at boot

[root@master ~]# systemctl enable kubelet.service

kubeadm options explained

[root@master ~]# kubeadm init --help
--apiserver-advertise-address string     # address the API server advertises and listens on
--apiserver-bind-port int32              # port the API server binds to, default 6443
--apiserver-cert-extra-sans stringSlice  # optional extra Subject Alternative Names (IP addresses or DNS names) for the API server's serving certificate
--cert-dir string                        # directory in which to store certificates, default /etc/kubernetes/pki
--config string                          # path to a kubeadm configuration file
--ignore-preflight-errors strings        # errors from the preflight checks to ignore, e.g. swap; 'all' ignores everything
--image-repository string                # registry to pull control-plane images from, default k8s.gcr.io
--kubernetes-version string              # Kubernetes version to deploy, default "stable-1"
--node-name string                       # name for this node
--pod-network-cidr                       # CIDR range for Pod IP addresses
--service-cidr                           # CIDR range for Service addresses
--service-dns-domain string              # cluster DNS domain, default cluster.local
--skip-certificate-key-print             # do not print the key used to encrypt the control-plane certificates
--skip-phases strings                    # phases to skip
--skip-token-print                       # skip printing the bootstrap token
--token                                  # bootstrap token to use
--token-ttl                              # token lifetime, default 24h; 0 means the token never expires
--upload-certs                           # upload the control-plane certificates to the cluster

Global options
--log-file string           # log file path
--log-file-max-size uint    # maximum size of the log file in megabytes, default 1800; 0 means no limit
--rootfs string             # path to the real host root filesystem (an absolute path)
--skip-headers              # if true, do not show message headers in the log output
--skip-log-headers          # if true, do not show headers when writing to the log file

3. Initialize the cluster with kubeadm on the master node

The required images can either be pulled with docker beforehand or downloaded during the initialization itself; in this walkthrough they are pulled during initialization. The following command lists the images the chosen Kubernetes version needs.

[root@master ~]# kubeadm config images list  --kubernetes-version v1.14.1
k8s.gcr.io/kube-apiserver:v1.14.1
k8s.gcr.io/kube-controller-manager:v1.14.1
k8s.gcr.io/kube-scheduler:v1.14.1
k8s.gcr.io/kube-proxy:v1.14.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

The images are hosted on k8s.gcr.io and download slowly from abroad; instead, pull the equivalents of the images listed above from the Aliyun registry and re-tag them, which is much faster:

# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.1
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.1
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.1
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.1
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1


# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.1 k8s.gcr.io/kube-apiserver:v1.14.1
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
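
The pull-and-tag steps above can also be scripted as one loop over the image list, using the same Aliyun registry (a convenience sketch, not required):

ALIYUN=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 kube-scheduler:v1.14.1 \
           kube-proxy:v1.14.1 pause:3.1 etcd:3.3.10 coredns:1.3.1; do
  docker pull ${ALIYUN}/${img}                      # pull from the Aliyun mirror
  docker tag  ${ALIYUN}/${img} k8s.gcr.io/${img}    # re-tag with the name kubeadm expects
done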

1. Run the initialization on the master. Specify the version matching the installed kubeadm and ignore swap-related preflight errors; the required images are downloaded automatically during initialization.

kubeadm init  --kubernetes-version=v1.14.1 \
--pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12 \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--ignore-preflight-errors=swap

The options used in the command are briefly explained below; an equivalent configuration-file form is sketched after this list:

  • --kubernetes-version specifies the Kubernetes version to deploy; it must be one the installed kubeadm supports;
  • --pod-network-cidr specifies the network range from which Pods are assigned addresses; it should normally match the default of the network plugin to be deployed (e.g. flannel or calico); 10.244.0.0/16 is flannel's default network;
  • --service-cidr specifies the network range from which Services are assigned addresses; it is managed by Kubernetes and defaults to 10.96.0.0/12;
  • the last option, --ignore-preflight-errors=Swap, should only be used when swap has not been disabled.
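
The same settings can alternatively be collected in a kubeadm configuration file and passed with --config. A minimal sketch (the field names follow the kubeadm.k8s.io/v1beta1 ClusterConfiguration API accepted by kubeadm 1.14; the file name kubeadm-init.yaml is arbitrary):

cat > kubeadm-init.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16       # must match the network plugin, flannel here
  serviceSubnet: 10.96.0.0/12
EOF
kubeadm init --config kubeadm-init.yaml --ignore-preflight-errors=Swap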

2. Initialization only succeeds when output like that shown below appears (the full output is reproduced in step 4). Be sure swap is disabled, otherwise initialization will fail!

3. On the master, set up the kubectl credentials: create the hidden .kube directory, copy the admin kubeconfig into it, and change its ownership. After this, kubectl commands can be run on the master node.

[root@computer-2 yum.repos.d]# mkdir -p $HOME/.kube
[root@computer-2 yum.repos.d]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@computer-2 yum.repos.d]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
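
A quick check that the kubeconfig works; note that the master stays NotReady until a network plugin is deployed:

[root@master ~]# kubectl cluster-info   # should print the API server and cluster DNS endpoints
[root@master ~]# kubectl get nodes      # the master is listed, initially NotReady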

4. Join node1 and node2 to the master by running the last two lines of the kubeadm init output on each of them. The full output of the initialization is reproduced below for reference; the join command appears in its final two lines.

[root@master ~]# kubeadm init --apiserver-advertise-address=192.168.7.101 \
> --apiserver-bind-port=6443 --kubernetes-version=v1.18.3 \
> --pod-network-cidr=10.10.0.0/16  --service-cidr=10.20.0.0/16 \
> --service-dns-domain=linux36.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
> --ignore-preflight-errors=swap
W0720 22:37:02.504667    8243 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.linux36.local] and IPs [10.20.0.1 192.168.7.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.7.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.7.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0720 22:37:17.024784    8243 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0720 22:37:17.028093    8243 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.505881 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4a5y0s.jex4tzdhzj60t1ro
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.7.101:6443 --token 4a5y0s.jex4tzdhzj60t1ro \
    --discovery-token-ca-cert-hash sha256:5d5e7c28f5fba3cb42426e113f192d122e72514e5f7df05e1c2e1f082833b496

Run these last two lines of the init output on node1 and node2 to add both nodes to the master and form the cluster.
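
The bootstrap token above expires after 24 hours by default. If a node has to join later, a fresh join command can be generated on the master:

[root@master ~]# kubeadm token create --print-join-command   # prints a new kubeadm join line with a valid token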

5. On the master, check the status of the master and the two nodes; they show NotReady because flannel has not been deployed yet

[root@computer-2 yum.repos.d]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
computer-2   NotReady   master   10m   v1.14.1
node1        NotReady   <none>   7s    v1.14.1
node2        NotReady   <none>   7s    v1.14.1

4. Deploy flannel on the master

Project page: https://github.com/coreos/flannel/

If the flannel manifest cannot be downloaded, see: https://www.cnblogs.com/struggle-1216/p/13370413.html

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
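
After applying the manifest, watch the flannel DaemonSet pods start; the nodes switch to Ready shortly after they are Running:

[root@master ~]# kubectl get pods -n kube-system | grep flannel   # one kube-flannel pod per node
[root@master ~]# kubectl get nodes -w                             # wait until every node reports Ready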

Check all the docker images downloaded at this point

[root@master ~]# docker images
REPOSITORY                                                                    TAG                 IMAGE ID            CREATED             SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.14.1             20a2d7035165        9 months ago        82.1MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.14.1             cfaa4ad74c37        9 months ago        210MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.14.1             8931473d5bdb        9 months ago        81.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.14.1             efb3887b411d        9 months ago        158MB
quay.io/coreos/flannel                                                        v0.11.0-amd64       ff281650a721        11 months ago       52.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.3.1               eb516548c180        12 months ago       40.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.3.10              2c4adeb21b4f        13 months ago       258MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB

If the flannel image above was not pulled automatically, its name and version can be looked up in https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml and pulled manually:

[root@master ~]# docker pull quay.io/coreos/flannel:v0.11.0-amd64

5. Pull the coredns:1.3.1 image on node1 and node2

Because a custom DNS domain was specified when the master was initialized and node1/node2 do not yet have the coredns image, pull it manually onto node1 and node2:

[root@node1 yum.repos.d]# docker pull  registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1
[root@node2 yum.repos.d]# docker pull  registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1

Check the images on both nodes; they should now be identical

[root@node1 yum.repos.d]# docker images
REPOSITORY                                                       TAG                 IMAGE ID            CREATED             SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   v1.14.1             20a2d7035165        9 months ago        82.1MB
quay.io/coreos/flannel                                           v0.11.0-amd64       ff281650a721        11 months ago       52.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns      1.3.1               eb516548c180        12 months ago       40.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause        3.1                 da86e6ba6ca1        2 years ago         742kB

6. Verify the cluster from the master node

(1) All three nodes are now Ready

[root@master ~]# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
computer-2   Ready    master   109m   v1.14.1
node1        Ready    <none>   54m    v1.14.1
node2        Ready    <none>   19m    v1.14.1

 (2) Check the status of the control-plane components

[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 

(3) Check all Pods in all namespaces; they are all in Running state

[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-d5947d4b-8l45w               1/1     Running   1          110m
kube-system   coredns-d5947d4b-lnk5x               1/1     Running   1          110m
kube-system   etcd-computer-2                      1/1     Running   0          109m
kube-system   kube-apiserver-computer-2            1/1     Running   1          109m
kube-system   kube-controller-manager-computer-2   1/1     Running   1          110m
kube-system   kube-flannel-ds-amd64-r9x8c          1/1     Running   0          21m
kube-system   kube-flannel-ds-amd64-smw4z          1/1     Running   0          50m
kube-system   kube-flannel-ds-amd64-svb24          1/1     Running   0          50m
kube-system   kube-proxy-dgjlx                     1/1     Running   0          110m
kube-system   kube-proxy-hkpdn                     1/1     Running   0          56m
kube-system   kube-proxy-mvddn                     1/1     Running   0          21m
kube-system   kube-scheduler-computer-2            1/1     Running   1          110m

 (4) Check the Pods running on the two worker nodes; they are all Running

[root@master ~]# kubectl get pods -n kube-system -o wide |grep node
kube-flannel-ds-amd64-r9x8c          1/1     Running   0          21m    192.168.7.102   node2        <none>           <none>
kube-flannel-ds-amd64-svb24          1/1     Running   0          50m    192.168.7.101   node1        <none>           <none>
kube-proxy-hkpdn                     1/1     Running   0          56m    192.168.7.101   node1        <none>           <none>
kube-proxy-mvddn                     1/1     Running   0          21m    192.168.7.102   node2        <none>           <none>

 (5) Query detailed information for each Pod, including the node it runs on

[root@master ~]# kubectl get pods -o wide  -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE    IP              NODE         NOMINATED NODE   READINESS GATES
coredns-d5947d4b-8l45w               1/1     Running   1          125m   10.10.0.2       computer-2   <none>           <none>
coredns-d5947d4b-lnk5x               1/1     Running   1          125m   10.10.0.3       computer-2   <none>           <none>
etcd-computer-2                      1/1     Running   0          124m   192.168.7.100   computer-2   <none>           <none>
kube-apiserver-computer-2            1/1     Running   1          124m   192.168.7.100   computer-2   <none>           <none>
kube-controller-manager-computer-2   1/1     Running   2          124m   192.168.7.100   computer-2   <none>           <none>
kube-flannel-ds-amd64-r9x8c          1/1     Running   0          35m    192.168.7.102   node2        <none>           <none>
kube-flannel-ds-amd64-smw4z          1/1     Running   0          64m    192.168.7.100   computer-2   <none>           <none>
kube-flannel-ds-amd64-svb24          1/1     Running   0          64m    192.168.7.101   node1        <none>           <none>
kube-proxy-dgjlx                     1/1     Running   0          125m   192.168.7.100   computer-2   <none>           <none>
kube-proxy-hkpdn                     1/1     Running   0          70m    192.168.7.101   node1        <none>           <none>
kube-proxy-mvddn                     1/1     Running   0          35m    192.168.7.102   node2        <none>           <none>
kube-scheduler-computer-2            1/1     Running   2          124m   192.168.7.100   computer-2   <none>           <none>

7. Create containers and test access

Create four Pods, then run tests from inside one of the containers; Pods whose addresses are in different subnets on different hosts can ping each other.

[root@master ~]# kubectl run net-test1 --image=alpine --replicas=4 sleep 360000  # --replicas=4 creates four replicas
[root@master ~]# kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
net-test1-7c9c4b94d6-46mvg   1/1     Running   0          79s   10.10.2.2   node2   <none>           <none>
net-test1-7c9c4b94d6-jl7kv   1/1     Running   0          79s   10.10.1.3   node1   <none>           <none>
net-test1-7c9c4b94d6-lq45j   1/1     Running   0          79s   10.10.2.3   node2   <none>           <none>
net-test1-7c9c4b94d6-x7qmd   1/1     Running   0          79s   10.10.1.2   node1   <none>           <none>

[root@master ~]# kubectl exec -it net-test1-7c9c4b94d6-46mvg sh
/ # ip a # check the container's IP address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP 
    link/ether a2:b1:a4:32:6c:5d brd ff:ff:ff:ff:ff:ff
    inet 10.10.2.2/24 scope global eth0  # the IP address is 10.10.2.2
       valid_lft forever preferred_lft forever
/ # ping 10.10.1.3 # ping an address in a different subnet on the other node; it is reachable
PING 10.10.1.3 (10.10.1.3): 56 data bytes
64 bytes from 10.10.1.3: seq=0 ttl=62 time=32.475 ms
64 bytes from 10.10.1.3: seq=1 ttl=62 time=81.510 ms
^C
--- 10.10.1.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 32.475/56.992/81.510 ms
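
Besides pod-to-pod connectivity, cluster DNS can be checked from inside the same test container. A small sketch (busybox's nslookup in the alpine image resolves short service names through the search domains written into the pod's /etc/resolv.conf):

/ # cat /etc/resolv.conf          # shows the cluster DNS server and search domains
/ # nslookup kubernetes.default   # should resolve to a service IP inside the --service-cidr range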

8. Create and delete namespaces

[root@master ~]# kubectl get ns  # ns is the abbreviation of namespace
NAME              STATUS   AGE
default           Active   11h
kube-node-lease   Active   11h
kube-public       Active   11h
kube-system       Active   11h
[root@master ~]# kubectl create namespace develop
namespace/develop created
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   11h
develop           Active   4s
kube-node-lease   Active   11h
kube-public       Active   11h
kube-system       Active   11h
[root@master ~]# kubectl delete  namespace develop
namespace "develop" deleted
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   11h
kube-node-lease   Active   11h
kube-public       Active   11h
kube-system       Active   11h
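
Namespaces can also be managed declaratively with a manifest and kubectl apply/delete. A minimal sketch (the name develop mirrors the example above):

cat > develop-ns.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: develop
EOF
kubectl apply -f develop-ns.yaml    # creates the namespace
kubectl delete -f develop-ns.yaml   # removes it again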

9. Query resource information in YAML and JSON format

[root@master ~]# kubectl get ns/default -o yaml # output in YAML format
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2020-07-20T14:37:43Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:phase: {}
    manager: kube-apiserver
    operation: Update
    time: "2020-07-20T14:37:43Z"
  name: default
  resourceVersion: "152"
  selfLink: /api/v1/namespaces/default
  uid: 95da281c-b699-41c5-9fb3-01b38b2fdbe9
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

[root@master ~]# kubectl get ns/default -o json  # output in JSON format
{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "creationTimestamp": "2020-07-20T14:37:43Z",
        "managedFields": [
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:status": {
                        "f:phase": {}
                    }
                },
                "manager": "kube-apiserver",
                "operation": "Update",
                "time": "2020-07-20T14:37:43Z"
            }
        ],
        "name": "default",
        "resourceVersion": "152",
        "selfLink": "/api/v1/namespaces/default",
        "uid": "95da281c-b699-41c5-9fb3-01b38b2fdbe9"
    },
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },
    "status": {
        "phase": "Active"
    }
}
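
For scripting, a single field can be extracted directly instead of parsing the full YAML/JSON output, for example with jsonpath:

[root@master ~]# kubectl get ns/default -o jsonpath='{.status.phase}'   # prints: Active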

  

  

At this point the Kubernetes cluster is fully installed; kubeadm did all of the heavy lifting for us behind the scenes!

 

 

 

