Deploying Kubernetes with Binaries & Tools


Kubernetes Architecture

Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services that facilitates declarative configuration and automation. Kubernetes has a large, rapidly growing ecosystem, and its services, support, and tools are widely available.

1. Introduction

Kubernetes (K8s for short) is a leading distributed solution built on container technology. It is Google's open-source container cluster management system, whose design was inspired by Borg, a container management system used internally at Google, and it inherits more than a decade of Google's experience operating container clusters. It provides containerized applications with a complete set of capabilities, including deployment and execution, resource scheduling, service discovery, and dynamic scaling, greatly simplifying the management of large-scale container clusters.

Kubernetes is a complete distributed system support platform. It offers comprehensive cluster management capabilities: multi-level security and admission mechanisms, multi-tenant application support, transparent service registration and discovery, a built-in intelligent load balancer, strong fault detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduling mechanism, and fine-grained resource quota management.

For cluster management, Kubernetes divides the machines in a cluster into a Master node and a group of worker Nodes. The Master runs a set of cluster-management processes: kube-apiserver, kube-controller-manager, and kube-scheduler. These processes implement cluster-wide resource management, Pod scheduling, elastic scaling, security control, system monitoring, and error correction, all fully automatically. Nodes are the worker machines that run the real applications; the smallest unit of execution Kubernetes manages on a Node is the Pod. Each Node runs the kubelet and kube-proxy service processes, which are responsible for creating, starting, monitoring, restarting, and destroying Pods, and for implementing a software-mode load balancer.

A Kubernetes cluster solves the two classic problems of traditional IT systems: service scaling and service upgrades. If software were not particularly complex and did not need to carry much peak traffic, deploying a backend project would only require installing a few simple dependencies on a virtual machine, compiling the project, and running it. But as software grows more complex, a complete backend service is no longer a monolith; it is composed of many services with distinct responsibilities and functions. The complex topology among those services, together with performance requirements that a single machine can no longer satisfy, makes deployment and operations very complicated, and so deploying and operating large clusters has become an urgent need.

Kubernetes has not only come to dominate the container orchestration market; it has also changed how operations are done. It blurs the boundary between development and operations while making the DevOps role clearer: every software engineer can use Kubernetes to define the topology among services, the number of online nodes, and resource usage, and can quickly perform horizontal scaling, blue-green deployments, and other operations that used to be complex.

2. Architecture

Kubernetes follows a very traditional client-server architecture. Clients communicate with a Kubernetes cluster either through a RESTful interface or directly via kubectl; in practice there is little difference between the two, since the latter is just a wrapper around the RESTful API that Kubernetes exposes. Every Kubernetes cluster consists of a set of Master nodes and a series of Worker nodes; the Master nodes are mainly responsible for storing the cluster's state and for allocating and scheduling resources for Kubernetes objects.
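This equivalence is easy to observe: raising kubectl's log verbosity prints the raw REST calls it sends to the API server. A harmless, read-only check:

# -v=8 logs the HTTP requests kubectl issues, e.g. GET https://<apiserver>:6443/api/v1/nodes
kubectl get nodes -v=8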

Master

The Master mainly receives client requests, arranges container execution, and runs the control loops that migrate the cluster's state toward the target state. It is made up of three components:

  • API Server

    Handles requests from users. Its main role is to expose the RESTful interface, serving both read requests that inspect cluster state and write requests that change it; it is also the only component that communicates with the etcd cluster.

  • Controller Manager

    The controller manager runs a series of controller processes that continuously reconcile the objects in the cluster toward the user's desired state in the background; when a service's state changes, the controllers notice the change and start migrating it toward the target state.

  • Scheduler

    The scheduler selects the Worker node on which each Pod should be deployed; it chooses the node that best satisfies the Pod's requirements and runs every time a Pod needs to be scheduled.

Node

A Node is comparatively simple; it consists mainly of two components, the kubelet and kube-proxy:

  • kubelet is the primary service on a node. It periodically receives new or modified Pod specifications from the API Server, ensures that the Pods and their containers on the node run properly, drives the node toward its target state, and reports the host's health back to the Master.

  • kube-proxy manages the host's subnet and exposes services to the outside; it works by forwarding requests across multiple isolated networks to the correct Pod or container.
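As a quick sketch of where these two agents live on a running cluster (assuming a kubeadm-style deployment, where kube-proxy runs as a DaemonSet pod rather than a host service):

# kubelet runs as a systemd service on every node
systemctl status kubelet

# kube-proxy typically runs as a DaemonSet pod in kube-system
kubectl get pods -n kube-system -o wide | grep kube-proxy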

Kubernetes architecture diagram

In this system architecture diagram, services are divided into those running on worker nodes and those composing the cluster-level control plane.

Kubernetes consists mainly of the following core components:

  • etcd stores the state of the entire cluster
  • apiserver provides the sole entry point for resource operations and supplies authentication, authorization, access control, API registration, and discovery
  • controller manager maintains the cluster's state, handling fault detection, automatic scaling, rolling updates, and so on
  • scheduler is responsible for resource scheduling, placing Pods on the appropriate machines according to the configured scheduling policies
  • kubelet maintains the container lifecycle and also manages Volumes (CVI) and networking (CNI)
  • Container runtime manages images and actually runs Pods and containers (CRI)
  • kube-proxy provides in-cluster service discovery and load balancing for Services

Besides the core components, there are some recommended add-ons:

  • kube-dns provides DNS for the whole cluster
  • Ingress Controller provides an external entry point for services
  • Heapster provides resource monitoring
  • Dashboard provides a GUI
  • Federation provides clusters spanning availability zones
  • Fluentd-elasticsearch provides cluster log collection, storage, and querying
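On a running cluster, most of these components can be confirmed with a single command; the control-plane components appear as static pods in the kube-system namespace (pod name suffixes vary per cluster):

kubectl get pods -n kube-system
# expect kube-apiserver-*, kube-controller-manager-*, kube-scheduler-*,
# etcd-*, kube-proxy-*, coredns-* in the Running state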

3. Installation and Deployment

There are two ways to install Kubernetes: the binary method, which is customizable but complex and error-prone to deploy, and the kubeadm tool, which is simple to deploy but not customizable.

Environment Initialization

Before starting, we need to initialize all of the machines the cluster will use.

System Environment

Software Version
CentOS CentOS Linux release 7.7.1908 (Core)
Docker 19.03.12
kubernetes v1.18.8
etcd 3.3.24
flannel v0.11.0
cfssl
kernel-lt 4.18+
kernel-lt-devel 4.18+

Software Plan

Host Installed Software
kubernetes-master-01 kube-apiserver,kube-controller-manager,kube-scheduler,etcd, kubelet,docker
kubernetes-master-02 kube-apiserver,kube-controller-manager,kube-scheduler,etcd, kubelet,docker
kubernetes-master-03 kube-apiserver,kube-controller-manager,kube-scheduler,etcd, kubelet,docker
kubernetes-node-01 kubelet,kube-proxy,etcd,docker
kubernetes-node-02 kubelet,kube-proxy,etcd,docker
kubernetes-master-vip kubectl,haproxy,keepalived

Cluster Plan

Hostname IP
kubernetes-master-01 172.16.0.50
kubernetes-master-02 172.16.0.51
kubernetes-master-03 172.16.0.52
kubernetes-node-01 172.16.0.53
kubernetes-node-02 172.16.0.54
kubernetes-master-vip 172.16.0.55

Disable SELinux

setenforce 0

Disable the Firewall

systemctl disable --now firewalld
setenforce 0

Disable the Swap Partition

swapoff -a
sed -i.bak 's/^.*centos-swap/#&/g' /etc/fstab
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet
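A quick sanity check that swap is really off:

free -h                # the Swap row should show 0B
grep swap /etc/fstab   # the swap entry should now be commented out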

Set the Hostnames

# Master 01 node
$ hostnamectl set-hostname kubernetes-master-01

# Master 02 node
$ hostnamectl set-hostname kubernetes-master-02

# Master 03 node
$ hostnamectl set-hostname kubernetes-master-03

# Node 01 node
$ hostnamectl set-hostname kubernetes-node-01

# Node 02 node
$ hostnamectl set-hostname kubernetes-node-02

# Load balancer node
$ hostnamectl set-hostname kubernetes-master-vip

Configure /etc/hosts Resolution

cat >> /etc/hosts <<EOF

172.16.0.50 kubernetes-master-01
172.16.0.51 kubernetes-master-02
172.16.0.52 kubernetes-master-03
172.16.0.53 kubernetes-node-01
172.16.0.54 kubernetes-node-02
172.16.0.55 kubernetes-master-vip

EOF

Passwordless SSH Between Cluster Nodes

ssh-keygen -t rsa

for i in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03 kubernetes-node-01 kubernetes-node-02 kubernetes-master-vip; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i; done

Upgrade the Kernel

yum localinstall -y kernel-lt*

grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg

grubby --default-kernel

# Reboot
reboot

Install the IPVS Modules

# Install IPVS
yum install -y conntrack-tools ipvsadm ipset conntrack libseccomp

# Load the IPVS modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

Kernel Tuning

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

# Apply immediately
sysctl --system

Configure the yum Repositories

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

# Refresh the cache
yum makecache

# Update the system, excluding the kernel
yum update -y --exclude=kernel*

Install Base Packages

yum install wget expect vim net-tools ntp bash-completion ipvsadm ipset jq iptables conntrack sysstat libseccomp -y

Disable the Firewall and Related Services

[root@localhost ~]# systemctl disable --now dnsmasq
[root@localhost ~]# systemctl disable --now firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]# systemctl disable --now NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.

Installing with kubeadm

kubeadm is the official automated deployment tool for Kubernetes. It is simple and easy to use, but not convenient to customize.

Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install docker-ce -y

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://8mh75mhz.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload ; systemctl restart docker;systemctl enable --now docker.service

Master Node Installation

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

setenforce 0
yum install -y kubelet kubeadm kubectl

Node Installation

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


setenforce 0
yum install -y kubelet kubeadm kubectl

Enable Services at Boot

systemctl enable --now docker.service kubelet.service

Initialization

kubeadm init \
--image-repository=registry.cn-hangzhou.aliyuncs.com/k8sos \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

  • As the command output instructs, configure kubectl for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Enable kubectl command completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrcCopy to clipboardErrorCopied

Install the Flannel Network Plugin

Kubernetes relies on a third-party network plugin to implement its networking, so installing one is a necessary prerequisite. There are several third-party plugins; flanneld, calico, and canal (flanneld + calico) are in common use. The different network components all provide the basic networking functions, supplying IP networking for each Node.

  • Pull the image and apply the network plugin manifest
docker pull registry.cn-hangzhou.aliyuncs.com/k8sos/flannel:v0.12.0-amd64 ; docker tag  registry.cn-hangzhou.aliyuncs.com/k8sos/flannel:v0.12.0-amd64  quay.io/coreos/flannel:v0.12.0-amd64

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
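Watch the plugin come up and the node turn Ready (the namespace and labels may differ between flannel versions; with this manifest the DaemonSet runs in kube-system labeled app=flannel):

kubectl get pods -n kube-system -l app=flannel -o wide
kubectl get nodes   # the node flips from NotReady to Ready once flannel is running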

Joining the Cluster

A token is required for Node nodes to join the cluster.

# Create a token
kubeadm token create  --print-join-command

# Join a node to the cluster
kubeadm join 10.0.0.50:6443 --token 038qwm.hpoxkc1f2fkgti3r     --discovery-token-ca-cert-hash sha256:edcd2c212be408f741e439abe304711ffb0adbb3bedbb1b93354bfdc3dd13b04
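Back on a master, confirm the node registered; it becomes Ready once flannel starts on it:

kubectl get nodes -o wide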

Joining Master Nodes to the Cluster

As above, master nodes need the k8s components and Docker installed; in addition, high availability must be set up before master nodes can join the cluster.

Master node high availability
  • Install the HA software
# The HA software must be installed on the master nodes
yum install -y keepalived haproxy
  • Configure haproxy

    • Edit the configuration file

      cat > /etc/haproxy/haproxy.cfg <<EOF
      global
        maxconn  2000
        ulimit-n  16384
        log  127.0.0.1 local0 err
        stats timeout 30s
      
      defaults
        log global
        mode  http
        option  httplog
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        timeout http-request 15s
        timeout http-keep-alive 15s
      
      frontend monitor-in
        bind *:33305
        mode http
        option httplog
        monitor-uri /monitor
      
      listen stats
        bind    *:8006
        mode    http
        stats   enable
        stats   hide-version
        stats   uri       /stats
        stats   refresh   30s
        stats   realm     Haproxy\ Statistics
        stats   auth      admin:admin
      
      frontend k8s-master
        bind 0.0.0.0:8443
        bind 127.0.0.1:8443
        mode tcp
        option tcplog
        tcp-request inspect-delay 5s
        default_backend k8s-master
      
      backend k8s-master
        mode tcp
        option tcplog
        option tcp-check
        balance roundrobin
        default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
        server kubernetes-master-01    172.16.0.50:6443  check inter 2000 fall 2 rise 2 weight 100
        server kubernetes-master-02    172.16.0.51:6443  check inter 2000 fall 2 rise 2 weight 100
        server kubernetes-master-03    172.16.0.52:6443  check inter 2000 fall 2 rise 2 weight 100
      EOF
      
  • Distribute the configuration to the other nodes

    for i in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03; 
    do 
    ssh root@$i "mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg_bak"
    scp haproxy.cfg root@$i:/etc/haproxy/haproxy.cfg
    done
    
    
  • Configure keepalived

    • Edit the keepalived configuration

      mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
      
      cat > /etc/keepalived/keepalived.conf <<EOF
      ! Configuration File for keepalived
      global_defs {
          router_id LVS_DEVEL
      }
      vrrp_script chk_kubernetes {
          script "/etc/keepalived/check_kubernetes.sh"
          interval 2
          weight -5
          fall 3  
          rise 2
      }
      vrrp_instance VI_1 {
          state MASTER
          interface eth0
          mcast_src_ip 172.16.0.50
          virtual_router_id 51
          priority 100
          advert_int 2
          authentication {
              auth_type PASS
              auth_pass K8SHA_KA_AUTH
          }
          virtual_ipaddress {
              172.16.0.55
          }
      #    track_script {
      #       chk_kubernetes
      #    }
      }
      EOF
      
    • Write the health-check script

      # Quote the heredoc delimiter so $(...) and $variables land in the script unexpanded
      cat > /etc/keepalived/check_kubernetes.sh <<'EOF'
      #!/bin/bash
      
      function check_kubernetes() {
          for ((i=0;i<5;i++));do
              apiserver_pid_id=$(pgrep kube-apiserver)
              if [[ ! -z $apiserver_pid_id ]];then
                  return
              else
                  sleep 2
              fi
              apiserver_pid_id=0
          done
      }
      
      # 1: running  0: stopped
      check_kubernetes
      if [[ $apiserver_pid_id -eq 0 ]];then
          /usr/bin/systemctl stop keepalived
          exit 1
      else
          exit 0
      fi
      EOF
      
      chmod +x /etc/keepalived/check_kubernetes.sh
      
  • Distribute to the other nodes

    for i in kubernetes-master-02 kubernetes-master-03; 
    do 
      ssh root@$i "mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak"
      scp /etc/keepalived/keepalived.conf root@$i:/etc/keepalived/keepalived.conf
      scp /etc/keepalived/check_kubernetes.sh root@$i:/etc/keepalived/check_kubernetes.sh
    done
    
  • Adjust the backup node configuration (shown for kubernetes-master-02; on kubernetes-master-03 use 172.16.0.52 and a lower priority, e.g. 98)

    sed -i 's#state MASTER#state BACKUP#g' /etc/keepalived/keepalived.conf
    sed -i 's#172.16.0.50#172.16.0.51#g' /etc/keepalived/keepalived.conf
    sed -i 's#priority 100#priority 99#g' /etc/keepalived/keepalived.conf
    
  • Start the services

    # Run on all master nodes
    [root@kubernetes-master-01 ~]# systemctl enable --now keepalived.service haproxy.service
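    To confirm the pair is working, check that the VIP landed on the active keepalived node (the interface name eth0 is an assumption; adjust to your environment):

    ip addr show eth0 | grep 172.16.0.55   # only the current MASTER holds the VIP
    ss -lntp | grep 8443                   # haproxy is listening on the API frontend port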
    
    
  • Master node cluster-join configuration file

    apiVersion: kubeadm.k8s.io/v1beta2
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 24h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 172.16.0.55  # load balancer VIP
      bindPort: 6443
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      name: kubernetes-master-01 # node name
      taints:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
    ---
    apiServer:
      certSANs: # certificate SANs (DNS/IP addresses)
      - "172.16.0.55"
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.cn-hangzhou.aliyuncs.com/k8sos # image repository for the cluster images
    controlPlaneEndpoint: "172.16.0.55:8443"  # endpoint to listen on (VIP and load balancer port)
    kind: ClusterConfiguration
    kubernetesVersion: v1.18.8
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12 # Service CIDR
      podSubnet: 10.244.0.0/16 # Pod CIDR
    scheduler: {}
    
  • Pull the images

    [root@kubernetes-master-02 ~]# kubeadm config images pull --config kubeadm-init.yaml
    W0905 15:42:34.095990   27731 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/kube-apiserver:v1.18.8
    [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/kube-controller-manager:v1.18.8
    [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/kube-scheduler:v1.18.8
    [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/kube-proxy:v1.18.8
    [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/pause:3.2
    [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/etcd:3.4.3-0
    [config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/coredns:1.7.0
    
    
  • Initialize the node

    kubeadm init --config kubeadm-init.yaml --upload-certs
    
    

Note that the information displayed when initialization completes refers to the load-balanced node, not the current node.

kubeadm High-Availability Installation

Install a highly available Kubernetes cluster using kubeadm.

Deploy Docker

Add the Docker yum repository

Download the matching Docker repository file from the Aliyun mirror site.

Install docker-ce
yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install docker-ce -y

Verify the installation
[root@kubernetes-master-01 ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:46:54 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       48a66213fe
  Built:            Mon Jun 22 15:45:28 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Configure the Docker registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://8mh75mhz.mirror.aliyuncs.com"]
}
EOF

Reload and start
sudo systemctl daemon-reload ; systemctl restart docker;systemctl enable --now docker.service

Deploy Kubernetes

To deploy Kubernetes, first download the repository file from the Aliyun mirror site.

Add the repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

It is best to regenerate the yum cache:

yum makecache

Install the kubeadm, kubelet, and kubectl Components
  • Check the available versions

    [root@kubernetes-master-01 ~]# yum list kubeadm --showduplicates | grep 1.18.8
    Repository epel is listed more than once in the configuration
    kubeadm.x86_64                       1.18.8-0                        @kubernetes
    kubeadm.x86_64                       1.18.8-0                        kubernetes
    
    
  • Install the appropriate version

    yum install -y kubelet-1.18.8 kubeadm-1.18.8 kubectl-1.18.8
    
    
  • Enable at boot

    systemctl enable --now kubelet
    
    

Deploy the Master Node

The master node is the core of the entire cluster; it mainly includes components such as the apiserver, controller manager, scheduler, kubelet, and kube-proxy.

Generate the configuration file
kubeadm config print init-defaults > kubeadm-init.yaml

Edit the initialization configuration file
[root@kubernetes-master-01 ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.0.44 # change to the cluster VIP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kubernetes-master-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:                                          # VIP address and SANs
  - "172.16.0.44"
  - "127.0.0.1"
  - "172.16.0.50"
  - "172.16.0.51"
  - "172.16.0.52"
  - "172.16.0.55"
  - "10.96.0.1"
  - "kubernetes"
  - "kubernetes.default"
  - "kubernetes.default.svc"
  - "kubernetes.default.svc.cluster"
  - "kubernetes.default.svc.cluster.local"
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/k8sos # change to your image repository
controlPlaneEndpoint: "172.16.0.44:8443"                # VIP address and port
kind: ClusterConfiguration
kubernetesVersion: v1.18.8 # change to the version to install
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16                              # add the Pod CIDR
scheduler: {}

1. The advertiseAddress field above is not the NIC address of the current host; it is the VIP of the highly available cluster.

2. controlPlaneEndpoint is set to the VIP address, and the port is the load balancer's port.
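Before running init, it is worth a quick check that the load-balancer path is up on the VIP holder (adjust the VIP if yours differs):

ping -c 1 172.16.0.44   # the VIP answers from the active keepalived node
ss -lntp | grep 8443    # haproxy is bound on the frontend port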

Pull the images
[root@kubernetes-master-01 ~]# kubeadm config images pull --config kubeadm-init.yaml
W0910 00:24:00.828524   13149 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/kube-apiserver:v1.18.8
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/kube-controller-manager:v1.18.8
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/kube-scheduler:v1.18.8
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/kube-proxy:v1.18.8
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/pause:3.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/etcd:3.4.3-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/k8sos/coredns:1.6.7
Initialize the Master Node
[root@kubernetes-master-01 ~]# systemctl enable --now haproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
[root@kubernetes-master-01 ~]#
[root@kubernetes-master-01 ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0912 02:18:15.553917   31660 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: configmaps "kubeadm-config" not found
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0912 02:18:17.546319   31660 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@kubernetes-master-01 ~]# rm -rf kubeadm-init.yaml
[root@kubernetes-master-01 ~]# rm -rf /etc/kubernetes/
[root@kubernetes-master-01 ~]# rm -rf /var/lib/etcd/
[root@kubernetes-master-01 ~]# kubeadm config print init-defaults > kubeadm-init.yaml
W0912 02:19:15.842407   32092 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[root@kubernetes-master-01 ~]# vim kubeadm-init.yaml
[root@kubernetes-master-01 ~]# kubeadm init --config kubeadm-init.yaml --upload-certs
W0912 02:21:43.282656   32420 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes-master-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.0.44 172.16.0.44 172.16.0.44 127.0.0.1 172.16.0.50 172.16.0.51 172.16.0.52 10.96.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master-01 localhost] and IPs [172.16.0.44 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master-01 localhost] and IPs [172.16.0.44 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0912 02:21:47.217408   32420 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0912 02:21:47.218263   32420 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.017997 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
4148edd0a9c6e678ebca46567255522318b24936b9da7e517f687719f3dc33ac
[mark-control-plane] Marking the node kubernetes-master-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubernetes-master-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.16.0.44:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:06df91bcacf6b5f206378a11065e2b6948eb2529503248f8e312fc0b913bec62 \
    --control-plane --certificate-key 4148edd0a9c6e678ebca46567255522318b24936b9da7e517f687719f3dc33ac

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.0.44:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:06df91bcacf6b5f206378a11065e2b6948eb2529503248f8e312fc0b913bec62

Initialization takes roughly half a minute. Next we join the remaining master nodes, plus the node nodes, to form the cluster.

Master user configuration
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join Master Nodes to the Cluster

We can join master nodes using the join command from the cluster initialization output.

  kubeadm join 172.16.0.44:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:06df91bcacf6b5f206378a11065e2b6948eb2529503248f8e312fc0b913bec62 \
    --control-plane --certificate-key 4148edd0a9c6e678ebca46567255522318b24936b9da7e517f687719f3dc33ac

Join Nodes to the Cluster

Joining a node works the same way as joining a master node, minus the control-plane flags.

kubeadm join 172.16.0.44:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:06df91bcacf6b5f206378a11065e2b6948eb2529503248f8e312fc0b913bec62
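After all the joins finish, every node should be listed from any master, turning Ready once the network plugin runs on it:

kubectl get nodes -o wide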

Deploying Node Nodes

Binary Installation

Binary installation is good for cluster customization: individual components can be tuned to their particular load.

Certificates

Kubernetes has many components, which communicate with one another over HTTP/gRPC to cooperatively deploy and manage the applications in the cluster. The master nodes in particular control the entire cluster, so security is paramount. The most secure and most widely used mechanism available today is the digital certificate, and that is exactly the authentication method Kubernetes uses.

Install the cfssl certificate generation tool

Here we use the cfssl certificate generation tool, which writes the CA, validity periods, and other settings into JSON files ahead of time, making issuance more efficient and automatable. cfssl is an open-source certificate management tool written in Go; cfssljson takes the JSON output from cfssl and writes the certificate, key, CSR, and bundle to files.

  • Note: before doing a binary install of k8s, it is best to have Docker installed on every host first.
  • Download
# Download
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

# Make the binaries executable
chmod +x cfssljson_linux-amd64
chmod +x cfssl_linux-amd64

# Move them to /usr/local/bin
mv cfssljson_linux-amd64 cfssljson
mv cfssl_linux-amd64 cfssl
mv cfssljson cfssl /usr/local/bin
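Confirm the tool is installed and on the PATH:

cfssl version   # prints the cfssl Version / Revision / Runtime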
Create the root certificate

Looking at the architecture as a whole, the most important parts of the cluster environment are etcd and the API server.

A root certificate is the basis of the trust relationship between the CA and its users: a user's digital certificate is valid only if it chains to a trusted root certificate.

Technically, a certificate contains three parts: the subject's information, the subject's public key, and the certificate signature.

The CA is responsible for approving, issuing, archiving, and revoking digital certificates. Certificates issued by a CA carry the CA's digital signature, so no one other than the CA can alter them without detection.

  • Create the JSON config file for requesting certificates
mkdir -p cert/ca

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
           "expiry": "8760h"
      }
    }
  }
}
EOF

default is the default policy; it sets the default certificate validity to one year (8760h).

profiles defines usage profiles; here there is only kubernetes, but multiple profiles can be defined, each with its own expiry, usages, and other parameters, and a specific profile is selected later when signing certificates.

signing: the certificate can be used to sign other certificates (this produces the ca.pem certificate).

server auth: a client may use this CA to verify certificates presented by servers.

client auth: a server may use this CA to verify certificates presented by clients.

  • Create the root CA certificate signing request file
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names":[{
    "C": "CN",
    "ST": "ShangHai",
    "L": "ShangHai"
  }]
}
EOF

C: country

ST: state or province

L: locality (city)

O: organization

OU: organizational unit

  • Generate the certificate
[root@kubernetes-master-01 ca]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/08/28 23:51:50 [INFO] generating a new CA key and certificate from CSR
2020/08/28 23:51:50 [INFO] generate received request
2020/08/28 23:51:50 [INFO] received CSR
2020/08/28 23:51:50 [INFO] generating key: rsa-2048
2020/08/28 23:51:50 [INFO] encoded CSR
2020/08/28 23:51:50 [INFO] signed certificate with serial number 66427391707536599498414068348802775591392574059
[root@kubernetes-master-01 ca]# ll
total 20
-rw-r--r-- 1 root root  282 Aug 28 23:41 ca-config.json
-rw-r--r-- 1 root root 1013 Aug 28 23:51 ca.csr
-rw-r--r-- 1 root root  196 Aug 28 23:41 ca-csr.json
-rw------- 1 root root 1675 Aug 28 23:51 ca-key.pem
-rw-r--r-- 1 root root 1334 Aug 28 23:51 ca.pem

gencert: generate a new key and signed certificate

-initca: initialize a new CA certificate
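To inspect what was just generated, dump the certificate with either tool:

cfssl certinfo -cert ca.pem                      # parsed certificate as JSON
openssl x509 -in ca.pem -noout -subject -dates   # subject and validity window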

Deploy the Etcd Cluster

Etcd is a Raft-based distributed key-value store developed by the CoreOS team, commonly used for service discovery, shared configuration, and concurrency control (leader election, distributed locks, and so on). Kubernetes uses etcd for state and data storage!

Etcd node plan
Etcd name IP
etcd-01 172.16.0.50
etcd-02 172.16.0.51
etcd-03 172.16.0.52
Create the Etcd certificates

The IPs in the hosts field are the internal communication IPs of all the etcd nodes; list one IP for every etcd node.

mkdir -p /root/cert/etcd
cd /root/cert/etcd

cat > etcd-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "127.0.0.1",
    "172.16.0.50",
    "172.16.0.51",
    "172.16.0.52",
    "172.16.0.53",
    "172.16.0.54"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
          "C": "CN",
          "ST": "ShangHai",
          "L": "ShangHai"
        }
    ]
}
EOF
Generate the certificate
[root@kubernetes-master-01 etcd]# cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2020/08/29 00:02:20 [INFO] generate received request
2020/08/29 00:02:20 [INFO] received CSR
2020/08/29 00:02:20 [INFO] generating key: rsa-2048
2020/08/29 00:02:20 [INFO] encoded CSR
2020/08/29 00:02:20 [INFO] signed certificate with serial number 71348009702526539124716993806163559962125770315
[root@kubernetes-master-01 etcd]# ll
total 16
-rw-r--r-- 1 root root 1074 Aug 29 00:02 etcd.csr
-rw-r--r-- 1 root root  352 Aug 28 23:59 etcd-csr.json
-rw------- 1 root root 1675 Aug 29 00:02 etcd-key.pem
-rw-r--r-- 1 root root 1460 Aug 29 00:02 etcd.pem

gencert: generate a new key and signed certificate
-initca: initialize a new CA
-ca: the CA certificate
-ca-key: the CA private key
-config: the JSON file for the certificate request
-profile: the profile in the config to use; the certificate is generated according to that profile section
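A quick way to confirm that the hosts list made it into the certificate's SANs:

openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'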

Distribute the certificates to the etcd servers
for ip in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03
do
  ssh root@${ip} "mkdir -pv /etc/etcd/ssl"
  scp ../ca/ca*.pem  root@${ip}:/etc/etcd/ssl
  scp ./etcd*.pem  root@${ip}:/etc/etcd/ssl
done

for ip in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03; do   
    ssh root@${ip} "ls -l /etc/etcd/ssl";
done

Deploy etcd
wget https://mirrors.huaweicloud.com/etcd/v3.3.24/etcd-v3.3.24-linux-amd64.tar.gz

tar xf etcd-v3.3.24-linux-amd64.tar.gz

for i in kubernetes-master-02 kubernetes-master-01 kubernetes-master-03
do
scp ./etcd-v3.3.24-linux-amd64/etcd* root@$i:/usr/local/bin/
done

[root@kubernetes-master-01 etcd-v3.3.24-linux-amd64]# etcd --version
etcd Version: 3.3.24
Git SHA: bdd57848d
Go Version: go1.12.17
Go OS/Arch: linux/amd64

Manage Etcd with systemd
mkdir -pv /etc/kubernetes/conf/etcd

ETCD_NAME=`hostname`
INTERNAL_IP=`hostname -i`
INITIAL_CLUSTER=kubernetes-master-01=https://172.16.0.50:2380,kubernetes-master-02=https://172.16.0.51:2380,kubernetes-master-03=https://172.16.0.52:2380

cat << EOF | sudo tee /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster \\
  --initial-cluster ${INITIAL_CLUSTER} \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Configuration options explained

Option Description
name Node name
data-dir Directory where the node stores its data
listen-peer-urls Address used for communication with the other cluster members
listen-client-urls Locally listened address that serves client requests
initial-advertise-peer-urls Local peer URL advertised to the other cluster nodes
advertise-client-urls Client URLs advertised to the rest of the cluster
initial-cluster All the nodes in the cluster
initial-cluster-token Cluster token, kept identical across the whole cluster
initial-cluster-state Initial cluster state, new by default
--cert-file Path to the TLS certificate file for client-server traffic
--key-file Path to the TLS key file for client-server traffic
--peer-cert-file Path to the TLS certificate file for peer traffic
--peer-key-file Path to the TLS key file for peer traffic
--trusted-ca-file CA certificate that signed the client certificates, used to verify them
--peer-trusted-ca-file CA certificate that signed the peer server certificates
Start Etcd
# Run on all three etcd nodes
systemctl enable --now etcd

Test the Etcd cluster
ETCDCTL_API=3 etcdctl \
--cacert=/etc/etcd/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://172.16.0.50:2379,https://172.16.0.51:2379,https://172.16.0.52:2379" \
endpoint status --write-out='table'

ETCDCTL_API=3 etcdctl \
--cacert=/etc/etcd/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://172.16.0.50:2379,https://172.16.0.51:2379,https://172.16.0.52:2379" \
member list --write-out='table'
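A per-endpoint health probe can be added with the same flags:

ETCDCTL_API=3 etcdctl \
--cacert=/etc/etcd/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://172.16.0.50:2379,https://172.16.0.51:2379,https://172.16.0.52:2379" \
endpoint health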

Create the Cluster Certificates

The master nodes are the most important part of the cluster: they have the most components, and their deployment is the most complex.

Master node plan
Hostname (role) IP
Kubernetes-master-01 172.16.0.50
Kubernetes-master-02 172.16.0.51
Kubernetes-master-03 172.16.0.52
Create the root certificate

Create a temporary directory in which to generate the certificates.

  • Create the root CA configuration
[root@kubernetes-master-01 k8s]# mkdir /opt/cert/k8s

[root@kubernetes-master-01 k8s]# cd /opt/cert/k8s

[root@kubernetes-master-01 k8s]# pwd
/opt/cert/k8s

[root@kubernetes-master-01 k8s]# cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

  • Create the root certificate signing request
[root@kubernetes-master-01 k8s]# cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

[root@kubernetes-master-01 k8s]# ll
total 8
-rw-r--r-- 1 root root 294 Sep 13 19:59 ca-config.json
-rw-r--r-- 1 root root 212 Sep 13 20:01 ca-csr.json
  • Generate the root certificate
[root@kubernetes-master-01 k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/09/13 20:01:45 [INFO] generating a new CA key and certificate from CSR
2020/09/13 20:01:45 [INFO] generate received request
2020/09/13 20:01:45 [INFO] received CSR
2020/09/13 20:01:45 [INFO] generating key: rsa-2048
2020/09/13 20:01:46 [INFO] encoded CSR
2020/09/13 20:01:46 [INFO] signed certificate with serial number 588993429584840635805985813644877690042550093427
[root@kubernetes-master-01 k8s]# ll
total 20
-rw-r--r-- 1 root root  294 Sep 13 19:59 ca-config.json
-rw-r--r-- 1 root root  960 Sep 13 20:01 ca.csr
-rw-r--r-- 1 root root  212 Sep 13 20:01 ca-csr.json
-rw------- 1 root root 1679 Sep 13 20:01 ca-key.pem
-rw-r--r-- 1 root root 1273 Sep 13 20:01 ca.pem
Issue the kube-apiserver certificate
  • Create the kube-apiserver certificate signing config
cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "172.16.0.50",
        "172.16.0.51",
        "172.16.0.52",
        "172.16.0.55",
        "10.96.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "ShangHai",
            "ST": "ShangHai"
        }
    ]
}
EOF

hosts: the localhost address + the IPs of the master deployment nodes + the etcd node addresses + the VIP designated for load balancing (172.16.0.55) + the first valid address of the service IP range (10.96.0.1) + the default Kubernetes service names.

  • Generate the certificate
[root@kubernetes-master-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2020/08/29 12:29:41 [INFO] generate received request
2020/08/29 12:29:41 [INFO] received CSR
2020/08/29 12:29:41 [INFO] generating key: rsa-2048
2020/08/29 12:29:41 [INFO] encoded CSR
2020/08/29 12:29:41 [INFO] signed certificate with serial number 701177072439793091180552568331885323625122463841
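Before distributing the certificate, verify the SANs; every address from the hosts list above should appear:

openssl x509 -in server.pem -noout -text | grep -A2 'Subject Alternative Name'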

Issue the kube-controller-manager certificate
  • Create the kube-controller-manager certificate signing config
cat > kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "hosts": [
        "127.0.0.1",
        "172.16.0.50",
        "172.16.0.51",
        "172.16.0.52",
        "172.16.0.55"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "system:kube-controller-manager",
            "OU": "System"
        }
    ]
}
EOF

  • Generate the certificate
[root@kubernetes-master-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
2020/08/29 12:40:21 [INFO] generate received request
2020/08/29 12:40:21 [INFO] received CSR
2020/08/29 12:40:21 [INFO] generating key: rsa-2048
2020/08/29 12:40:22 [INFO] encoded CSR
2020/08/29 12:40:22 [INFO] signed certificate with serial number 464924254532468215049650676040995556458619239240

Issue the kube-scheduler certificate
  • Create the kube-scheduler signing config
cat > kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
        "127.0.0.1",
        "172.16.0.50",
        "172.16.0.51",
        "172.16.0.52",
        "172.16.0.55"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "system:kube-scheduler",
            "OU": "System"
        }
    ]
}
EOF

  • Generate the certificate
[root@kubernetes-master-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
2020/08/29 12:42:29 [INFO] generate received request
2020/08/29 12:42:29 [INFO] received CSR
2020/08/29 12:42:29 [INFO] generating key: rsa-2048
2020/08/29 12:42:29 [INFO] encoded CSR
2020/08/29 12:42:29 [INFO] signed certificate with serial number 420546069405900774170348492061478728854870171400

Issue the kube-proxy certificate
  • Create the kube-proxy certificate signing config
cat > kube-proxy-csr.json << EOF
{
    "CN":"system:kube-proxy",
    "hosts":[],
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"BeiJing",
            "ST":"BeiJing",
            "O":"system:kube-proxy",
            "OU":"System"
        }
    ]
}
EOF

  • Generate the certificate
[root@kubernetes-master-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2020/08/29 12:45:11 [INFO] generate received request
2020/08/29 12:45:11 [INFO] received CSR
2020/08/29 12:45:11 [INFO] generating key: rsa-2048
2020/08/29 12:45:11 [INFO] encoded CSR
2020/08/29 12:45:11 [INFO] signed certificate with serial number 39717174368771783903269928946823692124470234079
2020/08/29 12:45:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Issue the admin user certificate

To let the cluster's client tools access the cluster securely, create a certificate for the cluster client that carries full cluster privileges.

  • Create the certificate signing config
cat > admin-csr.json << EOF
{
    "CN":"admin",
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"BeiJing",
            "ST":"BeiJing",
            "O":"system:masters",
            "OU":"System"
        }
    ]
}
EOF

  • Generate the certificate
[root@kubernetes-master-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2020/08/29 12:50:46 [INFO] generate received request
2020/08/29 12:50:46 [INFO] received CSR
2020/08/29 12:50:46 [INFO] generating key: rsa-2048
2020/08/29 12:50:46 [INFO] encoded CSR
2020/08/29 12:50:46 [INFO] signed certificate with serial number 247283053743606613190381870364866954196747322330
2020/08/29 12:50:46 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Distribute the certificates

Certificates needed on the master nodes: ca, kube-apiserver, kube-controller-manager, kube-scheduler, the admin (user) certificate, and the etcd certificates.

Node node certificates: ca, the admin certificate, and the kube-proxy certificate.

VIP node: the admin certificate.

  • Distribute the master node certificates
mkdir -pv /etc/kubernetes/ssl

cp -p ./{ca*pem,server*pem,kube-controller-manager*pem,kube-scheduler*.pem,kube-proxy*pem,admin*.pem} /etc/kubernetes/ssl

for i in kubernetes-master-02 kubernetes-master-03; do  
  ssh root@$i "mkdir -pv /etc/kubernetes/ssl"
  scp /etc/kubernetes/ssl/* root@$i:/etc/kubernetes/ssl
done

[root@kubernetes-master-01 k8s]# for i in kubernetes-master-02 kubernetes-master-03; do  
>   ssh root@$i "mkdir -pv /etc/kubernetes/ssl"
>   scp /etc/kubernetes/ssl/* root@$i:/etc/kubernetes/ssl
> done
mkdir: created directory ‘/etc/kubernetes/ssl’
admin-key.pem                                                                                                                                                                                 100% 1679   562.2KB/s   00:00    
admin.pem                                                                                                                                                                                     100% 1359    67.5KB/s   00:00    
ca-key.pem                                                                                                                                                                                    100% 1679    39.5KB/s   00:00    
ca.pem                                                                                                                                                                                        100% 1273   335.6KB/s   00:00    
kube-controller-manager-key.pem                                                                                                                                                               100% 1679   489.6KB/s   00:00    
kube-controller-manager.pem                                                                                                                                                                   100% 1472    69.4KB/s   00:00    
kube-proxy-key.pem                                                                                                                                                                            100% 1679   646.6KB/s   00:00    
kube-proxy.pem                                                                                                                                                                                100% 1379   672.8KB/s   00:00    
kube-scheduler-key.pem                                                                                                                                                                        100% 1679   472.1KB/s   00:00    
kube-scheduler.pem                                                                                                                                                                            100% 1448    82.7KB/s   00:00    
server-key.pem                                                                                                                                                                                100% 1675   898.3KB/s   00:00    
server.pem                                                                                                                                                                                    100% 1554     2.2MB/s   00:00    
mkdir: created directory ‘/etc/kubernetes/ssl’
admin-key.pem                                                                                                                                                                                 100% 1679   826.3KB/s   00:00    
admin.pem                                                                                                                                                                                     100% 1359     1.1MB/s   00:00    
ca-key.pem                                                                                                                                                                                    100% 1679   127.4KB/s   00:00    
ca.pem                                                                                                                                                                                        100% 1273    50.8KB/s   00:00    
kube-controller-manager-key.pem                                                                                                                                                               100% 1679   197.7KB/s   00:00    
kube-controller-manager.pem                                                                                                                                                                   100% 1472   833.7KB/s   00:00    
kube-proxy-key.pem                                                                                                                                                                            100% 1679   294.6KB/s   00:00    
kube-proxy.pem                                                                                                                                                                                100% 1379    94.9KB/s   00:00    
kube-scheduler-key.pem                                                                                                                                                                        100% 1679   411.2KB/s   00:00    
kube-scheduler.pem                                                                                                                                                                            100% 1448   430.4KB/s   00:00    
server-key.pem                                                                                                                                                                                100% 1675   924.0KB/s   00:00    
server.pem                                                                                                                                                                                    100% 1554   126.6KB/s   00:00
  • Distribute the node certificates
for i in kubernetes-node-01 kubernetes-node-02; do  
  ssh root@$i "mkdir -pv /etc/kubernetes/ssl"
  scp -pr ./{ca*.pem,admin*pem,kube-proxy*pem} root@$i:/etc/kubernetes/ssl
done

[root@kubernetes-master-01 k8s]# for i in kubernetes-node-01 kubernetes-node-02; do  
>   ssh root@$i "mkdir -pv /etc/kubernetes/ssl"
>   scp -pr ./{ca*.pem,admin*pem,kube-proxy*pem} root@$i:/etc/kubernetes/ssl
> done
The authenticity of host 'kubernetes-node-01 (172.16.0.53)' can't be established.
ECDSA key fingerprint is SHA256:5N7Cr3nku+MMnJyG3CnY3tchGfNYhxDuIulGceQXWd4.
ECDSA key fingerprint is MD5:aa:ba:4e:29:5d:81:0f:be:9b:cd:54:9b:47:48:e4:33.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'kubernetes-node-01,172.16.0.53' (ECDSA) to the list of known hosts.
root@kubernetes-node-01's password: 
mkdir: created directory ‘/etc/kubernetes’
mkdir: created directory ‘/etc/kubernetes/ssl’
root@kubernetes-node-01's password: 
ca-key.pem                                                                                                                                                                                    100% 1679   361.6KB/s   00:00    
ca.pem                                                                                                                                                                                        100% 1273   497.5KB/s   00:00    
admin-key.pem                                                                                                                                                                                 100% 1679    98.4KB/s   00:00    
admin.pem                                                                                                                                                                                     100% 1359   116.8KB/s   00:00    
kube-proxy-key.pem                                                                                                                                                                            100% 1679   494.7KB/s   00:00    
kube-proxy.pem                                                                                                                                                                                100% 1379    45.5KB/s   00:00    
The authenticity of host 'kubernetes-node-02 (172.16.0.54)' can't be established.
ECDSA key fingerprint is SHA256:5N7Cr3nku+MMnJyG3CnY3tchGfNYhxDuIulGceQXWd4.
ECDSA key fingerprint is MD5:aa:ba:4e:29:5d:81:0f:be:9b:cd:54:9b:47:48:e4:33.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'kubernetes-node-02,172.16.0.54' (ECDSA) to the list of known hosts.
root@kubernetes-node-02's password: 
mkdir: created directory ‘/etc/kubernetes’
mkdir: created directory ‘/etc/kubernetes/ssl’
root@kubernetes-node-02's password: 
ca-key.pem                                                                                                                                                                                    100% 1679   211.3KB/s   00:00    
ca.pem                                                                                                                                                                                        100% 1273   973.0KB/s   00:00    
admin-key.pem                                                                                                                                                                                 100% 1679   302.2KB/s   00:00    
admin.pem                                                                                                                                                                                     100% 1359   285.6KB/s   00:00    
kube-proxy-key.pem                                                                                                                                                                            100% 1679    79.8KB/s   00:00    
kube-proxy.pem                                                                                                                                                                                100% 1379   416.5KB/s   00:00    

  • Issue the certificate to the VIP node
[root@kubernetes-master-01 k8s]# ssh root@kubernetes-master-vip "mkdir -pv /etc/kubernetes/ssl"
mkdir: created directory '/etc/kubernetes'
mkdir: created directory '/etc/kubernetes/ssl'
[root@kubernetes-master-01 k8s]# scp admin*pem root@kubernetes-master-vip:/etc/kubernetes/ssl
admin-key.pem                                                                                     100% 1679     3.8MB/s   00:00
admin.pem

Deploying the Master nodes

Kubernetes is hosted on GitHub, and the release packages we need can be downloaded from there.

Download the binary components
# Download the server package
wget https://dl.k8s.io/v1.18.8/kubernetes-server-linux-amd64.tar.gz

# Download the client package
wget https://dl.k8s.io/v1.18.8/kubernetes-client-linux-amd64.tar.gz

# Download the node package
wget https://dl.k8s.io/v1.18.8/kubernetes-node-linux-amd64.tar.gz

[root@kubernetes-master-01 ~]# ll 
-rw-r--r-- 1 root      root       13237066 Aug 29 02:51 kubernetes-client-linux-amd64.tar.gz
-rw-r--r-- 1 root      root       97933232 Aug 29 02:51 kubernetes-node-linux-amd64.tar.gz
-rw-r--r-- 1 root      root      363943527 Aug 29 02:51 kubernetes-server-linux-amd64.tar.gz

# If the download fails, use the following method instead
[root@kubernetes-master-01 k8s]# docker pull registry.cn-hangzhou.aliyuncs.com/k8sos/k8s:v1.18.8.1
v1.18.8.1: Pulling from k8sos/k8s
75f829a71a1c: Pull complete 
183ee8383f81: Pull complete 
a5955b997bb4: Pull complete 
5401bb259bcd: Pull complete 
0c05c4d60f48: Pull complete 
6a216d9c9d7c: Pull complete 
6711ab2c0ba7: Pull complete 
3ff1975ab201: Pull complete 
Digest: sha256:ee02569b218a4bab3f64a7be0b23a9feda8c6717e03f30da83f80387aa46e202
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/k8sos/k8s:v1.18.8.1
registry.cn-hangzhou.aliyuncs.com/k8sos/k8s:v1.18.8.1
# Then copy the binaries out of the container.

Distribute the components
[root@kubernetes-master-01 bin]# for i in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03;  do   scp kube-apiserver kube-controller-manager kube-scheduler kubectl  root@$i:/usr/local/bin/; done
kube-apiserver                                      100%  115MB  94.7MB/s   00:01
kube-controller-manager                             100%  105MB  87.8MB/s   00:01
kube-scheduler                                      100%   41MB  88.2MB/s   00:00
kubectl                                             100%   42MB  95.7MB/s   00:00
kube-apiserver                                      100%  115MB 118.4MB/s   00:00
kube-controller-manager                             100%  105MB 107.3MB/s   00:00
kube-scheduler                                      100%   41MB 119.9MB/s   00:00
kubectl                                             100%   42MB  86.0MB/s   00:00
kube-apiserver                                      100%  115MB 120.2MB/s   00:00
kube-controller-manager                             100%  105MB 108.1MB/s   00:00
kube-scheduler                                      100%   41MB 102.4MB/s   00:00
kubectl                                             100%   42MB 124.3MB/s   00:00

Configure TLS bootstrapping

TLS bootstrapping is a mechanism that simplifies the steps an administrator must take to configure mutually authenticated, encrypted communication between the kubelet and the apiserver. Once TLS authentication is enabled in a cluster, the kubelet on every node needs a valid certificate signed by the CA used by the apiserver before it can talk to the apiserver. Signing a certificate by hand for every node is tedious and error-prone, and mistakes destabilize the cluster.

With TLS bootstrapping, the kubelet on a Node first connects to the apiserver as a predefined low-privilege user and requests a certificate; the apiserver then dynamically signs and issues the certificate to the Node, automating certificate signing.

  • Generate the token required for TLS bootstrapping
# The token must be one generated on your own machine
TLS_BOOTSTRAPPING_TOKEN=`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`

cat > token.csv << EOF
${TLS_BOOTSTRAPPING_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

[root@kubernetes-master-01 k8s]# TLS_BOOTSTRAPPING_TOKEN=`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`
[root@kubernetes-master-01 k8s]# 
[root@kubernetes-master-01 k8s]# cat > token.csv << EOF
> ${TLS_BOOTSTRAPPING_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
> EOF
[root@kubernetes-master-01 k8s]# cat token.csv 
1b076dcc88e04d64c3a9e7d7a1586fe5,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Create the cluster configuration files

In Kubernetes, we need to create kubeconfig files that hold the cluster, user, namespace, and authentication settings.

Create the kubelet-bootstrap.kubeconfig file
export KUBE_APISERVER="https://172.16.0.55:8443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Set client authentication parameters; the token must be the one
# generated into token.csv above (the value below is only an example)
kubectl config set-credentials "kubelet-bootstrap" \
  --token=3ac791ff0afab20f5324ff898bb1570e \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Set context parameters (the context ties the cluster and the user together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
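
To sanity-check the generated file (an optional step, not in the original procedure), dump it and confirm the server address, the embedded CA, and the token are all present:

kubectl config view --kubeconfig=kubelet-bootstrap.kubeconfig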

Create the kube-controller-manager.kubeconfig file
export KUBE_APISERVER="https://172.16.0.55:8443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig

# Set client authentication parameters
kubectl config set-credentials "kube-controller-manager" \
  --client-certificate=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --client-key=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

# Set context parameters (the context ties the cluster and the user together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kube-controller-manager" \
  --kubeconfig=kube-controller-manager.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
  • --certificate-authority: the root certificate used to verify the kube-apiserver certificate
  • --client-certificate / --client-key: the kube-controller-manager certificate and private key just generated, used when connecting to kube-apiserver
  • --embed-certs=true: embed the contents of ca.pem and the kube-controller-manager certificate into the generated kubeconfig file (without this flag, only the certificate file paths are written)

Create the kube-scheduler.kubeconfig file
export KUBE_APISERVER="https://172.16.0.55:8443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.kubeconfig

# Set client authentication parameters
kubectl config set-credentials "kube-scheduler" \
  --client-certificate=/etc/kubernetes/ssl/kube-scheduler.pem \
  --client-key=/etc/kubernetes/ssl/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

# Set context parameters (the context ties the cluster and the user together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kube-scheduler" \
  --kubeconfig=kube-scheduler.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

Create the kube-proxy.kubeconfig file
export KUBE_APISERVER="https://172.16.0.55:8443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# Set client authentication parameters
kubectl config set-credentials "kube-proxy" \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Set context parameters (the context ties the cluster and the user together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kube-proxy" \
  --kubeconfig=kube-proxy.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Create the admin.kubeconfig file
export KUBE_APISERVER="https://172.16.0.55:8443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=admin.kubeconfig

# Set client authentication parameters
kubectl config set-credentials "admin" \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --client-key=/etc/kubernetes/ssl/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

# Set context parameters (the context ties the cluster and the user together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="admin" \
  --kubeconfig=admin.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=admin.kubeconfig

Distribute the cluster configuration files to the Master nodes
[root@kubernetes-master-01 ~]# for i in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03; 
do 
  ssh root@$i "mkdir -p  /etc/kubernetes/cfg"; 
  scp token.csv kube-scheduler.kubeconfig kube-controller-manager.kubeconfig admin.kubeconfig kube-proxy.kubeconfig kubelet-bootstrap.kubeconfig root@$i:/etc/kubernetes/cfg; 
done

token.csv                                                  100%   84   662.0KB/s   00:00
kube-scheduler.kubeconfig                                  100% 6159    47.1MB/s   00:00
kube-controller-manager.kubeconfig                         100% 6209    49.4MB/s   00:00
admin.conf                                                 100% 6021    51.0MB/s   00:00
kube-proxy.kubeconfig                                      100% 6059    52.7MB/s   00:00
kubelet-bootstrap.kubeconfig                               100% 1985    25.0MB/s   00:00
token.csv                                                  100%   84   350.5KB/s   00:00
kube-scheduler.kubeconfig                                  100% 6159    20.0MB/s   00:00
kube-controller-manager.kubeconfig                         100% 6209    20.7MB/s   00:00
admin.conf                                                 100% 6021    23.4MB/s   00:00
kube-proxy.kubeconfig                                      100% 6059    20.0MB/s   00:00
kubelet-bootstrap.kubeconfig                               100% 1985     4.4MB/s   00:00
token.csv                                                  100%   84   411.0KB/s   00:00
kube-scheduler.kubeconfig                                  100% 6159    19.6MB/s   00:00
kube-controller-manager.kubeconfig                         100% 6209    21.4MB/s   00:00
admin.conf                                                 100% 6021    19.9MB/s   00:00
kube-proxy.kubeconfig                                      100% 6059    20.1MB/s   00:00
kubelet-bootstrap.kubeconfig                               100% 1985     9.8MB/s   00:00
[root@kubernetes-master-01 ~]#

Distribute the cluster configuration files to the Node nodes
[root@kubernetes-master-01 ~]# for i in kubernetes-node-01 kubernetes-node-02; 
do     
  ssh root@$i "mkdir -p  /etc/kubernetes/cfg";     
  scp kube-proxy.kubeconfig kubelet-bootstrap.kubeconfig root@$i:/etc/kubernetes/cfg; 
done
kube-proxy.kubeconfig                                 100% 6059    18.9MB/s   00:00
kubelet-bootstrap.kubeconfig                          100% 1985     8.1MB/s   00:00
kube-proxy.kubeconfig                                 100% 6059    16.2MB/s   00:00
kubelet-bootstrap.kubeconfig                          100% 1985     9.9MB/s   00:00
[root@kubernetes-master-01 ~]#

Deploy kube-apiserver
  • Create the kube-apiserver service configuration file (run this on all three Master nodes rather than copying the file; the apiserver IP differs per node)
KUBE_APISERVER_IP=`hostname -i`

cat > /etc/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--advertise-address=${KUBE_APISERVER_IP} \\
--default-not-ready-toleration-seconds=360 \\
--default-unreachable-toleration-seconds=360 \\
--max-mutating-requests-inflight=2000 \\
--max-requests-inflight=4000 \\
--default-watch-cache-size=200 \\
--delete-collection-workers=2 \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.96.0.0/16 \\
--service-node-port-range=10-52767 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/etc/kubernetes/cfg/token.csv \\
--kubelet-client-certificate=/etc/kubernetes/ssl/server.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/etc/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kubernetes/k8s-audit.log \\
--etcd-servers=https://172.16.0.50:2379,https://172.16.0.51:2379,https://172.16.0.52:2379 \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/server.pem \\
--etcd-keyfile=/etc/etcd/ssl/server-key.pem"
EOF

Option  Description
--logtostderr=false: write logs to files instead of standard error
--v=2: log verbosity level
--advertise-address: the IP address on which to advertise the apiserver to members of the cluster
--etcd-servers: list of etcd servers to connect to
--etcd-cafile: SSL CA file used for etcd communication
--etcd-certfile: SSL certificate file used for etcd communication
--etcd-keyfile: SSL key file used for etcd communication
--service-cluster-ip-range: CIDR range from which Service cluster IPs are allocated
--bind-address: the IP address to listen on for --secure-port; if empty, all interfaces are used (0.0.0.0)
--secure-port=6443: port for HTTPS with authentication and authorization; the default is 6443
--allow-privileged: whether to allow privileged containers
--service-node-port-range: port range reserved for NodePort Services
--default-not-ready-toleration-seconds: toleration seconds for the notReady condition
--default-unreachable-toleration-seconds: toleration seconds for the unreachable condition
--max-mutating-requests-inflight=2000: maximum number of mutating requests in flight at a given time; 0 means no limit (default 200)
--default-watch-cache-size=200: default watch cache size; 0 disables the watch cache for resources without a default watch size
--delete-collection-workers=2: number of workers for DeleteCollection calls, used to speed up namespace cleanup (default 1)
--enable-admission-plugins: admission plugins to enable
--authorization-mode: ordered, comma-separated list of plugins to authorize requests on the secure port
--enable-bootstrap-token-auth: allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrap authentication
--token-auth-file: the bootstrap token file
--kubelet-certificate-authority: path to the certificate authority file
--kubelet-client-certificate: client certificate file used for TLS
--kubelet-client-key: client certificate key file used for TLS
--tls-private-key-file: file containing the x509 private key matching --tls-cert-file
--service-account-key-file: file containing a PEM-encoded x509 RSA or ECDSA private or public key used to verify service account tokens
--audit-log-maxage: maximum number of days to retain old audit log files, based on the timestamp in the file name
--audit-log-maxbackup: maximum number of old audit log files to retain
--audit-log-maxsize: maximum size in megabytes before the audit log is rotated
--audit-log-path: if set, all requests to the apiserver are logged to this file; '-' means standard output
  • Create the kube-apiserver service unit
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

  • Distribute the kube-apiserver service unit
for i in kubernetes-master-02 kubernetes-master-03; 
do  
scp /usr/lib/systemd/system/kube-apiserver.service root@$i:/usr/lib/systemd/system/kube-apiserver.service
done

  • Start the service
# Create the kubernetes log directory
mkdir -p /var/log/kubernetes/
systemctl daemon-reload
systemctl enable --now kube-apiserver
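
Before moving on, it is worth confirming the apiserver answers (an optional check, not part of the original text); the admin certificate issued earlier can authenticate the request:

curl --cacert /etc/kubernetes/ssl/ca.pem \
  --cert /etc/kubernetes/ssl/admin.pem \
  --key /etc/kubernetes/ssl/admin-key.pem \
  https://127.0.0.1:6443/healthz
# Expected output: ok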

kube-apiserver high-availability deployment

There are many load balancers to choose from; here we use the officially recommended haproxy + keepalived.

  • Install haproxy and keepalived (on all three Master nodes)

    yum install -y keepalived haproxy
    
    
  • Configure the haproxy service

    cat > /etc/haproxy/haproxy.cfg <<EOF
    global
      maxconn  2000
      ulimit-n  16384
      log  127.0.0.1 local0 err
      stats timeout 30s
    
    defaults
      log global
      mode  http
      option  httplog
      timeout connect 5000
      timeout client  50000
      timeout server  50000
      timeout http-request 15s
      timeout http-keep-alive 15s
    
    frontend monitor-in
      bind *:33305
      mode http
      option httplog
      monitor-uri /monitor
    
    listen stats
      bind    *:8006
      mode    http
      stats   enable
      stats   hide-version
      stats   uri       /stats
      stats   refresh   30s
      stats   realm     Haproxy\ Statistics
      stats   auth      admin:admin
    
    frontend k8s-master
      bind 0.0.0.0:8443
      bind 127.0.0.1:8443
      mode tcp
      option tcplog
      tcp-request inspect-delay 5s
      default_backend k8s-master
    
    backend k8s-master
      mode tcp
      option tcplog
      option tcp-check
      balance roundrobin
      default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
      server kubernetes-master-01    172.16.0.50:6443  check inter 2000 fall 2 rise 2 weight 100
      server kubernetes-master-02    172.16.0.51:6443  check inter 2000 fall 2 rise 2 weight 100
      server kubernetes-master-03    172.16.0.52:6443  check inter 2000 fall 2 rise 2 weight 100
    EOF
    
  • Distribute the configuration to the other nodes

    for i in kubernetes-master-02 kubernetes-master-03; 
    do 
    ssh root@$i "mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg_bak"
    scp /etc/haproxy/haproxy.cfg root@$i:/etc/haproxy/haproxy.cfg
    done
    
    
  • Configure the keepalived service

    mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
    
    cat > /etc/keepalived/keepalived.conf <<EOF
    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
    }
    vrrp_script chk_kubernetes {
        script "/etc/keepalived/check_kubernetes.sh"
        interval 2
        weight -5
        fall 3  
        rise 2
    }
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        mcast_src_ip 172.16.0.50
        virtual_router_id 51
        priority 100
        advert_int 2
        authentication {
            auth_type PASS
            auth_pass K8SHA_KA_AUTH
        }
        virtual_ipaddress {
            172.16.0.55
        }
    #    track_script {
    #       chk_kubernetes
    #    }
    }
    EOF
    
    
    • Distribute the keepalived configuration file

      for i in kubernetes-master-02 kubernetes-master-03; 
      do 
        ssh root@$i "mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak"
        scp /etc/keepalived/keepalived.conf root@$i:/etc/keepalived/keepalived.conf
      done
      
  • Adjust the kubernetes-master-02 node

    sed -i 's#state MASTER#state BACKUP#g' /etc/keepalived/keepalived.conf
    sed -i 's#172.16.0.50#172.16.0.51#g' /etc/keepalived/keepalived.conf
    sed -i 's#priority 100#priority 99#g' /etc/keepalived/keepalived.conf
    
  • Adjust the kubernetes-master-03 node

    sed -i 's#state MASTER#state BACKUP#g' /etc/keepalived/keepalived.conf
    sed -i 's#172.16.0.50#172.16.0.52#g' /etc/keepalived/keepalived.conf
    sed -i 's#priority 100#priority 98#g' /etc/keepalived/keepalived.conf
    
  • Configure the health-check script

    # Note the quoted 'EOF': it keeps $(pgrep ...) and $apiserver_pid_id from
    # being expanded while the file is written
    cat > /etc/keepalived/check_kubernetes.sh <<'EOF'
    #!/bin/bash

    # Health check used by keepalived: stop keepalived (releasing the VIP)
    # when kube-apiserver has been down for five consecutive probes
    function check_kubernetes() {
        for ((i=0;i<5;i++));do
            apiserver_pid_id=$(pgrep kube-apiserver)
            if [[ ! -z $apiserver_pid_id ]];then
                return
            else
                sleep 2
            fi
            apiserver_pid_id=0
        done
    }

    # 1: running  0: stopped
    check_kubernetes
    if [[ $apiserver_pid_id -eq 0 ]];then
        /usr/bin/systemctl stop keepalived
        exit 1
    else
        exit 0
    fi
    EOF

    chmod +x /etc/keepalived/check_kubernetes.sh
    
  • Start the keepalived and haproxy services

    systemctl enable --now keepalived haproxy
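
    Once both services are up, the VIP should be bound on the current keepalived MASTER and haproxy should forward port 8443 to the apiservers. A quick sanity check (optional, not in the original text):

    # The VIP should appear on eth0 of the MASTER node
    ip addr show eth0 | grep 172.16.0.55

    # /healthz is readable without client authentication by default
    curl -k https://172.16.0.55:8443/healthz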
    
    

  • Authorize TLS bootstrapping user requests

    kubectl create clusterrolebinding kubelet-bootstrap \
    --clusterrole=system:node-bootstrapper \
    --user=kubelet-bootstrap
    
    

Deploy the kube-controller-manager service

As the management and control center of the cluster, the Controller Manager is responsible for managing Nodes, Pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount), and resource quotas (ResourceQuota). When a Node unexpectedly goes down, the Controller Manager detects it in time and runs automated repair flows, keeping the cluster in its desired state. Running several controller managers at once would cause consistency problems, so kube-controller-manager high availability can only be active-standby; the Kubernetes cluster uses a lease lock for leader election, which requires adding --leader-elect=true to the startup arguments.

  • Create the kube-controller-manager configuration file
cat > /etc/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--leader-elect=true \\
--cluster-name=kubernetes \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/12 \\
--service-cluster-ip-range=10.96.0.0/16 \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--kubeconfig=/etc/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=10s \\
--horizontal-pod-autoscaler-use-rest-clients=true"
EOF

The configuration options are explained below:

Option  Description
--leader-elect: enable leader election for high availability
--master: connect to the apiserver through the local insecure port 8080
--bind-address: the address to listen on
--allocate-node-cidrs: whether Pod CIDRs should be allocated and set on the Nodes
--cluster-cidr: the Pod CIDR range of the cluster; setting --cluster-cidr at startup prevents CIDR conflicts between nodes
--service-cluster-ip-range: CIDR range of the cluster's Services
--cluster-signing-cert-file: the cluster-wide signing certificate (root certificate file)
--cluster-signing-key-file: the key used to sign cluster certificates
--root-ca-file: if set, this root certificate is included in service account token secrets; it must be a valid PEM-encoded CA bundle
--service-account-private-key-file: file containing a PEM-encoded RSA or ECDSA private key used to sign service account tokens
--experimental-cluster-signing-duration: validity period of the signed certificates
  • Create the service unit
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

  • Distribute the configuration
for i in kubernetes-master-02 kubernetes-master-03;
do
  scp /etc/kubernetes/cfg/kube-controller-manager.conf root@$i:/etc/kubernetes/cfg
  scp /usr/lib/systemd/system/kube-controller-manager.service root@$i:/usr/lib/systemd/system/kube-controller-manager.service
  ssh root@$i "systemctl daemon-reload"
done
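
The original text does not show a start step for this component; presumably it is enabled on each Master the same way as the other services (an assumption based on the pattern used for kube-apiserver and kube-scheduler):

systemctl daemon-reload
systemctl enable --now kube-controller-manager

# Optional: the current leader is recorded in an annotation on this endpoints object
kubectl -n kube-system get endpoints kube-controller-manager -o yaml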

Deploy the kube-scheduler service

kube-scheduler is the default scheduler of a Kubernetes cluster and part of the cluster control plane. For every newly created or not-yet-scheduled Pod, kube-scheduler filters all Nodes and picks the optimal one to run the Pod on. The scheduler is policy-rich, topology-aware, and workload-specific, and it significantly affects availability, performance, and capacity. It has to weigh individual and collective resource requirements, quality-of-service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and more. Workload-specific requirements are exposed through the API where necessary.

  • Create the kube-scheduler configuration file

    cat > /etc/kubernetes/cfg/kube-scheduler.conf << EOF
    KUBE_SCHEDULER_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/var/log/kubernetes \\
    --kubeconfig=/etc/kubernetes/cfg/kube-scheduler.kubeconfig \\
    --leader-elect=true \\
    --master=http://127.0.0.1:8080 \\
    --bind-address=127.0.0.1 "
    EOF
    
    
  • Create the service unit

    cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    
    [Service]
    EnvironmentFile=/etc/kubernetes/cfg/kube-scheduler.conf
    ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
    Restart=on-failure
    RestartSec=5
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    
  • Distribute the configuration files

    for ip in kubernetes-master-02 kubernetes-master-03;
    do
        scp /usr/lib/systemd/system/kube-scheduler.service root@${ip}:/usr/lib/systemd/system
        scp /etc/kubernetes/cfg/kube-scheduler.conf root@${ip}:/etc/kubernetes/cfg
    done
    
    
  • Start the service

    systemctl daemon-reload
    systemctl enable --now kube-scheduler
    
    
Check the Master node status

At this point, all Master nodes are installed.

[root@kubernetes-master-01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
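
If kubectl cannot reach the cluster at this point, the admin kubeconfig created earlier still needs to be put where kubectl looks for it; the original text does not show this step, so the exact path below is an assumption:

mkdir -p ~/.kube
cp /etc/kubernetes/cfg/admin.kubeconfig ~/.kube/config
kubectl get cs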

Deploy the Node nodes

Node nodes are mainly responsible for providing the runtime environment for applications; their two key components are kube-proxy and kubelet.

Distribute the binaries
[root@kubernetes-master-01 bin]# for i in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03 kubernetes-node-02 kubernetes-node-01; do scp kubelet kube-proxy root@$i:/usr/local/bin/; done
kubelet                                            100%  108MB 120.2MB/s   00:00
kube-proxy                                         100%   37MB  98.1MB/s   00:00
kubelet                                            100%  108MB 117.4MB/s   00:00

for i in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03 kubernetes-node-02 kubernetes-node-01; do echo $i; ssh root@$i "ls -lh /usr/local/bin"; done

Configure the kubelet service
  • Create the kubelet.conf configuration file

    KUBE_HOSTNAME=`hostname`
    
    cat > /etc/kubernetes/cfg/kubelet.conf << EOF
    KUBELET_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/var/log/kubernetes \\
    --hostname-override=${KUBE_HOSTNAME} \\
    --container-runtime=docker \\
    --kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig \\
    --bootstrap-kubeconfig=/etc/kubernetes/cfg/kubelet-bootstrap.kubeconfig \\
    --config=/etc/kubernetes/cfg/kubelet-config.yml \\
    --cert-dir=/etc/kubernetes/ssl \\
    --image-pull-progress-deadline=15m \\
    --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sos/pause:3.2"
    EOF
    
    

    Configuration options explained

Option  Description
--hostname-override: the host name this node reports in the cluster; if kubelet sets --hostname-override, kube-proxy must set the same value, otherwise the Node will not be found
--container-runtime: the container runtime engine to use
--kubeconfig: the kubeconfig file kubelet uses as a client; it is generated automatically once the TLS bootstrap request is approved
--bootstrap-kubeconfig: the token-based bootstrap kubeconfig file
--config: the kubelet configuration file
--cert-dir: the directory where the certificate and key signed by kube-controller-manager are stored
--image-pull-progress-deadline: maximum time an image pull may go without progress before it is cancelled, default 1m0s
--pod-infra-container-image: the image used by the network/ipc namespace container of every pod
  • Create the kubelet-config.yml configuration file

    cat > /etc/kubernetes/cfg/kubelet-config.yml << EOF
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: 172.16.0.50
    port: 10250
    readOnlyPort: 10255
    cgroupDriver: cgroupfs
    clusterDNS:
    - 10.96.0.2
    clusterDomain: cluster.local 
    failSwapOn: false
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 2m0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/ssl/ca.pem 
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
    evictionHard:
      imagefs.available: 15%
      memory.available: 100Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    maxOpenFiles: 1000000
    maxPods: 110
    EOF
    
    

    A few of the more important options:

    Option  Description
    address: the address the kubelet service listens on
    port: the kubelet service port, default 10250
    readOnlyPort: read-only kubelet port without authentication/authorization; 0 disables it, default 10255
    clusterDNS: list of DNS server IP addresses
    clusterDomain: the cluster domain; kubelet configures all containers to search it in addition to the host's search domains
  • Create the kubelet service unit

    cat > /usr/lib/systemd/system/kubelet.service << EOF
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    
    [Service]
    EnvironmentFile=/etc/kubernetes/cfg/kubelet.conf
    ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
    Restart=on-failure
    RestartSec=10
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    
  • Distribute the configuration files

    for ip in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03 kubernetes-node-01 kubernetes-node-02;do
        if [[ ${ip} == "kubernetes-node-02" ]] || [[ ${ip} == "kubernetes-node-01" ]];then
            ssh root@${ip} "mkdir -pv /var/log/kubernetes"
        fi
        scp /etc/kubernetes/cfg/{kubelet-config.yml,kubelet.conf} root@${ip}:/etc/kubernetes/cfg
        scp /usr/lib/systemd/system/kubelet.service root@${ip}:/usr/lib/systemd/system
    done
    
    
  • Adjust the configuration per node

    # Adjust the kubernetes-master-02 configuration
    sed -i 's#master-01#master-02#g' /etc/kubernetes/cfg/kubelet.conf
    sed -i 's#172.16.0.50#172.16.0.51#g' /etc/kubernetes/cfg/kubelet-config.yml
    
    # Adjust the kubernetes-master-03 configuration
    sed -i 's#master-01#master-03#g' /etc/kubernetes/cfg/kubelet.conf
    sed -i 's#172.16.0.50#172.16.0.52#g' /etc/kubernetes/cfg/kubelet-config.yml
    
    # Adjust the kubernetes-node-01 configuration
    sed -i 's#master-01#node-01#g' /etc/kubernetes/cfg/kubelet.conf
    sed -i 's#172.16.0.50#172.16.0.53#g' /etc/kubernetes/cfg/kubelet-config.yml
    
    # Adjust the kubernetes-node-02 configuration
    sed -i 's#master-01#node-02#g' /etc/kubernetes/cfg/kubelet.conf
    sed -i 's#172.16.0.50#172.16.0.54#g' /etc/kubernetes/cfg/kubelet-config.yml
    
    
  • Start kubelet

    systemctl daemon-reload; systemctl enable --now kubelet; systemctl status kubelet.service
    
Configure the kube-proxy service

kube-proxy is a core Kubernetes component deployed on every Node; it implements the communication and load-balancing mechanism behind Kubernetes Services. kube-proxy creates proxy rules for Pods: it obtains all Service information from the apiserver and builds the corresponding proxies, routing and forwarding requests from Services to Pods, thereby implementing the Kubernetes-level virtual forwarding network.

Create the kube-proxy configuration file
cat > /etc/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--config=/etc/kubernetes/cfg/kube-proxy-config.yml"
EOF
Create the kube-proxy-config.yml configuration file
cat > /etc/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.16.0.50
healthzBindAddress: 172.16.0.50:10256
metricsBindAddress: 172.16.0.50:10249
clientConnection:
  burst: 200
  kubeconfig: /etc/kubernetes/cfg/kube-proxy.kubeconfig
  qps: 100
hostnameOverride: kubernetes-master-01
clusterCIDR: 10.96.0.0/16
enableProfiling: true
mode: "ipvs"
kubeProxyIPTablesConfiguration:
  masqueradeAll: false
kubeProxyIPVSConfiguration:
  scheduler: rr
  excludeCIDRs: []
EOF

The options above in brief:

Option  Description
clientConnection: parameters for talking to kube-apiserver
burst: 200: temporarily allows the event rate to exceed the qps setting
kubeconfig: path to the kubeconfig file kube-proxy uses to connect to kube-apiserver
qps: 100: QPS when talking to kube-apiserver, default 5
bindAddress: the address kube-proxy listens on
healthzBindAddress: IP address and port of the health-check service
metricsBindAddress: IP address and port of the metrics service, default 127.0.0.1:10249
clusterCIDR: kube-proxy uses this to tell in-cluster traffic from external traffic; SNAT for requests to Service IPs is only applied when --cluster-cidr or --masquerade-all is set
hostnameOverride: must match the kubelet value, otherwise kube-proxy will not find its Node after starting and will not create any ipvs rules
masqueradeAll: with the pure iptables proxy, SNAT all traffic sent via Service cluster IPs
mode: use ipvs mode
scheduler: the ipvs scheduling algorithm when the proxy runs in ipvs mode
Create the kube-proxy service unit
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Distribute the configuration files
for ip in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03 kubernetes-node-01 kubernetes-node-02;do
    scp /etc/kubernetes/cfg/{kube-proxy-config.yml,kube-proxy.conf} root@${ip}:/etc/kubernetes/cfg/
    scp /usr/lib/systemd/system/kube-proxy.service root@${ip}:/usr/lib/systemd/system/
done

Adjust the per-node configuration
  • Adjust the kubernetes-master-02 node

    sed -i 's#172.16.0.50#172.16.0.51#g' /etc/kubernetes/cfg/kube-proxy-config.yml
    sed -i 's#master-01#master-02#g' /etc/kubernetes/cfg/kube-proxy-config.yml
    
    
  • Adjust the kubernetes-master-03 node

    sed -i 's#172.16.0.50#172.16.0.52#g' /etc/kubernetes/cfg/kube-proxy-config.yml
    sed -i 's#master-01#master-03#g' /etc/kubernetes/cfg/kube-proxy-config.yml
    
  • Adjust the kubernetes-node-01 node

    sed -i 's#172.16.0.50#172.16.0.53#g' /etc/kubernetes/cfg/kube-proxy-config.yml
    sed -i 's#master-01#node-01#g' /etc/kubernetes/cfg/kube-proxy-config.yml
    
  • Adjust the kubernetes-node-02 node

    sed -i 's#172.16.0.50#172.16.0.54#g' /etc/kubernetes/cfg/kube-proxy-config.yml
    sed -i 's#master-01#node-02#g' /etc/kubernetes/cfg/kube-proxy-config.yml
    
  • Review the configuration

    for ip in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03 kubernetes-node-01 kubernetes-node-02;do     
        echo ''; echo $ip; echo ''; 
        ssh root@$ip "cat /etc/kubernetes/cfg/kube-proxy-config.yml"; 
    done
    
Enable on boot
systemctl daemon-reload; systemctl enable --now kube-proxy; systemctl status kube-proxy
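
With kube-proxy running in ipvs mode, the ipvs rule set should now contain virtual servers for the cluster Services. A quick check (this assumes the ipvsadm tool is installed; the step is not in the original text):

# List the ipvs virtual servers created by kube-proxy (rr = round robin)
ipvsadm -Ln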
Check the kubelet requests to join the cluster
[root@kubernetes-master-01 k8s]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-51i8zZdDrIFh_zGjblcnJHVTVEZF03-MRLmxqW7ubuk   50m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-9DyYdqmYto4MW7IcGbTPqVePH9PHQN1nNefZEFcab7s   50m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-YzbkiJCgLrXM2whs0h00TDceGaBI3Ntly8Z7HGCYvFw   62m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
Approve the requests
kubectl certificate approve `kubectl get csr | grep "Pending" | awk '{print $1}'`
Check the nodes
[root@kubernetes-master-01 ~]# kubectl get node
NAME                   STATUS   ROLES    AGE    VERSION
kubernetes-master-01   Ready    <none>   123m   v1.18.8
kubernetes-master-02   Ready    <none>   120m   v1.18.8
kubernetes-master-03   Ready    <none>   118m   v1.18.8
kubernetes-node-01     Ready    <none>   3s     v1.18.8
Check the files generated on the Node
[root@kubernetes-node-01 ~]# ll /etc/kubernetes/ssl/
total 36
-rw------- 1 root root 1679 Aug 29 12:50 admin-key.pem
-rw-r--r-- 1 root root 1359 Aug 29 12:50 admin.pem
-rw------- 1 root root 1679 Aug 29 12:08 ca-key.pem
-rw-r--r-- 1 root root 1224 Aug 29 12:08 ca.pem
-rw------- 1 root root 1191 Aug 29 22:49 kubelet-client-2020-08-29-22-49-08.pem
lrwxrwxrwx 1 root root   58 Aug 29 22:49 kubelet-client-current.pem -> /etc/kubernetes/ssl/kubelet-client-2020-08-29-22-49-08.pem
-rw-r--r-- 1 root root 2233 Aug 29 20:02 kubelet.crt
-rw------- 1 root root 1675 Aug 29 20:02 kubelet.key
-rw------- 1 root root 1679 Aug 29 12:45 kube-proxy-key.pem
-rw-r--r-- 1 root root 1379 Aug 29 12:45 kube-proxy.pem

Cluster roles

A K8S cluster distinguishes between Master nodes and Node (worker) nodes.

Node labels
[root@kubernetes-master-01 ~]# kubectl label nodes kubernetes-master-01 node-role.kubernetes.io/master-
node/kubernetes-master-01 labeled
[root@kubernetes-master-01 ~]# kubectl label nodes kubernetes-master-01 node-role.kubernetes.io/master=kubernetes-master-01
node/kubernetes-master-01 labeled
[root@kubernetes-master-01 ~]# kubectl label nodes kubernetes-master-02 node-role.kubernetes.io/master=kubernetes-master-02
node/kubernetes-master-02 labeled
[root@kubernetes-master-01 ~]# kubectl label nodes kubernetes-master-03 node-role.kubernetes.io/master=kubernetes-master-03
node/kubernetes-master-03 labeled
[root@kubernetes-master-01 ~]# kubectl label nodes kubernetes-node-01 node-role.kubernetes.io/node=kubernetes-node-01
node/kubernetes-node-01 labeled
[root@kubernetes-master-01 ~]# kubectl get nodes
NAME                   STATUS   ROLES    AGE    VERSION
kubernetes-master-01   Ready    master   135m   v1.18.8
kubernetes-master-02   Ready    master   131m   v1.18.8
kubernetes-master-03   Ready    master   130m   v1.18.8
kubernetes-node-01     Ready    node     11m    v1.18.8

Remove a node label

[root@kubernetes-master-01 ~]# kubectl label nodes kubernetes-master-01 node-role.kubernetes.io/master-
node/kubernetes-master-01 labeled
Taint the Master nodes

Master nodes normally should not run ordinary Pods, so we add a taint to the Master nodes to keep workloads from being scheduled onto them.

[root@kubernetes-master-01 ~]# kubectl taint nodes kubernetes-master-01 node-role.kubernetes.io/master=kubernetes-master-01:NoSchedule --overwrite
node/kubernetes-master-01 modified
[root@kubernetes-master-01 ~]# kubectl taint nodes kubernetes-master-02 node-role.kubernetes.io/master=kubernetes-master-02:NoSchedule --overwrite
node/kubernetes-master-02 modified
[root@kubernetes-master-01 ~]# kubectl taint nodes kubernetes-master-03 node-role.kubernetes.io/master=kubernetes-master-03:NoSchedule --overwrite
node/kubernetes-master-03 modified
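
To confirm the taints took effect (an optional check, not shown in the original text):

kubectl describe nodes kubernetes-master-01 | grep -i taint
# Expected: node-role.kubernetes.io/master=kubernetes-master-01:NoSchedule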

Deploy the network plugin

Kubernetes defines a network model but leaves its implementation to network plugins. The main job of a CNI network plugin is to let Pods communicate across hosts. Common CNI network plugins:

  • Flannel
  • Calico
  • Canal
  • Contiv
  • OpenContrail
  • NSX-T
  • Kube-router
Install the network plugin
[root@kubernetes-master-01 ~]# for i in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03 kubernetes-node-01 kubernetes-node-02; do scp flanneld mk-docker-opts.sh  root@$i:/usr/local/bin; done
flanneld                                     100%   34MB  93.6MB/s   00:00
mk-docker-opts.sh                            100% 2139    19.4MB/s   00:00
flanneld                                     100%   34MB 103.3MB/s   00:00
mk-docker-opts.sh                            100% 2139     8.5MB/s   00:00
flanneld                                     100%   34MB 106.5MB/s   00:00
mk-docker-opts.sh                            100% 2139     9.7MB/s   00:00
flanneld                                     100%   34MB 113.2MB/s   00:00
mk-docker-opts.sh                            100% 2139    10.5MB/s   00:00
flanneld                                     100%   34MB 110.3MB/s   00:00
mk-docker-opts.sh                            100% 2139     8.7MB/s   00:00
Write the Flanneld configuration into Etcd
etcdctl \
--ca-file=/etc/etcd/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://172.16.0.50:2379,https://172.16.0.51:2379,https://172.16.0.52:2379" \
mk /coreos.com/network/config '{"Network":"10.244.0.0/12", "SubnetLen": 21, "Backend": {"Type": "vxlan", "DirectRouting": true}}'


# Use get to view the configuration
[root@kubernetes-master-01 ~]# etcdctl \
 --ca-file=/etc/etcd/ssl/ca.pem \
 --cert-file=/etc/etcd/ssl/etcd.pem \
 --key-file=/etc/etcd/ssl/etcd-key.pem \
 --endpoints="https://172.16.0.50:2379,https://172.16.0.51:2379,https://172.16.0.52:2379" \
 get /coreos.com/network/config

{"Network":"10.244.0.0/12", "SubnetLen": 21, "Backend": {"Type": "vxlan", "DirectRouting": true}}
[root@kubernetes-master-01 ~]#

Note the Network field: it takes the Pod network CIDR (the cluster CIDR, 10.244.0.0/12 here), not the Service CIDR; keep the two clearly apart.

Start Flannel

The flannel service needs the etcd certificates at startup in order to access the etcd cluster, so the certificates must be copied to every node, Master and Node alike.

  • Copy the etcd certificates from the Master node to the Node nodes
for i in kubernetes-node-01 kubernetes-node-02;do 
ssh root@$i "mkdir -pv /etc/etcd/ssl"
scp -p /etc/etcd/ssl/*.pem root@$i:/etc/etcd/ssl    
done
  • Create the flanneld service unit
cat > /usr/lib/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld address
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \\
  -etcd-cafile=/etc/etcd/ssl/ca.pem \\
  -etcd-certfile=/etc/etcd/ssl/etcd.pem \\
  -etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
  -etcd-endpoints=https://172.16.0.50:2379,https://172.16.0.51:2379,https://172.16.0.52:2379 \\
  -etcd-prefix=/coreos.com/network \\
  -ip-masq
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

Options explained

Option  Description
-etcd-cafile: SSL CA file used for etcd communication
-etcd-certfile: SSL certificate file used for etcd communication
-etcd-keyfile: SSL key file used for etcd communication
-etcd-endpoints: all etcd endpoints
-etcd-prefix: the key prefix in etcd
-ip-masq: if set to true, flannel performs IP masquerading instead of docker. The point is that if docker masquerades, traffic leaving via flannel shows up on other hosts with the flannel gateway IP as its source instead of the container IP

Distribute the service unit

[root@kubernetes-master-01 ~]# for i in kubernetes-master-02 kubernetes-master-03 kubernetes-node-01 kubernetes-node-02;do scp /usr/lib/systemd/system/flanneld.service root@$i:/usr/lib/systemd/system; done
flanneld.service                                  100%  697     2.4MB/s   00:00
flanneld.service                                  100%  697     1.5MB/s   00:00
flanneld.service                                  100%  697     3.4MB/s   00:00
flanneld.service                                  100%  697     2.6MB/s   00:00
Modify the docker service unit
[root@kubernetes-master-01 ~]# sed -i '/ExecStart/s/\(.*\)/#\1/' /usr/lib/systemd/system/docker.service
[root@kubernetes-master-01 ~]# sed -i '/ExecReload/a ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock' /usr/lib/systemd/system/docker.service
[root@kubernetes-master-01 ~]# sed -i '/ExecReload/a EnvironmentFile=-/run/flannel/subnet.env' /usr/lib/systemd/system/docker.service
Distribute it
[root@kubernetes-master-01 ~]# for ip in kubernetes-master-02 kubernetes-master-03 kubernetes-node-01 kubernetes-node-02;do scp /usr/lib/systemd/system/docker.service root@${ip}:/usr/lib/systemd/system; done
docker.service                                           100% 1830     6.0MB/s   00:00
docker.service                                           100% 1830     4.7MB/s   00:00
docker.service                                           100% 1830     6.6MB/s   00:00
docker.service                                           100% 1830     7.3MB/s   00:00
Restart docker and start the flanneld service
[root@kubernetes-master-01 ~]# for i in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03 kubernetes-node-01 kubernetes-node-02;do
> echo ">>>  $i"
> ssh root@$i "systemctl daemon-reload"
> ssh root@$i "systemctl start flanneld"
> ssh root@$i "systemctl restart docker"
> done
>>>  kubernetes-master-01
>>>  kubernetes-master-02
>>>  kubernetes-master-03
>>>  kubernetes-node-01
>>>  kubernetes-node-02
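
If everything worked, flanneld writes this node's subnet lease to /run/flannel/subnet.env and docker picks it up for its bridge. An optional check on any node (not part of the original text):

# The subnet flannel assigned to this node
cat /run/flannel/subnet.env

# docker0 should now sit inside the flannel subnet
ip -4 addr show flannel.1
ip -4 addr show docker0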

Deploy the CoreDNS resolution plugin

CoreDNS lets Pods inside the cluster resolve Service names; Kubernetes builds its service discovery on top of CoreDNS.

  • Download the deployment files
git clone https://github.com/coredns/deployment.git
  • Confirm the cluster DNS IP
CLUSTER_DNS_IP="10.96.0.2"
  • Bind cluster permissions for the anonymous user
[root@kubernetes-master-01 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubernetes
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
  • Deploy CoreDNS
# Replace the CoreDNS image with: registry.cn-hangzhou.aliyuncs.com/k8sos/coredns:1.7.0
[root@kubernetes-master-01 ~]#  ./deploy.sh -i 10.96.0.2 -s | kubectl apply -f -
serviceaccount/coredns unchanged
clusterrole.rbac.authorization.k8s.io/system:coredns unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:coredns unchanged
configmap/coredns unchanged
deployment.apps/coredns unchanged
service/kube-dns created
[root@kubernetes-master-01 ~]# kubectl get pods -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-85b4878f78-5xr2z   1/1     Running   0          2m31s
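
To verify resolution end to end, a throwaway Pod can query the kubernetes Service (an optional check; busybox:1.28 is assumed here because newer busybox images ship a broken nslookup):

kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never \
  -- nslookup kubernetes.default.svc.cluster.local
# Expected: the name resolves to the Service IP (10.96.0.1 here)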
Test the Kubernetes cluster
  • Create an Nginx service
[root@kubernetes-master-01 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@kubernetes-master-01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
  • Test: access the Nginx Service through its NodePort, as sketched below.
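
The original page showed the result as a browser screenshot; a command-line check works as well. The NodePort value is whatever kubectl get svc nginx reports; 30001 below is only an example:

# Find the NodePort assigned to the nginx Service
kubectl get svc nginx

# Request the page through any node IP and that port
curl http://172.16.0.53:30001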

