Kubernetes Binary Deployment (Part 1): Single-Node Deployment (Master and Node on the Same Machine)


0. Preface

  • Affected by the recent COVID-19 outbreak, my onboarding has been postponed and I am working and studying from home
  • This post attempts to deploy a k8s environment on a single node (Master and Node on the same machine) from binaries compiled from source; the steps and scripts are organized below
  • Reference: the original article "Kubernetes二進制部署(一)單節點部署"

1. Related Concepts

1.1 Basic Architecture

1.2 Core Components

1.2.1 Master

1.2.1.1 kube-apiserver

  • The unified entry point to the cluster and the coordinator among all components
  • Exposes its services as a RESTful API
  • All create, update, delete, and watch operations on object resources go through kube-apiserver
  • State is then persisted via the distributed storage component etcd (see the example below)
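  • For example, once the cluster built later in this post is up, kube-apiserver's REST paths can be queried directly through kubectl:
$ kubectl get --raw /api/v1/namespaces/default/pods   # a raw GET against the apiserver's RESTful API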

1.2.1.2 kube-controller-manager

  • Handles routine background tasks in the cluster
  • Each resource corresponds to one controller, and kube-controller-manager is responsible for managing these controllers

1.2.1.3 kube-scheduler

  • Selects a Node for each newly created Pod according to the scheduling algorithm
  • Can be deployed flexibly: on the same node as the other components, or on a separate one

1.2.1.4 etcd

  • A distributed key-value storage system
  • Stores cluster state data, such as Pod and Service object information

1.2.2 Node

1.2.2.1 kubelet

  • kubelet is the Master's agent on each Node
  • It manages the lifecycle of containers on the local machine: creating containers, mounting data volumes into Pods, downloading Secrets, reporting container and node status, and so on
  • kubelet turns each Pod into a set of containers

1.2.2.2 kube-proxy

  • Implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing
  • For traffic leaving the host, it resolves the remote backend from the request address and routes the data accordingly; in some cases it spreads requests across multiple instances in the cluster using round-robin scheduling

1.2.2.3 docker

  • The container engine

2. Deployment Procedure

2.1 Compile from Source

  • Install the golang environment
  • kubernetes v1.18 requires golang 1.13
$ wget https://dl.google.com/go/go1.13.8.linux-amd64.tar.gz
$ tar -zxvf go1.13.8.linux-amd64.tar.gz -C /usr/local/
  • Add the following environment variables to ~/.bashrc or ~/.zshrc
export GOROOT=/usr/local/go
 
# GOPATH
export GOPATH=$HOME/go

# GOROOT bin
export PATH=$PATH:$GOROOT/bin

# GOPATH bin
export PATH=$PATH:$GOPATH/bin
  • Reload the environment variables
$ source ~/.bashrc
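  • A quick sanity check that the toolchain is on the PATH (the output should report the 1.13.8 release installed above):
$ go version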
  • Clone the latest kubernetes source from github
$ git clone https://github.com/kubernetes/kubernetes.git
  • Compile the binaries
$ make KUBE_BUILD_PLATFORMS=linux/amd64
+++ [0215 22:16:44] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
+++ [0215 22:16:52] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/defaulter-gen
+++ [0215 22:17:00] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/conversion-gen
+++ [0215 22:17:12] Building go targets for linux/amd64:
    ./vendor/k8s.io/kube-openapi/cmd/openapi-gen
+++ [0215 22:17:25] Building go targets for linux/amd64:
    ./vendor/github.com/go-bindata/go-bindata/go-bindata
+++ [0215 22:17:27] Building go targets for linux/amd64:
    cmd/kube-proxy
    cmd/kube-apiserver
    cmd/kube-controller-manager
    cmd/kubelet
    cmd/kubeadm
    cmd/kube-scheduler
    vendor/k8s.io/apiextensions-apiserver
    cluster/gce/gci/mounter
    cmd/kubectl
    cmd/gendocs
    cmd/genkubedocs
    cmd/genman
    cmd/genyaml
    cmd/genswaggertypedocs
    cmd/linkcheck
    vendor/github.com/onsi/ginkgo/ginkgo
    test/e2e/e2e.test
    cluster/images/conformance/go-runner
    cmd/kubemark
    vendor/github.com/onsi/ginkgo/ginkgo
  • KUBE_BUILD_PLATFORMS specifies the target platform of the generated binaries, e.g. darwin/amd64, linux/amd64, or windows/amd64
  • Running make cross builds binaries for all platforms
  • Cloud servers tend to have limited resources, so it is advisable to compile locally and upload the result to the server
  • The build output lands in the _output directory; the core binaries are in _output/local/bin/linux/amd64
$ pwd
/root/Coding/kubernetes/_output/local/bin/linux/amd64
$ ls
apiextensions-apiserver genman                  go-runner               kube-scheduler          kubemark
e2e.test                genswaggertypedocs      kube-apiserver          kubeadm                 linkcheck
gendocs                 genyaml                 kube-controller-manager kubectl                 mounter
genkubedocs             ginkgo                  kube-proxy              kubelet
  • Of these, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kube-proxy, and kubelet are the binaries needed for the installation; they can be sanity-checked as shown below
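  • A minimal check that the freshly built binaries run (the version string will vary with the commit you built from):
$ ./kubectl version --client     # prints the client version of the new build
$ ./kube-apiserver --version     # same check for a server-side binary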

2.2 Install docker

  • docker is already installed on the cloud server used here, so this deployment skips that step
  • For installation details, see the official documentation

2.3 Download the Deployment Scripts

  • All scripts used in the deployment below have been uploaded to a github repository for anyone interested
  • Create the working directory k8s and the script directory k8s/scripts, then copy all scripts from the repository into the script folder
$ git clone https://github.com/wangao1236/k8s_single_deploy.git
$ chmod +x k8s_single_deploy/scripts/*.sh
$ mkdir -p k8s/scripts
$ cp k8s_single_deploy/scripts/* k8s/scripts

2.4 Install cfssl

  • Install cfssl by running the k8s/scripts/cfssl.sh script, or by executing the following commands:
$ curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
$ curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
$ curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
$ chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
  • The k8s/scripts/cfssl.sh script contains:
$ cat k8s_single_deploy/scripts/cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
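  • To confirm the tools are installed and executable:
$ cfssl version    # should print the R1.2 release information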

2.5 Install etcd

  • Create the target directories
$ mkdir -p /opt/etcd/{cfg,bin,ssl}
  • Download the latest etcd release
$ wget https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz
$ tar -zxvf etcd-v3.3.18-linux-amd64.tar.gz
$ cp etcd-v3.3.18-linux-amd64/etcdctl etcd-v3.3.18-linux-amd64/etcd /opt/etcd/bin
  • Create the directory k8s/etcd-cert; k8s is the root directory for all deployment-related files and scripts, and etcd-cert temporarily holds the etcd HTTPS certificates
$ mkdir -p k8s/etcd-cert
  • Copy the etcd-cert.sh script into the etcd-cert directory
$ cp k8s/scripts/etcd-cert.sh k8s/etcd-cert
  • The script contains:
$ cat k8s/scripts/etcd-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "10.206.240.188",
    "10.206.240.189",
    "10.206.240.111"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
  • Note: change the hosts field in the server-csr.json section to 127.0.0.1 and the server's IP address
  • Run the script
$ ./etcd-cert.sh
2020/02/16 00:49:37 [INFO] generating a new CA key and certificate from CSR
2020/02/16 00:49:37 [INFO] generate received request
2020/02/16 00:49:37 [INFO] received CSR
2020/02/16 00:49:37 [INFO] generating key: rsa-2048
2020/02/16 00:49:38 [INFO] encoded CSR
2020/02/16 00:49:38 [INFO] signed certificate with serial number 18016478413052532961889837653710495342880481812
2020/02/16 00:49:38 [INFO] generate received request
2020/02/16 00:49:38 [INFO] received CSR
2020/02/16 00:49:38 [INFO] generating key: rsa-2048
2020/02/16 00:49:38 [INFO] encoded CSR
2020/02/16 00:49:38 [INFO] signed certificate with serial number 83852304780368923308324941155403278584239347004
2020/02/16 00:49:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
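  • Optionally inspect the generated server certificate to confirm the hosts made it into the SAN list (assuming openssl is available):
$ openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'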
  • Copy the certificates
$ cp *.pem /opt/etcd/ssl
  • Run the k8s/scripts/etcd.sh script
$ ./k8s/scripts/etcd.sh etcd01 127.0.0.1
# or
$ ./k8s/scripts/etcd.sh etcd01 ${SERVER_IP}
  • The k8s/scripts/etcd.sh script contains:
$ cat k8s/scripts/etcd.sh
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10

ETCD_NAME=$1
ETCD_IP=$2

WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
  • Because the certificate already includes 127.0.0.1 and the server's IP address, the script's second argument may be either 127.0.0.1 or the server IP
  • For privacy, this post hides the server IP and uses 127.0.0.1 as the node address wherever possible
  • Verify the installation with the following command:
$ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://127.0.0.1:2379" cluster-health
member f6947f26c76d8a6b is healthy: got healthy result from https://127.0.0.1:2379
cluster is healthy
# or
$ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://${SERVER_IP}:2379" cluster-health
member f6947f26c76d8a6b is healthy: got healthy result from https://${SERVER_IP}:2379
cluster is healthy
  • Since this is a single-node cluster, only one endpoint is given; a line like "member ...... is healthy: got healthy result from ......" means etcd started correctly
  • The script also generates the configuration file
$ cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://127.0.0.1:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://127.0.0.1:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://127.0.0.1:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
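  • A further check using the v2-style flags that etcdctl accepts in this release (v3.3.18 was installed above):
$ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://127.0.0.1:2379" member list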

2.6 Deploy flannel

  • Write the allocated subnet into etcd for flannel to use:
$ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://127.0.0.1:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}} 
  • Read back the stored value
$ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://127.0.0.1:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
  • Download the latest flannel release
$ wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
$ tar -zxvf flannel-v0.11.0-linux-amd64.tar.gz
$ mkdir -p /opt/kubernetes/{cfg,bin,ssl}
$ mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
  • Run the k8s/scripts/flannel.sh script; the first argument is the etcd endpoint
$ ./k8s/scripts/flannel.sh https://127.0.0.1:2379
  • The script contains:
$ cat k8s/scripts/flannel.sh
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
  • Check the subnet assigned at startup
$ cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.23.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.23.1/24 --ip-masq=false --mtu=1450"
  • Run vim /usr/lib/systemd/system/docker.service and modify the docker configuration as follows
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
 
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H unix:///var/run/docker.sock
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.soc
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
......
  • Restart the docker service
$ systemctl daemon-reload
$ systemctl restart docker
  • Inspect the flannel network; docker0 sits inside the subnet flannel allocated
$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.23.1  netmask 255.255.255.0  broadcast 172.17.23.255
        ether 02:42:c4:96:b7:e3  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
eth1: ......
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.23.0  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 1e:7a:e8:a0:4d:a5  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 0  (Local Loopback)
        RX packets 2807  bytes 220030 (214.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2807  bytes 220030 (214.8 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
  • Start a container and inspect its network
$ docker run -it centos:7 /bin/bash
[root@f04f38dfa5ec /]# yum install -y  net-tools
[root@f04f38dfa5ec /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.23.2  netmask 255.255.255.0  broadcast 172.17.23.255
        ether 02:42:ac:11:17:02  txqueuelen 0  (Ethernet)
        RX packets 10391  bytes 14947016 (14.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6295  bytes 445445 (435.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
[root@f04f38dfa5ec /]# ping 172.17.23.1
PING 172.17.23.1 (172.17.23.1) 56(84) bytes of data.
64 bytes from 172.17.23.1: icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from 172.17.23.1: icmp_seq=2 ttl=64 time=0.056 ms
64 bytes from 172.17.23.1: icmp_seq=3 ttl=64 time=0.046 ms
64 bytes from 172.17.23.1: icmp_seq=4 ttl=64 time=0.048 ms
64 bytes from 172.17.23.1: icmp_seq=5 ttl=64 time=0.049 ms
64 bytes from 172.17.23.1: icmp_seq=6 ttl=64 time=0.046 ms
64 bytes from 172.17.23.1: icmp_seq=7 ttl=64 time=0.055 ms
  • The container can ping the docker0 interface, which shows that flannel is performing its routing role

2.7 Install kube-apiserver

  • Edit the hosts field of the server-csr.json section in k8s/scripts/k8s-cert.sh so that it contains 127.0.0.1 and the server's IP address
cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "${服務器IP}",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {   
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
  • Generate the certificates with the k8s/scripts/k8s-cert.sh script:
$ mkdir -p k8s/k8s-cert
$ cp k8s/scripts/k8s-cert.sh k8s/k8s-cert
$ cd k8s/k8s-cert
$ ./k8s-cert.sh
$ ls
admin.csr admin.pem ca-csr.json k8s-cert.sh kube-proxy-key.pem server-csr.json
admin-csr.json ca-config.json ca-key.pem kube-proxy.csr kube-proxy.pem server-key.pem
admin-key.pem ca.csr ca.pem kube-proxy-csr.json server.csr server.pem
$ cp ca*pem server*pem /opt/kubernetes/ssl/
  • The script contains:
$ cat k8s-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
              "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "10.206.176.19",
      "10.206.240.188",
      "10.206.240.189",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
  • Copy the kube-apiserver, kubectl, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy binaries mentioned above to /opt/kubernetes/bin/
$ cp kube-apiserver kubectl kube-controller-manager kube-scheduler kubelet kube-proxy /opt/kubernetes/bin/
  • Generate a random token
$ head -c 16 /dev/urandom | od -An -t x | tr -d ' '
20cd735bd334f4334118f8be496df49d
$ cat /opt/kubernetes/cfg/token.csv            
20cd735bd334f4334118f8be496df49d,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
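  • The file itself can be written in one step; the format is token,user,uid,"group" (the helper variable below is for illustration):
$ BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
$ echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /opt/kubernetes/cfg/token.csv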
  • Run the k8s/scripts/apiserver.sh script to start the kube-apiserver.service unit; the first argument is the Master node address, the second the etcd cluster endpoint(s)
$ ./k8s/scripts/apiserver.sh ${SERVER_IP} https://127.0.0.1:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
  • The script contains:
$ cat k8s/scripts/apiserver.sh
#!/bin/bash

MASTER_ADDRESS=$1
ETCD_SERVERS=$2

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
  • The script creates the kube-apiserver.service unit; check its status
$ systemctl status kube-apiserver.service
* kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-02-16 03:10:06 CST; 3min 10s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 28740 (kube-apiserver)
    Tasks: 14
   Memory: 244.5M
   CGroup: /system.slice/kube-apiserver.service
           `-28740 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://127.0.0.1:...

Feb 16 03:13:10 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:10.491629   28740 available_controller.g...ons
Feb 16 03:13:10 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:10.514914   28740 httplog.go:90] verb="G…668":
Feb 16 03:13:10 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:10.516879   28740 httplog.go:90] verb="G...8":
Feb 16 03:13:10 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:10.525747   28740 httplog.go:90] verb="G...8":
Feb 16 03:13:10 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:10.527263   28740 httplog.go:90] verb="G...8":
Feb 16 03:13:10 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:10.528568   28740 httplog.go:90] verb="G…668":
Feb 16 03:13:11 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:11.609546   28740 httplog.go:90] verb="G...8":
Feb 16 03:13:11 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:11.611355   28740 httplog.go:90] verb="G...8":
Feb 16 03:13:11 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:11.619297   28740 httplog.go:90] verb="G...8":
Feb 16 03:13:11 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:11.624253   28740 httplog.go:90] verb="G…668":
Hint: Some lines were ellipsized, use -l to show in full.
  • Inspect the kube-apiserver.service configuration file
$ cat /opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://127.0.0.1:2379 \
--bind-address=${SERVER_IP} \
--secure-port=6443 \
--advertise-address=${SERVER_IP} \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
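  • As a sanity check, the health endpoint can be queried on the insecure port (8080 is the default in this release, and it is the port the scheduler and controller-manager below connect to):
$ curl http://127.0.0.1:8080/healthz
ok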

2.8 Install kube-scheduler

  • Run the k8s/scripts/scheduler.sh script to create and start the kube-scheduler.service unit; the first argument is the Master node address
$ ./k8s/scripts/scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
  • The script contains:
$ cat k8s/scripts/scheduler.sh
#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
  • The script creates the kube-scheduler.service unit; check its status
$ systemctl status kube-scheduler.service    
* kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-02-16 03:22:55 CST; 5h 35min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 31963 (kube-scheduler)
    Tasks: 14
   Memory: 12.4M
   CGroup: /system.slice/kube-scheduler.service
           `-31963 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-...

Feb 16 08:46:36 VM_121_198_centos kube-scheduler[31963]: I0216 08:46:36.873191   31963 reflector.go:494] k8s....ved
Feb 16 08:50:33 VM_121_198_centos kube-scheduler[31963]: I0216 08:50:33.873911   31963 reflector.go:494] k8s....ved
Feb 16 08:51:34 VM_121_198_centos kube-scheduler[31963]: I0216 08:51:34.876413   31963 reflector.go:494] k8s....ved
Feb 16 08:52:05 VM_121_198_centos kube-scheduler[31963]: I0216 08:52:05.874120   31963 reflector.go:494] k8s....ved
Feb 16 08:52:38 VM_121_198_centos kube-scheduler[31963]: I0216 08:52:38.873990   31963 reflector.go:494] k8s....ved
Feb 16 08:53:47 VM_121_198_centos kube-scheduler[31963]: I0216 08:53:47.869403   31963 reflector.go:494] k8s....ved
Feb 16 08:54:16 VM_121_198_centos kube-scheduler[31963]: I0216 08:54:16.876848   31963 reflector.go:494] k8s....ved
Feb 16 08:54:24 VM_121_198_centos kube-scheduler[31963]: I0216 08:54:24.873540   31963 reflector.go:494] k8s....ved
Feb 16 08:55:52 VM_121_198_centos kube-scheduler[31963]: I0216 08:55:52.876115   31963 reflector.go:494] k8s....ved
Feb 16 08:57:42 VM_121_198_centos kube-scheduler[31963]: I0216 08:57:42.874884   31963 reflector.go:494] k8s....ved
Hint: Some lines were ellipsized, use -l to show in full. 
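  • The scheduler also serves a health endpoint (on port 10251, its default insecure port in this release; this has moved in newer Kubernetes versions):
$ curl http://127.0.0.1:10251/healthz
ok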

2.9 Install kube-controller-manager

  • Run the k8s/scripts/controller-manager.sh script to create and start the kube-controller-manager.service unit; the first argument is the Master node address
$ ./k8s/scripts/controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
  • The script contains:
$ cat ./k8s/scripts/controller-manager.sh
#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager


KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
  • The script creates the kube-controller-manager.service unit; check its status
$ systemctl status kube-controller-manager.service
* kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-02-16 09:28:07 CST; 54s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 17119 (kube-controller)
    Tasks: 13
   Memory: 24.9M
   CGroup: /system.slice/kube-controller-manager.service
           `-17119 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 ...

Feb 16 09:28:57 VM_121_198_centos kube-controller-manager[17119]: I0216 09:28:57.577189   17119 pv_controller_...er
Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.128738   17119 request.go:556...2s
Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.178743   17119 request.go:556...2s
Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.228729   17119 request.go:556...2s
Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.278737   17119 request.go:556...2s
Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.635791   17119 request.go:556...2s
Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.685804   17119 request.go:556...2s
Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.735807   17119 request.go:556...2s
Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.785776   17119 request.go:556...2s
Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.786828   17119 resource_quota...nc
Hint: Some lines were ellipsized, use -l to show in full.
  • At this point, all of the Master node's components are installed
  • Add the binary directory to the environment: export PATH=$PATH:/opt/kubernetes/bin/
$ vim ~/.zshrc
......
export PATH=$PATH:/opt/kubernetes/bin/
$ source ~/.zshrc
  • Check the Master component status
$ kubectl get cs 
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

2.10 Install kubelet

  • From this section on, the components being installed are used by the Node role; since this is a single-node setup, they go on the same machine
  • Create a working directory and copy the k8s/scripts/kubeconfig.sh script into it
$ mkdir -p k8s/kubeconfig
$ cp k8s/scripts/kubeconfig.sh k8s/kubeconfig
$ cd k8s/kubeconfig
  • Look up the kube-apiserver token
$ cat /opt/kubernetes/cfg/token.csv
20cd735bd334f4334118f8be496df49d,kubelet-bootstrap,10001,"system:kubelet-bootstrap" 
  • In the script, modify the "set client authentication parameters" section; change the following:
......
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
......
  • to use the token from /opt/kubernetes/cfg/token.csv instead:
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
   --token=20cd735bd334f4334118f8be496df49d \
   --kubeconfig=bootstrap.kubeconfig 
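  • The substitution can also be scripted rather than edited by hand (a convenience sketch; it assumes the script still contains the literal --token=${BOOTSTRAP_TOKEN} line):
$ TOKEN=$(cut -d, -f1 /opt/kubernetes/cfg/token.csv)
$ sed -i 's/--token=${BOOTSTRAP_TOKEN}/--token='"${TOKEN}"'/' kubeconfig.sh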
  • Run the kubeconfig.sh script; the first argument is the IP address kube-apiserver listens on, the second the k8s-cert directory created above; copy the generated files into the configuration directory
$ ./kubeconfig.sh ${SERVER_IP} ../k8s-cert
f920d3cad77834c418494860695ea887
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" modified.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" modified.
Switched to context "default".
$ cp bootstrap.kubeconfig kube-proxy.kubeconfig /opt/kubernetes/cfg/ 
  • The script contains:
$ cat kubeconfig.sh
# Create the TLS Bootstrapping Token
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo ${BOOTSTRAP_TOKEN}

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

#----------------------

APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=20cd735bd334f4334118f8be496df49d \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  • Run the k8s/scripts/kubelet.sh script to create and start the kubelet.service unit; the first argument is the Node address (127.0.0.1 or the server IP)
$ ./k8s/scripts/kubelet.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
  • The script contains:
$ cat ./k8s/scripts/kubelet.sh
#!/bin/bash

NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--node-labels=node.kubernetes.io/k8s-master=true \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP} 
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
  • Create the bootstrap role binding that grants permission to request certificate signing from kube-apiserver
$ kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
  • Check the pending request
$ kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-080q0dk5uSm0JxED14c6UB7q4jeUSCUHUxqosnqtnpA   16s   kubelet-bootstrap   Pending
  • Approve the request and issue the certificate
$ kubectl certificate approve node-csr-080q0dk5uSm0JxED14c6UB7q4jeUSCUHUxqosnqtnpA
certificatesigningrequest.certificates.k8s.io/node-csr-080q0dk5uSm0JxED14c6UB7q4jeUSCUHUxqosnqtnpA approved
$ kubectl get csr                                                                 
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-080q0dk5uSm0JxED14c6UB7q4jeUSCUHUxqosnqtnpA   2m51s   kubelet-bootstrap   Approved,Issued
  • List the cluster nodes
$ kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
127.0.0.1   Ready    <none>   77s   v1.18.0-alpha.5.158+1c60045db0bd6e 
  • The node is already in Ready state, so it joined successfully
  • Since this Node also plays the Master role, label it accordingly
$ kubectl label node 127.0.0.1 node-role.kubernetes.io/master=true
node/127.0.0.1 labeled
$ kubectl get node                                                
NAME        STATUS   ROLES    AGE     VERSION
127.0.0.1   Ready    master   4m21s   v1.18.0-alpha.5.158+1c60045db0bd6e

2.11 Install kube-proxy

  • Run the k8s/scripts/proxy.sh script to create and start the kube-proxy.service unit; the first argument is the Node address
$ ./k8s/scripts/proxy.sh 127.0.0.1  
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service. 
  • The script contains:
$ cat ./k8s/scripts/proxy.sh
#!/bin/bash

NODE_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
  • The script creates the kube-proxy.service unit; check its status
$ systemctl status kube-proxy.service             
* kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-02-16 10:39:16 CST; 4min 13s ago
 Main PID: 3474 (kube-proxy)
    Tasks: 10
   Memory: 7.6M
   CGroup: /system.slice/kube-proxy.service
           `-3474 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=127.0.0.1 --cluste...

Feb 16 10:43:20 VM_121_198_centos kube-proxy[3474]: I0216 10:43:20.716190    3474 config.go:169] Calling han...date
Feb 16 10:43:20 VM_121_198_centos kube-proxy[3474]: I0216 10:43:20.800724    3474 config.go:169] Calling han...date
Feb 16 10:43:22 VM_121_198_centos kube-proxy[3474]: I0216 10:43:22.724946    3474 config.go:169] Calling han...date
Feb 16 10:43:22 VM_121_198_centos kube-proxy[3474]: I0216 10:43:22.809768    3474 config.go:169] Calling han...date
Feb 16 10:43:24 VM_121_198_centos kube-proxy[3474]: I0216 10:43:24.733676    3474 config.go:169] Calling han...date
Feb 16 10:43:24 VM_121_198_centos kube-proxy[3474]: I0216 10:43:24.818662    3474 config.go:169] Calling han...date
Feb 16 10:43:26 VM_121_198_centos kube-proxy[3474]: I0216 10:43:26.743754    3474 config.go:169] Calling han...date
Feb 16 10:43:26 VM_121_198_centos kube-proxy[3474]: I0216 10:43:26.830673    3474 config.go:169] Calling han...date
Feb 16 10:43:28 VM_121_198_centos kube-proxy[3474]: I0216 10:43:28.755816    3474 config.go:169] Calling han...date
Feb 16 10:43:28 VM_121_198_centos kube-proxy[3474]: I0216 10:43:28.838915    3474 config.go:169] Calling han...date
Hint: Some lines were ellipsized, use -l to show in full.
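  • Because the script runs kube-proxy in ipvs mode, the programmed virtual servers can be listed once rules are synced (this optional check assumes ipvsadm is installed):
$ ipvsadm -Ln    # should show the kubernetes Service, e.g. 10.0.0.1:443 forwarding to the apiserver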

2.12 Verify the Installation

  • Create a yaml file
$ mkdir -p k8s/yamls
$ cd k8s/yamls
$ vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
  • Create the deployment object and check the generated Pods; once they reach Running status, they were created successfully
$ kubectl apply -f nginx-deployment.yaml
$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-54f57cf6bf-c24p4   1/1     Running   0          12s
nginx-deployment-54f57cf6bf-w2pqp   1/1     Running   0          12s
  • Inspect a Pod's details
$ kubectl describe pod  nginx-deployment-54f57cf6bf-c24p4
Name:         nginx-deployment-54f57cf6bf-c24p4
Namespace:    default
Priority:     0
Node:         127.0.0.1/127.0.0.1
Start Time:   Sun, 16 Feb 2020 10:49:05 +0800
Labels:       app=nginx
              pod-template-hash=54f57cf6bf
Annotations:  <none>
Status:       Running
IP:           172.17.23.2
IPs:
  IP:           172.17.23.2
Controlled By:  ReplicaSet/nginx-deployment-54f57cf6bf
Containers:
  nginx:
    Container ID:   docker://ffdfdefa8a743e5911634ca4b4d5c00b3e98955799c7c52c8040e6a8161706f9
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 16 Feb 2020 10:49:06 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-x7gjm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-x7gjm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-x7gjm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From                Message
  ----    ------     ----       ----                -------
  Normal  Scheduled  <unknown>  default-scheduler   Successfully assigned default/nginx-deployment-54f57cf6bf-c24p4 to 127.0.0.1
  Normal  Pulled     2m11s      kubelet, 127.0.0.1  Container image "nginx:1.7.9" already present on machine
  Normal  Created    2m11s      kubelet, 127.0.0.1  Created container nginx
  Normal  Started    2m11s      kubelet, 127.0.0.1  Started container nginx
  • If the query reports the following error:
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) 
  • then bind the cluster-admin role to the anonymous user:
$ kubectl create clusterrolebinding system:anonymous   --clusterrole=cluster-admin   --user=system:anonymous
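  • To also exercise Service networking end to end, a NodePort Service can be placed in front of the deployment (a supplementary check beyond the original walkthrough; the allocated port falls inside the 30000-50000 range configured for kube-apiserver):
$ kubectl expose deployment nginx-deployment --type=NodePort --port=80
$ kubectl get svc nginx-deployment        # note the allocated NodePort
$ curl http://127.0.0.1:${NODE_PORT}      # substitute the port shown above; should return the nginx welcome page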

3. Summary

  • Compared with kubeadm, binary installation is considerably more involved
  • But development and testing based on the source tree require deploying from binaries
  • So mastering binary deployment of a k8s cluster is well worth the effort
  • All of the scripts above have been uploaded to the github repository
  • Questions and corrections are welcome
