Ways to Deploy a Kubernetes Cluster


  minikube

Minikube is a tool that quickly runs a single-node Kubernetes cluster locally, aimed at users who want to try Kubernetes or use it for day-to-day development. It cannot be used in production.

Official docs: https://kubernetes.io/docs/setup/minikube/

  kubeadm

Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.

Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

  Binary packages

Download the release binaries from the official site and deploy each component by hand to assemble a Kubernetes cluster.

Summary: For production clusters, only kubeadm and binary packages are realistic options. Kubeadm lowers the barrier to entry but hides many details, which makes troubleshooting harder. Here we deploy the cluster from binary packages, and I recommend this approach: the manual steps are more work, but you learn how the pieces fit together, which pays off during later maintenance.

  Software Environment

Software            Version
Operating system    CentOS 7.5 x64
Docker              18-ce
Kubernetes          1.12


  Server Roles

Role          IP               Components
k8s-master    192.168.31.63    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1     192.168.31.65    kubelet, kube-proxy, docker, flannel, etcd
k8s-node2     192.168.31.66    kubelet, kube-proxy, docker, flannel, etcd


1. Deploy the etcd Cluster

We use cfssl to generate self-signed certificates. First, download the cfssl tools:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

1.1 Generate Certificates

Create the following three files:

 

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "XiAn",
            "ST": "XiAn"
        }
    ]
}
EOF

cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.40.140",
    "192.168.40.141",
    "192.168.40.142"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "XiAn",
            "ST": "XiAn"
        }
    ]
}
EOF

 

Generate the certificates:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
# ls *pem
ca-key.pem  ca.pem  server-key.pem  server.pem
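To sanity-check what was issued, you can inspect the server certificate with the cfssl-certinfo tool installed earlier (a quick check; the file names follow the ls output above):

# Print the certificate details; verify the SANs match the planned etcd node IPs
cfssl-certinfo -cert server.pem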

1.2 Deploy etcd

Binary package download: https://github.com/coreos/etcd/releases/tag/v3.2.12

The following steps are identical on all three planned etcd nodes; the only difference is that the server IPs in the etcd config file must be the current node's:

Unpack the binary package:

# mkdir /opt/etcd/{bin,cfg,ssl} -p
# tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
# mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

Create the etcd config file:

# cat /opt/etcd/cfg/etcd  
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.63:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.63:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.63:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.63:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.31.63:2380,etcd02=https://192.168.31.65:2380,etcd03=https://192.168.31.66:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_NAME: node name
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: cluster peer listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: cluster member addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing one
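Only the node name and the node's own IPs change from node to node. As a sketch, using the addresses from the server roles table, the config on k8s-node1 (192.168.31.65) would differ only in these lines:

# /opt/etcd/cfg/etcd on k8s-node1: only the name and local addresses change
ETCD_NAME="etcd02"
ETCD_LISTEN_PEER_URLS="https://192.168.31.65:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.65:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.65:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.65:2379"
# ETCD_DATA_DIR, ETCD_INITIAL_CLUSTER, the token, and the state stay the same as on etcd01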

Manage etcd with systemd:

# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Copy the certificates generated earlier to the locations referenced in the config:

# cp ca*pem server*pem /opt/etcd/ssl

 

Start etcd and enable it at boot:

# systemctl start etcd
# systemctl enable etcd
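The same binaries, certificates, and systemd unit are also needed on the other two etcd nodes before the cluster can form. A minimal sketch of copying them over, assuming root SSH access and the IPs from the server roles table (adjust /opt/etcd/cfg/etcd on each node as shown earlier before starting etcd):

for ip in 192.168.31.65 192.168.31.66; do
  scp -r /opt/etcd root@${ip}:/opt/
  scp /usr/lib/systemd/system/etcd.service root@${ip}:/usr/lib/systemd/system/
done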

Once all nodes are deployed, check the etcd cluster health:

# /opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.40.140:2379,https://192.168.40.141:2379,https://192.168.40.142:2379" \
cluster-health

  member 70825cd7bcf63a14 is healthy: got healthy result from https://192.168.40.141:2379
  member 82175c0a2d4f0d9e is healthy: got healthy result from https://192.168.40.142:2379
  member e1dcf128dfc257ee is healthy: got healthy result from https://192.168.40.140:2379
  cluster is healthy

If you see the output above, the cluster was deployed successfully. If something goes wrong, check the logs first: /var/log/messages or journalctl -u etcd (if it reports a timeout, check the firewall).
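If the health check times out and firewalld is running on the nodes, the etcd ports need to be opened between them; a sketch (2379 is the client port, 2380 the peer port):

# Run on every etcd node, then retry the health check
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --reload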

2. Install Docker on the Node Machines

# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager \
    --add-repo  \
    https://download.docker.com/linux/centos/docker-ce.repo
# yum install docker-ce -y
# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
# systemctl start docker
# systemctl enable docker

3. Deploy the Flannel Network

Overlay network: a technique that overlays a virtual network on top of the existing network; the hosts in this network are connected by virtual links.

VXLAN: encapsulates the original packet in UDP, wraps it with the underlay network's IP/MAC as the outer header, and transmits it over the Ethernet; at the destination, the tunnel endpoint decapsulates it and delivers the data to the target address.

Flannel: one kind of overlay network; it also encapsulates the source packet inside another network packet for routing, forwarding, and communication. It currently supports UDP, VXLAN, host-gw, AWS VPC, and GCE route based forwarding.

Flannel stores its own subnet information in etcd, so make sure it can connect to etcd, then write the predefined subnet:

# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.40.140:2379,https://192.168.40.141:2379,https://192.168.40.142:2379" \
set /coreos.com/network/config  '{ "Network": "172.17.0.0/16", "Backend": {"Type":"vxlan"}}'
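To confirm the key was written, read it back with the same credentials (a quick check, run from the directory that holds the certificate files):

# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.40.140:2379,https://192.168.40.141:2379,https://192.168.40.142:2379" \
get /coreos.com/network/config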

Perform the following Flannel deployment steps on every planned node.

Download the binary package:

# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin

Configure Flannel:

# cat /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.40.140:2379,https://192.168.40.141:2379,https://192.168.40.142:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

Manage Flannel with systemd:

 

# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

Configure Docker to start with the Flannel-assigned subnet:

# cat /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Restart flannel and docker:

# systemctl daemon-reload
# systemctl start flanneld
# systemctl enable flanneld
# systemctl restart docker

Check that it took effect:

# ps -ef |grep docker
root     20941     1  1 Jun28 ?        09:15:34 /usr/bin/dockerd --bip=172.17.34.1/24 -ip-masq=false --mtu=1450
# ip addr
3607: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 8a:2e:3d:09:dd:82 brd ff:ff:ff:ff:ff:ff
    inet 172.17.34.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
3608: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP 
    link/ether 02:42:31:8f:d3:02 brd ff:ff:ff:ff:ff:ff     inet 172.17.34.1/24 brd 172.17.34.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:31ff:fe8f:d302/64 scope link 
       valid_lft forever preferred_lft forever

Make sure docker0 and flannel.1 are on the same subnet. To test cross-node connectivity, ping another node's docker0 IP from the current node:

# ping 172.17.58.1
PING 172.17.58.1 (172.17.58.1) 56(84) bytes of data.
64 bytes from 172.17.58.1: icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from 172.17.58.1: icmp_seq=2 ttl=64 time=0.204 ms

If the ping succeeds, Flannel is working. If not, check the logs: journalctl -u flanneld

4. Deploy Components on the Master Node

Before deploying the Kubernetes components, make sure etcd, flannel, and docker are all working properly; fix any problems before continuing.
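One quick way to check this on each machine, as a sketch (etcd only applies on the machines that are etcd members):

for svc in etcd flanneld docker; do
  echo -n "${svc}: "
  systemctl is-active ${svc}
done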

4.1 Generate Certificates

Create the CA certificate:

 

# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}


# cat ca-csr.json

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}


# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Generate the apiserver certificate:

# cat server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.31.60",
    "192.168.31.61",
    "192.168.31.62",
    "192.168.31.63",
    "192.168.31.64",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Generate the kube-proxy certificate:

# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

The following certificate files are generated:

# ls *pem
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem

4.2 Deploy the apiserver Component

Binary download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md. Downloading the kubernetes-server-linux-amd64.tar.gz package is enough; it contains all the components needed.

 

# mkdir /opt/kubernetes/{bin,cfg,ssl} -p
# tar zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin
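The apiserver config below expects the certificates under /opt/kubernetes/ssl; that copy step is easy to miss, so here is a sketch, assuming the master certificates were generated in the current directory as in 4.1:

# cp ca*pem server*pem /opt/kubernetes/ssl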

Create the token file (its purpose is explained later):

# cat /opt/kubernetes/cfg/token.csv
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Column 1: a random string (you can generate your own), column 2: user name, column 3: UID, column 4: user group
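If you want a fresh token rather than reusing the one above, a common way to generate the random string (a sketch):

# Print a 32-character hex string to use as the bootstrap token
head -c 16 /dev/urandom | od -An -t x | tr -d ' '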

Create the apiserver config file:

# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.31.63:2379,https://192.168.31.65:2379,https://192.168.31.66:2379 \
--bind-address=192.168.31.63 \
--secure-port=6443 \
--advertise-address=192.168.31.63 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

 

 

Point it at the certificates generated earlier and make sure it can connect to etcd.

Parameter notes:

--logtostderr log to standard error

--v log verbosity level

--etcd-servers etcd cluster endpoints

--bind-address listen address

--secure-port HTTPS secure port

--advertise-address advertised cluster address

--allow-privileged allow privileged containers

--service-cluster-ip-range Service virtual IP range

--enable-admission-plugins admission control plugins

--authorization-mode authorization mode; enables RBAC and Node authorization

--enable-bootstrap-token-auth enables the TLS bootstrapping mechanism, explained later

--token-auth-file token file

--service-node-port-range port range allocated to NodePort Services

Manage apiserver with systemd:

# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

 

Start it:

# systemctl daemon-reload
# systemctl enable kube-apiserver
# systemctl restart kube-apiserver
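To confirm the apiserver actually came up, a quick check (a sketch: 6443 is the secure port configured above, and 8080 is the local insecure port this release binds by default, assuming it was not overridden):

# ss -tlnp | egrep '6443|8080'
# journalctl -u kube-apiserver --no-pager | tail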

 

4.3 Deploy the scheduler Component

Create the scheduler config file:

# cat /opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"

 

Parameter notes:

--master connect to the local apiserver

--leader-elect when multiple instances of this component run, a leader is elected automatically (for HA)

Manage scheduler with systemd:

# cat /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start it:

# systemctl daemon-reload
# systemctl enable kube-scheduler
# systemctl restart kube-scheduler

4.4 Deploy the controller-manager Component

Create the controller-manager config file:

# cat /opt/kubernetes/cfg/kube-controller-manager 
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"

Manage controller-manager with systemd:

# cat /usr/lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start it:

# systemctl daemon-reload
# systemctl enable kube-controller-manager
# systemctl restart kube-controller-manager

All components have started successfully. Use the kubectl tool to check the current status of the cluster components:

# /opt/kubernetes/bin/kubectl get cs 
NAME                 STATUS    MESSAGE             ERROR 
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health":"true"}    
etcd-2               Healthy   {"health":"true"}    
etcd-1               Healthy   {"health":"true"}    
controller-manager   Healthy   ok 

Output like the above means the components are healthy.

5. Deploy Components on the Node Machines

With TLS authentication enabled on the Master apiserver, a node's kubelet must present a valid certificate issued by the CA to communicate with the apiserver and join the cluster. When there are many nodes, signing these certificates by hand becomes tedious, which is what the TLS Bootstrapping mechanism is for: the kubelet automatically requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically. The rough flow is: bind the bootstrap user to the node-bootstrapper role, create a bootstrap kubeconfig, start the kubelet, then approve its CSR on the master, as the following subsections show.

 

5.1 Bind the kubelet-bootstrap User to the System Cluster Role

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

5.2 Create the kubeconfig Files

Run the following commands in the directory where the kubernetes certificates were generated to create the kubeconfig files:

# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
KUBE_APISERVER="https://192.168.31.63:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# ---------------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig


kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# ls
bootstrap.kubeconfig  kube-proxy.kubeconfig

Copy these two files to /opt/kubernetes/cfg on the node machines.

5.3 Deploy the kubelet Component

Copy the kubelet and kube-proxy binaries from the package downloaded earlier to /opt/kubernetes/bin on the node.
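A sketch of doing that from the master, where the server tarball was already unpacked, assuming root SSH access to the nodes:

# cd kubernetes/server/bin
# scp kubelet kube-proxy root@192.168.31.65:/opt/kubernetes/bin/
# scp kubelet kube-proxy root@192.168.31.66:/opt/kubernetes/bin/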

Create the kubelet config file:

# cat /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.31.65 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

 

Parameter notes:

--hostname-override the host name this node shows in the cluster

--kubeconfig path where the kubeconfig file will be generated automatically

--bootstrap-kubeconfig the bootstrap.kubeconfig file generated earlier

--cert-dir where issued certificates are stored

--pod-infra-container-image the image used to manage the Pod network namespace

The /opt/kubernetes/cfg/kubelet.config file referenced above looks like this:

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.31.65
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true 

Manage kubelet with systemd:

# cat /usr/lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Start it:

# systemctl daemon-reload
# systemctl enable kubelet
# systemctl restart kubelet

Approve the node on the Master: after the kubelet starts, the node has not yet joined the cluster; it must be approved manually. On the Master, list the nodes requesting certificate signing, approve them, then verify:

# kubectl get csr
# kubectl certificate approve XXXXID
# kubectl get node

5.4 Deploy the kube-proxy Component

Create the kube-proxy config file:

# cat /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.31.65 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

Manage kube-proxy with systemd:

# cat /usr/lib/systemd/system/kube-proxy.service 

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start it:

# systemctl daemon-reload
# systemctl enable kube-proxy
# systemctl restart kube-proxy

Node2 is deployed the same way; only the node-specific addresses change, as shown in the sketch below.
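As a sketch, if /opt/kubernetes was copied over from node1, the node-specific addresses can be updated in one pass before starting the services (paths and IPs follow the examples above):

# On node2: point the node-specific configs at node2's own IP
sed -i 's/192.168.31.65/192.168.31.66/g' \
  /opt/kubernetes/cfg/kubelet \
  /opt/kubernetes/cfg/kubelet.config \
  /opt/kubernetes/cfg/kube-proxy
# If node1 had already bootstrapped, also delete its issued certs and kubelet.kubeconfig before starting kubelet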

6. Check Cluster Status

# kubectl get node 
NAME             STATUS    ROLES     AGE       VERSION 
192.168.31.65   Ready     <none>    1d       v1.12.0 
192.168.31.66   Ready     <none>    1d       v1.12.0 
# kubectl get cs 
NAME                 STATUS    MESSAGE             ERROR 
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-2               Healthy   {"health":"true"} 
etcd-1               Healthy   {"health":"true"}    
etcd-0               Healthy   {"health":"true"}       

7. Run a Test Example

Create an Nginx web deployment to test that the cluster works properly:

# kubectl run nginx --image=nginx --replicas=3 
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort

Check the Pods and the Service:

# kubectl get pods 
NAME                     READY     STATUS    RESTARTS   AGE 
nginx-64f497f8fd-fjgt2   1/1       Running   3          1d 
nginx-64f497f8fd-gmstq   1/1       Running   3          1d 
nginx-64f497f8fd-q6wk9   1/1       Running   3          1d 
# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                        AGE 
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP                        28d 
nginx        NodePort    10.0.0.175   <none>        88:38696/TCP                   28d 

To access the Nginx deployed in the cluster, open a browser and go to: http://192.168.31.66:38696
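Or test it from the command line (the NodePort 38696 comes from the kubectl get svc output above; yours may differ):

# Expect an HTTP/1.1 200 response from nginx if the Service and NodePort routing work
curl -I http://192.168.31.66:38696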

 

