Kubernetes Cluster Installation, Offline Edition

Why install with kubeadm

kubeadm is a tool released by the official community for quickly deploying Kubernetes clusters. With just two commands it can stand up a working cluster.

Many people online say that installing from binaries is the way to learn the configuration details, but you can inspect the same details when installing with kubeadm.

It also generates the certificates automatically, which is a real convenience for beginners.

Network environment

We fully simulate a production environment that has no access to the external internet.

The base yum repositories are available, but repositories such as docker-ce and kubernetes are not.

Domains such as k8s.gcr.io and quay.io are also unreachable.

Preparing the environment

Unless otherwise noted, the installation steps below must be executed on all master and node machines.

Machine network and configuration

Clone three virtual machines.

Hostname     IP               Node type     Minimum configuration
k8s-master   192.168.18.134   master node   2 CPU cores, 2 GB RAM
k8s-node1    192.168.18.135   worker node   2 CPU cores, 2 GB RAM
k8s-node2    192.168.18.136   worker node   2 CPU cores, 2 GB RAM

The master node needs at least 2 CPUs, otherwise you will get an error like this:

error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2

Disable the firewall

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Disable SELinux

Replace SELINUX=enforcing with SELINUX=disabled:


[root@localhost ~]# sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
[root@localhost ~]# setenforce 0

Check the SELinux status.

[root@localhost ~]# getenforce
Permissive

Disable swap

[root@localhost ~]# swapoff -a
[root@localhost ~]# cp /etc/fstab /etc/fstab_bak
[root@localhost ~]# cat /etc/fstab_bak | grep -v swap > /etc/fstab

grep -v swap keeps only the lines that do not contain "swap", so the swap entry is dropped from /etc/fstab.
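
As a rough alternative, instead of rewriting the file with grep you can simply comment out the swap entry in place; something like this should work:

# comment out any line that mounts swap in /etc/fstab
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab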

Check swap again; it should now show 0 everywhere.


[root@localhost ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           2117         253        1351           9         512        1704
Swap:             0           0           0

Set the hostnames

Set the hostname on the master node.

hostnamectl set-hostname k8s-master

Set the hostname on node1.

hostnamectl set-hostname k8s-node1

Set the hostname on node2.

hostnamectl set-hostname k8s-node2

Verify the hostname on the master.

[root@k8s-master ~]# hostname
k8s-master

Set up /etc/hosts

>> appends the records to the end of the file.

cat >> /etc/hosts <<EOF
192.168.18.134   k8s-master
192.168.18.135   k8s-node1
192.168.18.136   k8s-node2
EOF
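
A quick sanity check that the names resolve on each node:

getent hosts k8s-master k8s-node1 k8s-node2
ping -c 1 k8s-node1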

Modify sysctl.conf

I did not change it at this point; in my case installing Docker set these values automatically, so you can skip this step for now.

If the values are not set correctly, docker info will print the following warnings.

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
cat /proc/sys/net/bridge/bridge-nf-call-iptables
0
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
0

You can fix it as follows.

# Edit /etc/sysctl.conf
# If the keys already exist, update them in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
# If the keys are missing, append them
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
sysctl -p

In other words, append the following to the end of /etc/sysctl.conf:

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Then apply the configuration with sysctl -p.
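
Note that the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so if sysctl -p complains that the keys are unknown, loading the module first should fix it (a rough sketch):

# load the bridge netfilter module now, and make it persistent across reboots
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf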

Install Docker

Download Docker

Since the production environment has no internet access, the Docker rpm packages must be prepared in advance.

We download the required packages on another machine that does have internet access.

Add the Docker yum repository

On the internet-connected machine, download Docker.

Configure the docker-ce repository:

cd /etc/yum.repos.d/
wget https://download.docker.com/linux/centos/docker-ce.repo

Or

Official repository (slower):

$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

Aliyun mirror:

$ sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Tsinghua University mirror:

$ sudo yum-config-manager \
    --add-repo \
    https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo

List all available Docker versions

[root@k8s-master ~]# yum list docker-ce --showduplicates
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Installed Packages
docker-ce.x86_64                               18.06.3.ce-3.el7                                       @/docker-ce-18.06.3.ce-3.el7.x86_64
Available Packages
docker-ce.x86_64                               17.03.0.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.03.1.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.03.2.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.03.3.ce-1.el7                                       docker-ce-stable
docker-ce.x86_64                               17.06.0.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.06.1.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.06.2.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.09.0.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.09.1.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.12.0.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               17.12.1.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               18.03.0.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               18.03.1.ce-1.el7.centos                                docker-ce-stable
docker-ce.x86_64                               18.06.0.ce-3.el7                                       docker-ce-stable
docker-ce.x86_64                               18.06.1.ce-3.el7                                       docker-ce-stable
docker-ce.x86_64                               18.06.2.ce-3.el7                                       docker-ce-stable
docker-ce.x86_64                               18.06.3.ce-3.el7                                       docker-ce-stable
docker-ce.x86_64                               3:18.09.0-3.el7                                        docker-ce-stable
...

We choose to install docker-ce-18.06.3.ce-3.el7.

Download

yum install --downloadonly --downloaddir ~/k8s/docker docker-ce-18.06.3.ce-3.el7

Docker and its dependencies are downloaded into the ~/k8s/docker directory.

We can see that only docker-ce itself comes from the docker-ce-stable repository.

Dependencies Resolved

================================================================================================================================================================
 Package                                    Arch                       Version                                       Repository                            Size
================================================================================================================================================================
Installing:
 docker-ce                                  x86_64                     18.06.3.ce-3.el7                              docker-ce-stable                      41 M
Installing for dependencies:
 audit-libs-python                          x86_64                     2.8.5-4.el7                                   base                                  76 k
 checkpolicy                                x86_64                     2.5-8.el7                                     base                                 295 k
 container-selinux                          noarch                     2:2.119.2-1.911c772.el7_8                     extras                                40 k
 libcgroup                                  x86_64                     0.41-21.el7                                   base                                  66 k
 libsemanage-python                         x86_64                     2.5-14.el7                                    base                                 113 k
 policycoreutils-python                     x86_64                     2.5-34.el7                                    base                                 457 k
 python-IPy                                 noarch                     0.75-6.el7                                    base                                  32 k
 setools-libs                               x86_64                     3.3.8-4.el7                                   base                                 620 k

So we only need to copy docker-ce-18.06.3.ce-3.el7.x86_64.rpm to the master and node machines; the remaining dependencies come from the base and extras repositories, which are available offline.

On the master and node machines, create the ~/k8s/docker directory to hold the Docker rpm package.

mkdir -p ~/k8s/docker

Copy to the k8s cluster

Copy with scp.

scp docker-ce-18.06.3.ce-3.el7.x86_64.rpm root@192.168.18.135:~/k8s/docker/
scp docker-ce-18.06.3.ce-3.el7.x86_64.rpm root@192.168.18.136:~/k8s/docker/

Of course, instead of copying, you can also follow the steps above and run the download command on the node machines as well.

Install Docker from the local rpm

Local install with yum

yum install k8s/docker/docker-ce-18.06.3.ce-3.el7.x86_64.rpm

Enable Docker at boot

systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

We can check which files the package actually installed.

rpm -ql docker-ce

Or

rpm -qpl k8s/docker/docker-ce-18.06.3.ce-3.el7.x86_64.rpm

Start Docker

systemctl start docker

Check the Docker daemon info.

docker info
...
Cgroup Driver: cgroupfs
...

We will need to change this value (the cgroup driver) a little later.

Install the k8s components

Since kubeadm depends on kubelet and kubectl, downloading just the kubeadm rpm pulls its dependencies automatically. However, the dependency versions may not be the ones we want, so they may need to be downloaded separately. For example, when I download kubeadm-1.15.6, the kubelet it pulls in may be 1.16.x.

Download the k8s components

We need to install kubeadm, kubelet and kubectl, and their versions must match. Download them on the internet-connected machine, just as we did for Docker.

Add the Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF

List the available kubeadm versions

yum list kubeadm --showduplicates
...
kubeadm.x86_64                                                         1.15.6-0     
...     

There are many versions; we pick 1.15.6-0.

Download

yum install --downloadonly --downloaddir ~/k8s/kubernetes kubeadm-1.15.6

The dependency resolution looks like this:

====================================================================================================================================================
 Package                                    Arch                       Version                                 Repository                     Size
====================================================================================================================================================
Installing:
 kubeadm                                    x86_64                     1.15.6-0                                kubernetes                     8.9 M
Installing for dependencies:
 conntrack-tools                            x86_64                     1.4.4-5.el7_7.2                         updates                        187 k
 cri-tools                                  x86_64                     1.13.0-0                                kubernetes                     5.1 M
 kubectl                                    x86_64                     1.16.3-0                                kubernetes                      10 M
 kubelet                                    x86_64                     1.16.3-0                                kubernetes                      22 M
 kubernetes-cni                             x86_64                     0.7.5-0                                 kubernetes                      10 M
 libnetfilter_cthelper                      x86_64                     1.0.0-10.el7_7.1                        updates                         18 k
 libnetfilter_cttimeout                     x86_64                     1.0.0-6.el7_7.1                         updates                         18 k
 libnetfilter_queue                         x86_64                     1.0.2-2.el7_2                           base                            23 k
 socat                                      x86_64                     1.7.3.2-2.el7                           base                           290 k

We only need to copy kubeadm and the four dependencies that come from the kubernetes repository (cri-tools, kubectl, kubelet and kubernetes-cni) to the master and node machines.

Download kubelet-1.15.6

yum install --downloadonly --downloaddir ~/k8s/kubernetes kubelet-1.15.6

Download kubectl-1.15.6

yum install --downloadonly --downloaddir ~/k8s/kubernetes kubectl-1.15.6
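
As an aside, downloading all three components with pinned versions in one command should keep yum from also fetching a newer kubelet/kubectl as dependencies of kubeadm (which would avoid the multilib problem described further below); roughly:

# download kubeadm, kubelet and kubectl together, all pinned to 1.15.6
yum install --downloadonly --downloaddir ~/k8s/kubernetes kubeadm-1.15.6 kubelet-1.15.6 kubectl-1.15.6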

Copy to the k8s cluster

On the master and node machines, create the ~/k8s/kubernetes directory to hold the k8s component rpm packages.

mkdir -p ~/k8s/kubernetes

The copy script is omitted here; if that feels like too much trouble, you can also run the download commands above on the node machines.

Install the k8s components from the local rpms

yum install ~/k8s/kubernetes/*.rpm
--> Finished Dependency Resolution
Error: Multilib version problems found. This often means that the root
      cause is something else and multilib version checking is just
      pointing out that there is a problem. Eg.:

        1. You have an upgrade for kubectl which is missing some
           dependency that another package requires. Yum is trying to
           solve this by installing an older version of kubectl of the
           different architecture. If you exclude the bad architecture
           yum will tell you what the root cause is (which package
           requires what). You can try redoing the upgrade with
           --exclude kubectl.otherarch ... this should give you an error
           message showing the root cause of the problem.

        2. You have multiple architectures of kubectl installed, but
           yum can only see an upgrade for one of those architectures.
           If you don't want/need both architectures anymore then you
           can remove the one with the missing update and everything
           will work.

        3. You have duplicate versions of kubectl installed already.
           You can use "yum check" to get yum show these errors.

      ...you can also use --setopt=protected_multilib=false to remove
      this checking, however this is almost never the correct thing to
      do as something else is very likely to go wrong (often causing
      much more problems).

      Protected multilib versions: kubectl-1.23.5-0.x86_64 != kubectl-1.15.6-0.x86_64
Error: Protected multilib versions: kubelet-1.15.6-0.x86_64 != kubelet-1.23.5-0.x86_64

The install fails. We clearly asked for version 1.15, so where does 1.23 come from?

Let's look inside ~/k8s/kubernetes.

[root@localhost docker]# cd ~/k8s/kubernetes
[root@localhost kubernetes]# ll
total 98512
-rw-r--r--. 1 root root  7401938 Mar 18 06:26 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm
-rw-r--r--. 1 root root  9920490 Jan  4  2021 5181c2b7eee876b8ce205f0eca87db2b3d00ffd46d541882620cb05b738d7a80-kubectl-1.15.6-0.x86_64.rpm
-rw-r--r--. 1 root root  9294306 Jan  4  2021 62cd53776f5e5d531971b8ba4aac5c9524ca95d2bb87e83996cf3f54873211e5-kubeadm-1.15.6-0.x86_64.rpm
-rw-r--r--. 1 root root  9921646 Mar 18 06:33 96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm
-rw-r--r--. 1 root root   191000 Apr  4  2020 conntrack-tools-1.4.4-7.el7.x86_64.rpm
-rw-r--r--. 1 root root 21546750 Mar 18 06:38 d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm
-rw-r--r--. 1 root root 19487362 Jan  4  2021 db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm
-rw-r--r--. 1 root root 22728902 Jan  4  2021 e9e7cc53edd19d0ceb654d1bde95ec79f89d26de91d33af425ffe8464582b36e-kubelet-1.15.6-0.x86_64.rpm
-rw-r--r--. 1 root root    18400 Apr  4  2020 libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
-rw-r--r--. 1 root root    18212 Apr  4  2020 libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
-rw-r--r--. 1 root root    23584 Aug 11  2017 libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
-rw-r--r--. 1 root root   296632 Aug 11  2017 socat-1.7.3.2-2.el7.x86_64.rpm

There are indeed two 1.23 packages. The fix is simply to delete them:

rm -f 96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm
rm -f d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm

After deleting them, run the install command above again.

With that, kubeadm, kubectl and kubelet are installed.

Enable kubelet at boot. We do not need to start kubelet now; even if we tried, it would fail. Running kubeadm later generates the configuration files that allow kubelet to start successfully.

systemctl enable kubelet
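
A quick check that the three components really are on the expected 1.15.6 version:

kubeadm version
kubelet --version
kubectl version --client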

Pull the images

kubeadm needs several images, which we must prepare in advance.

List the required images


[root@localhost kubernetes]# kubeadm config images list
I0410 16:34:41.007521   20037 version.go:248] remote version is much newer: v1.23.5; falling back to: stable-1.15
k8s.gcr.io/kube-apiserver:v1.15.12
k8s.gcr.io/kube-controller-manager:v1.15.12
k8s.gcr.io/kube-scheduler:v1.15.12
k8s.gcr.io/kube-proxy:v1.15.12
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

In production, k8s.gcr.io is certainly unreachable; even on a machine with mainland-China internet access it cannot be reached. So we first download the images from a domestic mirror.

The workaround is simple: search for the images with docker.

[root@localhost kubernetes]# docker search kube-apiserver
NAME                                    DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
aiotceo/kube-apiserver                  k8s.gcr.io/kube-apiserver                       20
mirrorgooglecontainers/kube-apiserver                                                   19
kubesphere/kube-apiserver                                                               7
kubeimage/kube-apiserver-amd64          k8s.gcr.io/kube-apiserver-amd64                 5
empiregeneral/kube-apiserver-amd64      kube-apiserver-amd64                            4                                       [OK]
graytshirt/kube-apiserver               Alpine with the kube-apiserver binary           2
k8simage/kube-apiserver                                                                 1
docker/desktop-kubernetes-apiserver     Mirror of selected tags from k8s.gcr.io/kube…   1
cjk2atmb/kube-apiserver                                                                 0
kope/kube-apiserver-healthcheck                                                         0
forging2012/kube-apiserver                                                              0
ramencloud/kube-apiserver               k8s.gcr.io/kube-apiserver                       0
lbbi/kube-apiserver                     k8s.gcr.io                                      0
v5cn/kube-apiserver                                                                     0
cangyin/kube-apiserver                                                                  0
mesosphere/kube-apiserver-amd64                                                         0
boy530/kube-apiserver                                                                   0
ggangelo/kube-apiserver                                                                 0
opsdockerimage/kube-apiserver                                                           0
mesosphere/kube-apiserver                                                               0
lchdzh/kube-apiserver                   kubernetes原版基礎鏡像,Registry為k8s.gcr.io            0
willdockerhub/kube-apiserver                                                            0
woshitiancai/kube-apiserver                                                             0
k8smx/kube-apiserver                                                                    0
rancher/kube-apiserver                                                                  0

There are plenty of images; generally pick one with more STARS. I chose aiotceo/kube-apiserver.

Pull the following images on all three machines.

docker pull aiotceo/kube-apiserver:v1.15.6
docker pull aiotceo/kube-controller-manager:v1.15.6
docker pull aiotceo/kube-scheduler:v1.15.6
docker pull aiotceo/kube-proxy:v1.15.6
docker pull aiotceo/pause:3.1
docker pull aiotceo/etcd:3.3.10
docker pull aiotceo/coredns:1.3.1

Check the pulled images.

[root@localhost kubernetes]# docker images
REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
aiotceo/kube-proxy                v1.15.6             d756327a2327        2 years ago         82.4MB
aiotceo/kube-apiserver            v1.15.6             9f612b9e9bbf        2 years ago         207MB
aiotceo/kube-controller-manager   v1.15.6             83ab61bd43ad        2 years ago         159MB
aiotceo/kube-scheduler            v1.15.6             502e54938456        2 years ago         81.1MB
aiotceo/coredns                   1.3.1               eb516548c180        3 years ago         40.3MB
aiotceo/etcd                      3.3.10              2c4adeb21b4f        3 years ago         258MB
aiotceo/pause                     3.1                 da86e6ba6ca1        4 years ago         742kB

Retag the images

So that kubeadm can find the images under k8s.gcr.io, the images we just downloaded need to be retagged.

docker images | grep aiotceo | sed 's/aiotceo/k8s.gcr.io/' | awk '{print "docker tag " $3 " " $1 ":" $2}' | sh

Delete the old images; of course, keeping them does not waste much space either.

docker images | grep aiotceo | awk '{print "docker rmi " $1 ":" $2}' | sh

Check the images

REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.15.6             d756327a2327        2 years ago         82.4MB
k8s.gcr.io/kube-apiserver            v1.15.6             9f612b9e9bbf        2 years ago         207MB
k8s.gcr.io/kube-controller-manager   v1.15.6             83ab61bd43ad        2 years ago         159MB
k8s.gcr.io/kube-scheduler            v1.15.6             502e54938456        2 years ago         81.1MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180        3 years ago         40.3MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        3 years ago         258MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        4 years ago         742kB

The images are ready.
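
If the nodes themselves cannot reach Docker Hub at all, a rough sketch for shipping the retagged images over with docker save/load (the tarball name here is just an example):

# on the internet-connected machine: export every k8s.gcr.io image into one tarball
docker save -o k8s-images-1.15.6.tar $(docker images --format '{{.Repository}}:{{.Tag}}' | grep k8s.gcr.io)
# copy it to each node, then load it there
scp k8s-images-1.15.6.tar root@192.168.18.135:~/k8s/
docker load -i ~/k8s/k8s-images-1.15.6.tar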

Deploy the k8s cluster

Initialize the master node

Run kubeadm init on the master node.

If you plan to use the flannel network, you must set --pod-network-cidr=10.244.0.0/16; this address range is fixed by flannel's default configuration.

If not, the flag is unnecessary. Because of network problems, I did not end up using flannel.

kubeadm init --kubernetes-version=v1.15.6 \
    --apiserver-advertise-address=192.168.18.134 \
    --pod-network-cidr=10.244.0.0/16

Fix the WARNING

The init output above contains this line:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

Remember how, when looking at docker info earlier, I mentioned the cgroup driver would need changing? Let's change it now.

Edit or create /etc/docker/daemon.json and add the following:

{
	"exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker

systemctl restart docker

Check the result; once Cgroup Driver shows systemd, the change took effect.

docker info
...
Cgroup Driver: systemd
...

Reset

The first kubeadm init already wrote cluster state, and Docker has since been restarted with a different cgroup driver, so reset the node before initializing again.

kubeadm reset

[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1207 22:12:18.285935   27649 reset.go:98] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://172.16.64.233:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp 172.16.64.233:6443: connect: connection refused
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1207 22:12:19.569005   27649 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
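
As the output says, iptables/IPVS rules and the kubeconfig are not cleaned up automatically; if you want a completely clean slate, something along these lines should do it (run ipvsadm --clear only if IPVS was actually used and ipvsadm is installed):

# flush iptables rules left behind by kube-proxy
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# clear IPVS tables, only if the cluster used IPVS mode
ipvsadm --clear
# remove the old kubeconfig
rm -f $HOME/.kube/config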

Initialize the master node again

The --apiserver-advertise-address and --pod-network-cidr parameters can both be omitted.

kubeadm init --kubernetes-version=v1.15.6 \
--apiserver-advertise-address=192.168.18.134 \
--pod-network-cidr=10.244.0.0/16


[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.18.134]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.18.134 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.18.134 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 33.002499 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 8y4nd8.ww9f2npklyebtjqp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.18.134:6443 --token 8y4nd8.ww9f2npklyebtjqp \
    --discovery-token-ca-cert-hash sha256:c5f01fe144020785cb82b53bcda3b64c2fb8d955af3ca863b8c31d9980c32023

The output is the same as the earlier initialization, just without the WARNING this time.

Following the instructions in the output, run the commands below. Since we are logged in as root, sudo is not actually needed.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
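
A quick check that the kubeconfig works and the control plane responds:

kubectl cluster-info
kubectl get componentstatuses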

Check the node list; the node status is NotReady:

kubectl get no
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   2m22s   v1.15.6

Join the worker nodes to the cluster

On node1, run the command from the init output:

kubeadm join 192.168.18.134:6443 --token 8y4nd8.ww9f2npklyebtjqp \
    --discovery-token-ca-cert-hash sha256:c5f01fe144020785cb82b53bcda3b64c2fb8d955af3ca863b8c31d9980c32023

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

On the master node (control plane), check the node list.

kubectl get no
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   7m    v1.15.6
k8s-node1    NotReady   <none>   65s   v1.15.6
k8s-node2    NotReady   <none>   65s   v1.15.6

We can see the new nodes have joined, although they are all still NotReady for now.

Joining nodes after the token has expired

If you join a node some time later, you may be told that the token has expired. You can obtain a fresh token and the CA hash like this.

kubeadm token create
kubeadm token list
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# Or regenerate the join command on the master, copy the output, and run it on the worker node
kubeadm token create --print-join-command

kubeadm join 192.168.18.134:6443 --token h9g5rn.y07uajj3d9r3v5hh     --discovery-token-ca-cert-hash sha256:cfb734386ee0d27d4864900648c3eaf0e2f84b1e9f98d04b483ad9e702653c9e

Install a network plugin

Install the flannel network plugin (if your network allows it).

Find the installation method

See the flannel project page at https://github.com/coreos/flannel for the installation method.

For Kubernetes v1.7+ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Download the yml file

Download kube-flannel.yml on the internet-connected machine.

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Distribute the downloaded yml file to the three machines in the k8s cluster.

Download the image

cat kube-flannel.yml | grep image
        image: quay.io/coreos/flannel:v0.11.0-amd64
        ...

Remember the pull/retag trick from earlier? If not, scroll back up and review it.

docker pull quay.azk8s.cn/coreos/flannel:v0.11.0-amd64
docker tag ff281650a721 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi quay.azk8s.cn/coreos/flannel:v0.11.0-amd64

Install flannel

We could also choose the Calico network plugin instead.

Run on the master node:

kubectl apply -f kube-flannel.yml

podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

If the network does not cooperate, Weave Net is an alternative:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

"kubeadm config print init-defaults"這個命令可以告訴我們kubeadm.yaml版本信息。

Check the node status

[root@k8s-master ~]# kubectl get no
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   5h46m   v1.15.6
k8s-node1    Ready    <none>   5h41m   v1.15.6
k8s-node2    Ready    <none>   5h38m   v1.15.6

Now all nodes are Ready.

Check the processes

Master node

[root@k8s-master ~]# ps -ef | grep kube
root       1674      1  1 14:17 ?        00:02:55 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1
root       2410   2393  1 14:17 ?        00:02:24 etcd --advertise-client-urls=https://192.168.18.134:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://192.168.18.134:2380 --initial-cluster=k8s-master=https://192.168.18.134:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.18.134:2379 --listen-peer-urls=https://192.168.18.134:2380 --name=k8s-master --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
root       2539   2520  3 14:18 ?        00:04:58 kube-apiserver --advertise-address=192.168.18.134 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root       2822   2802  0 14:18 ?        00:00:05 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8s-master
root       3382   2994  0 14:18 ?        00:00:01 /home/weave/kube-utils -run-reclaim-daemon -node-name=k8s-master -peer-name=da:f9:bb:91:b9:c4 -log-level=debug
root      19885  19841  2 14:55 ?        00:02:25 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --node-cidr-mask-size=24 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --use-service-account-credentials=true
root      19894  19866  0 14:55 ?        00:00:10 kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
root      71218  19968  0 16:55 pts/1    00:00:00 grep --color=auto kube

Worker node

[root@k8s-node1 ~]# ps -ef | grep kube
root       5013      1  1 14:24 ?        00:02:08 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1
root       5225   5206  0 14:24 ?        00:00:07 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8s-node1
root       5765   5517  0 14:24 ?        00:00:01 /home/weave/kube-utils -run-reclaim-daemon -node-name=k8s-node1 -peer-name=a2:4e:07:10:2c:21 -log-level=debug
root      15767   8087  0 16:56 pts/1    00:00:00 grep --color=auto kube

Test the k8s cluster

Deploy an nginx.

Create a Deployment

On the master node (control plane), create a Deployment called nginx-deployment:

kubectl create deploy nginx-deployment --image=nginx
deployment.apps/nginx-deployment created

Check the Deployment status

[root@k8s-master ~]# kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/1     1            1           119m

Check the pod status

[root@k8s-master ~]# kubectl get po
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6f77f65499-tnztr   1/1     Running   0          120m

If STATUS is not Running, the image pull is probably just slow; you can configure a Docker registry mirror.
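
To confirm that a slow image pull really is the cause, the pod's events should show it; for example, for the pod listed above:

# look for "Pulling image" events that never complete
kubectl describe pod nginx-deployment-6f77f65499-tnztr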

Configure a Docker registry mirror

In production there will certainly be an internal registry mirror; here I simulate that by configuring a public mirror instead.

/etc/docker/daemon.json should now contain:

{
	"exec-opts": ["native.cgroupdriver=systemd"],
	"registry-mirrors": ["http://hub-mirror.c.163.com"]
}

Restart Docker

systemctl restart docker

Now the image pulls much more easily.
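
A quick way to confirm the mirror was picked up:

docker info | grep -i -A 1 "registry mirrors"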

Test the pod

Check the Deployment and pod again; the status has changed to READY.

NAME                                READY   STATUS    RESTARTS   AGE    IP          NODE        NOMINATED NODE   READINESS GATES
nginx-deployment-6f77f65499-tnztr   1/1     Running   0          122m   10.46.0.1   k8s-node1   <none>           <none>

We can see the pod IP is 10.46.0.1.

Accessing nginx from any of the three cluster nodes succeeds.

[root@k8s-node1 ~]# curl 10.46.0.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Create a Service

Expose the Deployment.

kubectl expose deploy nginx-deployment --port=80 --type=NodePort
service/nginx-deployment exposed

Check the status

[root@k8s-master ~]# kubectl get svc
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP        5h54m
nginx-deployment   NodePort    10.111.68.248   <none>        80:31923/TCP   122m

Access nginx from the three nodes

[root@k8s-node1 ~]# curl 10.111.68.248
...
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
...

Access nginx from outside the cluster

curl 192.168.18.134:31923

...
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
...

