K8s Offline Cluster Deployment (Binary Packages, No Dashboard)


https://www.cnblogs.com/cocowool/p/install_k8s_offline.html

https://www.jianshu.com/p/073577bdec98

https://blog.csdn.net/crabdave/article/details/84880771

Environment for this article: Red Hat Linux 7.3, installed using the minimal-install option.

Kubernetes version: v1.11

Docker version: 18.03.1-ce

etcd version: v3.2.11

1. Preparation and Planning

1.1 Node Plan

Hostname     Address         Role
devops-101   192.168.0.101   k8s master
devops-102   192.168.0.102   k8s node

1.2 Network

1.3 Installation Files

Installing Kubernetes requires the following binaries:

  • etcd
  • docker
  • Kubernetes
    • kubelet
    • kube-proxy
    • kube-apiserver
    • kube-controller-manager
    • kube-scheduler

You can download prebuilt binaries, or download the source and compile them yourself; source builds are not covered here, as this article only discusses the binary installation. The Kubernetes GitHub Latest page shows the most recently packaged release; you can also find the version you need on the Tags page. I downloaded v1.11.

Note that the Latest page may not actually show the newest release: when I checked, it displayed v1.9.9 even though the latest version was v1.11. In that case, switch to the Tags page and look it up there.

The binaries needed on the servers are not included in the downloaded tar package itself. Extract the tar package and run cluster/get-kube-binaries.sh, which downloads from storage.googleapis.com; for well-known reasons that host may be unreachable, so you may need to obtain the files through other channels. Once the download completes, extract kubernetes-server-linux-amd64.tar.gz.
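A sketch of that flow, assuming the release tarball was saved to /root as kubernetes.tar.gz (the paths are assumptions; adjust them to wherever you downloaded the file):

$ cd /root
$ tar zxf kubernetes.tar.gz                  # release tarball from the GitHub page
$ ./kubernetes/cluster/get-kube-binaries.sh  # fetches the server tarball; needs storage.googleapis.com
$ tar zxf kubernetes/server/kubernetes-server-linux-amd64.tar.gz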

After extraction you can see the file list:

[root@devops-101 bin]# pwd
/root/kubernetes/server/bin
[root@devops-101 bin]# ls -lh
total 1.8G
-rwxr-xr-x. 1 root root 57M Jun 28 04:55 apiextensions-apiserver
-rwxr-xr-x. 1 root root 132M Jun 28 04:55 cloud-controller-manager
-rw-r--r--. 1 root root 8 Jun 28 04:55 cloud-controller-manager.docker_tag
-rw-r--r--. 1 root root 134M Jun 28 04:55 cloud-controller-manager.tar
-rwxr-xr-x. 1 root root 218M Jun 28 04:55 hyperkube
-rwxr-xr-x. 1 root root 56M Jun 28 04:55 kube-aggregator
-rw-r--r--. 1 root root 8 Jun 28 04:55 kube-aggregator.docker_tag
-rw-r--r--. 1 root root 57M Jun 28 04:55 kube-aggregator.tar
-rwxr-xr-x. 1 root root 177M Jun 28 04:55 kube-apiserver
-rw-r--r--. 1 root root 8 Jun 28 04:55 kube-apiserver.docker_tag
-rw-r--r--. 1 root root 179M Jun 28 04:55 kube-apiserver.tar
-rwxr-xr-x. 1 root root 147M Jun 28 04:55 kube-controller-manager
-rw-r--r--. 1 root root 8 Jun 28 04:55 kube-controller-manager.docker_tag
-rw-r--r--. 1 root root 149M Jun 28 04:55 kube-controller-manager.tar
-rwxr-xr-x. 1 root root 50M Jun 28 04:55 kube-proxy
-rw-r--r--. 1 root root 8 Jun 28 04:55 kube-proxy.docker_tag
-rw-r--r--. 1 root root 96M Jun 28 04:55 kube-proxy.tar
-rwxr-xr-x. 1 root root 54M Jun 28 04:55 kube-scheduler
-rw-r--r--. 1 root root 8 Jun 28 04:55 kube-scheduler.docker_tag
-rw-r--r--. 1 root root 55M Jun 28 04:55 kube-scheduler.tar
-rwxr-xr-x. 1 root root 55M Jun 28 04:55 kubeadm
-rwxr-xr-x. 1 root root 53M Jun 28 04:56 kubectl
-rwxr-xr-x. 1 root root 156M Jun 28 04:55 kubelet
-rwxr-xr-x. 1 root root 2.3M Jun 28 04:55 mounter

1.4 System Configuration

  • Configure hosts (a sample /etc/hosts follows the firewall commands below)
  • Disable the firewall

$ systemctl stop firewalld
$ systemctl disable firewalld
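For the hosts entries, here is a minimal sketch based on the node plan in section 1.1; apply it on every node:

$ cat >> /etc/hosts <<EOF
192.168.0.101 devops-101
192.168.0.102 devops-102
EOF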

  • Disable SELinux

$ vim /etc/selinux/config

Change SELINUX=enforcing to SELINUX=disabled, then save and quit with :wq.

  • Disable swap

$ swapoff -a
$ vim /etc/fstab
# Comment out the swap entry so it is not mounted automatically:
#/dev/mapper/centos-swap swap swap defaults 0 0
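To confirm swap is really off, check free; the Swap line should show all zeros:

$ free -h | grep -i swap
Swap:            0B          0B          0B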

2. Installing the Node

We need to install the following on the Node machine:

  • Docker
  • kubelet
  • kube-proxy

2.1 Docker

The Docker version needs to correspond to the kubelet version; ideally use the latest of both. On Red Hat, Docker must be installed from the static binary; see my earlier article for the procedure.

2.2 Copy kubelet and kube-proxy

Copy the binaries from the kubernetes folder extracted earlier:

$ cp /root/kubernetes/server/bin/kubelet /usr/bin/
$ cp /root/kubernetes/server/bin/kube-proxy /usr/bin/

2.3 Install the kube-proxy Service

$ vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create the configuration directory and add the config files:

$ mkdir -p /etc/kubernetes
$ vim /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""
$ vim /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.0.101:8080"

Start the service:

[root@devops-102 ~]# systemctl daemon-reload
[root@devops-102 ~]# systemctl start kube-proxy.service
[root@devops-102 ~]# netstat -lntp | grep kube-proxy
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 10522/kube-proxy
tcp6 0 0 :::10256 :::* LISTEN 10522/kube-proxy
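As an optional sanity check, the health endpoint on port 10256 (visible in the netstat output above) should answer HTTP requests once the proxy is up:

$ curl http://127.0.0.1:10256/healthz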

2.4 Install the kubelet Service

$ vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
$ mkdir -p /var/lib/kubelet
$ vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=192.168.0.102"
KUBELET_API_SERVER="--api-servers=http://192.168.0.101:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=reg.docker.tb/harbor/pod-infrastructure:latest"
KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true --fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubeconfig"

Create the config file with vim /var/lib/kubelet/kubeconfig:

apiVersion: v1
kind: Config
users:
- name: kubelet
clusters:
- name: kubernetes
  cluster:
    server: http://192.168.0.101:8080
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: service-account-context
current-context: service-account-context

Start kubelet and verify that it came up:

$ swapoff -a
$ systemctl daemon-reload
$ systemctl start kubelet.service
$ netstat -tnlp | grep kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 10630/kubelet
tcp 0 0 127.0.0.1:37865 0.0.0.0:* LISTEN 10630/kubelet
tcp6 0 0 :::10250 :::* LISTEN 10630/kubelet
tcp6 0 0 :::10255 :::* LISTEN 10630/kubelet
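The kubelet likewise serves a health endpoint, on port 10248 (shown above); it should reply with ok:

$ curl http://127.0.0.1:10248/healthz
ok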

3. Installing the Master

3.1 Install etcd

This article uses the binary installation method; download the package first.

Then extract it, copy the binaries into place, and edit the etcd.service unit and the etcd.conf config file:

$ tar zxf etcd-v3.2.11-linux-amd64.tar.gz
$ cd etcd-v3.2.11-linux-amd64
$ cp etcd etcdctl /usr/bin/
$ vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd.service

[Service]
Type=notify
TimeoutStartSec=0
Restart=always
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
$ mkdir -p /var/lib/etcd && mkdir -p /etc/etcd/
$ vim /etc/etcd/etcd.conf
ETCD_NAME="ETCD Server"
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.101:2379"

Start etcd:

$ systemctl daemon-reload
$ systemctl start etcd.service

Check that etcd is healthy:

$ etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://192.168.0.101:2379
cluster is healthy
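Optionally, run a quick read/write smoke test (etcdctl defaults to the v2 API in this version, which these commands use):

$ etcdctl set /test/hello world
world
$ etcdctl get /test/hello
world
$ etcdctl rm /test/hello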

3.2 Install kube-apiserver

Create the unit file:

$ vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_LOG \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create the config file:

$ vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.0.101:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/24"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""

Start the service:

$ systemctl daemon-reload
$ systemctl start kube-apiserver.service

Check whether it started successfully:

$ netstat -tnlp | grep kube
tcp6 0 0 :::6443 :::* LISTEN 10144/kube-apiserve
tcp6 0 0 :::8080 :::* LISTEN 10144/kube-apiserve
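You can also probe the insecure port directly; it should return a JSON blob describing the version you installed:

$ curl http://127.0.0.1:8080/version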

3.3 Install kube-controller-manager

Create the unit file:

$ vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create the config file:

$ vim /etc/kubernetes/controller-manager
KUBE_MASTER="--master=http://192.168.0.101:8080"
KUBE_CONTROLLER_MANAGER_ARGS=" "

Start the service:

$ systemctl daemon-reload
$ systemctl start kube-controller-manager.service

Verify the service status:

$ netstat -lntp | grep kube-controll
tcp6 0 0 :::10252 :::* LISTEN 10163/kube-controll

3.4 Install kube-scheduler

Create the unit file:

$ vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Edit the configuration:

$ vim /etc/kubernetes/scheduler
KUBE_MASTER="--master=http://192.168.0.101:8080"
KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/home/log/kubernetes --v=2"

Start the service:

$ systemctl daemon-reload
$ systemctl start kube-scheduler.service

Verify the service status:

$ netstat -lntp | grep kube-schedule
tcp6 0 0 :::10251 :::* LISTEN 10179/kube-schedule

3.5 Configure the Profile

$ sed -i '$a export PATH=$PATH:/root/kubernetes/server/bin/' /etc/profile
$ source /etc/profile

3.6 Install kubectl and Check Component Status

$ cp /root/kubernetes/server/bin/kubectl /usr/bin/
$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
etcd-0               Healthy   {"health":"true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok

At this point the Master node is fully configured.

4. Configure the flannel Network

Flannel gives every Docker container in the cluster a unique internal IP and lets the docker0 bridges on different nodes reach one another. Download address
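This article does not walk through the flannel setup itself, so here is only a minimal sketch of the usual etcd-backed bootstrap. The /coreos.com/network prefix is flannel's default; the 172.17.0.0/16 CIDR and running flanneld in the background are assumptions for illustration:

# Write the overlay network config into etcd (run once)
$ etcdctl --endpoints=http://192.168.0.101:2379 set /coreos.com/network/config '{"Network": "172.17.0.0/16"}'

# Start flanneld on every node, pointing it at the same etcd
$ flanneld --etcd-endpoints=http://192.168.0.101:2379 &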

5. Cluster Verification

Run the following on devops-101 and check the nodes; if you can see the node, the cluster is now OK:

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
devops-102 Ready <none> 12s v1.11.0
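To go a step further, try scheduling a workload with kubectl run. This is a sketch; it assumes your nodes can reach a registry that holds the nginx image, which in an offline environment means a local registry (such as the reg.docker.tb harbor used earlier) or a pre-loaded image:

$ kubectl run nginx --image=nginx --replicas=1
$ kubectl get pods -o wide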

6. Register the Binary Docker Package with systemd

First copy all the files from the extracted docker directory into /usr/bin, then create the service file.


Create the service file and add the following content:

vi /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
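After saving the unit file, reload systemd, then start and enable Docker:

$ systemctl daemon-reload
$ systemctl start docker.service
$ systemctl enable docker.service
$ docker info   # should print daemon details if Docker came up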

