Building a Highly Available Kubernetes Cluster


Virtual machine choice

  • Win10 Hyper-V

Overall architecture

Three masters and three nodes.

Master components

  • etcd
  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler
  • kubelet
  • kube-proxy
  • docker
  • nginx

Node components

  • kubelet
  • kube-proxy
  • docker
  • nginx

Environment preparation

Run these steps on all nodes.

Use the same /etc/hosts on every host

cat /etc/hosts

127.0.0.1 apiserver.k8s.local
192.168.31.21 master01
192.168.31.22 master02
192.168.31.23 master03
192.168.31.24 node01
192.168.31.25 node02
192.168.31.26 node03

Set the hostname

hostnamectl set-hostname NAME
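
For example, to set all six hostnames from one machine in a single pass (a convenience sketch, assuming root SSH access to every node; otherwise simply run hostnamectl on each host):

for pair in 192.168.31.21:master01 192.168.31.22:master02 192.168.31.23:master03 \
            192.168.31.24:node01   192.168.31.25:node02   192.168.31.26:node03; do
    ip=${pair%%:*}; name=${pair##*:}          # split "IP:hostname"
    ssh root@$ip "hostnamectl set-hostname $name"
done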

Hardware configuration

IP HostName Kernel CPU Memory
192.168.31.21 master01 3.10.0-1062 2 4G
192.168.31.22 master02 3.10.0-1062 2 4G
192.168.31.23 master03 3.10.0-1062 2 4G
192.168.31.24 node01 3.10.0-1062 2 4G
192.168.31.25 node02 3.10.0-1062 2 4G
192.168.31.26 node03 3.10.0-1062 2 4G
  • kubeadm's preflight checks require at least 2 CPUs and about 2 GB of RAM per machine; more is better
  • Do everything as root. Make the root filesystem large: once disk usage passes the kubelet image GC threshold (85% by default), images start getting garbage-collected (see the sketch after this list)
  • For high availability, use an odd number of masters, at least 3; this guide uses 3 masters
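
Image garbage collection is controlled by the kubelet. A minimal tuning sketch for later, assuming a kubeadm-managed node whose kubelet config lives at /var/lib/kubelet/config.yaml and does not yet set these keys (the values and the append-style edit are illustrative, not part of this guide):

# hypothetical example: start image GC earlier so the disk never reaches 85%
cat >> /var/lib/kubelet/config.yaml <<EOF
imageGCHighThresholdPercent: 80
imageGCLowThresholdPercent: 70
EOF
systemctl restart kubelet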

Upgrade the kernel on all machines (optional)

Import the ELRepo yum repository for kernel upgrades

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

List the available versions: kernel-lt is the long-term support branch, kernel-ml is the mainline (latest)

yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

Install kernel-ml

yum --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel -y

Set the boot entry

List all kernels available on the system

awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg

Set the new kernel as the GRUB2 default

grub2-set-default 'CentOS Linux (5.7.7-1.el7.elrepo.x86_64) 7 (Core)'

Regenerate the GRUB config and reboot

grub2-mkconfig -o /boot/grub2/grub.cfg

reboot
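
After the reboot, confirm the new kernel is actually running:

uname -r              # should report the 5.x kernel installed above
grub2-editenv list    # shows the saved default boot entry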

Disable firewalld, swap, and SELinux on all machines

#disable the firewall
systemctl disable --now firewalld

#disable swap
swapoff -a
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

#disable SELinux
setenforce 0
sed -ri '/^[^#]*SELINUX=/s#=.+$#=disabled#' /etc/selinux/config
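
Quick sanity check that all three are off:

systemctl is-enabled firewalld   # disabled
getenforce                       # Permissive now, Disabled after the next reboot
free -m | grep -i swap           # Swap: 0 0 0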

Install base packages on all machines

yum install epel-release -y

yum update -y
yum -y install gcc gcc-c++ make cmake bc autoconf automake libtool libtool-ltdl-devel* ncurses ncurses-devel elfutils-libelf-devel openssl openssl-devel flex* bison* zlib* libxml* libmcrypt* pcre pcre-devel jemalloc-devel tcl vim unzip wget lrzsz bash-comp* ipvsadm ipset jq sysstat conntrack conntrack-tools libseccomp socat curl git psmisc nfs-utils tree net-tools crontabs iftop nload strace bind-utils tcpdump htop telnet lsof

Configure IPVS kernel modules on all machines

# write only the modules that actually exist on this kernel into modules-load.d
:> /etc/modules-load.d/ipvs.conf
module=(
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
nf_conntrack_ipv4
br_netfilter
)
for kernel_module in ${module[@]};do
    /sbin/modinfo -F filename $kernel_module |& grep -qv ERROR && echo $kernel_module >> /etc/modules-load.d/ipvs.conf || :
done

Load the IPVS modules

systemctl daemon-reload
systemctl restart systemd-modules-load.service

Verify that the IPVS modules are loaded

$ lsmod | grep ip_vs
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  11 
ip_vs                 145497  17 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133095  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  3 ip_vs,nf_nat,nf_conntrack

Set the Kubernetes kernel parameters on all machines

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv6.conf.all.disable_ipv6 = 1           #disable IPv6
net.ipv6.conf.default.disable_ipv6 = 1       #disable IPv6
net.ipv6.conf.lo.disable_ipv6 = 1            #disable IPv6
net.ipv4.neigh.default.gc_stale_time = 120   #how long before a neighbor entry is checked for staleness
net.ipv4.conf.all.rp_filter = 0              #disable reverse-path filtering
net.ipv4.conf.default.rp_filter = 0          #disable reverse-path filtering
net.ipv4.conf.default.arp_announce = 2       #always use the best local address for the target IP as the ARP source
net.ipv4.conf.lo.arp_announce = 2            #always use the best local address for the target IP as the ARP source
net.ipv4.conf.all.arp_announce = 2           #always use the best local address for the target IP as the ARP source
net.ipv4.ip_forward = 1                      #enable IP forwarding
net.ipv4.tcp_max_tw_buckets = 5000           #maximum number of TIME_WAIT sockets held at the same time
net.ipv4.tcp_syncookies = 1                  #enable SYN cookies when the SYN queue overflows
net.ipv4.tcp_max_syn_backlog = 1024          #maximum number of queued SYN requests
net.ipv4.tcp_synack_retries = 2              #SYN-ACK retransmission attempts for half-open connections
net.bridge.bridge-nf-call-ip6tables = 1      #pass bridged traffic through ip6tables
net.bridge.bridge-nf-call-iptables = 1       #pass bridged traffic through iptables
net.bridge.bridge-nf-call-arptables = 1      #pass bridged traffic through arptables
net.netfilter.nf_conntrack_max = 2310720     #raise the connection-tracking table size
fs.inotify.max_user_watches=89100            #maximum inotify watches per user
fs.may_detach_mounts = 1                     #allow detaching mounts that are still in use
fs.file-max = 52706963                       #system-wide limit on open file handles
fs.nr_open = 52706963                        #per-process limit on open file descriptors
vm.overcommit_memory=1                       #always allow memory overcommit regardless of current state
vm.panic_on_oom=0                            #do not panic on OOM; let the OOM killer handle it
vm.swappiness = 0                            #avoid swapping
net.ipv4.tcp_keepalive_time = 600            #works around long-connection timeouts in IPVS mode; anything under 900 is fine
net.ipv4.tcp_keepalive_intvl = 30            #interval between keepalive probes when unacknowledged
net.ipv4.tcp_keepalive_probes = 10           #number of keepalive probes before the connection is declared dead
vm.max_map_count=524288                      #maximum number of memory map areas a process may have
EOF

sysctl --system
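
Spot-check a few of the values afterwards (the net.bridge keys only exist once br_netfilter is loaded, which was done above):

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.netfilter.nf_conntrack_max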

Raise the open-file and process limits on all machines

cat>/etc/security/limits.d/kubernetes.conf<<EOF
*       soft    nproc   131072
*       hard    nproc   131072
*       soft    nofile  131072
*       hard    nofile  131072
root    soft    nproc   131072
root    hard    nproc   131072
root    soft    nofile  131072
root    hard    nofile  131072
EOF
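
These limits apply to new login sessions; after reconnecting, verify:

ulimit -n   # expect 131072
ulimit -u   # expect 131072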

Install Docker on all machines

Docker yum repository

wget -P /etc/yum.repos.d/  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Enable kernel user namespaces

grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

#then reboot
reboot

Install Docker

yum install docker-ce -y

Configure Docker

cp /usr/share/bash-completion/completions/docker /etc/bash_completion.d/

mkdir -p /etc/docker/

cat > /etc/docker/daemon.json <<EOF
{
    "log-driver": "json-file",
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-opts": {
    "max-size": "100m",
    "max-file": "3"
    },
    "live-restore": true,
    "max-concurrent-downloads": 10,
    "max-concurrent-uploads": 10,
    "registry-mirrors": ["https://2lefsjdg.mirror.aliyuncs.com"],
    "storage-driver": "overlay2",
    "storage-opts": [
    "overlay2.override_kernel_check=true"
    ]
}
EOF

Start Docker

systemctl enable --now docker
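
Confirm Docker came up with the drivers configured in daemon.json:

docker info --format 'cgroup driver: {{.CgroupDriver}}, storage driver: {{.Driver}}'
# expect: cgroup driver: systemd, storage driver: overlay2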

Deploy with kubeadm

Set up the kubeadm yum repository on all machines

Run on all nodes.

cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

Master node installation

yum install -y \
    kubeadm-1.18.2 \
    kubectl-1.18.2 \
    kubelet-1.18.2 \
    --disableexcludes=kubernetes && \
    systemctl enable kubelet

Node installation

yum install -y \
    kubeadm-1.18.2 \
    kubelet-1.18.2 \
    --disableexcludes=kubernetes && \
    systemctl enable kubelet

Master high availability

Every machine runs a local nginx container that listens on port 8443 and load-balances to the three kube-apiservers; combined with the 127.0.0.1 apiserver.k8s.local entry in /etc/hosts, each node reaches the API server through its own local proxy. Run the following on all machines:

mkdir -p /etc/kubernetes

cat > /etc/kubernetes/nginx.conf << EOF
error_log stderr notice;

worker_processes 2;
worker_rlimit_nofile 130048;
worker_shutdown_timeout 10s;

events {
  multi_accept on;
  use epoll;
  worker_connections 16384;
}

stream {
  upstream kube_apiserver {
    least_conn;
    server master01:6443;
    server master02:6443;
    server master03:6443;
    }

  server {
    listen        8443;
    proxy_pass    kube_apiserver;
    proxy_timeout 10m;
    proxy_connect_timeout 1s;
  }
}

http {
  aio threads;
  aio_write on;
  tcp_nopush on;
  tcp_nodelay on;

  keepalive_timeout 5m;
  keepalive_requests 100;
  reset_timedout_connection on;
  server_tokens off;
  autoindex off;

  server {
    listen 8081;
    location /healthz {
      access_log off;
      return 200;
    }
    location /stub_status {
      stub_status on;
      access_log off;
    }
  }
}
EOF
docker run --restart=always \
    -v /etc/kubernetes/nginx.conf:/etc/nginx/nginx.conf \
    -v /etc/localtime:/etc/localtime:ro \
    --name k8sHA \
    --net host \
    -d \
    nginx
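
Check that the local load balancer is listening and that its health endpoint (defined in nginx.conf above) responds:

docker ps --filter name=k8sHA
ss -lntp | grep -E ':8443|:8081'
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8081/healthz   # expect 200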

kubeadm configuration file

Run on master01. Save the following as /root/initconfig.yaml; the commands below assume that path.

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/k8sxio
kubernetesVersion: v1.18.2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
networking: 
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
controlPlaneEndpoint: apiserver.k8s.local:8443
apiServer:
  timeoutForControlPlane: 4m0s
  extraArgs:
    authorization-mode: "Node,RBAC"
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeClaimResize,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,Priority,PodPreset"
    runtime-config: api/all=true,settings.k8s.io/v1alpha1=true
    storage-backend: etcd3
    etcd-servers: https://192.168.31.21:2379,https://192.168.31.22:2379,https://192.168.31.23:2379 # change to your IPs
  certSANs:
  - 10.96.0.1
  - 127.0.0.1
  - localhost
  - apiserver.k8s.local
  - 192.168.31.21   # change to your IPs
  - 192.168.31.22   # change to your IPs
  - 192.168.31.23   # change to your IPs
  - master01        # change to your hostnames
  - master02        # change to your hostnames
  - master03        # change to your hostnames
  - master
  - kubernetes
  - kubernetes.default 
  - kubernetes.default.svc 
  - kubernetes.default.svc.cluster.local
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    readOnly: true
controllerManager:
  extraArgs:
    bind-address: "0.0.0.0"
    experimental-cluster-signing-duration: 867000h
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    readOnly: true
scheduler: 
  extraArgs:
    bind-address: "0.0.0.0"
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    readOnly: true
dns:
  type: CoreDNS
  imageRepository: registry.aliyuncs.com/k8sxio
  imageTag: 1.6.7
etcd:
  local:
    imageRepository: registry.aliyuncs.com/k8sxio
    imageTag: 3.4.3-0
    dataDir: /var/lib/etcd
    serverCertSANs:
    - master
    - 192.168.31.21   # change to your IPs
    - 192.168.31.22   # change to your IPs
    - 192.168.31.23   # change to your IPs
    - master01        # change to your hostnames
    - master02        # change to your hostnames
    - master03        # change to your hostnames
    peerCertSANs:
    - master
    - 192.168.31.21   # change to your IPs
    - 192.168.31.22   # change to your IPs
    - 192.168.31.23   # change to your IPs
    - master01        # change to your hostnames
    - master02        # change to your hostnames
    - master03        # change to your hostnames
    extraArgs:
      auto-compaction-retention: "1h"
      max-request-bytes: "33554432"
      quota-backend-bytes: "8589934592"
      enable-v2: "false"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: "rr"
  strictARP: false
  syncPeriod: 15s
iptables:
  masqueradeAll: true
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: "systemd"
failSwapOn: true

Validate the file first; warnings can be ignored. Real problems are reported as errors, and if everything is fine the dry-run output ends with kubeadm join commands.

kubeadm init --config /root/initconfig.yaml --dry-run

Pre-pull the images

kubeadm config images pull --config /root/initconfig.yaml

Deploy the first master

Run on master01.

kubeadm init --config /root/initconfig.yaml --upload-certs

...
...
...
You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join apiserver.k8s.local:8443 --token 8lmdqu.cqe8r0rxa0056vmm \
    --discovery-token-ca-cert-hash sha256:5ca87fff6b414a0872ab5452972d7e36e4bad7ab3a0bc385abe0138ce671eabb \
    --control-plane --certificate-key 7a1d432b2834464a82fd7cba0e9e5d8409c492cf9a4ee6328fb4f84b6a78934a

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use 
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.k8s.local:8443 --token 8lmdqu.cqe8r0rxa0056vmm \
    --discovery-token-ca-cert-hash sha256:5ca87fff6b414a0872ab5452972d7e36e4bad7ab3a0bc385abe0138ce671eabb

Copy the kubeconfig for kubectl; its default path is ~/.kube/config

mkdir -p $HOME/.kube

sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

The init configuration is stored in a ConfigMap in the cluster, so it can be inspected at any time; it is also used when the other masters and the nodes join.

kubectl -n kube-system get cm kubeadm-config -o yaml

Set up RBAC for the health-check endpoint

The kube-apiserver /healthz route normally requires authorization; open it up so that monitoring systems or an SLB health check can reach it.

cat > /root/healthz-rbac.yml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: healthz-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: healthz-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: healthz-reader
rules:
- nonResourceURLs: ["/healthz", "/healthz/*"]
  verbs: ["get", "post"]
EOF
kubectl apply -f /root/healthz-rbac.yml
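
Once applied, an anonymous request to /healthz through the local load balancer should return ok:

curl -k https://apiserver.k8s.local:8443/healthz
# ok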

Configure the control-plane components on the other masters

Copy the configuration files from master01 to the other two master nodes

for node in 192.168.31.22 192.168.31.23;do
    ssh $node 'mkdir -p /etc/kubernetes/pki/etcd'
    scp -r /root/initconfig.yaml $node:/root/initconfig.yaml
    scp -r /etc/kubernetes/pki/ca.* $node:/etc/kubernetes/pki/
    scp -r /etc/kubernetes/pki/sa.* $node:/etc/kubernetes/pki/
    scp -r /etc/kubernetes/pki/front-proxy-ca.* $node:/etc/kubernetes/pki/
    scp -r /etc/kubernetes/pki/etcd/ca.* $node:/etc/kubernetes/pki/etcd/
done

Join the other masters

Pull the images first

kubeadm config images pull --config /root/initconfig.yaml

Use the line from master01's init output that contains --control-plane

kubeadm join apiserver.k8s.local:8443 --token 8lmdqu.cqe8r0rxa0056vmm \
    --discovery-token-ca-cert-hash sha256:5ca87fff6b414a0872ab5452972d7e36e4bad7ab3a0bc385abe0138ce671eabb \
    --control-plane --certificate-key 7a1d432b2834464a82fd7cba0e9e5d8409c492cf9a4ee6328fb4f84b6a78934a

Configure kubectl on all masters

Prepare the kubeconfig for kubectl

mkdir -p $HOME/.kube

sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Set up kubectl shell completion

yum -y install bash-comp*

source <(kubectl completion bash)

echo 'source <(kubectl completion bash)' >> ~/.bashrc

Configure etcdctl on the masters

On every master node, copy etcdctl out of the running etcd container first

docker cp `docker ps -a | awk '/k8s_etcd/{print $1}'|head -n1`:/usr/local/bin/etcdctl /usr/local/bin/etcdctl

Create a simple alias; remember to replace the IPs with your own

cat >/etc/profile.d/etcd.sh<<'EOF'
ETCD_CERT_DIR=/etc/kubernetes/pki/etcd/
ETCD_CA_FILE=ca.crt
ETCD_KEY_FILE=healthcheck-client.key
ETCD_CERT_FILE=healthcheck-client.crt
ETCD_EP=https://192.168.31.21:2379,https://192.168.31.22:2379,https://192.168.31.23:2379

alias etcd_v3="ETCDCTL_API=3 \
    etcdctl   \
   --cert ${ETCD_CERT_DIR}/${ETCD_CERT_FILE} \
   --key ${ETCD_CERT_DIR}/${ETCD_KEY_FILE} \
   --cacert ${ETCD_CERT_DIR}/${ETCD_CA_FILE} \
   --endpoints $ETCD_EP"
EOF
source  /etc/profile.d/etcd.sh
etcd_v3 endpoint status --write-out=table

+-----------------------------+------------------+---------+---------+-----------+-----------+------------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+-----------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://192.168.31.21:2379 | c724c500884441af |  3.4.3  |  1.6 MB |      true |         7 |       1865 |
| https://192.168.31.22:2379 | 3dcceec24ad5c5d4 |  3.4.3  |  1.6 MB |     false |         7 |       1865 |
| https://192.168.31.23:2379 | bc21062efb4a5d4c |  3.4.3  |  1.5 MB |     false |         7 |       1865 |
+-----------------------------+------------------+---------+---------+-----------+-----------+------------+
etcd_v3 endpoint health --write-out=table

+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.31.23:2379 |   true | 19.288026ms |       |
| https://192.168.31.22:2379 |   true |   19.2603ms |       |
| https://192.168.31.21:2379 |   true | 22.490443ms |       |
+-----------------------------+--------+-------------+-------+

Deploy the nodes

Run on the node machines.
Same as joining a master: prepare the environment and Docker in advance, then join without the --control-plane flag.

kubeadm join apiserver.k8s.local:8443 --token 8lmdqu.cqe8r0rxa0056vmm \
    --discovery-token-ca-cert-hash sha256:5ca87fff6b414a0872ab5452972d7e36e4bad7ab3a0bc385abe0138ce671eabb

Label the nodes

The ROLES column is just a label; you can set node-role.kubernetes.io/<whatever you want displayed> on each node.

[root@master01 ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
master01   NotReady   master   17m   v1.18.2
master02   NotReady   master   14m   v1.18.2
master03   NotReady   master   13m   v1.18.2
node01     NotReady   <none>   24s   v1.18.2
node02     NotReady   <none>   18s   v1.18.2
node03     NotReady   <none>   11s   v1.18.2
[root@master01 ~]# kubectl label node node01 node-role.kubernetes.io/node=""
node/node01 labeled
[root@master01 ~]# kubectl label node node02 node-role.kubernetes.io/node=""
node/node02 labeled
[root@master01 ~]# kubectl label node node03 node-role.kubernetes.io/node=""
node/node03 labeled

[root@master01 ~]# kubectl get nodes 
NAME       STATUS     ROLES    AGE     VERSION
master01   NotReady   master   25m     v1.18.2
master02   NotReady   master   22m     v1.18.2
master03   NotReady   master   21m     v1.18.2
node01     NotReady   node     8m      v1.18.2
node02     NotReady   node     7m54s   v1.18.2
node03     NotReady   node     7m47s   v1.18.2

Deploy the Calico network plugin

Without a network plugin, all nodes stay NotReady.
Run on master01.

wget https://docs.projectcalico.org/v3.11/manifests/calico.yaml
sed -i -e "s?192.168.0.0/16?10.244.0.0/16?g" calico.yaml
kubectl apply -f calico.yaml
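
Calico takes a little while to pull images and start; watch until every calico pod is Running:

kubectl -n kube-system get pods -o wide -w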

Testing

Verify cluster availability

The basic 3-master / 3-node cluster is now up; it should contain:

  • 3 kube-apiserver
  • 3 kube-controller-manager
  • 3 kube-scheduler
  • 3 etcd
  • 6 kube-proxy
  • 6 calico-node
  • 1 calico-kube-controllers
  • 2 coredns

kubectl get pods --all-namespaces

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-648f4868b8-6pcqf   1/1     Running   0          2m10s
kube-system   calico-node-d4hqw                          1/1     Running   0          2m10s
kube-system   calico-node-glmcl                          1/1     Running   0          2m10s
kube-system   calico-node-qm8zz                          1/1     Running   0          2m10s
kube-system   calico-node-s64r9                          1/1     Running   0          2m10s
kube-system   calico-node-shxhv                          1/1     Running   0          2m10s
kube-system   calico-node-zx7nw                          1/1     Running   0          2m10s
kube-system   coredns-7b8f8b6cf6-kh22h                   1/1     Running   0          14m
kube-system   coredns-7b8f8b6cf6-vp9x6                   1/1     Running   0          14m
kube-system   etcd-master01                              1/1     Running   0          35m
kube-system   etcd-master02                              1/1     Running   0          33m
kube-system   etcd-master03                              1/1     Running   0          32m
kube-system   kube-apiserver-master01                    1/1     Running   0          35m
kube-system   kube-apiserver-master02                    1/1     Running   0          33m
kube-system   kube-apiserver-master03                    1/1     Running   0          31m
kube-system   kube-controller-manager-master01           1/1     Running   1          34m
kube-system   kube-controller-manager-master02           1/1     Running   0          33m
kube-system   kube-controller-manager-master03           1/1     Running   0          31m
kube-system   kube-proxy-2zbx4                           1/1     Running   0          32m
kube-system   kube-proxy-bbvqk                           1/1     Running   0          19m
kube-system   kube-proxy-j8899                           1/1     Running   0          33m
kube-system   kube-proxy-khrw5                           1/1     Running   0          19m
kube-system   kube-proxy-srpz9                           1/1     Running   0          19m
kube-system   kube-proxy-tz24q                           1/1     Running   0          36m
kube-system   kube-scheduler-master01                    1/1     Running   1          35m
kube-system   kube-scheduler-master02                    1/1     Running   0          33m
kube-system   kube-scheduler-master03                    1/1     Running   0          31m
 

Restart docker and kubelet

kubeadm defaults to the cgroupfs cgroup driver, but upstream recommends systemd. Check every node, switch it to systemd, and then restart docker and kubelet.

vim /var/lib/kubelet/kubeadm-flags.env

KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sxio/pause:3.2"
vim /etc/docker/daemon.json

{
    "log-driver": "json-file",
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-opts": {
    "max-size": "100m",
    "max-file": "3"
    },
    "live-restore": true,
    "max-concurrent-downloads": 10,
    "max-concurrent-uploads": 10,
    "registry-mirrors": ["https://2lefsjdg.mirror.aliyuncs.com"],
    "storage-driver": "overlay2",
    "storage-opts": [
    "overlay2.override_kernel_check=true"
    ]
}

On all nodes, restart docker first, then kubelet

systemctl restart docker
systemctl restart kubelet
[root@master01 ~]# kubectl get  nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   37m   v1.18.2
master02   Ready    master   34m   v1.18.2
master03   Ready    master   33m   v1.18.2
node01     Ready    node     19m   v1.18.2
node02     Ready    node     19m   v1.18.2
node03     Ready    node     19m   v1.18.2
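
To double-check that kubelet picked up the systemd driver on a node:

grep cgroup-driver /var/lib/kubelet/kubeadm-flags.env   # expect --cgroup-driver=systemd
journalctl -u kubelet --no-pager | tail -n 20           # should show no cgroup driver mismatch errors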

Demo test

cat<<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
[root@master01 ~]# kubectl get all  -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod/busybox                  1/1     Running   0          73s   10.244.186.194   node03   <none>           <none>
pod/nginx-5c559d5697-24zck   1/1     Running   0          73s   10.244.186.193   node03   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   42m   <none>
service/nginx        ClusterIP   10.111.219.3   <none>        80/TCP    73s   app=nginx

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
deployment.apps/nginx   1/1     1            1           73s   nginx        nginx:alpine   app=nginx

NAME                               DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES         SELECTOR
replicaset.apps/nginx-5c559d5697   1         1         1       73s   nginx        nginx:alpine   app=nginx,pod-template-hash=5c559d5697

Verify cluster DNS

[root@master01 ~]# kubectl exec -ti busybox -- nslookup kubernetes
Server:   10.96.0.10
Address:  10.96.0.10#53

Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1

Test nginx connectivity

On a master, curl the nginx pod IP; getting the nginx index page back means the cluster network is working.

[root@master01 ~]# curl 10.244.186.193
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

On a master, curl the nginx Service IP; getting the nginx index page back means Service routing is working.

[root@master01 ~]# curl 10.111.219.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master01 ~]# kubectl exec -ti busybox -- nslookup nginx
Server:   10.96.0.10
Address:  10.96.0.10#53

Name: nginx.default.svc.cluster.local
Address: 10.111.219.3

IPVS verification

[root@node01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.33.101:6443          Masq    1      1          0         
  -> 192.168.33.102:6443          Masq    1      0          0         
  -> 192.168.33.103:6443          Masq    1      1          0         
TCP  10.96.0.10:53 rr
  -> 10.244.140.65:53             Masq    1      0          0         
  -> 10.244.140.67:53             Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.140.65:9153           Masq    1      0          0         
  -> 10.244.140.67:9153           Masq    1      0          0         
TCP  10.111.219.3:80 rr
  -> 10.244.186.193:80            Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.140.65:53             Masq    1      0          0         
  -> 10.244.140.67:53             Masq    1      0          0

Deploy the Dashboard

Download the YAML files

https://github.com/w3liu/k8s/tree/main/dashboard

Apply them

kubectl apply -f admin-user.yaml
kubectl apply -f admin-user-role-binding.yaml
kubectl apply -f dashboard-deployment.yaml

Access via the API Server

If the Kubernetes API server is public and reachable from outside, we can access the Dashboard directly through the API Server, which is also the recommended way.
The Dashboard URL is:

https://192.168.31.21:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#/login

However, the response may look like this:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:anonymous\" cannot get services/proxy in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "https:kubernetes-dashboard:",
    "kind": "services"
  },
  "code": 403
}

This is because recent Kubernetes versions enable RBAC by default and give unauthenticated users a default identity: anonymous.
The API Server authenticates clients with certificates, so we first need to create one:
1. Locate the kubectl config file, /etc/kubernetes/admin.conf by default; earlier we already copied it to $HOME/.kube/config.
2. Use client-certificate-data and client-key-data from it to generate a p12 file with the following commands:

# extract the client certificate
grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt

# extract the client key
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key

# generate the p12 bundle
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

3. Finally, import the generated p12 file into your browser, then close and reopen the browser.
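
Logging in to the Dashboard also needs a bearer token. A sketch, assuming the admin-user ServiceAccount from admin-user.yaml was created in the kube-system namespace (adjust the namespace and name if yours differ):

# print the token of the admin-user ServiceAccount's secret
kubectl -n kube-system describe secret \
    $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')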

Deploy Metrics-Server

Download the YAML file

https://github.com/w3liu/k8s/tree/main/metrics-server

Apply it

kubectl apply -f components.yaml
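
After a minute or two the metrics API should start serving data:

kubectl top nodes
kubectl top pods --all-namespaces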

