k8s: Building a Kubernetes Cluster on Alibaba Cloud Servers with kubeadm


The material summarized here comes from the Kubernetes (k8s) tutorial series by the Bilibili uploader "尚硅谷": https://www.bilibili.com/video/BV1w4411y7Go


I am still new to k8s myself, so I start with a simple cluster of one master and two workers. A few servers like this obviously cannot show the real power of k8s, but it is a good way to get started; once I am more comfortable I will try again with more servers.

Purchasing the Alibaba Cloud servers

I buy three Alibaba Cloud servers here, all on pay-as-you-go billing, which means that after stopping them following the documented procedure they incur no charges (stopping and restarting them does not affect the cluster you have already built).

Purchase link:

https://ecs-buy.aliyun.com/wizard?spm=5176.ecssimplebuy.header.1.15fd36751sf2fA#/prepay/cn-shanghai

If you have bought Alibaba Cloud servers before, you can also click "Create Instance" in the console to reach the same page.


Choosing the servers

  • Choose pay-as-you-go billing
  • Choose a region close to you
  • I picked the 2 vCPU / 2 GB burstable instance t6 (ecs.t6-c1m1.large) (trying it out first; not sure yet whether this spec is enough)
  • Quantity: 3
  • OS image: 64-bit CentOS 8.0 (the Docker install requires CentOS 7.0 or later)
  • Disk: 40 GB

Altogether this comes to ¥0.413 per hour.


Network and security group

  • Keep the default network
  • For bandwidth billing choose "pay by traffic"; the peak bandwidth can be anything (e.g. 40 Mbps), since you are only charged for what you actually use
  • Leave everything else at the defaults

System configuration

  • For login credentials choose "custom password" and give root the same password on all machines, which is easier to manage
  • Leave everything else at the defaults

Grouping configuration

  • Defaults are fine

Confirm the configuration and create the order


Initial server setup (required on all three machines)

For easier management, rename the instances to k8s-master01-225 / k8s-node01-228 / k8s-node02-229 (225/228/229 are the last octet of each private IP; you can define your own naming rule).

Connect to the three servers with Xshell.


Verify that the three servers can ping each other over the private network; from here on we connect over the private network rather than the public one, because public traffic costs money.
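
As a quick check, from the master you can ping the two workers over their private IPs (the addresses below are the ones used in this walkthrough):

ping -c 3 172.19.188.228
ping -c 3 172.19.188.229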

Set the hostnames

# on k8s-master01-225
hostnamectl set-hostname k8s-master01-225
# on k8s-node01-228
hostnamectl set-hostname k8s-node01-228
# on k8s-node02-229
hostnamectl set-hostname k8s-node02-229

Configure the /etc/hosts file

A real cluster should use its own DNS server to map hostnames to IPs; to keep things simple here we just associate IPs and hostnames in the hosts file. Add the same three lines to /etc/hosts on all three servers (a one-liner to append them follows the entries below).

172.19.199.225 k8s-master01-225
172.19.188.228 k8s-node01-228
172.19.188.229 k8s-node02-229
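
A quick way to append these identical entries on each machine (assuming the IPs above match your own instances) is:

cat >> /etc/hosts <<EOF
172.19.199.225 k8s-master01-225
172.19.188.228 k8s-node01-228
172.19.188.229 k8s-node02-229
EOF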

Xshell has a powerful feature: you can type a command once and send it to several terminals at the same time. Right-click in one terminal and choose "Send keystrokes to all sessions", so you don't have to run the same command on each server one by one. Just remember to turn this off when a command should only run on one particular server.


Install dependency packages

yum install -y conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

Disable the firewall

systemctl stop firewalld && systemctl disable firewalld

Install iptables and flush its rules

yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

Disable the swap partition

If swap is left enabled, pod containers may end up running in swap (virtual memory), which hurts performance.

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Disable SELinux

setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Tune kernel parameters for k8s

Create the configuration file

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1 # let bridged traffic pass through iptables
net.bridge.bridge-nf-call-ip6tables=1 # same for ip6tables
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0 # (removed in newer kernels; drop this line if sysctl reports it missing)
vm.swappiness=0 # do not use swap space unless the system is out of memory
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1 # disable IPv6
net.netfilter.nf_conntrack_max=2310720
EOF

Apply the configuration

cp kubernetes.conf  /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf

Adjust the system time zone (skip this if your time zone is already correct)

# Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond

Stop services the system does not need (if present)

systemctl stop postfix && systemctl disable postfix

Set up the logging system

Use systemd-journald for logging instead of rsyslogd.

Create the log directories

mkdir /var/log/journal # directory for persistent logs
mkdir /etc/systemd/journald.conf.d

Write the configuration file

cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent

# Compress historical logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# Maximum disk usage: 10G
SystemMaxUse=10G

# Maximum size of a single log file: 200M
SystemMaxFileSize=200M

# Keep logs for 2 weeks
MaxRetentionSec=2week

# Do not forward logs to syslog
ForwardToSyslog=no
EOF

Restart the logging service

systemctl restart systemd-journald

Prerequisites for enabling IPVS in kube-proxy

# Load the br_netfilter module
modprobe br_netfilter

# Write the module-loading script
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Make it executable, load the modules and verify
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Install Docker

# Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2

# Configure the Aliyun mirror repo
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install containerd.io
dnf install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm

# Install Docker
yum update -y && yum install -y docker-ce

# Check the Docker version (verify the install succeeded)
docker --version

# Create the /etc/docker directory
mkdir /etc/docker

# Configure daemon.json
# For "registry-mirrors", use your own accelerator address: in the Aliyun console open
# "Container Registry", then the "Image Accelerator" sidebar, to find it
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://xxxxx"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF

# Create the directory
mkdir -p /etc/systemd/system/docker.service.d

# Reload systemd, restart Docker and enable it at boot
systemctl daemon-reload && systemctl restart docker && systemctl enable docker

Install kubeadm (master/worker setup)

Install kubeadm (on all three servers)

# Configure the Aliyun Kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg # note: both URLs go on one line, separated by a space
EOF

# Install kubelet, kubeadm and kubectl, and enable kubelet
yum install -y kubelet kubeadm kubectl
systemctl enable --now kubelet

Download the required images (on all three servers)

Normally you could go straight to kubeadm init; during init the required component images are pulled from k8s.gcr.io. That registry is blocked in mainland China and cannot be reached directly, so we have to download these images another way ahead of time. The convenient approach here is to pull them from an alternative registry.

# List the images kubeadm needs
kubeadm config images list
# Output: these are the required k8s components, but because of the firewall they cannot be pulled with docker pull directly
k8s.gcr.io/kube-apiserver:v1.18.6
k8s.gcr.io/kube-controller-manager:v1.18.6
k8s.gcr.io/kube-scheduler:v1.18.6
k8s.gcr.io/kube-proxy:v1.18.6
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
# Pulling them directly fails with a timeout
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.18.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

After some searching I found that the second method in the following blog post worked for me, so I borrow it here:

https://blog.csdn.net/weixin_43168190/article/details/107227626

The idea is to pull the images from the gotok8s repository first, then re-tag them with the names kubeadm expects. The author's script below automates the whole process.
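
For a single image, the manual equivalent of the script would be (using kube-apiserver as an example):

docker pull gotok8s/kube-apiserver:v1.18.6
docker tag gotok8s/kube-apiserver:v1.18.6 k8s.gcr.io/kube-apiserver:v1.18.6
docker rmi gotok8s/kube-apiserver:v1.18.6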

# Create the pull script
vim pull_k8s_images.sh

# Contents:
#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail

## Versions to download
KUBE_VERSION=v1.18.6
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.3-0
DNS_VERSION=1.6.7

## The original (blocked) registry
GCR_URL=k8s.gcr.io

## The registry to pull from instead; gotok8s can be left as-is
DOCKERHUB_URL=gotok8s

## Image list
images=(
kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${DNS_VERSION}
)

## Pull-and-retag loop: pull each image, tag it with the name kubeadm expects, then delete the downloaded one
for imageName in ${images[@]} ; do
  docker pull $DOCKERHUB_URL/$imageName
  docker tag $DOCKERHUB_URL/$imageName $GCR_URL/$imageName
  docker rmi $DOCKERHUB_URL/$imageName
done

# Make it executable
chmod +x ./pull_k8s_images.sh

# Run the script
./pull_k8s_images.sh

# Check the result
[root@k8s-master01-225 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.18.6             c3d62d6fe412        2 weeks ago         117MB
k8s.gcr.io/kube-controller-manager   v1.18.6             ffce5e64d915        2 weeks ago         162MB
k8s.gcr.io/kube-apiserver            v1.18.6             56acd67ea15a        2 weeks ago         173MB
k8s.gcr.io/kube-scheduler            v1.18.6             0e0972b2b5d1        2 weeks ago         95.3MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        5 months ago        683kB
k8s.gcr.io/coredns                   1.6.7               67da37a9a360        6 months ago        43.8MB
gotok8s/kube-controller-manager      v1.17.0             5eb3b7486872        7 months ago        161MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        9 months ago        288MB

Initialize the master node (only the master server needs this step)

Generate the init configuration file

kubeadm config print init-defaults > kubeadm-config.yaml

Edit the init configuration file

# Edit the file
vim kubeadm-config.yaml

# The items to change are marked below
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.19.199.225 # 1. Change the IP address; the private IP is fine
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01-225
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.6  # 2. Change the version to match the one used above; you can also check it with kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16" # 3. Add the pod subnet; keep this fixed CIDR
  serviceSubnet: 10.96.0.0/12
scheduler: {}
# 4. Append the settings below as-is
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

Run the init command

kubeadm init --config=kubeadm-config.yaml | tee kubeadm-init.log

# Expected output on success
....
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.19.199.225:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:873f80617875dc39a23eced3464c7069689236d460b60692586e7898bf8a254a

If init fails

Troubleshoot based on the error messages; most of the time the cause is a mistake in kubeadm-config.yaml, such as a mismatched version number, an unchanged IP address, stray whitespace, and so on.

After fixing the config, running init again directly may still fail with errors such as ports already in use or files already existing:

[root@k8s-node01-228 ~]# kubeadm init --config=kubeadm-config.yaml | tee kubeadm-init.log
W0801 18:35:22.768809   44882 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-tc]: tc not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-10259]: Port 10259 is in use
	[ERROR Port-10257]: Port 10257 is in use
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

This usually happens because a previous init partially succeeded and was not rolled back after the error. Run kubeadm reset first to return to the pre-init state.

[root@k8s-node01-228 ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0801 18:57:02.630170   52554 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://172.19.188.226:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: context deadline exceeded
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0801 18:57:07.534409   52554 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

After the reset, run the init command above again, repeating until init succeeds.

After init succeeds

Look at the final output, or at the log file kubeadm-init.log, which tells you to perform the following steps:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Check the current node; its status is NotReady:

[root@k8s-master01-225 ~]# kubectl get node
NAME               STATUS     ROLES    AGE   VERSION
k8s-master01-225   NotReady   master   40m   v1.18.6

Deploy the flannel network (on the master server)

First, tidy up the working directory:

# Create a folder to organize the installation files
[root@k8s-master01-225 ~]# mkdir -p install-k8s/core

# Move the main files into it
[root@k8s-master01-225 ~]# mv kubeadm-init.log kubeadm-config.yaml install-k8s/core

# Create a flannel folder
[root@k8s-master01-225 ~]# cd install-k8s
[root@k8s-master01-225 install-k8s]# mkdir plugin
[root@k8s-master01-225 install-k8s]# cd plugin/
[root@k8s-master01-225 plugin]# mkdir flannel
[root@k8s-master01-225 plugin]# cd flannel/

# Download the kube-flannel.yml file
[root@k8s-master01-225 flannel]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Output of the download command
--2020-08-01 19:23:44--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.108.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14366 (14K) [text/plain]
Saving to: ‘kube-flannel.yml’
kube-flannel.yml              100%[================================================>]  14.03K  --.-KB/s    in 0.05s   
2020-08-01 19:23:44 (286 KB/s) - ‘kube-flannel.yml’ saved [14366/14366]

# Create the flannel resources
[root@k8s-master01-225 flannel]# kubectl create -f kube-flannel.yml
# Output of the create command
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

# List the pods; the flannel component is now running. By default, system components are installed in the kube-system namespace
[root@k8s-master01-225 flannel]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-tlqdw                   1/1     Running   0          18m
coredns-66bff467f8-zpg4q                   1/1     Running   0          18m
etcd-k8s-master01-225                      1/1     Running   0          18m
kube-apiserver-k8s-master01-225            1/1     Running   0          18m
kube-controller-manager-k8s-master01-225   1/1     Running   0          18m
kube-flannel-ds-amd64-5hpff                1/1     Running   0          32s
kube-proxy-xh6wh                           1/1     Running   0          18m
kube-scheduler-k8s-master01-225            1/1     Running   0          18m

# Check the node again; the status has changed to Ready
[root@k8s-master01-225 flannel]# kubectl get node
NAME               STATUS   ROLES    AGE   VERSION
k8s-master01-225   Ready    master   19m   v1.18.6

Join the worker nodes to the master (run on the worker servers)

The init output on the master (also saved in kubeadm-init.log) contains the join command for worker nodes; run it on both worker servers:

kubeadm join 172.19.199.225:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:23816230102e09bf09766f14896828f7b377d0b3aa44e619342cbdf47ccd37b5

After a short wait, the node joins successfully:

W0801 19:27:06.500319   12557 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

On the master, the worker nodes now show as Ready:

[root@k8s-master01-225 flannel]# kubectl get node
NAME               STATUS   ROLES    AGE   VERSION
k8s-master01-225   Ready    master   20m   v1.18.6
k8s-node01-228     Ready    <none>   34s   v1.18.6
k8s-node02-229     Ready    <none>   29s   v1.18.6

However, running kubectl get node on a worker server fails, as shown below:

(root@k8s-node02-229:~)# kubectl get node
The connection to the server localhost:8080 was refused - did you specify the right host or port?

After some searching, it turns out you just need to follow the steps hinted at in the successful install log:

# Create the .kube directory on each worker node
(root@k8s-node02-229:~)# mkdir -p $HOME/.kube
# On the master, copy admin.conf to each worker node
scp /etc/kubernetes/admin.conf root@k8s-node01-228:$HOME/.kube/config
scp /etc/kubernetes/admin.conf root@k8s-node02-229:$HOME/.kube/config
# Fix ownership
(root@k8s-node02-229:~)# chown $(id -u):$(id -g) $HOME/.kube/config
# Run the test again; the error is gone
(root@k8s-node02-229:~)# kubectl get node
NAME               STATUS   ROLES    AGE   VERSION
k8s-master01-225   Ready    master   37h   v1.18.6
k8s-node01-228     Ready    <none>   36h   v1.18.6
k8s-node02-229     Ready    <none>   36h   v1.18.6

Fixing pods whose IPs cannot be pinged

With the cluster installed, start a pod:

# Start a pod named nginx-offi, running a container from the official Nginx image
(root@k8s-master01-225:~)# kubectl run nginx-offi --image=nginx
pod/nginx-offi created
# Check the pod: its status is "Running", its IP is "10.244.1.7", and it is running on node "k8s-node01-228"
(root@k8s-master01-225:~)# kubectl get pod -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES
nginx-offi                  1/1     Running   0          55s   10.244.1.7   k8s-node01-228   <none>           <none>

But accessing this pod from the master k8s-master01-225 or from the other worker k8s-node02-229 fails, and pinging its IP 10.244.1.7 does not work either, even though flannel has already been installed.
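
The failing check looks roughly like this when run from the master or from k8s-node02-229 (the pod IP is the one shown above):

ping -c 3 10.244.1.7   # no replies
curl 10.244.1.7        # hangs until it times out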

After some searching, this turned out to be a problem with the iptables rules. We flushed them during the initial server setup, but at some point (possibly the flannel installation, possibly another step) extra rules appeared again:

# Inspect iptables
(root@k8s-master01-225:~)# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0           

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
KUBE-FORWARD  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
DOCKER-USER  all  --  0.0.0.0/0            0.0.0.0/0           
DOCKER-ISOLATION-STAGE-1  all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  0.0.0.0/0            0.0.0.0/0           
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  0.0.0.0/0            0.0.0.0/0           
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP       all  -- !127.0.0.0/8          127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination         

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
# Warning: iptables-legacy tables present, use iptables-legacy to see them

We need to flush the iptables rules again:

iptables -F &&  iptables -X &&  iptables -F -t nat &&  iptables -X -t nat

Inspect iptables again:

(root@k8s-master01-225:~)# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
KUBE-FORWARD  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
# Warning: iptables-legacy tables present, use iptables-legacy to see them

Ping or access the pod again; this time it works:

(root@k8s-master01-225:~)# curl 10.244.1.7
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Installing the private registry Harbor

Harbor is an enterprise-grade registry server for storing and distributing Docker images; it can be used to build an in-house Docker image registry.

Harbor extends the plain Docker registry with enterprise features, which has made it widely adopted. The added features include:

  • A management web UI
  • Role-based access control
  • AD/LDAP integration
  • Audit logs, and more

Compared with the vanilla Docker registry it is much easier to manage containers at enterprise scale, and when deployed on the internal network the transfer speed is very high.

Prerequisites

  • Python 2.7 or later

  • Docker Engine 1.10 or later

  • Docker Compose 1.6.0 or later
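
You can quickly verify the first two with the commands below (Docker Compose is installed in the next step); which python binary exists depends on your system:

python2 --version   # or: python3 --version
docker --version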

Install Docker Compose

Official install guide: https://docs.docker.com/compose/install/

Download the latest release to /usr/local/bin/docker-compose:

sudo curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Make it executable

sudo chmod +x /usr/local/bin/docker-compose

Create a symlink

sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

Verify the installation

# docker-compose --version
docker-compose version 1.26.2, build eefe0d31

Download Harbor

Official download page: https://github.com/vmware/harbor/releases

  • Pick the latest release: v1.10.4

  • Download the 600+ MB offline installer (easier for the later install): harbor-offline-installer-v1.10.4.tgz

wget https://github.com/goharbor/harbor/releases/download/v1.10.4/harbor-offline-installer-v1.10.4.tgz

Extract it to a directory of your choice; here it goes under /usr/local:

tar xvf harbor-offline-installer-v1.10.4.tgz -C /usr/local/
# Rename the directory and create a symlink (recommended; a common pattern that makes future upgrade management easier)
cd /usr/local/
(root@Aliyun-Alex:/usr/local)# mv harbor/ harbor-v1.10.4
(root@Aliyun-Alex:/usr/local)# ln -s /usr/local/harbor-v1.10.4/ /usr/local/harbor
(root@Aliyun-Alex:/usr/local)# cd harbor
(root@Aliyun-Alex:/usr/local/harbor)# ls
common.sh  harbor.v1.10.4.tar.gz  harbor.yml  install.sh  LICENSE  prepare

Edit the install configuration file harbor.yml

# vim harbor.yml
# 1. Change the hostname; it can be an IP or a domain name, and is used to reach the management UI and the registry service
# Here I just use the domain alex.gcx.com and add a hosts entry on my local Windows 10 machine mapping alex.gcx.com to the Aliyun public IP
# The hosts file simply acts as DNS: when you enter the domain in the browser, it resolves to the mapped IP address
hostname: alex.gcx.com

# 2. Harbor can be accessed over either http or https; older versions defaulted to http, current versions default to https
# http: as the upstream comment below says, if https is enabled, requests to the http port are redirected to the https port
# Change the default port 80 to 8002 (your choice); port 80 is usually left to Nginx. You can first check whether the port is in use: netstat -anp | grep 8002
# http related config                                                           
http:                                                                           
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 8002                                                                    
# https: if you do not want https, comment out the settings below. I tried both; the annoying part of https is that you need to create a certificate
# Once the certificate is created, configure it below; the steps to create an https certificate are described further down
# https related config                                                          
# https:                                                                        
 # https port for harbor, default is 443                                       
 # port: 443                                                                   
 # The path of cert and key files for nginx                                    
 # certificate: /data/cert/server.crt                                          
 # private_key: /data/cert/server.key

# 3. (Optional) the login password for the admin user of the Harbor management UI
harbor_admin_password: your_password

# 4. (Optional) change the data volume directory and the log directory
data_volume: /data/harbor
location: /data/harbor/logs

Create an https certificate (optional)

Create the key: use the openssl tool to generate an RSA private key.

(root@Aliyun-Alex:~)# openssl genrsa -des3 -out server.key 2048
# Enter a passphrase of your choice twice
Generating RSA private key, 2048 bit long modulus (2 primes)
...........+++++
...........................+++++
e is 65537 (0x010001)
Enter pass phrase for server.key:
Verifying - Enter pass phrase for server.key:
(root@Aliyun-Alex:~)# ls
server.key

Generate a CSR (certificate signing request). The information entered here can be arbitrary, since we are only making a dummy certificate; for a real certificate you would send the CSR to a certificate authority (CA), which verifies the requester's identity and issues a signed certificate, for a fee.

(root@Aliyun-Alex:~)# openssl req -new -key server.key -out server.csr
Enter pass phrase for server.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:SH
Locality Name (eg, city) [Default City]:SH
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:alex.gcx.com
Email Address []:111@163.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
(root@Aliyun-Alex:~)# ls
3000  dump.rdb  server.csr  server.key

Remove the passphrase from the key; if you don't, the application will prompt for the passphrase whenever it loads the key, which is inconvenient for automated deployment.

# Back up the key first
(root@Aliyun-Alex:~)# cp server.key server.key.back
# Strip the passphrase
(root@Aliyun-Alex:~)# openssl rsa -in server.key -out server.key
Enter pass phrase for server.key:
writing RSA key

Generate the self-signed certificate

(root@Aliyun-Alex:~)# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Signature ok
subject=C = CN, ST = SH, L = SH, O = Default Company Ltd, CN = alex, emailAddress = 111@163.com
Getting Private key

Generate a PEM-format certificate (optional). Some services can only load certificates in PEM format; convert with the following command:

openssl x509 -in server.crt -out server.pem -outform PEM

Create the certificate directory

# Create the directory
(root@Aliyun-Alex:~)# mkdir -p /data/cert

# Move the certificate files into the certificate directory
(root@Aliyun-Alex:~)# mv server.* /data/cert/
(root@Aliyun-Alex:~)# cd /data/cert/
(root@Aliyun-Alex:/data/cert)# ls
server.crt  server.csr  server.key  server.key.back

# Set permissions
chmod -R 777 /data/cert

Point the certificate paths in harbor.yml at these files

# vim /usr/local/harbor-v1.10.4/harbor.yml
certificate: /data/cert/server.crt
private_key: /data/cert/server.key

Run the script to install Harbor

(root@Aliyun-Alex:~)# sh /usr/local/harbor/install.sh
[Step 0]: checking if docker is installed ...

Note: docker version: 19.03.12

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 1.26.2

[Step 2]: loading Harbor images ...
...
[Step 5]: starting Harbor ...
Creating network "harbor-v1104_harbor" with the default driver
Creating harbor-log ... done
Creating registry      ... done
Creating harbor-portal ... done
Creating redis         ... done
Creating registryctl   ... done
Creating harbor-db     ... done
Creating harbor-core   ... done
Creating nginx             ... done
Creating harbor-jobservice ... done
✔ ----Harbor has been installed and started successfully.----

Open the Harbor management page in a browser

http://alex.gcx.com:8002
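
To confirm from a shell that the service is up before opening a browser (assuming the hosts entry for alex.gcx.com described above is in place), a quick check is:

curl -I http://alex.gcx.com:8002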


Log in to Harbor from the terminal

(root@Aliyun-Alex:/usr/local/harbor)# docker login alex.gcx.com
Username: admin
Password: 
Error response from daemon: Get https://alex.gcx.com/v2/: x509: certificate signed by unknown authority

The login fails. As above, the request is redirected to the https address, which requires certificate verification; since our certificate is self-signed, the Docker client considers it untrusted and reports an error. We therefore need to adjust the Docker configuration file /etc/docker/daemon.json:

vim /etc/docker/daemon.json
# Add the following entry (the double quotes are required, even if the blog rendering does not show them)
# This tells the Docker client that this registry may be accessed even though its certificate is untrusted
"insecure-registries": ["https://alex.gcx.com"]

# Restart Docker
systemctl restart docker

# Log in again; this time it succeeds
(root@Aliyun-Alex:/usr/local)# docker login alex.gcx.com
Username: admin      
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
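
For reference, after this change a complete /etc/docker/daemon.json combining the earlier settings might look like the following (the mirror URL is still a placeholder for your own accelerator address):

{
  "registry-mirrors": ["https://xxxxx"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://alex.gcx.com"]
}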

Changes needed on other servers that access Harbor

# 1. Add a hosts entry
echo "172.19.67.12 alex.gcx.com" >> /etc/hosts

# 2. Add this entry to /etc/docker/daemon.json
"insecure-registries": ["https://alex.gcx.com"]
# 3. Restart Docker
systemctl restart docker

Operations: stopping and starting Harbor

If you want to change the Harbor configuration, for example to enable the https protocol here, the steps are:

# Enter the harbor directory
(root@Aliyun-Alex:~)# cd /usr/local/harbor
(root@Aliyun-Alex:/usr/local/harbor)# ls
common  common.sh  docker-compose.yml  harbor.v1.10.4.tar.gz  harbor.yml  install.sh  LICENSE  prepare

# Stop the Harbor services (docker-compose)
(root@Aliyun-Alex:/usr/local/harbor)# docker-compose down -v
Stopping harbor-jobservice ... done
Stopping nginx             ... done
Stopping harbor-core       ... done
Stopping harbor-portal     ... done
Stopping harbor-db         ... done
Stopping redis             ... done
Stopping registryctl       ... done
Stopping registry          ... done
Stopping harbor-log        ... done
Removing harbor-jobservice ... done
Removing nginx             ... done
Removing harbor-core       ... done
Removing harbor-portal     ... done
Removing harbor-db         ... done
Removing redis             ... done
Removing registryctl       ... done
Removing registry          ... done
Removing harbor-log        ... done
Removing network harbor-v1104_harbor

# Edit harbor.yml and enable the https settings
(root@Aliyun-Alex:/usr/local/harbor)# vim harbor.yml
# https related config                                                          
https:
  # https port for harbor, default is 443                                       
  port: 443                                                                     
  # The path of cert and key files for nginx                                    
  certificate: /data/cert/server.crt                                            
  private_key: /data/cert/server.key
  
# Run the pre-start preparation
(root@Aliyun-Alex:/usr/local/harbor)# ./prepare
prepare base dir is set to /usr/local/harbor-v1.10.4
Clearing the configuration file: /config/log/logrotate.conf
Clearing the configuration file: /config/log/rsyslog_docker.conf
Clearing the configuration file: /config/nginx/nginx.conf
Clearing the configuration file: /config/core/env
Clearing the configuration file: /config/core/app.conf
Clearing the configuration file: /config/registry/config.yml
Clearing the configuration file: /config/registry/root.crt
Clearing the configuration file: /config/registryctl/env
Clearing the configuration file: /config/registryctl/config.yml
Clearing the configuration file: /config/db/env
Clearing the configuration file: /config/jobservice/env
Clearing the configuration file: /config/jobservice/config.yml
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
loaded secret from file: /secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

# Bring the services back up with docker-compose
(root@Aliyun-Alex:/usr/local/harbor)# docker-compose up -d
Creating network "harbor-v1104_harbor" with the default driver
Creating harbor-log ... done
Creating redis         ... done
Creating registry      ... done
Creating harbor-db     ... done
Creating registryctl   ... done
Creating harbor-portal ... done
Creating harbor-core   ... done
Creating harbor-jobservice ... done
Creating nginx             ... done

Visit the http URL again in the browser: http://alex.gcx.com:8002; it now redirects to the https URL.


