Offline Deployment of Kubernetes and Harbor on the arm64 Platform



Preface

With the rise of the domestic-technology (localization) wave, some enterprises and departments have started adopting domestically produced servers and operating systems. This article walks through an offline deployment of Kubernetes on a domestic server (Great Wall) running a domestic operating system (Kylin). The related offline packages, configuration files, and installation guides are provided throughout the article.


The components to be installed are listed below:

Package                      Version
docker-ce                    19.03.14
kubeadm, kubelet, kubectl    v1.19.7
flannel                      v0.14.0
traefik                      2.1.2
kube-dashboard               v2.0.5
harbor                       v1.9.1

Environment overview


Server model


(Screenshot: server model information)


Operating system information


(Screenshot: operating system information)


Deployment plan


Hostname      IP               Purpose
k8s-master    192.168.1.161    Kubernetes master node
k8s-node1     192.168.1.162    Kubernetes worker node 1
k8s-node2     192.168.1.163    Kubernetes worker node 2
harbor        192.168.1.164    Private image registry for the cluster

Deployment workflow


Kubernetes cluster deployment steps


  • Install docker-ce
  • Install kubeadm, kubectl, and kubelet
  • Initialize the Kubernetes cluster
  • Join the node machines to the cluster
  • Install and deploy Flannel
  • Install and deploy Traefik
  • Install and deploy kube-dashboard

Harbor image registry deployment steps


  • Install docker-ce
  • Install docker-compose
  • Edit the harbor.yml configuration file and initialize the Harbor registry


Package versions


Package                      Version
docker-ce                    19.03.14
kubeadm, kubelet, kubectl    v1.19.7
flannel                      v0.14.0
traefik                      2.1.2
kube-dashboard               v2.0.5


Offline installation package


The offline installation package has been verified in both test and production environments; nothing is missing.


The offline installation package contains the following:

  1. The installer packages for each component
  2. The image files required by each component
  3. A detailed installation guide for every component

Creating, testing, and collecting all of this was not easy. If you need the package, please consider sponsoring the cost of one month of Baidu Netdisk membership at the end of the article, and I will share it with you by private message.



Kubernetes installation and deployment


System initialization


  1. Disable SELinux and the firewall (firewalld)
[root@k8s-master(192.168.1.161) ~]#systemctl disable firewalld
[root@k8s-master(192.168.1.161) ~]#sed -i 's@SELINUX=enforcing@SELINUX=disabled@g' /etc/selinux/config
[root@k8s-master(192.168.1.161) ~]#reboot

  2. Synchronize time with NTP
[root@k8s-master(192.168.1.161) ~]#ntpdate tiger.sina.com.cn
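ntpdate performs only a one-shot synchronization. If the machines need to stay in sync over time, a periodic job can be scheduled; a minimal sketch (assuming no chronyd/ntpd service already manages the clock and that tiger.sina.com.cn stays reachable):

# Hypothetical hourly re-sync via root's crontab
echo '0 * * * * /usr/sbin/ntpdate tiger.sina.com.cn >/dev/null 2>&1' >> /var/spool/cron/root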


  3. Configure host name resolution (/etc/hosts)
[root@k8s-master(192.168.1.161) ~]#cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.161	k8s-master
192.168.1.162	k8s-node1
192.168.1.163	k8s-node2
192.168.1.164	harbor

[root@k8s-master(192.168.1.161) ~]#for i in {2..4}; do scp /etc/hosts 192.168.1.16${i}:/etc/;done


  4. Raise the open-file limit and enable IP forwarding
[root@k8s-master(192.168.1.161) ~]#ulimit -SHn 655350
[root@k8s-master(192.168.1.161) ~]#cat << 'EOF' >> /etc/rc.local
> ulimit -SHn 655350
> EOF
[root@k8s-master(192.168.1.161) ~]#chmod +x /etc/rc.local
[root@k8s-master(192.168.1.161) ~]#vim /etc/sysctl.d/99-sysctl.conf
...
net.ipv4.ip_forward=1
...
[root@k8s-master(192.168.1.161) ~]#sysctl --system
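On some images kubeadm's preflight checks also require bridged traffic to be visible to iptables. If the preflight later complains about bridge-nf-call settings, the following additional configuration (an assumption — it may already be present on your Kylin image) can be applied:

# Assumed extra step: load br_netfilter and expose bridged traffic to iptables
modprobe br_netfilter
cat << 'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system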



  5. If the host has Internet access, the default yum repositories can be used directly. If not, download the packages on a machine that does have Internet access and build a local repo yourself (a sketch follows below); you can also contact me by private message to get one.
[root@k8s-master(192.168.1.161) ~]#yum repolist
repo id                                                              repo name
docker-ce                                                            docker-ce-stable
ks10-adv-os                                                          ks10-adv-os
kubernetes                                                           kubernetes
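For reference, a local repo can be built from pre-downloaded rpm files roughly as follows (a sketch only; the paths and repo id are illustrative, and it assumes the createrepo tool is available on the offline host):

# Build a local yum repository from pre-downloaded rpm files (illustrative paths)
mkdir -p /opt/local-repo
cp /path/to/downloaded-rpms/*.rpm /opt/local-repo/      # hypothetical source directory
createrepo /opt/local-repo

cat << 'EOF' > /etc/yum.repos.d/local.repo
[local]
name=local offline repo
baseurl=file:///opt/local-repo
enabled=1
gpgcheck=0
EOF

yum clean all && yum repolist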

  6. Prepare the installation packages
[root@k8s-master(192.168.1.161) ~]#ls
anaconda-ks.cfg  initial-setup-ks.cfg  kubernetes-arm64.zip

[root@k8s-master(192.168.1.161) ~]#unzip kubernetes-arm64.zip
Archive:  kubernetes-arm64.zip
   creating: kubernetes-arm64/
  inflating: kubernetes-arm64/README.txt
 extracting: kubernetes-arm64/docker-ce-19.03-arm64.tar.gz
 extracting: kubernetes-arm64/docker-compose-arm64-1.22.tar.gz
 extracting: kubernetes-arm64/flannel-v0.14.0-arm64.tar.gz
 extracting: kubernetes-arm64/harbor-v1.9.1-arm64.tar.gz
 extracting: kubernetes-arm64/k8s-rpm-images-arm64.tar.gz
 extracting: kubernetes-arm64/kube-dashboard-v2.0.5-arm64.tar.gz
 extracting: kubernetes-arm64/traefik-2.1.2-arm64.tar.gz
 
[root@k8s-master(192.168.1.161) ~]#cd kubernetes-arm64

[root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#ls
docker-ce-19.03-arm64.tar.gz      flannel-v0.14.0-arm64.tar.gz  k8s-rpm-images-arm64.tar.gz         README.txt
docker-compose-arm64-1.22.tar.gz  harbor-v1.9.1-arm64.tar.gz    kube-dashboard-v2.0.5-arm64.tar.gz  traefik-2.1.2-arm64.tar.gz

[root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#cat README.txt
# Kubernetes installation order

1. docker-ce-19.03-arm64.tar.gz
2. k8s-rpm-images-arm64.tar.gz
3. flannel-v0.14.0-arm64.tar.gz
4. traefik-2.1.2-arm64.tar.gz
5. kube-dashboard-v2.0.5-arm64.tar.gz


# Harbor installation order

1. docker-ce-19.03-arm64.tar.gz
2. docker-compose-arm64-1.22.tar.gz
3. harbor-v1.9.1-arm64.tar.gz


Note:
Each tar.gz contains one component. After extraction you will find a README.md inside; just follow it step by step.


k8s-master node installation


Install docker-ce

[root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#tar xf docker-ce-19.03-arm64.tar.gz
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#cd docker-ce-19.03-arm64
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/docker-ce-19.03-arm64]#cat README.md
## How to install docker:
yum localinstall *.rpm -y

## Copy the docker directory to /etc/
cp -a docker /etc/

## Start docker
systemctl enable docker ; systemctl start docker ; docker info

################ Follow the README above ################
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/docker-ce-19.03-arm64]#yum localinstall *.rpm -y
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/docker-ce-19.03-arm64]#cp -a docker /etc/
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/docker-ce-19.03-arm64]#systemctl enable docker ; systemctl start docker ; docker info

Install the Kubernetes packages


[root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#tar xf k8s-rpm-images-arm64.tar.gz
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#cd k8s-rpm-images/v1.19.7/
---------------README---------------
## Disable swap
swapoff -a
sed -i '/swap/ s$^\(.*\)$#\1$g' /etc/fstab



## Install the Kubernetes packages
yum localinstall *.rpm -y
kubeadm completion bash > /etc/bash_completion.d/kubeadm
kubectl completion bash > /etc/bash_completion.d/kubectl

Disconnect and reconnect the shell session (so that bash completion takes effect)

## Generate and edit the configuration file
kubeadm config print init-defaults > init-default.yaml

### The init-default.yaml configuration file looks like this:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 2400h0m0s   ## the token TTL can be extended a little
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.166  ## change this to your own host's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   ## changed to the Aliyun mirror registry
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16  ## Pod network CIDR
scheduler: {}

## Import the images
Note: k8s v1.19.7 uses the v1.19.0 images here
docker load < k8s-v1.19.0-arm64.tar.gz


## Initialize the Kubernetes cluster
kubeadm init --config ./init-default.yaml

...
Your Kubernetes control-plane has initialized successfully!
...
The message above indicates the initialization succeeded.

Then run:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Record the following command; it will be run on the node machines later:
kubeadm join 192.168.1.166:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cd20a4bce0edb28588ae12b8ed36057ed535bfc4fa38c5d24d16e9614fb8a6ab

## Verify
kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   4m59s   v1.19.7

## Next, install flannel
See the README.md inside the flannel archive
--------------------------------------

Every component archive ships with its own installation guide (README.md); just follow it.

[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#swapoff -a
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#sed -i '/swap/ s$^\(.*\)$#\1$g' /etc/fstab
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#exit
// reconnect the session
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#kubeadm config print init-defaults > init-default.yaml

### Edit the init file; pay close attention to the fields below ###
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#vim init-default.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 240000h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.161    ## IP of the k8s-master host
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   ## image registry address
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16		## Pod network CIDR
scheduler: {}

Import the image files:

[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#docker load < k8s-v1.19.0-arm64.tar.gz

Initialize the cluster:

[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#kubeadm init --config ./init-default.yaml
...
Your Kubernetes control-plane has initialized successfully!
...

[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#  mkdir -p $HOME/.kube
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/k8s-rpm-images/v1.19.7]#  sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master(192.168.1.161) ~]#kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   42s   v1.19.7


Install the flannel network plugin


[root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#tar xf flannel-v0.14.0-arm64.tar.gz
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#cd flannel-v0.14.0-arm64/
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/flannel-v0.14.0-arm64]#cat README.md
## Import the flannel images
Note: the node machines must import them as well
docker load < flannel-v0.14.0-arm64-image.tar.gz

## Apply the YAML manifest
kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

## Check the node status
kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   62m   v1.19.7

If STATUS shows Ready, flannel has been installed successfully.

[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/flannel-v0.14.0-arm64]#docker load < flannel-v0.14.0-arm64-image.tar.gz
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/flannel-v0.14.0-arm64]#kubectl apply -f kube-flannel.yml
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/flannel-v0.14.0-arm64]#kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-bj55k             0/1     Pending   0          2m7s
coredns-6d56c8448f-pqrs2             0/1     Pending   0          2m7s
etcd-k8s-master                      1/1     Running   0          2m15s
kube-apiserver-k8s-master            1/1     Running   0          2m15s
kube-controller-manager-k8s-master   1/1     Running   0          2m15s
kube-flannel-ds-9dsjt                1/1     Running   0          12s    ### the pod to watch
kube-proxy-npf66                     1/1     Running   0          2m7s
kube-scheduler-k8s-master            1/1     Running   0          2m15s

[root@k8s-master(192.168.1.161) ~]#kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   2m50s   v1.19.7

### The node has gone from NotReady to Ready; the cluster is up.
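If a node stays NotReady, the flannel DaemonSet is usually the first thing to inspect. A quick troubleshooting sketch (assuming the DaemonSet name kube-flannel-ds shown in the apply output above):

# Inspect the flannel DaemonSet and a recent pod log if the node does not become Ready
kubectl -n kube-system get ds kube-flannel-ds
kubectl -n kube-system logs daemonset/kube-flannel-ds --tail=50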

Install the Traefik gateway


[root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#tar xf traefik-2.1.2-arm64.tar.gz
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#cd traefik/
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#cat README.md
## Import the image
Note: every node in the cluster must import it
docker load < traefik-arm64-2.1.2.tar.gz


## Apply the YAML manifests in the following order:
traefik-crd.yaml
traefik-rbac.yaml
traefik-configmap.yaml
traefik-deployment.yaml
traefik-dashboard-route.yaml


## Check traefik
kubectl get pod -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP           NODE
traefik-ingress-controller-649b45b97-5r7w7   1/1     Running   0          31s   10.244.0.7   master

## On Windows, edit C:\Windows\System32\drivers\etc\hosts and add the following entry:
192.168.1.166		www.test.com

Note: 192.168.1.166 is the IP of the master node on which the pod above is running.

## Open www.test.com in a browser
If the Traefik page appears, the configuration is working.

Follow the README.md:

[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#docker load < traefik-arm64-2.1.2.tar.gz
##### Warning messages printed during the apply can be ignored #####
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#kubectl apply -f  traefik-crd.yaml
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#kubectl apply -f  traefik-rbac.yaml
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#kubectl apply -f  traefik-configmap.yaml
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#kubectl apply -f  traefik-deployment.yaml
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#kubectl apply -f  traefik-dashboard-route.yaml

[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/traefik]#kubectl get pod
NAME                                         READY   STATUS    RESTARTS   AGE
traefik-ingress-controller-649b45b97-qzdr7   1/1     Running   0          18s

# started successfully

Add the hosts entry on Windows, then visit http://www.test.com/ and check whether the Traefik console opens.

(Screenshot: opening the Windows hosts file)


Add the following entry:

(Screenshot: hosts entry added)

Save and exit, then visit http://www.test.com in a browser.

(Screenshot: Traefik dashboard)


If the Traefik dashboard opens, Traefik has been installed successfully.
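If no Windows machine is available, the route can also be verified from any Linux host with curl by supplying the Host header directly (a sketch; it assumes the dashboard route created above matches the host www.test.com and that the dashboard is served under /dashboard/):

# Verify the Traefik dashboard route without editing a hosts file
curl -I -H 'Host: www.test.com' http://192.168.1.161/dashboard/
# An HTTP 200 (or a redirect) response indicates the route is working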


Install the kube-dashboard UI


[root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#tar xf kube-dashboard-v2.0.5-arm64.tar.gz
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64]#cd kube-dashboard-v2.0.5-arm64/
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#cat README.md
## Create the kubernetes-dashboard namespace
kubectl create ns kubernetes-dashboard
namespace/kubernetes-dashboard created

## Create the kubernetes-dashboard-certs secret
Note: without this secret, the kubernetes-dashboard pod below will fail to start

mkdir -p /tmp/kube-dashboard-key/ && cd /tmp/kube-dashboard-key/
openssl genrsa -out dashboard.key 2048
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.1.131'  ## use your own host's IP address here
openssl x509 -days 3650 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
cd -
kubectl create secret generic kubernetes-dashboard-certs --from-file=/tmp/kube-dashboard-key/dashboard.key --from-file=/tmp/kube-dashboard-key/dashboard.crt -n kubernetes-dashboard




## Import the dashboard image files
Note: every node in the cluster must import them
docker load < dashboard-v2.0.5-arm64-images.tar.gz

## Apply the YAML manifest
kubectl apply -f  kubernetes-dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created


## Check the dashboard
kubectl get pod,svc -n kubernetes-dashboard

NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-678d548797-hjq5z   1/1     Running   0          4m46s
pod/kubernetes-dashboard-664667b775-5lsdd        1/1     Running   0          42s

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.96.68.73    <none>        8000/TCP        4m46s
service/kubernetes-dashboard        NodePort    10.104.22.95   <none>        443:31234/TCP   4m46s


## Access via the browser
https://192.168.1.131:31234/

## How to obtain the login token
kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | awk '/^token/{print $2}'

Follow the README.md:

[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#kubectl create ns kubernetes-dashboard
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#mkdir -p /tmp/kube-dashboard-key/ && cd /tmp/kube-dashboard-key/
[root@k8s-master(192.168.1.161) /tmp/kube-dashboard-key]#openssl genrsa -out dashboard.key 2048
[root@k8s-master(192.168.1.161) /tmp/kube-dashboard-key]#openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.1.161'
[root@k8s-master(192.168.1.161) /tmp/kube-dashboard-key]#openssl x509 -days 3650 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
[root@k8s-master(192.168.1.161) /tmp/kube-dashboard-key]#cd -
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#kubectl create secret generic kubernetes-dashboard-certs --from-file=/tmp/kube-dashboard-key/dashboard.key --from-file=/tmp/kube-dashboard-key/dashboard.crt -n kubernetes-dashboard
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#docker load < dashboard-v2.0.5-arm64-images.tar.gz
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#kubectl apply -f  kubernetes-dashboard.yaml
[root@k8s-master(192.168.1.161) ~/kubernetes-arm64/kube-dashboard-v2.0.5-arm64]#kubectl get pod,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-678d548797-vq7t6   1/1     Running   0          15s
pod/kubernetes-dashboard-664667b775-2pbkf        1/1     Running   0          15s

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.100.121.127   <none>        8000/TCP        15s
service/kubernetes-dashboard        NodePort    10.105.90.90     <none>        443:31234/TCP   15s

Open https://192.168.1.161:31234/#/login in a browser. Note: the scheme is https.

(Screenshot: kube-dashboard login page)


How to obtain the token is described in the README.md:

[root@k8s-master(192.168.1.161) ~]#kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace)| awk '/^token/{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6IlF1WHBWS2xzUEQzazctaE1fbTNENmpIZXFwZkZ2WUFGUEN5YzVFVWV0WVUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlci10b2tlbi1oaDlqZiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImFhMWFiNjczLTE4ZTktNDFmMC1hNmY0LTg1NDFmYjNkNjBjMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpuYW1lc3BhY2UtY29udHJvbGxlciJ9.eBNteuGfC4xGicv9ghq2ZcfE2ju2g3QAp7JxUlZl4yo0HM9YAHgBKYaOok8oKBvFEWaKkC0EUHo1xhzrhK9rOF2OvSrQgpzRkztBRjpZh0Gmdk-8Jvhr8OAKOPiXn_FBmmJz6H2KAKUGWtihQx_YJEWiB18Ht97o2dh1bTsP3FdjoCbLe4Xs3nuQ4tM_tqy-CqzkcoQ1wAK-HhYC-dPV0GdMMhLXaXP6UsaYiyxkiAkH-71EgXbTqafSK2e_vdyMMaE0A3LlBELnOCSlGSzQCpDywLS5BdihU09M3MokMG9WVEDqpyWhl4IvtY9qhp9Jql9EyiQdQNa8hpWPAuQwnw

Copy the token above into the text box on the login page and click Sign in.

(Screenshot: kube-dashboard overview after logging in)
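The token above belongs to the namespace-controller service account, so its view in the dashboard may be limited. If full cluster access is wanted, a dedicated admin service account can be created instead (an optional sketch, not part of the offline package; the name dashboard-admin is arbitrary):

# Optional: create an admin service account for the dashboard and print its token
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin \
    --serviceaccount=kubernetes-dashboard:dashboard-admin
kubectl -n kubernetes-dashboard describe secret \
    $(kubectl -n kubernetes-dashboard get secret -o name | grep dashboard-admin-token) \
    | awk '/^token/{print $2}'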


At this point the k8s-master node is fully installed. Next, add the node machines to the cluster.


k8s-node installation


Install docker-ce

[root@k8s-node1 docker-ce-19.03-arm64]# yum localinstall *.rpm -y
[root@k8s-node1 docker-ce-19.03-arm64]# cp -a docker /etc/
[root@k8s-node1 docker-ce-19.03-arm64]# systemctl enable docker ; systemctl start docker ; docker info

Install the Kubernetes packages

[root@k8s-node1 v1.19.7]# swapoff -a
[root@k8s-node1 v1.19.7]# sed -i '/swap/ s$^\(.*\)$#\1$g' /etc/fstab
[root@k8s-node1 v1.19.7]# yum localinstall *.rpm -y
[root@k8s-node1 v1.19.7]# kubeadm completion bash > /etc/bash_completion.d/kubeadm
[root@k8s-node1 v1.19.7]# kubectl completion bash > /etc/bash_completion.d/kubectl

Import the master's image files on the nodes


  1. Export all image files from the master node
[root@k8s-master(192.168.1.161) ~]#docker save $(docker images | awk 'NR>1{print $1":"$2}' | awk '{printf "%s ",$0}') | gzip > k8s-arm-1.19.7.tar.gz
[root@k8s-master(192.168.1.161) ~]#scp k8s-arm-1.19.7.tar.gz k8s-node1:/root/

  2. Import the image files on the node
[root@k8s-node1 ~]# docker load < k8s-arm-1.19.7.tar.gz
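Alternatively, the export can be copied to and loaded on both worker nodes in one pass from the master (a small convenience sketch, assuming SSH access to both nodes):

# Distribute and load the image bundle on every worker node in one loop
for node in k8s-node1 k8s-node2; do
    scp k8s-arm-1.19.7.tar.gz ${node}:/root/
    ssh ${node} "docker load < /root/k8s-arm-1.19.7.tar.gz"
done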

Join the Kubernetes cluster

The join command was printed when the k8s-master initialization completed.

[root@k8s-node1 ~]# kubeadm join 192.168.1.161:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:b0f1cd12aed05afad71577b79ba2f9c37e2472b33b08c1c712753b9b455424a1
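If the join command recorded at init time has been lost, or its token has expired, a fresh one can be printed on the master at any time:

# On k8s-master: create a new token and print the matching join command
kubeadm token create --print-join-command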

Verify that the node joined the cluster

Check on the k8s-master node:

[root@k8s-master(192.168.1.161) ~]#kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   53m   v1.19.7
k8s-node1    Ready    <none>   48s   v1.19.7

Repeat the same steps on the other node. After it joins, the cluster looks like this:

[root@k8s-master(192.168.1.161) ~]#kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   86m     v1.19.7
k8s-node1    Ready    <none>   33m     v1.19.7
k8s-node2    Ready    <none>   2m56s   v1.19.7


Harbor installation and deployment


System initialization

Refer to the system initialization steps in the Kubernetes section.


Install docker-ce

[root@harbor ~]# unzip kubernetes-arm64.zip
[root@harbor ~]# cd kubernetes-arm64
[root@harbor kubernetes-arm64]# cd docker-ce-19.03-arm64
[root@harbor docker-ce-19.03-arm64]# yum localinstall *.rpm -y
[root@harbor docker-ce-19.03-arm64]# cp -a docker /etc/
[root@harbor docker-ce-19.03-arm64]# systemctl enable docker ; systemctl start docker ; docker info

Install docker-compose

[root@harbor kubernetes-arm64]# tar xf docker-compose-arm64-1.22.tar.gz
[root@harbor docker-compose-arm64-1.22]# cat README.md
## Install docker-compose
yum localinstall *.rpm -y
## Check the version
docker-compose version

Follow the README.md:

[root@harbor docker-compose-arm64-1.22]# yum localinstall *.rpm -y
[root@harbor docker-compose-arm64-1.22]# docker-compose version
docker-compose version 1.22.0, build f46880f
docker-py version: 4.0.2
CPython version: 3.7.4
OpenSSL version: OpenSSL 1.1.1d  10 Sep 2019


Install and deploy Harbor

[root@harbor kubernetes-arm64]# tar xf harbor-v1.9.1-arm64.tar.gz
[root@harbor kubernetes-arm64]# cd harbor-v1.9.1-arm64
[root@harbor harbor-v1.9.1-arm64]# cat README.md
## Prerequisites
1. docker-ce is installed (see docker-ce-19.03-arm64.tar.gz)
2. docker-compose is installed (see docker-compose-arm64-1.22.tar.gz)

## Change the IP address in harbor.yml
On line 5, set hostname to this host's IP

5 hostname: 192.168.1.133

## Make sure the prepare script is executable
chmod 777 prepare


## Run install.sh to install Harbor
./install.sh
---
Creating network "harbor-arm64-v191_harbor" with the default driver
Creating harbor-log ... done
Creating registry      ... done
Creating registryctl   ... done
Creating harbor-portal ... done
Creating redis         ... done
Creating harbor-db     ... done
Creating harbor-core   ... done
Creating nginx             ... done
Creating harbor-jobservice ... done

✔ ----Harbor has been installed and started successfully.----

Now you should be able to visit the admin portal at http://192.168.1.133.
For more details, please visit https://github.com/goharbor/harbor .
---

The output above indicates the installation succeeded.

## Check the containers with docker-compose
docker-compose ps
      Name                     Command                       State                     Ports
------------------------------------------------------------------------------------------------------
harbor-core         /harbor/harbor_core              Up (healthy)
harbor-db           /docker-entrypoint.sh            Up (health: starting)   5432/tcp
harbor-jobservice   /harbor/harbor_jobservice  ...   Up (health: starting)
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)            127.0.0.1:1514->10514/tcp
harbor-portal       nginx -g daemon off;             Up (healthy)            8080/tcp
nginx               nginx -g daemon off;             Up (health: starting)   0.0.0.0:80->8080/tcp
redis               docker-entrypoint.sh redis ...   Up                      6379/tcp
registry            /entrypoint.sh /etc/regist ...   Up (healthy)            5000/tcp
registryctl         /harbor/start.sh                 Up (healthy)

## Access via a browser:
http://IP/

Follow the README.md:

[root@harbor harbor-v1.9.1-arm64]# vim harbor.yml
...
  5 hostname: 192.168.1.164		# line 5: change to your own Harbor server address
... 
[root@harbor harbor-v1.9.1-arm64]# ls
common  docker-compose.yml  harbor-arm64-images-v1.9.1.tar.gz  harbor.yml  install.sh  prepare  README.md

### Run the install.sh script ###
[root@harbor harbor-v1.9.1-arm64]# ./install.sh
...
✔ ----Harbor has been installed and started successfully.----
...

# Check the containers with docker-compose
[root@harbor harbor-v1.9.1-arm64]# docker-compose ps
      Name                     Command                       State                     Ports
------------------------------------------------------------------------------------------------------
harbor-core         /harbor/harbor_core              Up (healthy)
harbor-db           /docker-entrypoint.sh            Up (health: starting)   5432/tcp
harbor-jobservice   /harbor/harbor_jobservice  ...   Up (healthy)
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)            127.0.0.1:1514->10514/tcp
harbor-portal       nginx -g daemon off;             Up (healthy)            8080/tcp
nginx               nginx -g daemon off;             Up (healthy)            0.0.0.0:80->8080/tcp
redis               docker-entrypoint.sh redis ...   Up                      6379/tcp
registry            /entrypoint.sh /etc/regist ...   Up (healthy)            5000/tcp
registryctl         /harbor/start.sh                 Up (healthy)

Open http://192.168.1.164/ in a browser.

(Screenshot: Harbor web portal)
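To let the cluster nodes actually push to and pull from this registry over plain HTTP, docker on each node has to trust it as an insecure registry. A minimal sketch (the admin password shown is Harbor's default and should be changed, library is Harbor's default public project, and the image name is purely illustrative):

# On each Kubernetes node: add the registry to /etc/docker/daemon.json,
# merging the key below with any settings already placed there by the offline package:
#   { "insecure-registries": ["192.168.1.164"] }
systemctl restart docker

docker login 192.168.1.164 -u admin -p Harbor12345          # default credentials; change after first login
docker tag myapp:latest 192.168.1.164/library/myapp:latest  # hypothetical image
docker push 192.168.1.164/library/myapp:latest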




--- EOF ---

