Quickly Deploy K8s and KubeSphere Offline with KubeKey


Author: Yin Min, KubeSphere Ambassador and head of the Hangzhou chapter of the KubeSphere Community User Committee

I. Introduction to KubeKey

KubeKey (hereafter KK) is an open-source, lightweight tool for deploying Kubernetes clusters. It offers a flexible, fast, and convenient way to install Kubernetes/K3s alone, or Kubernetes/K3s together with KubeSphere and other cloud-native add-ons. It is also an effective tool for scaling and upgrading existing clusters.

KubeKey v2.0.0 introduces the concepts of a manifest and an artifact, which together provide a solution for deploying Kubernetes clusters offline. Previously, users had to prepare the deployment tool, image tarballs, and other related binaries themselves, and every user needed a different Kubernetes version and a different set of images. Now, with kk, you only need a manifest file to define what the offline cluster environment requires, then export an artifact from that manifest to complete the preparation. For the offline deployment itself, kk plus the artifact are all that is needed to quickly and easily set up an image registry and a Kubernetes cluster in the target environment.
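
Put together, the whole flow comes down to four kk commands; the sketch below condenses the steps that the rest of this article walks through (the file names follow the examples used later):

$ ./kk create manifest                                               # source cluster: generate manifest-sample.yaml
$ ./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz  # source side: package images and binaries into the artifact
$ ./kk init registry -f config-sample.yaml -a kubesphere.tar.gz      # offline environment: deploy the image registry
$ ./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-kubernetes v1.21.5 --with-kubesphere v3.2.1 --with-packages  # offline environment: install the cluster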

II. Deployment Preparation

1. Resource list

Name               Quantity   Purpose
KubeSphere 3.2.1   1          Source cluster, used for packaging the artifact
Servers            2          Used for the offline-environment deployment

2. Download and extract KK 2.0.0-rc.3 on the source cluster

Note: KK is updated frequently; always refer to the latest release on GitHub.

$ wget https://github.com/kubesphere/kubekey/releases/download/v2.0.0-rc.3/kubekey-v2.0.0-rc.3-linux-amd64.tar.gz
$ tar -zxvf kubekey-v2.0.0-rc.3-linux-amd64.tar.gz 
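
A quick, optional sanity check that the binary was extracted and is executable (kk's version subcommand, available in recent releases, prints the build version):

$ ./kk version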

3. Create the manifest with KK on the source cluster

Note: the manifest is a text file that describes the current Kubernetes cluster and defines what the artifact should contain. There are currently two ways to generate it:

Create and edit the file manually from a template.
Use the kk command to generate it from an existing cluster.

$ ./kk create manifest

4. Modify the manifest configuration on the source cluster

Notes:

  1. The repository section must point to an ISO package containing the operating-system dependencies. Either fill in a download URL in the url field, or download the ISO in advance, set localPath to where it is stored locally, and remove the url entry.

  2. Enable the harbor and docker-compose entries, so that KK can later set up its own Harbor registry and push images to it.

  3. By default, the image list in the generated manifest pulls from docker.io; it is recommended to change it to pull from the mirror registry used in the example below.

  4. Adjust the contents of manifest-sample.yaml as needed, so that the exported artifact contains exactly what you expect.

$ vim manifest-sample.yaml
---
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    repository:
      iso:
        localPath: /mnt/sdb/kk2.0-rc/kubekey/centos-7-amd64-rpms.iso
        url: # a download URL can also be provided here
  kubernetesDistributions:
  - type: kubernetes
    version: v1.21.5
  components:
    helm:
      version: v3.6.3
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
    ## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
    ## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
    containerRuntimes:
    - type: docker
      version: 20.10.8
    crictl:
      version: v1.22.0
    ##
    # docker-registry:
    #   version: "2"
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.20.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.20.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.20.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.19.9
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.19.9
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.19.9
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.19.9
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:2.10.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:2.10.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v0.48.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.7.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edge-watcher:v0.1.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edge-watcher-agent:v0.1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.2.0-2.249.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jnlp-slave:3.27-1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.26.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.43.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.43.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v1.9.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v0.18.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-prometheus-adapter-amd64:v0.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.18.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:7.4.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.7.0-1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wget:1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java:openjdk-8-jre-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v2:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-details-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-ratings-v1:1.16.3
  registry:
    auths: {}

5. Export the artifact from the source cluster

Note:

An artifact is a .tgz package, exported according to a given manifest, that contains the image tarballs and the related binary files. An artifact can be specified in the kk commands that initialize the image registry, create a cluster, add nodes, or upgrade a cluster; kk automatically unpacks it and uses the unpacked files directly while running the command.

Caution:

  1. The export command downloads the required binaries from the internet, so make sure the network connection is working.

  2. The export command pulls every image in the manifest's image list one by one, so make sure the node running kk has containerd or Docker 18.09 or later installed.

  3. kk parses the image names in the list; if a registry referenced by those names requires authentication, configure it in the .registry.auths field of the manifest (see the sketch after this list).

  4. If the exported artifact should include operating-system dependency packages (such as conntrack and chrony), configure the ISO download address in .repository.iso.url of the corresponding operatingSystems entry.
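
For note 3, a minimal sketch of what an auths entry in the manifest could look like; the registry address and credentials below are placeholders, and the layout assumed here mirrors the auths block of the cluster configuration shown later in this article:

  registry:
    auths:
      "registry.example.com":       # placeholder registry that requires login
        username: myuser            # placeholder credentials
        password: mypassword
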
$ export KKZONE=cn
$ ./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz
# the default package name is kubekey-artifact.tar.gz; use the -o flag to customize it
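
Before copying the artifact into the offline environment, listing its contents is a quick sanity check (plain tar, nothing KK-specific):

$ tar -tzf kubesphere.tar.gz | head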

III. Installing the Cluster in the Offline Environment

1. Download KK for the offline environment

$ wget https://github.com/kubesphere/kubekey/releases/download/v2.0.0-rc.3/kubekey-v2.0.0-rc.3-linux-amd64.tar.gz

2. Create the offline cluster configuration file

$ ./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5 -f config-sample.yaml

3. Modify the configuration file

$ vim config-sample.yaml

Notes:

  1. Adjust the node information to match the actual offline environment.
  2. A registry deployment node must be specified, because KK needs it to deploy the self-built Harbor registry.
  3. In the registry section, type must be set to harbor; if it is not, a docker registry is installed by default.

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.149.133, internalAddress: 192.168.149.133, user: root, password: "Supaur@2022"}
  - {name: node1, address: 192.168.149.134, internalAddress: 192.168.149.134, user: root, password: "Supaur@2022"}

  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - node1
    # To have kk deploy the image registry automatically, set this host group (deploying the registry separately from the cluster is recommended to reduce mutual interference)
    registry:
    - node1
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.21.5
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    # To have kk deploy Harbor, set this parameter to harbor. If it is not set and kk is asked to create a container image registry, a docker registry is used by default.
    type: harbor
    # For a Harbor deployed by kk, or any other registry that requires login, set the auths for that registry. For a docker registry created by kk, this parameter is not needed.
    # Note: when Harbor is deployed by kk, set this parameter only after Harbor has been started.
    #auths:
    #  "dockerhub.kubekey.local":
    #    username: admin
    #    password: Harbor12345
    plainHTTP: false
    # The private registry to use during cluster deployment
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []

4. Option 1: Create the Harbor project by running a script

4.1 Download the script that initializes the Harbor registry

$ curl -LO https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh

4.2 Modify the script

Notes:

  1. Change the value of url to https://dockerhub.kubekey.local.
  2. The project name created in the registry must match the project name used in the image list.
  3. Append -k to the curl command at the end of the script.

$ vim create_project_harbor.sh
#!/usr/bin/env bash

# Copyright 2018 The KubeSphere Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

url="https://dockerhub.kubekey.local"  #修改url的值為https://dockerhub.kubekey.local
user="admin"
passwd="Harbor12345"

harbor_projects=(library
    kubesphereio  # the project name must match the one used in the image list
)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k #curl命令末尾加上 -k
done
$ chmod +x create_project_harbor.sh
$ ./create_project_harbor.sh
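
To confirm that the project was created, the Harbor v2 API can be queried with the same credentials (adjust the URL and credentials if yours differ):

$ curl -u admin:Harbor12345 -k "https://dockerhub.kubekey.local/api/v2.0/projects?name=kubesphereio"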

4.3 Option 2: Log in to the Harbor console and create the project manually

5. Install the image registry with KK

Notes:
  1. config-sample.yaml: the configuration file for the offline cluster.
  2. kubesphere.tar.gz: the image package exported from the source cluster.
  3. The Harbor installation files are placed under /opt/harbor; go to that directory if you need to operate or maintain Harbor.

$ ./kk init registry -f config-sample.yaml -a kubesphere.tar.gz
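
Once the command finishes, Harbor should be reachable as dockerhub.kubekey.local. A simple, optional check, assuming Docker is available on the node and the node trusts the certificate KK generated for the registry (default Harbor credentials: admin / Harbor12345):

$ docker login dockerhub.kubekey.local -u admin -p Harbor12345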

6. Modify the cluster configuration file again

Notes:

  1. Add an auths entry for dockerhub.kubekey.local together with the account and password.

  2. Set privateRegistry to dockerhub.kubekey.local.

  3. Set namespaceOverride to kubesphereio (the project created in the registry).

$ vim config-sample.yaml
  ...
  registry:
    type: harbor  
    auths: 
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
    plainHTTP: false
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []

7. Install the KubeSphere cluster

Notes:

  1. config-sample.yaml: the configuration file for the offline cluster.
  2. kubesphere.tar.gz: the image package exported from the source cluster.
  3. Specify the Kubernetes version and the KubeSphere version.
  4. --with-packages must be added, otherwise installation of the ISO dependencies will fail.

$ ./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-kubernetes v1.21.5 --with-kubesphere v3.2.1 --with-packages

8. Check the cluster status

$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
**************************************************
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.149.133:30880
Account: admin
Password: P@88w0rd

NOTES:
1. After you log into the console, please check the
monitoring status of service components in
the "Cluster Management". If any service is not
ready, please wait patiently until all components
are up and running.
2. Please change the default password after login.

#####################################################
https://kubesphere.io             2022-02-28 23:30:06
#####################################################
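
In addition to the installer log, the basic health of the cluster can be verified with standard kubectl commands on the control-plane node:

$ kubectl get nodes
$ kubectl get pods -A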

9. Log in to the KubeSphere console

IV. Conclusion

This tutorial used KK 2.0.0 as the deployment tool to install a KubeSphere cluster in an offline environment; KK can of course also deploy plain Kubernetes clusters. Hopefully KK helps you achieve lightning-fast offline delivery. If you have ideas or suggestions, feel free to open an issue in the KubeKey repository.
