References:
https://kubesphere.com.cn/docs/quick-start/minimal-kubesphere-on-k8s/
https://kubesphere.io/docs/
https://v2-1.docs.kubesphere.io/docs/zh-CN/introduction/what-is-kubesphere
The version installed here is KubeSphere 3.2.1.
KubeSphere is a distributed operating system for cloud-native application management that uses Kubernetes as its kernel. It provides a plug-and-play architecture that lets third-party applications integrate seamlessly into its ecosystem.
1. Installation
1. Prerequisites
1. Kubernetes version 1.19.x or later
My versions are as follows:
[root@k8smaster01 storageclass]# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:59:07Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
[root@k8smaster01 storageclass]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:03:28Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
2. CPU > 1 core, memory > 2 GB
3. Before installing, configure a default StorageClass in the cluster (see https://www.cnblogs.com/qlqwjy/p/15817294.html)
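As a sketch of what this prerequisite means: the standard annotation below is what marks a StorageClass as the cluster default. The name and provisioner here are assumptions chosen to match the "course-nfs-storage" NFS class that appears later in this post; adapt them to your own provisioner.

```yaml
# Hypothetical default StorageClass; name and provisioner match the
# "course-nfs-storage" class used elsewhere in this post.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this SC as the default
provisioner: fuseim.pri/ifs
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

You can confirm the default with kubectl get sc — the default class is shown with "(default)" after its name.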
2. Installation
Run the following commands:
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
Some hard-to-resolve errors came up during installation; the fixes follow:
1. Error one:
failed: [localhost] (item={'ns': 'kubesphere-system', 'kind': 'users.iam.kubesphere.io', 'resource': 'admin', 'release': 'ks-core'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "/usr/local/bin/kubectl -n kubesphere-system annotate --overwrite users.iam.kubesphere.io admin meta.helm.sh/release-name=ks-core && /usr/local/bin/kubectl -n kubesphere-system annotate --overwrite users.iam.kubesphere.io admin meta.helm.sh/release-namespace=kubesphere-system && /usr/local/bin/kubectl -n kubesphere-system label --overwrite users.iam.kubesphere.io admin app.kubernetes.io/managed-by=Helm\n", "delta": "0:00:00.675675", "end": "2022-02-10 04:53:09.022419", "failed_when_result": true, "item": {"kind": "users.iam.kubesphere.io", "ns": "kubesphere-system", "release": "ks-core", "resource": "admin"}, "msg": "non-zero return code", "rc": 1, "start": "2022-02-10 04:53:08.346744", "stderr": "Error from server (InternalError): Internal error occurred: failed calling webhook \"users.iam.kubesphere.io\": Post \"https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2?timeout=30s\": service \"ks-controller-manager\" not found", "stderr_lines": ["Error from server (InternalError): Internal error occurred: failed calling webhook \"users.iam.kubesphere.io\": Post \"https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2?timeout=30s\": service \"ks-controller-manager\" not found"], "stdout": "", "stdout_lines": []}
Solution:
See https://github.com/kubesphere/ks-installer/blob/master/scripts/kubesphere-delete.sh — download the script to the master node, run it to remove KubeSphere, then reinstall.
2. Error two:
The error reported:
Failed to ansible-playbook result-info.yaml
Symptom: port 30880 is reachable and the svc was created successfully, but logging in fails as follows:
Solution:
(1) Check the pods:
[root@k8smaster02 ~]# kubectl get pods -n kubesphere-system
NAME                                     READY   STATUS             RESTARTS   AGE
ks-apiserver-5866f585fc-6plkr            0/1     CrashLoopBackOff   7          14m
ks-apiserver-5866f585fc-jcwpq            0/1     CrashLoopBackOff   8          21m
ks-console-65f4d44d88-9qwwz              1/1     Running            0          29m
ks-console-65f4d44d88-hq5pd              1/1     Running            0          29m
ks-controller-manager-754947b99b-mvdmz   1/1     Running            0          21m
ks-controller-manager-754947b99b-zrmj7   1/1     Running            0          22m
ks-installer-85dcfff87d-4qp8v            1/1     Running            0          34m
redis-ha-haproxy-868fdbddd4-j2ttx        1/1     Running            0          32m
redis-ha-haproxy-868fdbddd4-qpvj7        1/1     Running            0          32m
redis-ha-haproxy-868fdbddd4-zffzq        1/1     Running            0          32m
redis-ha-server-0                        0/2     Pending            0          32m
Check why the pod failed to start:
[root@k8smaster02 ~]# kubectl logs -n kubesphere-system ks-apiserver-5866f585fc-6plkr
W0210 21:25:00.736862       1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0210 21:25:00.741192       1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0210 21:25:00.760218       1 metricsserver.go:238] Metrics API not available.
Error: failed to connect to redis service, please check redis status, error: EOF
2022/02/10 21:25:00 failed to connect to redis service, please check redis status, error: EOF
As shown, Redis is not up. Next, check why Redis failed to start:
[root@k8smaster02 ~]# kubectl describe pods -n kubesphere-system redis-ha-server-0
...
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  3m33s (x41 over 34m)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
As shown, the PVC is the cause.
(2) Check the StorageClass and PVC
[root@k8smaster02 ~]# kubectl get sc
NAME                           PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
course-nfs-storage (default)   fuseim.pri/ifs   Delete          Immediate           false                  40h
[root@k8smaster02 ~]# kubectl get pvc -n kubesphere-system
NAME                     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS         AGE
data-redis-ha-server-0   Pending                                      course-nfs-storage   38m
The sc looks normal, but the pvc is stuck in Pending. Describing the pvc shows the following:
persistentvolume-controller waiting for a volume to be created, either by external provisioner XXX
(3) I chose to delete the related storage and redeploy the StorageClass. After rebuilding it, I created a new PVC myself; a PV being created automatically means the sc is working, which is all that is required.
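The smoke-test PVC from this step can be sketched as follows; with a healthy default StorageClass it should become Bound without specifying storageClassName. The 1Mi/RWX values match the default/test-pvc claim visible in the pv listing below.

```yaml
# Minimal test PVC: if the default StorageClass is healthy, a PV is
# provisioned automatically and this claim becomes Bound.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

Apply it with kubectl apply -f test-pvc.yaml, then check kubectl get pvc test-pvc for a Bound status.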
(4) With the sc working again, check the pods, pv, pvc, and related information:
[root@k8smaster01 storageclass]# kubectl get pods -n kubesphere-system
NAME                                     READY   STATUS    RESTARTS   AGE
ks-apiserver-5866f585fc-6plkr            1/1     Running   30         144m
ks-apiserver-5866f585fc-jcwpq            1/1     Running   31         152m
ks-console-65f4d44d88-9qwwz              1/1     Running   0          160m
ks-console-65f4d44d88-hq5pd              1/1     Running   0          160m
ks-controller-manager-754947b99b-mvdmz   1/1     Running   0          152m
ks-controller-manager-754947b99b-zrmj7   1/1     Running   1          153m
ks-installer-85dcfff87d-4qp8v            1/1     Running   0          165m
redis-ha-haproxy-868fdbddd4-j2ttx        1/1     Running   0          163m
redis-ha-haproxy-868fdbddd4-qpvj7        1/1     Running   0          163m
redis-ha-haproxy-868fdbddd4-zffzq        1/1     Running   0          163m
redis-ha-server-0                        2/2     Running   0          163m
redis-ha-server-1                        2/2     Running   0          15m
redis-ha-server-2                        2/2     Running   0          14m
[root@k8smaster01 storageclass]# kubectl get sc -n kubesphere-system
NAME                           PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
course-nfs-storage (default)   fuseim.pri/ifs   Delete          Immediate           false                  20m
[root@k8smaster01 storageclass]# kubectl get pvc -n kubesphere-system
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
data-redis-ha-server-0   Bound    pvc-362fc90c-fc6d-4968-9179-4aba8e77e43a   2Gi        RWX            course-nfs-storage   92m
data-redis-ha-server-1   Bound    pvc-08948bd5-d8d8-4eee-aca8-0cb812b8ecbc   2Gi        RWO            course-nfs-storage   15m
data-redis-ha-server-2   Bound    pvc-b5adfa39-ba80-417a-ada6-96cfa6e2f360   2Gi        RWO            course-nfs-storage   15m
[root@k8smaster01 storageclass]# kubectl get pv -n kubesphere-system
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                             STORAGECLASS         REASON   AGE
pvc-08948bd5-d8d8-4eee-aca8-0cb812b8ecbc   2Gi        RWO            Delete           Bound    kubesphere-system/data-redis-ha-server-1                          course-nfs-storage            15m
pvc-1b4967ab-6fe7-4cb0-a68d-88570ea994b5   20Gi       RWO            Delete           Bound    kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-1   course-nfs-storage            15m
pvc-26820c0c-5499-40bb-b00a-af2a788c17fc   20Gi       RWO            Delete           Bound    kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-0   course-nfs-storage            15m
pvc-362fc90c-fc6d-4968-9179-4aba8e77e43a   2Gi        RWX            Delete           Bound    kubesphere-system/data-redis-ha-server-0                          course-nfs-storage            15m
pvc-574ccbcc-e391-4d6f-b938-139813041e76   1Mi        RWX            Delete           Bound    default/test-pvc                                                  course-nfs-storage            15m
pvc-b5adfa39-ba80-417a-ada6-96cfa6e2f360   2Gi        RWO            Delete           Bound    kubesphere-system/data-redis-ha-server-2                          course-nfs-storage            15m
[root@k8smaster01 storageclass]#
3. Login
1. Check the svc
[root@k8smaster01 storageclass]# kubectl get svc -n kubesphere-system
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)              AGE
ks-apiserver            ClusterIP   10.1.132.230   <none>        80/TCP               162m
ks-console              NodePort    10.1.10.30     <none>        80:30880/TCP         162m
ks-controller-manager   ClusterIP   10.1.35.90     <none>        443/TCP              162m
redis                   ClusterIP   10.1.133.217   <none>        6379/TCP             164m
redis-ha                ClusterIP   None           <none>        6379/TCP,26379/TCP   164m
redis-ha-announce-0     ClusterIP   10.1.83.53     <none>        6379/TCP,26379/TCP   164m
redis-ha-announce-1     ClusterIP   10.1.31.40     <none>        6379/TCP,26379/TCP   164m
redis-ha-announce-2     ClusterIP   10.1.204.175   <none>        6379/TCP,26379/TCP   164m
2. Access port 30880 on any cluster node to log in with the default account and password (admin/P@88w0rd); you must change the password after the first login
3. After logging in successfully, the page looks as follows
4. Select a cluster to view its information
(1) Cluster Nodes shows the node information
Double-click a node to view its monitoring details
(2) You can open the built-in console and run kubectl commands directly:
(3) The left menu lists the system components, mainly including the following:
5. Workloads: view the related controllers
(1) View deployments
You can view the existing components; on the right you can edit them or create new ones.
(2) Double-click one to view its details; you can also change the replica count (the number of replicas of its pods) and perform other operations.
6. Select a pod to view its containers' logs and open a terminal inside a container
(1) Choose Pods in the left menu
(2) Double-click to open it
(3) Click to view the logs
The logs look like this:
(4) Open a terminal in the container
The terminal window looks like this:
(5) View the container's monitoring information
7. View services: you can see the corresponding external port mappings, and also create new services or edit existing ones. Click Services in the menu:
Double-click to view the service details:
You can also view jobs, configurations, and other information, which is not shown here.
8. Create an nginx through the UI and expose it with a service
(1) Go to Workloads and create a deployment with the following settings
1》Basic information
2》Pod settings: we select just a single nginx container
3》For volume settings and advanced settings, just click Next
4》Review the generated yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  labels:
    app: mynginx
  name: mynginx
  annotations:
    kubesphere.io/alias-name: mynginx
    kubesphere.io/description: test nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
        - name: container-1qils3
          imagePullPolicy: IfNotPresent
          image: nginx
      serviceAccount: default
      initContainers: []
      volumes: []
      imagePullSecrets: null
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
5》Just click Create; after creation, the information looks as follows:
(2) Create a service
1》Basic information:
2》Service settings:
3》In advanced settings, set the type to NodePort
4》The yaml content:
apiVersion: v1
kind: Service
metadata:
  namespace: default
  labels:
    app: mynginx-svc
  name: mynginx-svc
  annotations:
    kubesphere.io/alias-name: mynginx-svc
    kubesphere.io/description: mynginx-svc
spec:
  sessionAffinity: None
  selector:
    app: mynginx
  ports:
    - name: port-http
      protocol: TCP
      targetPort: 80
      port: 80
  type: NodePort
5》Click Create, then the list shows:
6》The yaml after successful creation:
kind: Service
apiVersion: v1
metadata:
  name: mynginx-svc
  namespace: default
  labels:
    app: mynginx-svc
  annotations:
    kubesphere.io/alias-name: mynginx-svc
    kubesphere.io/creator: admin
    kubesphere.io/description: mynginx-svc
spec:
  ports:
    - name: port-http
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 31689
  selector:
    app: mynginx
  clusterIP: 10.1.222.4
  clusterIPs:
    - 10.1.222.4
  type: NodePort
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
Click to view its port rules:
(3) Test: reaching nginx on port 31689 of any node proves the deployment succeeded