Official docs:
Blog cut-off date: 2020-12-04.
The version on the official site at the time of writing is v0.3.0, but in practice this version does not work with private repositories; upgrading to v0.3.1 fixes it.
A few points that the official documentation leaves unclear are filled in in this post.
"Unclear" may not really be the docs' fault — it may just reflect the limits of my current skill. These points took a lot of experimentation to figure out, so I am writing them down.
The files used in this post are packaged for download: https://download.csdn.net/download/qq_42776455/13529342
Fleet is a lightweight GitOps tool that performs well whether it manages a single cluster or a large number of clusters.
Two-stage pull model:
- The Fleet manager pulls from the git repo;
- The cluster agents pull from the Fleet manager.
- Fleet Manager: fetches Kubernetes assets from git repositories;
- Fleet controller: the controller running on the Fleet manager that drives GitOps. In practice, the Fleet manager and the Fleet controller can be treated as the same thing;
- Single-cluster mode: the Fleet manager and the downstream cluster are the same cluster, and the GitRepo namespace is fixed to fleet-local;
- Multi-cluster mode: one Fleet controller cluster manages multiple downstream clusters;
- Fleet agent: in multi-cluster mode, a Fleet agent runs in each managed downstream cluster and communicates with the Fleet controller;
- GitRepo: a git repository watched by Fleet, represented in the cluster as a CRD resource with kind: GitRepo;
- Bundle:
  - Bundles are built from the contents of a git repo, typically Kubernetes manifests, Kustomize configuration, or Helm charts;
  - The Bundle is the basic unit of deployment in Fleet;
  - Whatever the source type (Kubernetes manifests, Kustomize configuration, Helm charts), the agent ultimately renders it as a Helm chart and deploys it to the downstream cluster;
- Cluster Registration Token: tokens used by agents to register a new cluster.
Fleet installation and deployment
Single-cluster mode
helm -n fleet-system install --create-namespace \
fleet-crd https://github.com/rancher/fleet/releases/download/v0.3.1/fleet-crd-0.3.1.tgz
helm -n fleet-system install --create-namespace \
fleet https://github.com/rancher/fleet/releases/download/v0.3.1/fleet-0.3.1.tgz
It is recommended to edit the fleet chart's values.yaml to use a domestic (China) mirror registry:
global:
  cattle:
    systemDefaultRegistry: "registry.cn-hangzhou.aliyuncs.com"
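The effect of systemDefaultRegistry is to prepend the given registry to the chart's image references, so pulls go through the mirror instead of Docker Hub. A local sketch of the resulting image name (rancher/fleet:v0.3.1 is an assumed example image, not read from the chart):

```shell
# Sketch: systemDefaultRegistry is prepended to each image reference.
registry="registry.cn-hangzhou.aliyuncs.com"
image="rancher/fleet:v0.3.1"   # assumed example image name
printf '%s/%s\n' "$registry" "$image"
# -> registry.cn-hangzhou.aliyuncs.com/rancher/fleet:v0.3.1
```

The same override can also be passed to helm via --set instead of editing values.yaml.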
Multi-cluster mode
Deploying the Fleet controller cluster
Get the CA certificate of the fleet controller cluster:
kubectl config view -o json --raw | jq -r '.clusters[].cluster["certificate-authority-data"]' | base64 -d > ca.pem
Deploy fleet:
# Must be set;
API_SERVER_URL="https://example.com:6443"
# Leave empty if your API server is signed by a well known CA
API_SERVER_CA="ca.pem"
helm -n fleet-system install --create-namespace --wait fleet-crd https://github.com/rancher/fleet/releases/download/v0.3.1/fleet-crd-0.3.1.tgz
helm -n fleet-system install --create-namespace --wait \
--set apiServerURL="${API_SERVER_URL}" \
--set-file apiServerCA="${API_SERVER_CA}" \
fleet https://github.com/rancher/fleet/releases/download/v0.3.1/fleet-0.3.1.tgz
Two ways to register an agent with the manager cluster
Agent Initiated Registration:
- The manager creates a cluster registration token;
- The agent deploys fleet-agent using the values.yaml obtained via the cluster registration token, and initiates registration with the manager;
Manager Initiated Registration:
- Using the downstream cluster's kubeconfig file, create a clusters.fleet.cattle.io resource in the manager cluster;
- The manager then actively connects to the downstream cluster;
Agent Initiated Registration
- Create a cluster registration token in the fleet controller cluster:
kind: ClusterRegistrationToken
apiVersion: "fleet.cattle.io/v1alpha1"
metadata:
  name: new-token
  namespace: clusters
spec:
  # A duration string for how long this token is valid for, e.g. 240h.
  # A value <= 0 or null (left empty) means the token never expires.
  ttl:
kubectl -n clusters get secret new-token -o 'jsonpath={.data.values}' | base64 --decode > values.yaml
Inspect values.yaml to make sure the information is correct;
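The agent values are stored base64-encoded under .data.values of the Secret, which is why the base64 --decode step is needed. A self-contained sketch of that round trip (the sample content is made up):

```shell
# Simulate the .data.values field of the token Secret: Kubernetes stores
# Secret data base64-encoded, and the jsonpath query returns it as-is.
values_yaml="apiServerURL: https://example.com:6443"   # made-up sample content
encoded=$(printf '%s' "$values_yaml" | base64)
# Same decode step as in the kubectl command above:
printf '%s' "$encoded" | base64 --decode
# -> apiServerURL: https://example.com:6443
```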
Deploy fleet-agent in the downstream cluster
# Leave blank if you do not want any labels
CLUSTER_LABELS="--set-string labels.example=true --set-string labels.env=dev"
helm -n fleet-system install --create-namespace \
${CLUSTER_LABELS} \
--values values.yaml \
fleet-agent https://github.com/rancher/fleet/releases/download/v0.3.1/fleet-agent-0.3.1.tgz
Confirm that the agent has connected to the manager (run on the controller cluster):
kubectl -n fleet-system logs -l app=fleet-controller
kubectl -n fleet-system get pods -l app=fleet-controller
Then, in the fleet controller cluster, check whether the downstream cluster registered successfully:
kubectl get clusters -n clusters
NAME BUNDLES-READY NODES-READY SAMPLE-NODE LAST-SEEN STATUS
cluster-a168d75438c9 2/2 5/5 lab5master 2020-12-03T03:51:44Z
Create a GitRepo (a target must be specified) and confirm that the downstream cluster has deployed the corresponding resources from the git repo.
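A minimal sketch of such a GitRepo, targeting the labels set via CLUSTER_LABELS when the agent was installed (the name agent-demo is hypothetical; the repo is the fleet-examples repo used later in this post):

```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: agent-demo          # hypothetical name
  namespace: clusters
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - simple
  targets:
  - clusterSelector:
      matchLabels:
        env: dev            # matches the labels.env=dev set at agent install
```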
Manager Initiated Registration
Create a secret in the manager cluster:
kubectl create secret generic my-cluster-kubeconfig -n clusters --from-file=value=/kubeconfig
The kubeconfig here belongs to the downstream cluster; the manager cluster uses it to control the downstream cluster.
Create the downstream cluster object in the manager cluster:
apiVersion: fleet.cattle.io/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
  namespace: clusters
  labels:
    demo: "true"
    env: dev
spec:
  kubeConfigSecret: my-cluster-kubeconfig
Using Fleet
Single-cluster mode
Public repositories
Since the repository is public, no secret is needed; just apply the following YAML:
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: fleet-test-gitrepo
  namespace: fleet-local
spec:
  repo: https://git.tdology.com/xiaohang/gittest
  paths:
  - simple
Private repositories
- Logging in over https:
A private repository must specify clientSecretName, e.g.:
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: test
  namespace: fleet-local
spec:
  branch: master
  clientSecretName: test
  paths:
  - simple/
  repo: https://git.tdology.com/xiaohang/gittest
  targets:
  - clusterSelector: {}
A secret must be created in the same namespace:
kubectl create secret generic -n fleet-local test --from-literal=username=<username> --from-literal=password=<password> --type=kubernetes.io/basic-auth
username and password are the credentials used to log in to the git repository.
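The kubectl create secret command above is equivalent to applying a manifest like this sketch (credentials left as placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test
  namespace: fleet-local
type: kubernetes.io/basic-auth
stringData:               # stringData lets you write the values unencoded
  username: <username>
  password: <password>
```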
As of 2020-12-01, the official install docs deploy v0.3.0 via helm, but this version is broken in practice: public git repos work, while private repos configured with clientSecretName fail. I then looked at a cluster with Fleet enabled through Rancher UI 2.5+, found it was running v0.3.1, and the same setup worked there; after upgrading to Fleet v0.3.1 the same configuration file took effect.
- Using ssh
kubectl create secret generic test-ssh-key -n fleet-local --from-file=ssh-privatekey=/file/to/private/key --type=kubernetes.io/ssh-auth
Add the corresponding public key to the git repo;
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: fleet-test-gitrepo
  namespace: fleet-local
spec:
  clientSecretName: test-ssh-key   # the ssh-auth secret created above
  repo: ssh://git@xxx
  paths:
  - simple/
⚠️: the ssh:// prefix at the beginning of the repo URL must not be omitted.
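Git remotes are usually shown in scp-style form (git@host:path), which has no ssh:// prefix; the host/path separator also has to change from : to /. A small sed sketch of that rewrite (the remote URL is a made-up example):

```shell
# Rewrite scp-style "git@host:path" into the "ssh://git@host/path" form
# expected by the GitRepo repo field.
scp_url="git@git.tdology.com:xiaohang/gittest.git"   # made-up example remote
printf '%s\n' "$scp_url" | sed -E 's|^([^@]+@[^:/]+):|ssh://\1/|'
# -> ssh://git@git.tdology.com/xiaohang/gittest.git
```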
Multi-cluster mode
In multi-cluster mode, targets must be specified to select which downstream clusters to deploy to;
https://fleet.rancher.io/gitrepo-targets/
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: myrepo
  namespace: clusters
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - simple
  # Targets are evaluated in order and the first one to match is used. If
  # no targets match then the evaluated cluster will not be deployed to.
  targets:
  # The name of target. This value is largely for display and logging.
  # If not specified a default name of the format "target000" will be used
  - name: prod
    # A selector used to match clusters. The structure is the standard
    # metav1.LabelSelector format. If clusterGroupSelector or clusterGroup is specified,
    # clusterSelector will be used only to further refine the selection after
    # clusterGroupSelector and clusterGroup is evaluated.
    clusterSelector:
      matchLabels:
        env: prod
    # A selector used to match cluster groups.
    clusterGroupSelector:
      matchLabels:
        region: us-east
    # A specific clusterGroup by name that will be selected
    clusterGroup: group1
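For the prod target above to match, a downstream Cluster must carry the env: prod label. Following the Cluster manifest shape from the registration section, that looks like this sketch (the name and secret name are hypothetical):

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: Cluster
metadata:
  name: prod-cluster                          # hypothetical name
  namespace: clusters
  labels:
    env: prod                                 # matched by the "prod" target's clusterSelector
spec:
  kubeConfigSecret: prod-cluster-kubeconfig   # hypothetical secret name
```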