client-go series, part 3: using RESTClient


0. Background

Personal homepage: https://gzh.readthedocs.io

I focus on container technology and Kubernetes. For questions or suggestions, please leave a message on my official account.

I first created a 6-node cluster with kind; all operations in this article are performed on that cluster.

This article shows how to use the RESTClient in client-go to operate on resources, using the simplest possible example: retrieving Pod resources.

The software versions used in this article are as follows:

  • kind
[root@xxx-wsl ~/client-go-example] kind version
kind v0.9.0 go1.15.2 linux/amd64

1. Environment setup

Create a multi-node k8s cluster with kind: 3 master nodes + 3 worker nodes.

[root@xxx-wsl ~/init_kind_clusters] kind create cluster --config=init_cluster.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.19.1) 🖼
 ✓ Preparing nodes 📦 📦 📦 📦 📦 📦
 ✓ Configuring the external load balancer ⚖️
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining more control-plane nodes 🎮
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/

The contents of init_cluster.yaml are as follows:

[root@xxx-wsl ~/init_kind_clusters] cat init_cluster.yaml
# a cluster with 3 control-plane nodes and 3 workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker

2. RESTClient usage example

What this code does: fetch all pods in the kube-system namespace and print them to the screen.

main.go


package main

import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/clientcmd"
)

func main() {
        fmt.Println("Prepare config object.")

        // Load the kubeconfig file and build a rest.Config object
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
                panic(err)
        }

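        // Point the client at the core (legacy) API group: core resources are
        // served under the "api" root path (named groups use "apis"),
        // GroupVersion selects core/v1, and NegotiatedSerializer supplies the
        // codecs used to encode/decode the built-in API objects.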
        config.APIPath = "api"
        config.GroupVersion = &corev1.SchemeGroupVersion
        config.NegotiatedSerializer = scheme.Codecs

        fmt.Println("Init RESTClient.")

        // Build the RESTClient used to talk to the k8s API server
        restClient, err := rest.RESTClientFor(config)
        if err != nil {
                panic(err)
        }

        fmt.Println("Get Pods in cluster.")

        // Fetch the pod list. Only the specified resource (pods) in the "kube-system" namespace is retrieved
        result := &corev1.PodList{}
        if err := restClient.
                Get().
                Namespace("kube-system").
                Resource("pods").
                VersionedParams(&metav1.ListOptions{Limit: 500}, scheme.ParameterCodec).
                Do(context.TODO()).
                Into(result); err != nil {
                panic(err)
        }

        fmt.Println("Print all listed pods.")

        // Print all retrieved pods to standard output
        for _, d := range result.Items {
                fmt.Printf("NAMESPACE: %v NAME: %v \t STATUS: %v \n", d.Namespace, d.Name, d.Status.Phase)
        }
}
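
For comparison, the same RESTClient can also fetch a single object instead of a list. The helper below is a minimal sketch for illustration (it reuses the imports and the restClient built in main.go above); the added Name() call narrows the request from the pod collection down to one named pod.

// getPod fetches a single pod by name from the given namespace, using the
// RESTClient built in main(). A sketch for illustration; errors are returned
// to the caller instead of being handled here.
func getPod(restClient *rest.RESTClient, namespace, name string) (*corev1.Pod, error) {
        pod := &corev1.Pod{}
        err := restClient.
                Get().
                Namespace(namespace).
                Resource("pods").
                Name(name).
                Do(context.TODO()).
                Into(pod)
        return pod, err
}

Called as getPod(restClient, "kube-system", "etcd-kind-control-plane"), this issues a GET against /api/v1/namespaces/kube-system/pods/etcd-kind-control-plane and decodes the response into a corev1.Pod.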

go.mod

module main

go 1.15

require (
        github.com/go-logr/logr v0.2.1 // indirect
        github.com/google/gofuzz v1.2.0 // indirect
        github.com/imdario/mergo v0.3.11 // indirect
        golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0 // indirect
        golang.org/x/net v0.0.0-20201010224723-4f7140c49acb // indirect
        golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43 // indirect
        golang.org/x/sys v0.0.0-20201009025420-dfb3f7c4e634 // indirect
        golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e // indirect
        k8s.io/api v0.19.2
        k8s.io/apimachinery v0.19.2
        //k8s.io/client-go v11.0.0+incompatible
        k8s.io/klog v1.0.0 // indirect
        k8s.io/klog/v2 v2.3.0 // indirect
        k8s.io/utils v0.0.0-20201005171033-6301aaf42dc7 // indirect
)

require k8s.io/client-go v0.19.2

3. Output

[root@xxx-wsl ~/client-go-example] go run main.go
Prepare config object.
Init RESTClient.
Get Pods in cluster.
Print all listed pods.
NAMESPACE: kube-system NAME: coredns-f9fd979d6-rhzfd     STATUS: Running
NAMESPACE: kube-system NAME: coredns-f9fd979d6-whrj2     STATUS: Running
NAMESPACE: kube-system NAME: etcd-kind-control-plane     STATUS: Running
NAMESPACE: kube-system NAME: etcd-kind-control-plane2    STATUS: Running
NAMESPACE: kube-system NAME: etcd-kind-control-plane3    STATUS: Running
NAMESPACE: kube-system NAME: kindnet-bpsfl       STATUS: Running
NAMESPACE: kube-system NAME: kindnet-ks6zv       STATUS: Running
NAMESPACE: kube-system NAME: kindnet-pm6zl       STATUS: Running
NAMESPACE: kube-system NAME: kindnet-qfhqt       STATUS: Running
NAMESPACE: kube-system NAME: kindnet-s7qqn       STATUS: Running
NAMESPACE: kube-system NAME: kindnet-trk5l       STATUS: Running
NAMESPACE: kube-system NAME: kube-apiserver-kind-control-plane   STATUS: Running
NAMESPACE: kube-system NAME: kube-apiserver-kind-control-plane2          STATUS: Running
NAMESPACE: kube-system NAME: kube-apiserver-kind-control-plane3          STATUS: Running
NAMESPACE: kube-system NAME: kube-controller-manager-kind-control-plane          STATUS: Running
NAMESPACE: kube-system NAME: kube-controller-manager-kind-control-plane2         STATUS: Running
NAMESPACE: kube-system NAME: kube-controller-manager-kind-control-plane3         STATUS: Running
NAMESPACE: kube-system NAME: kube-proxy-7gz67    STATUS: Running
NAMESPACE: kube-system NAME: kube-proxy-bvbkk    STATUS: Running
NAMESPACE: kube-system NAME: kube-proxy-clf72    STATUS: Running
NAMESPACE: kube-system NAME: kube-proxy-d8zpb    STATUS: Running
NAMESPACE: kube-system NAME: kube-proxy-dsmsj    STATUS: Running
NAMESPACE: kube-system NAME: kube-proxy-fplkk    STATUS: Running
NAMESPACE: kube-system NAME: kube-scheduler-kind-control-plane   STATUS: Running
NAMESPACE: kube-system NAME: kube-scheduler-kind-control-plane2          STATUS: Running
NAMESPACE: kube-system NAME: kube-scheduler-kind-control-plane3          STATUS: Running

4. Call analysis

Simplified call graph (figure omitted)

Detailed call graph (the image is large; please wait for it to load, or download it locally from here)
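
In short: restClient.Get() returns a *rest.Request; the chained Namespace/Resource/VersionedParams calls build up the request path and query string; Do() sends the HTTP request to the API server; and Into() decodes the response body into the PodList. One quick way to see what the chain constructs, without reading the call graphs, is to print the request URL before executing it. A minimal sketch, assuming the same restClient and imports as in main.go above and using rest.Request's URL() method:

req := restClient.
        Get().
        Namespace("kube-system").
        Resource("pods").
        VersionedParams(&metav1.ListOptions{Limit: 500}, scheme.ParameterCodec)

// Prints something like (host and port depend on your kubeconfig):
// https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods?limit=500
fmt.Println(req.URL().String())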

