Manually Deploying a Single-Node Kubernetes Cluster


Overview

Kubernetes is notoriously complex to install because of its many components. To address this, open-source communities and commercial vendors have released integrated installers for Kubernetes, including the official kubeadm and minikube, Ubuntu's conjure-up, and Rancher. Personally, though, I am not fond of deploying applications with integrated installers: they lower the cost of deployment, but they raise the cost of maintenance and get in the way of understanding how the software actually works. I would rather automate deployments by writing my own Salt or Ansible configuration, which in turn requires a thorough understanding of each component.

In this document we deploy a single-node Kubernetes 1.11.1 cluster entirely by hand.

The components to be installed:

  • etcd
  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
  • kubelet
  • kube-proxy
  • coredns
  • dashboard
  • heapster + influxdb + grafana

Environment

ip               system        role
192.168.198.135  ubuntu 18.04  etcd, master, node

Deployment

Generate the certificates

Certificate types

certificate     config files                         purpose
ca.pem          ca-csr.json                          CA root certificate
kube-proxy.pem  ca-config.json, kube-proxy-csr.json  certificate used by kube-proxy
admin.pem       ca-config.json, admin-csr.json       certificate used by kubectl
kubernetes.pem  ca-config.json, kubernetes-csr.json  certificate used by the apiserver

Install the cfssl certificate tools

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

Generate the CA certificate

Create a directory to hold the certificates:

mkdir /root/ssl/

Create /root/ssl/ca-config.json with the following content:

{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

Field notes:

  • ca-config.json: multiple profiles can be defined, each with its own expiry, usages, and other parameters; a specific profile is then selected when signing a certificate. Only one profile, kubernetes, is defined here; etcd runs without certificates in this setup, so no etcd profile is needed.
  • signing: the certificate can be used to sign other certificates; the generated ca.pem will contain CA=TRUE.
  • server auth: a client may use this CA to verify certificates presented by servers.
  • client auth: a server may use this CA to verify certificates presented by clients.
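As a side note, the 175200h expiry used above equals 20 × 365 days, i.e. roughly 20 years, which shell arithmetic confirms:

```shell
# 175200 hours -> days -> years
echo $(( 175200 / 24 / 365 ))
```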

Create /root/ssl/ca-csr.json with the following content:

{
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Wuhan",
            "ST": "Hubei",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

Generate the CA certificate:

cfssl gencert --initca=true ca-csr.json | cfssljson --bare ca

This produces ca.pem, ca-key.pem, and ca.csr in the current directory.

Generate the certificate used by the Kubernetes master node

Create /root/ssl/kubernetes-csr.json with the following content:

{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "localhost",
        "192.168.198.135",
        "10.254.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Hubei",
            "L": "Wuhan",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

  • The hosts field lists the IPs and domain names authorized to use this certificate. Since this certificate will be used by the Kubernetes master node, it lists the master's IP and hostnames, plus 10.254.0.1, the cluster IP that the kubernetes service will receive (the first address of the service CIDR 10.254.0.0/16).
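cfssl gives unhelpful errors on malformed JSON (a missing comma in a hosts list is easy to type), so it can be worth validating every CSR file before signing. A minimal sketch; the sample file here is only illustrative, in practice run the loop over /root/ssl/*.json:

```shell
cd "$(mktemp -d)"
# Illustrative stand-in for a real CSR file:
cat > sample-csr.json <<'EOF'
{"CN": "kubernetes", "hosts": ["127.0.0.1", "192.168.198.135"]}
EOF
# Flag any file that does not parse as JSON:
for f in *.json; do
  if python3 -m json.tool "$f" > /dev/null 2>&1; then
    echo "$f: valid JSON"
  else
    echo "$f: INVALID JSON"
  fi
done
```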

Generate the kubernetes certificate:

cfssl gencert --ca ca.pem --ca-key ca-key.pem --config ca-config.json --profile kubernetes kubernetes-csr.json | cfssljson --bare kubernetes

Generate the kubectl certificate

Create /root/ssl/admin-csr.json with the following content:

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

  • kube-apiserver extracts the CN as the client's username (here admin) and O as the user's group (here system:masters).
  • kube-apiserver later uses RBAC to authorize requests from clients (kubelet, kube-proxy, pods, and so on).
  • The apiserver predefines several ClusterRoleBindings for RBAC; for example, cluster-admin binds the group system:masters to the ClusterRole cluster-admin, which grants full access to the apiserver, so the admin user becomes the cluster's superuser.

Generate the kubectl certificate:

cfssl gencert --ca ca.pem --ca-key ca-key.pem --config ca-config.json --profile kubernetes admin-csr.json | cfssljson --bare admin

Generate the kube-proxy certificate

Create /root/ssl/kube-proxy-csr.json with the following content:

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

  • CN sets the user of this certificate to system:kube-proxy.
  • A predefined kube-apiserver RoleBinding binds the user system:kube-proxy to the role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs.

Generate the kube-proxy certificate:

cfssl gencert --ca ca.pem --ca-key ca-key.pem --config ca-config.json --profile kubernetes kube-proxy-csr.json | cfssljson --bare kube-proxy

Verify the certificates, taking kubernetes.pem as an example:

cfssl-certinfo -cert kubernetes.pem

The output should look like this:

{
  "subject": {
    "common_name": "kubernetes",
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "System",
    "locality": "Wuhan",
    "province": "Hubei",
    "names": [
      "CN",
      "Hubei",
      "Wuhan",
      "k8s",
      "System",
      "kubernetes"
    ]
  },
  "issuer": {
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "System",
    "locality": "Wuhan",
    "province": "Hubei",
    "names": [
      "CN",
      "Hubei",
      "Wuhan",
      "k8s",
      "System"
    ]
  },
  "serial_number": "604093852911522162752840982392649683093741969960",
  "sans": [
    "localhost",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "127.0.0.1",
    "192.168.198.135",
    "10.254.0.1"
  ],
  "not_before": "2018-08-10T05:15:00Z",
  "not_after": "2038-08-05T05:15:00Z",
  "sigalg": "SHA256WithRSA",
  "authority_key_id": "DA:57:77:4C:1F:35:8E:FE:F9:15:2:7A:25:BB:77:DC:3C:36:8A:84",
  "subject_key_id": "C2:6A:A6:75:AA:DC:4F:4A:75:D1:4C:60:B3:DF:56:68:34:A:39:15",
  "pem": "-----BEGIN CERTIFICATE-----\nMIIEZzCCA0+gAwIBAgIUadCBVqjv5DTo7Hjm80AKo3XxIigwDQYJKoZIhvcNAQEL\nBQAwTDELMAkGA1UEBhMCQ04xDjAMBgNVBAgTBUh1YmVpMQ4wDAYDVQQHEwVXdWhh\nbjEMMAoGA1UEChMDazhzMQ8wDQYDVQQLEwZTeXN0ZW0wHhcNMTgwODEwMDUxNTAw\nWhcNMzgwODA1MDUxNTAwWjBhMQswCQYDVQQGEwJDTjEOMAwGA1UECBMFSHViZWkx\nDjAMBgNVBAcTBVd1aGFuMQwwCgYDVQQKEwNrOHMxDzANBgNVBAsTBlN5c3RlbTET\nMBEGA1UEAxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC\nggEBALfgHIhOVEinORPv6ijwWeJxxM6uDORM/kWl0nwHpn3TQZF3jOsQVSs3LOuD\n6SlkRWtsYAGwm0SnEhEDefCn5xmbHSW0YmWYoBE2BerlwLqmaS2eRy4vjCHgkreb\nL+K5rRN+ZW5NOsegrCxzT3h1WWBVEmG/HztwQvrGP9mRbyfI1/pwC7iqoeAzYPx6\nuCPReRFpDpwLb8ESFtMLyUWXaXs2j1csTDTadDdigEA9UmabVYfcycw4mGXr0CcV\n+oqBz2sGGZWY52SZ7FlOS5adydplda7Jpz2C4TMu7XAjW6zvRVxup31W4pDCwjFh\n2InQUJU1dcMi5gZhEhKK3flHKDECAwEAAaOCASowggEmMA4GA1UdDwEB/wQEAwIF\noDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAd\nBgNVHQ4EFgQUwmqmdarcT0p10Uxgs99WaDQKORUwHwYDVR0jBBgwFoAU2ld3TB81\njv75FQJ6Jbt33Dw2ioQwgaYGA1UdEQSBnjCBm4IJbG9jYWxob3N0ggprdWJlcm5l\ndGVzghJrdWJlcm5ldGVzLmRlZmF1bHSCFmt1YmVybmV0ZXMuZGVmYXVsdC5zdmOC\nHmt1YmVybmV0ZXMuZGVmYXVsdC5zdmMuY2x1c3RlcoIka3ViZXJuZXRlcy5kZWZh\ndWx0LnN2Yy5jbHVzdGVyLmxvY2FshwR/AAABhwTAqMaHhwQK/gABMA0GCSqGSIb3\nDQEBCwUAA4IBAQAa20xc+o6H9qyADwsP9Exv5xpvfUMtuQwGY2LdAB6c02hOPy2Q\nDSRBPFfD6UrM3psFnNZUqnnnsylQ9Y9ib5dGfqKJAbaN6eltEd994TKS3/+FtvP3\nIfByaT1YYI0RSOAs/37qEHv8aTfLSMDK+41+Ruch2a40K5xd1o8q3rUY9EgM9Vc+\nQ2uHmc1D9+7b/VE1VrbW3u/TNrcV4uVsRJrY40ugD4X170C8xyryaInrXg/70kmS\ntKwEdLr6l6dWb8yZpITADAhoOgRPmok6h37gfe9ef2RcY1Q646prMVOmYOd0jNij\ncyZYDvOd1FKg5JhqlRFtaSS7RHcebys3v/Dx\n-----END CERTIFICATE-----\n"
}

Copy all certificates to /etc/kubernetes/ssl:

mkdir /etc/kubernetes/ssl
cp /root/ssl/*.pem /etc/kubernetes/ssl/

Generate the token and kubeconfig files

This setup enables certificate authentication, token authentication, and HTTP basic authentication at the same time, so the token file, the basic-auth file, and the kubeconfig files must be generated in advance.

All of the following operations are performed directly in /etc/kubernetes.

Generate the token file

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > bootstrap-token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
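The token produced above is 16 random bytes printed as 32 hex characters. A quick sanity check, not part of the original procedure:

```shell
# Same pipeline as above: 16 random bytes -> hex words, spaces stripped.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "length: ${#BOOTSTRAP_TOKEN}"
case "$BOOTSTRAP_TOKEN" in
  *[!0-9a-f]*) echo "unexpected characters" ;;
  *)           echo "looks like valid hex" ;;
esac
```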

Generate the HTTP basic auth file; each line has the format password,user,uid:

cat > basic-auth.csv <<EOF
admin,admin,1
EOF

Generate the bootstrap.kubeconfig file used for kubelet authentication

export KUBE_APISERVER="https://192.168.198.135:6443"
# Set the cluster parameters, i.e. how to reach the apiserver; the cluster is named kubernetes
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
  
# Set the client credentials; token authentication is used here
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the context, tying the kubelet-bootstrap user to the kubernetes cluster
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
  
# Use this context by default
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
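The four commands above assemble a file with the standard kubeconfig layout; the result is expected to look roughly like this (certificate and token values abbreviated, shown only to illustrate what each command contributed):

```yaml
apiVersion: v1
kind: Config
clusters:            # from set-cluster
- name: kubernetes
  cluster:
    certificate-authority-data: <base64 of ca.pem, embedded by --embed-certs>
    server: https://192.168.198.135:6443
users:               # from set-credentials
- name: kubelet-bootstrap
  user:
    token: <BOOTSTRAP_TOKEN>
contexts:            # from set-context
- name: default
  context:
    cluster: kubernetes
    user: kubelet-bootstrap
current-context: default   # from use-context
```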

Generate the kube-proxy.kubeconfig file used by kube-proxy

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
# Set the client credentials
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
# Use this context by default
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Deploy etcd

All component binaries have been downloaded in advance and copied to /usr/local/bin, so only the systemd unit files for each service need to be configured here.

Create the etcd unit file /etc/systemd/system/etcd.service with the following content:

[Unit]
Description=Etcd
After=network.target
Before=flanneld.service

[Service]
User=root
ExecStart=/usr/local/bin/etcd \
-name etcd1 \
-data-dir /var/lib/etcd \
--advertise-client-urls http://192.168.198.135:2379,http://127.0.0.1:2379 \
--listen-client-urls http://192.168.198.135:2379,http://127.0.0.1:2379
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start etcd:

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

Deploy the master

kube-apiserver

Create the kube-apiserver unit file /etc/systemd/system/kube-apiserver.service with the following content:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,NodeRestriction \
  --apiserver-count=3 \
  --bind-address=192.168.198.135 \
  --insecure-bind-address=127.0.0.1 \
  --insecure-port=8080 \
  --secure-port=6443 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/etc/kubernetes/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/etc/kubernetes/bootstrap-token.csv \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --etcd-servers=http://192.168.198.135:2379 \
  --etcd-quorum-read=true \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=true
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start kube-apiserver:

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

kube-controller-manager

Create the kube-controller-manager unit file /etc/systemd/system/kube-controller-manager.service with the following content:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --cluster-name=kubernetes \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --node-monitor-grace-period=40s \
  --node-monitor-period=5s \
  --pod-eviction-timeout=5m0s \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=false \
  --leader-elect=true \
  --v=2 \
  --logtostderr=true

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Start the kube-controller-manager service:

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

kube-scheduler

Create the kube-scheduler unit file /etc/systemd/system/kube-scheduler.service with the following content:

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2 \
  --logtostderr=true

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Start kube-scheduler:

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

Configure RBAC authorization

# Bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role

/usr/local/bin/kubectl create clusterrolebinding kubelet-bootstrap-clusterbinding --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

# Bind the system:nodes group to the system:node cluster role

/usr/local/bin/kubectl create clusterrolebinding kubelet-node-clusterbinding --clusterrole=system:node --group=system:nodes

Deploy the node

docker

Install:

# Step 1: install prerequisite system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Step 2: install the GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: add the package repository
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: update and install Docker CE
sudo apt-get -y update
sudo apt-get -y install docker-ce

Configure:

Edit /etc/docker/daemon.json to contain:

{
  "registry-mirrors": ["http://5dd4061a.m.daocloud.io"]
}

Start:

systemctl start docker
systemctl enable docker

kubelet

Create the kubelet unit file /etc/systemd/system/kubelet.service with the following content:

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
#WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --address=192.168.198.135 \
  --hostname-override=192.168.198.135 \
  --cgroup-driver=cgroupfs \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --cluster-dns=10.254.0.100 \
  --cluster-domain=cluster.local. \
  --hairpin-mode=promiscuous-bridge \
  --allow-privileged=true \
  --fail-swap-on=false \
  --serialize-image-pulls=false \
  --max-pods=30 \
  --logtostderr=true \
  --v=2 
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
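Note that --cluster-dns (10.254.0.100 here) must be an address inside the apiserver's --service-cluster-ip-range (10.254.0.0/16), since the DNS service will claim that cluster IP. A quick check, assuming python3 is available on the node:

```shell
# Prints True if the DNS cluster IP falls inside the service CIDR.
python3 -c 'import ipaddress as ip; print(ip.ip_address("10.254.0.100") in ip.ip_network("10.254.0.0/16"))'
```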

Start the kubelet:

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

Issue the kubelet's certificate on the master:

# After the node starts, kubectl get nodes on the master will not show it yet: the node first submits a certificate signing request, and only after the master approves it does the node join the cluster, as follows:

# List CSRs
➜  kubectl get csr
NAME        AGE       REQUESTOR           CONDITION
csr-l9d25   2m        kubelet-bootstrap   Pending

# Approve the CSR
➜  kubectl certificate approve csr-l9d25
certificatesigningrequest "csr-l9d25" approved

Now kubectl get nodes on the master shows one node:

# List nodes
➜  kubectl get node
NAME          STATUS    AGE       VERSION
192.168.198.135   Ready     3m        v1.11.1

kube-proxy

Create the kube-proxy unit file /etc/systemd/system/kube-proxy.service with the following content:

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
#WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --bind-address=192.168.198.135 \
  --hostname-override=192.168.198.135 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --v=2 \
  --cluster-cidr=10.254.0.0/16

Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start kube-proxy:

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

If kube-proxy starts normally but service networking does not work when running applications, and the log shows the following error:

Failed to delete stale service IP 10.254.0.100 connections, error: error deleting connection tracking state for UDP service IP: 10.254.0.100, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH

then install the following packages:

apt install conntrack conntrackd nfct

On CentOS 7, install instead:

yum install conntrack-tools

Install add-ons

Create the /root/k8s-yamls directory; all of the following operations are performed there.

coredns

Create the /root/k8s-yamls/coredns directory:

mkdir -p /root/k8s-yamls/coredns

Fetch the sample coredns manifest:

cd /root/k8s-yamls/coredns

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/coredns/coredns.yaml.sed

mv coredns.yaml.sed coredns.yaml

Make three changes in coredns.yaml:

  • replace $DNS_DOMAIN with cluster.local
  • replace $DNS_SERVER_IP with 10.254.0.100
  • replace k8s.gcr.io/coredns:1.1.3 with registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3
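The three substitutions can also be scripted with sed instead of hand-editing. A sketch; the sample file below stands in for the real coredns.yaml, which can be edited in place the same way:

```shell
cd "$(mktemp -d)"
# Three representative lines from the manifest (illustrative only):
cat > coredns.yaml <<'EOF'
    kubernetes $DNS_DOMAIN in-addr.arpa ip6.arpa
  clusterIP: $DNS_SERVER_IP
        image: k8s.gcr.io/coredns:1.1.3
EOF
# Apply all three replacements in place:
sed -i \
  -e 's/\$DNS_DOMAIN/cluster.local/g' \
  -e 's/\$DNS_SERVER_IP/10.254.0.100/g' \
  -e 's#k8s.gcr.io/coredns:1.1.3#registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3#g' \
  coredns.yaml
cat coredns.yaml
```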

Start coredns:

kubectl apply -f coredns.yaml

dashboard

Start the dashboard:

# Note: the image address in the manifest must be changed to a domestic mirror; we use the Aliyun registry
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

kubectl create -f kubernetes-dashboard.yaml

Create an authorized user for the dashboard:

# The basic-auth.csv file above defined a user named admin, used for HTTP basic authentication when accessing the dashboard. Grant it permissions via RBAC:

kubectl create clusterrolebinding login-on-dashboard-with-cluster-admin --clusterrole=cluster-admin --user=admin
clusterrolebinding "login-on-dashboard-with-cluster-admin" created


Create an authorized serviceaccount for the dashboard:

# Accessing the dashboard also requires token authentication; create a serviceaccount named admin-user and grant it cluster-admin:

Create admin-user.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

Create admin-user.rbac.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

kubectl create -f admin-user.yaml
kubectl create -f admin-user.rbac.yaml

Get the token of the admin-user serviceaccount, used for token authentication:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Note: the Kubernetes API server's --anonymous-auth option allows anonymous requests to the secure port. Requests not rejected by any other authentication method become anonymous requests, with username "system:anonymous" and group "system:unauthenticated", and the option defaults to true. As a result, Chrome may never show the username/password dialog when opening the dashboard UI, and authorization then fails. To make sure the dialog appears, set --anonymous-auth=false (as is already done in the kube-apiserver unit above).

Access the dashboard:

https://192.168.198.135:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

heapster + influxdb + grafana

# Note: the image source must be changed to a domestic mirror in all of these files
wget https://raw.githubusercontent.com/kubernetes/heapster/release-1.5/deploy/kube-config/influxdb/grafana.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/release-1.5/deploy/kube-config/rbac/heapster-rbac.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/release-1.5/deploy/kube-config/influxdb/heapster.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/release-1.5/deploy/kube-config/influxdb/influxdb.yaml

kubectl create -f ./

