RBAC Authorization in K8S
Kubernetes has used Role-Based Access Control (RBAC) as its default authorization mode since version 1.6.
Compared with authorization mechanisms such as ABAC (attribute-based access control) and Webhook:
- it provides complete coverage of permissions on the resources in the cluster;
- permissions can be adjusted dynamically, without restarting the apiserver.
(Figure: role-based access control)
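As a quick illustration of how an RBAC grant is expressed (a hypothetical example, not one of this cluster's objects): a Role lists the verbs allowed on a set of resources, and a RoleBinding attaches that Role to a subject such as a ServiceAccount.
# Hypothetical: allow the service account "app-reader" to read Pods in namespace "demo"
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: app-reader
  namespace: demo
A ClusterRole/ClusterRoleBinding pair works the same way, only cluster-wide instead of per namespace.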
View the accounts (cluster roles):
[root@hdss7-21 ~]# kubectl get clusterrole
NAME AGE
admin 19d
cluster-admin 19d
edit 19d
system:aggregate-to-admin 19d
system:aggregate-to-edit 19d
system:aggregate-to-view 19d
system:auth-delegator 19d
system:basic-user 19d
system:certificates.k8s.io:certificatesigningrequests:nodeclient 19d
system:certificates.k8s.io:certificatesigningrequests:selfnodeclient 19d
system:controller:attachdetach-controller 19d
system:controller:certificate-controller 19d
system:controller:clusterrole-aggregation-controller 19d
system:controller:cronjob-controller 19d
system:controller:daemon-set-controller 19d
system:controller:deployment-controller 19d
system:controller:disruption-controller 19d
system:controller:endpoint-controller 19d
system:controller:expand-controller 19d
system:controller:generic-garbage-collector 19d
system:controller:horizontal-pod-autoscaler 19d
system:controller:job-controller 19d
system:controller:namespace-controller 19d
system:controller:node-controller 19d
system:controller:persistent-volume-binder 19d
system:controller:pod-garbage-collector 19d
system:controller:pv-protection-controller 19d
system:controller:pvc-protection-controller 19d
system:controller:replicaset-controller 19d
system:controller:replication-controller 19d
system:controller:resourcequota-controller 19d
system:controller:route-controller 19d
system:controller:service-account-controller 19d
system:controller:service-controller 19d
system:controller:statefulset-controller 19d
system:controller:ttl-controller 19d
system:coredns 22h
system:csi-external-attacher 19d
system:csi-external-provisioner 19d
system:discovery 19d
system:heapster 19d
system:kube-aggregator 19d
system:kube-controller-manager 19d
system:kube-dns 19d
system:kube-scheduler 19d
system:kubelet-api-admin 19d
system:node 19d
system:node-bootstrapper 19d
system:node-problem-detector 19d
system:node-proxier 19d
system:persistent-volume-provisioner 19d
system:public-info-viewer 19d
system:volume-scheduler 19d
traefik-ingress-controller 11h
view 19d
You can see that everything beginning with system: is a built-in cluster role, while traefik-ingress-controller is one we created ourselves.
Now look at the cluster-admin cluster role:
[root@hdss7-21 ~]# kubectl get clusterrole cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2020-08-03T13:43:20Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "40"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin
  uid: 49f5b99c-7cd6-4078-b2f8-ab64051de827
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
It has every verb on every API group, every resource, and every non-resource URL: the highest level of privilege in the Kubernetes cluster.
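To see who actually holds that power, look at the matching default binding; out of the box, cluster-admin is bound to the system:masters group (abbreviated output, roughly what you should expect to see):
[root@hdss7-21 ~]# kubectl get clusterrolebinding cluster-admin -o yaml
...
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters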
Go to the ops host hdss7-200
and create the certificate for the dashboard domain:
[root@hdss7-200 certs]# (umask 077;openssl genrsa -out dashboard.od.com.key 2048)
Generating RSA private key, 2048 bit long modulus
................................................+++
...................................+++
e is 65537 (0x10001)
[root@hdss7-200 certs]# openssl req -new -key dashboard.od.com.key -out dashboard.od.com.csr -subj "/CN=dashboard.od.com/C=CN/ST=BJ/L=Beijing/O=OldboyEdu/OU=ops"
We are signing a certificate for this domain, so the CN must match the domain name. When we issued certificates for the Kubernetes components earlier, the CN instead matched a component name or a cluster role (note: a cluster role, not a cluster user). For example, the client certificate signed for kube-proxy uses CN system:kube-proxy, one of the default Kubernetes cluster roles.
We already had a set of certificates, so why sign another one for kube-proxy? Because its CN maps to a default cluster role, no extra RoleBinding has to be created.
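If the CN did not match one of those built-in identities, you would have to create a binding yourself, roughly like this (a hypothetical sketch; the user and binding names are illustrative, and the user must equal the CN in the client certificate):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-proxy-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: my-kube-proxy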
[root@hdss7-200 certs]# openssl x509 -req -in dashboard.od.com.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out dashboard.od.com.crt -days 3650
Signature ok
subject=/CN=dashboard.od.com/C=CN/ST=BJ/L=Beijing/O=OldboyEdu/OU=ops
Getting CA Private Key
Inspect the certificate:
[root@hdss7-200 certs]# cfssl-certinfo -cert dashboard.od.com.crt
{
  "subject": {
    "common_name": "dashboard.od.com",
    "country": "CN",
    "organization": "OldboyEdu",
    "organizational_unit": "ops",
    "locality": "Beijing",
    "province": "BJ",
    "names": [
      "dashboard.od.com",
      "CN",
      "BJ",
      "Beijing",
      "OldboyEdu",
      "ops"
    ]
  },
  "issuer": {
    "common_name": "OldboyEdu",
    "country": "CN",
    "organization": "old",
    "organizational_unit": "ops",
    "locality": "beijing",
    "province": "beijing",
    "names": [
      "CN",
      "beijing",
      "beijing",
      "old",
      "ops",
      "OldboyEdu"
    ]
  },
  "serial_number": "16709093817431023485",
  "not_before": "2020-08-23T12:39:08Z",
  "not_after": "2030-08-21T12:39:08Z",
  "sigalg": "SHA256WithRSA",
  "authority_key_id": "",
  "subject_key_id": "",
"pem": "-----BEGIN CERTIFICATE-----\nMIIDRjCCAi4CCQDn4qBMYqbffTANBgkqhkiG9w0BAQsFADBhMQswCQYDVQQGEwJD\nTjEQMA4GA1UECBMHYmVpamluZzEQMA4GA1UEBxMHYmVpamluZzEMMAoGA1UEChMD\nb2xkMQwwCgYDVQQLEwNvcHMxEjAQBgNVBAMTCU9sZGJveUVkdTAeFw0yMDA4MjMx\nMjM5MDhaFw0zMDA4MjExMjM5MDhaMGkxGTAXBgNVBAMMEGRhc2hib2FyZC5vZC5j\nb20xCzAJBgNVBAYTAkNOMQswCQYDVQQIDAJCSjEQMA4GA1UEBwwHQmVpamluZzES\nMBAGA1UECgwJT2xkYm95RWR1MQwwCgYDVQQLDANvcHMwggEiMA0GCSqGSIb3DQEB\nAQUAA4IBDwAwggEKAoIBAQCpEMCZq2HWab2AQpLOB+qRQ+zFgs/ecmw1E85p6DsG\nNdO0N68m7/lOsZelpENdXFu1wTycAzvCci14oxWwnObnnhPclKyu2B0Fyzj5ojUO\nkXRA8iTwUg0hje2kt0krqeLr8OfN7DPeD+DIssAN8VetU9J7TLBnqbW418QVjrhU\nESSp1qr/iwVPZTjqmAvA8QgGMlzvrdQdtUfarIEChkOhF18qqtMhxwn15LpprBmw\nbs8llkOf0Q2zjUNwQIzDQIAzie4Im4dHGJqkfC5LacekrhlU15+i07/GyduLBS+B\n6gdGpn7BZijm+uB/EQbDvxVkHcwGKzOwsh811eY6D3lVAgMBAAEwDQYJKoZIhvcN\nAQELBQADggEBACKPyuWkrxeSw0Afbeg1piLcPrYci6X1REKL/piQcX/9K0NWCCUL\ntJUV1AfpeA34lQJVuUuBI9PcrLIr73XOwlv2tvACKqx77kRAAQB5s9+5yYByKaRt\nTycoIYQAFxzSRsjYr3FSvMP3aonaG8FnL4DdcPrBcT5PR82m6zUGDWZceZz3SeuY\nQTmna1AxScRVpBYRASAn61X3esi0gUOQxMUf9kQchS+00D/ZWD2jFHVH4UIVps1Z\n9PBZ5/h+kUCckxvoS8piZBcDY1hVH3evlCXlNzcNGd2mAXk5ySLXPwg5Ee+mcy8c\nQZh38KGnD/LixSBqcIp1JBVrq9OfO7ue9JQ=\n-----END CERTIFICATE-----\n"
}
Then we just need to copy the certificate and private key to the nginx host (hdss7-11):
[root@hdss7-11 ~]# cd /etc/nginx/
[root@hdss7-11 nginx]# mkdir certs
[root@hdss7-11 nginx]# cd certs
[root@hdss7-11 certs]# scp hdss7-200:/opt/certs/dashboard.od.com.crt .
[root@hdss7-11 certs]# scp hdss7-200:/opt/certs/dashboard.od.com.key .
[root@hdss7-11 certs]# ll
total 8
-rw-r--r-- 1 root root 1196 Aug 23 20:43 dashboard.od.com.crt
-rw------- 1 root root 1675 Aug 23 20:44 dashboard.od.com.key
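Optionally, confirm that the key and the certificate really belong together; the two digests below should be identical (a quick sanity check, not part of the original procedure):
[root@hdss7-11 certs]# openssl x509 -noout -modulus -in dashboard.od.com.crt | openssl md5
[root@hdss7-11 certs]# openssl rsa -noout -modulus -in dashboard.od.com.key | openssl md5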
We previously configured an nginx reverse proxy for this domain. Now that a certificate has been issued, traffic needs to go over HTTPS on port 443, and we terminate (offload) TLS at this load-balancing layer, so one more change is needed here.
[root@hdss7-11 conf.d]# vim dashboard.od.com.conf
Note the rewrite rule that redirects HTTP to HTTPS:
server {
    listen       80;
    server_name  dashboard.od.com;
    rewrite ^(.*)$ https://${server_name}$1 permanent;
}
server {
    listen       443 ssl;
    server_name  dashboard.od.com;
    ssl_certificate "certs/dashboard.od.com.crt";
    ssl_certificate_key "certs/dashboard.od.com.key";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}
Save and exit, then check and reload nginx:
[root@hdss7-11 conf.d]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@hdss7-11 conf.d]# nginx -s reload
Now visit https://dashboard.od.com. The browser warns that the connection is not secure: the certificate is self-signed rather than issued by a trusted CA, so the browser does not trust it. You can also see that the certificate was issued specifically for this domain.
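You can also check the served certificate from the command line (an optional verification; -k skips CA validation because the certificate is self-signed):
[root@hdss7-11 ~]# curl -kv https://dashboard.od.com/ 2>&1 | grep -E 'subject:|issuer:'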

View the token secret of the dashboard-admin service account:
[root@hdss7-21 ~]# kubectl get secret -n kube-system
[root@hdss7-21 ~]# kubectl describe secret kubernetes-dashboard-admin-token-ghtm2 -n kube-system
Name: kubernetes-dashboard-admin-token-ghtm2
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard-admin
kubernetes.io/service-account.uid: 1ebbb4f2-cfca-494e-a18e-e8164509f45b
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1346 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1naHRtMiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjFlYmJiNGYyLWNmY2EtNDk0ZS1hMThlLWU4MTY0NTA5ZjQ1YiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.AG3Zza-JRo8Ezvhep8uMh8qanN8risnwOyTaUe2GSI9228PoBP0kfqzuQ_EzhivVQDzAiOEQX4RXP3LEvyp1kyd3lyDGqIsOYcTlIs-1fc0JV0XOvGd4rPgG8RZ6nNScTli_YzIvmf6nMFOqgsUdeeFS09texvX9RH0Bto-1u6QKmJ2acU6r_XHN1MIxTZ8hgrImgFAgm7ERlDO7_POiKEM-LvhaYaIoY30395RtDdB25hu6MNUN1jNTp5YhMNhluAefc70D8cjy-1uTELOrseTll3q-k0YS0Jrv_N1zMN8nReRS14_zEvnOMJsRqHK9IxA51D9ovlK4wVhCrB22cQ
Copy the token into the dashboard login page.
You can now get into the dashboard pages; without logging in, they cannot be accessed.
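A convenient way to grab just the token, instead of copying it out of the describe output, is jsonpath plus base64 (optional; assumes the same secret name):
[root@hdss7-21 ~]# kubectl -n kube-system get secret kubernetes-dashboard-admin-token-ghtm2 -o jsonpath='{.data.token}' | base64 -d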
Then remember to do the same on hdss7-12:
[root@hdss7-12 ~]# cd /etc/nginx/
[root@hdss7-12 nginx]# mkdir certs
[root@hdss7-12 nginx]# cd certs
[root@hdss7-12 certs]# scp hdss7-200:/opt/certs/dashboard.od.com.crt .
[root@hdss7-12 certs]# scp hdss7-200:/opt/certs/dashboard.od.com.key .
[root@hdss7-12 certs]# cd ../conf.d/
[root@hdss7-12 conf.d]# vim dashboard.od.com.conf
server {
    listen       80;
    server_name  dashboard.od.com;
    rewrite ^(.*)$ https://${server_name}$1 permanent;
}
server {
    listen       443 ssl;
    server_name  dashboard.od.com;
    ssl_certificate "certs/dashboard.od.com.crt";
    ssl_certificate_key "certs/dashboard.od.com.key";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}
[root@hdss7-12 nginx]# nginx -t
[root@hdss7-12 nginx]# nginx -s reload
Swapping the dashboard image
This part is for reference only; it is optional.
[root@hdss7-200 certs]# docker pull hexun/kubernetes-dashboard-amd64:v1.10.1
[root@hdss7-200 certs]# docker images | grep hexun
hexun/kubernetes-dashboard-amd64 v1.10.1 f9aed6605b81 20 months ago 122MB
[root@hdss7-200 certs]# docker tag f9aed6605b81 harbor.od.com/public/dashboard:v1.10.1
[root@hdss7-200 certs]# docker push !$
[root@hdss7-200 certs]# cd /data/k8s-yaml/dashboard
[root@hdss7-200 dashboard]# vim dp.yaml
# Change the image line below to dashboard:v1.10.1
image: harbor.od.com/public/dashboard:v1.8.3
# Note: the following command is run on the hdss7-22 host
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dp.yaml
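To confirm which image the deployment is now running, you can query it with jsonpath; it should print harbor.od.com/public/dashboard:v1.10.1 once the rollout completes (a quick check, assuming the Deployment is named kubernetes-dashboard as in the official manifest):
[root@hdss7-22 ~]# kubectl -n kube-system get deployment kubernetes-dashboard -o jsonpath='{.spec.template.spec.containers[0].image}'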
You will find that after switching to 1.10, the login screen can no longer be skipped; you must authenticate before you can access the dashboard.
Each service account corresponds to exactly one service-account secret, and describe shows its contents, including the token:
kubectl describe secret kubernetes-dashboard-admin-token-ghtm2 -n kube-system
Back on the ops host, follow the official Kubernetes configuration and create (copy) the resource manifest. As you can see, it grants only the minimal permissions the dashboard needs.
vim rbac-minimal.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
On the hdss7-22 node:
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/rbac-minimal.yaml
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
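To see the difference in privileges between the two service accounts, you can impersonate them with kubectl auth can-i; the minimal account should answer no, the admin one yes (an optional check, not part of the original procedure):
[root@hdss7-22 ~]# kubectl auth can-i delete pods -n kube-system --as=system:serviceaccount:kube-system:kubernetes-dashboard
no
[root@hdss7-22 ~]# kubectl auth can-i delete pods -n kube-system --as=system:serviceaccount:kube-system:kubernetes-dashboard-admin
yes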
On the ops host, modify dp.yaml
so that it contains:
serviceAccountName: kubernetes-dashboard
This serviceAccountName is the minimal service account created above.
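For context, the field sits under the pod template's spec in dp.yaml, roughly like this (an abbreviated sketch; the container name and image line are illustrative, the rest of dp.yaml stays unchanged):
spec:
  template:
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: harbor.od.com/public/dashboard:v1.10.1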
Back on hdss7-22 again:
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dp.yaml
deployment.apps/kubernetes-dashboard configured
Then inspect the newly created secret:
[root@hdss7-22 ~]# kubectl describe secret kubernetes-dashboard-token-hsw9n -n kube-system
Name: kubernetes-dashboard-token-hsw9n
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard
kubernetes.io/service-account.uid: ac5e544d-7527-4e46-a8a7-1a77bb65c3a4
Type: kubernetes.io/service-account-token
Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1oc3c5biIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImFjNWU1NDRkLTc1MjctNGU0Ni1hOGE3LTFhNzdiYjY1YzNhNCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.YrZQz0MMy3ipP1HuKKMCUip9cA0h_c34dL70mz3tq-L0tauL0pxX94OL3VHyY6-6egCSrCVIAiEil4lMPu_hej1erhTuRMgbMALeJKgmwkrINmUb_pg2Io0KzZfxFmAPlU75Hhe1pOuZbh_haeiQKBBcrxLa2Cj-uffEG_F1LKPi3GEr0xj2krtRhn0zcRGb4c8IZ5d1jauXhyTLRiKXD3JzimJdr9kt437lLIfIs45Z21PsAy1fnBIb1arxiInrjomX20ouNo-vjKMN6LjNCeJyugMsmmXYndmhhO9rcp87CYjrzc0NI_roNAsN6Fw5xiHXbD7WFp_B_v_9zaqb_A
ca.crt: 1346 bytes
namespace: 11 bytes
Open the dashboard login page, enter this token, and log in to look around.
With this approach, in real production you can grant developers, testers and other staff different permissions, for example view-only access. Anyone who wants to log in must first obtain a token, and that token determines what the corresponding user is allowed to do.
In one sentence: Kubernetes controls permissions based on roles.
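For instance, a view-only dashboard login could be built by binding the built-in view cluster role to a dedicated service account (a hypothetical sketch; the names are illustrative and not objects from this cluster):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-viewer
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: dashboard-viewer
  namespace: kube-system
The token of that service account would then only allow read operations in the dashboard.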
Bonus: deploying heapster
On the ops host hdss7-200:
docker pull quay.io/bitnami/heapster:1.5.4
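The dp.yaml below references harbor.od.com/public/heapster:v1.5.4, so presumably the pulled image is re-tagged and pushed to the private registry first, the same way the dashboard image was (these two commands are an assumption based on that pattern):
docker tag quay.io/bitnami/heapster:1.5.4 harbor.od.com/public/heapster:v1.5.4
docker push harbor.od.com/public/heapster:v1.5.4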
mkdir heapster
cd heapster
heapster]# vi rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
heapster]# vi dp.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: harbor.od.com/public/heapster:v1.5.4
        imagePullPolicy: IfNotPresent
        command:
        - /opt/bitnami/heapster/bin/heapster
        - --source=kubernetes:https://kubernetes.default
heapster]# vi svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/rbac.yaml
kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/dp.yaml
kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/svc.yaml
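Once these are applied (from any node with kubectl, e.g. hdss7-22), check that the heapster pod comes up; after a few minutes the dashboard should start showing CPU and memory graphs (a simple verification step, added here as a suggestion):
kubectl -n kube-system get pods -l k8s-app=heapster
kubectl -n kube-system logs deployment/heapster --tail=20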