APIServer Security Controls
- Authentication: verifying identity
  - The input to this stage is the entire HTTP request. This stage verifies the identity of the client making the request. Supported methods include:
    - basic auth
    - client certificate verification (mutual TLS over HTTPS)
    - JWT token (used for ServiceAccounts)
  - When the APIServer starts, one or more authentication methods can be specified. If multiple methods are configured, the APIServer tries them one by one; as soon as the request passes any one of them, authentication is considered successful.
  - In a k8s cluster bootstrapped with kubeadm, the apiserver's initial configuration supports both client certificate and ServiceAccount authentication by default. Certificate authentication is enabled by setting --client-ca-file (the CA certificate) together with --tls-cert-file and --tls-private-key-file.
  - In this stage, the apiserver identifies the requesting user from the client certificate or from HTTP header fields (for example, a ServiceAccount's JWT token), extracting attributes such as "user" and "group" that are used later in the authorization stage.
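The ServiceAccount JWT token mentioned above consists of three base64url-encoded segments (header, payload, signature), so the identity claims it carries can be inspected offline. A minimal Python sketch, decoding only and not verifying the signature; the token built here is a toy with illustrative claim values:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying its signature."""
    payload_b64 = token.split(".")[1]
    # JWT uses base64url without padding; restore the padding before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def b64url(obj: dict) -> str:
    """Encode a dict the way JWT segments are encoded (base64url, no padding)."""
    raw = base64.urlsafe_b64encode(json.dumps(obj).encode())
    return raw.rstrip(b"=").decode()

# A toy token with the same claim structure the apiserver issues for
# ServiceAccounts (a real token is signed with the service-account key).
claims = {
    "iss": "kubernetes/serviceaccount",
    "kubernetes.io/serviceaccount/namespace": "kubernetes-dashboard",
    "sub": "system:serviceaccount:kubernetes-dashboard:admin",
}
token = ".".join([b64url({"alg": "RS256"}), b64url(claims), "sig"])
print(decode_jwt_payload(token)["sub"])
# system:serviceaccount:kubernetes-dashboard:admin
```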
- Authorization: deciding which resources you may access
  - The input to this stage is the set of attributes in the HTTP request context, including: user, group, request path (e.g. /api/v1, /healthz, /version) and request verb (e.g. get, list, create).
  - The APIServer compares these attribute values against preconfigured access policies. It supports multiple authorization modes, including Node, RBAC and Webhook.
  - When the APIServer starts, one or more authorization modes can be specified. With multiple modes, the request is authorized as soon as any one mode grants it. In k8s clusters bootstrapped by recent versions of kubeadm, the apiserver's default authorization-mode is "Node,RBAC".
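The "pass any one mode" behavior can be pictured as a chain of authorizers, each returning allow, deny, or no opinion, where the first definite decision wins. A rough Python sketch; this is not the real kube-apiserver code, and the two example authorizers only caricature the Node and RBAC modes:

```python
def authorize(request: dict, authorizers) -> bool:
    """Run authorizers in order; the first allow/deny decision wins, default deny."""
    for mode in authorizers:
        decision = mode(request)
        if decision == "allow":
            return True
        if decision == "deny":
            return False
    return False  # no authorizer had an opinion: the request is denied

def node_mode(req):
    # Caricature of the Node authorizer: kubelets accessing node resources
    if "system:nodes" in req["groups"] and req["path"].startswith("/api/v1/nodes"):
        return "allow"
    return "no-opinion"

def rbac_mode(req):
    # Caricature of RBAC: cluster-admin is bound to the system:masters group
    if "system:masters" in req["groups"]:
        return "allow"
    return "no-opinion"

req = {"user": "kubernetes-admin", "groups": ["system:masters"],
       "path": "/api/v1/pods", "verb": "list"}
print(authorize(req, [node_mode, rbac_mode]))  # True
```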
- Admission Control: a chain of admission controllers (successive checkpoints) that intercept requests; oriented toward cluster security and management.
  - Why is it needed?
    - Authentication and authorization only look at the HTTP request headers and certificates; they cannot validate anything in the request body.
    - Admission controllers run inside the API Server's create/update/delete handlers, so they can naturally operate on API resources.
  - Examples
    - NamespaceLifecycle: ensures that a Namespace in Terminating state no longer accepts requests to create new objects, and rejects requests against Namespaces that do not exist. It also prevents deletion of the system-reserved Namespaces default, kube-system and kube-public.
    - LimitRanger: if a namespace has a LimitRange object and a Pod is declared without resource values, the defaults from the LimitRange are applied to the Pod:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: luffy
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
---
apiVersion: v1
kind: Pod
metadata:
  name: default-mem-demo-2
spec:
  containers:
  - name: default-mem-demo-2-ctr
    image: nginx:alpine
```

    - NodeRestriction: limits a kubelet's ability to modify Node and Pod objects; such kubelets may only modify their own Node object and the Pod objects bound to their Node. Later versions may add further restrictions. This plugin is enabled by default when the Node authorization mode is on.
  - How to use it?
    - When starting the APIServer, pass --enable-admission-plugins and --disable-admission-plugins to specify which Admission Controllers to turn on or off.
  - Scenarios
    - Automatically injecting sidecar or initContainer containers
    - Webhook admission, to implement custom, business-specific controls
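The chain-of-checkpoints idea can be sketched as a pipeline of plugins, each of which may mutate the object or reject the request before it is persisted. A toy Python illustration; the two plugins loosely mimic NamespaceLifecycle and LimitRanger and are not the real implementations:

```python
class AdmissionError(Exception):
    """Raised when an admission controller rejects the request."""

TERMINATING = {"doomed-ns"}  # pretend these namespaces are in Terminating state

def namespace_lifecycle(obj: dict) -> dict:
    # Validating step: refuse object creation in a terminating namespace
    if obj["namespace"] in TERMINATING:
        raise AdmissionError(f"namespace {obj['namespace']} is terminating")
    return obj

def limit_ranger(obj: dict) -> dict:
    # Mutating step: fill in a default memory request when none was declared
    obj.setdefault("memory_request", "256Mi")
    return obj

def admit(obj: dict, plugins) -> dict:
    """Pass the object through every checkpoint in order."""
    for plugin in plugins:
        obj = plugin(obj)
    return obj

pod = admit({"name": "demo", "namespace": "luffy"},
            [namespace_lifecycle, limit_ranger])
print(pod["memory_request"])  # 256Mi
```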
- kubectl authentication and authorization
kubectl log verbosity levels:
Verbosity | Description |
---|---|
v=0 | Generally useful for this to always be visible to an operator. |
v=1 | A reasonable default log level if you don't want verbose output. |
v=2 | Useful steady-state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems. |
v=3 | Extended information about changes. |
v=4 | Debug-level information. |
v=6 | Display requested resources. |
v=7 | Display HTTP request headers. |
v=8 | Display HTTP request contents. |
v=9 | Display HTTP request contents without truncation. |
$ kubectl get nodes -v=7
I0329 20:20:08.633065 3979 loader.go:359] Config loaded from file /root/.kube/config
I0329 20:20:08.633797 3979 round_trippers.go:416] GET https://192.168.136.10:6443/api/v1/nodes?limit=500
After kubeadm init finishes bringing up the master node, it prints output similar to the following by default:
... ...
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
... ...
This output tells us how to set up the kubeconfig file. After configuring it with the commands above, kubectl on the master node can access the k8s cluster directly using the information in $HOME/.kube/config. Moreover, configured this way, kubectl holds administrator (root) privileges over the entire cluster.
- When kubectl accesses the cluster with this kubeconfig, how does the kube-apiserver authenticate (authentication) and authorize (authorization) the requests coming from kubectl?
- Why do requests from kubectl carry the highest, administrator-level privileges?
Look at the /root/.kube/config file:
As mentioned earlier, apiserver authentication supports verifying client requests via tls client certificate, basic auth, token and other methods. Judging from the kubeconfig contents, kubectl clearly uses the tls client certificate method, i.e. a client certificate, in its requests.
Base64-decode the certificate:
$ echo xxxxxxxxxxxxxx |base64 -d > kubectl.crt
This shows that in the authentication stage, the apiserver first uses the CA certificate configured via --client-ca-file to verify the validity of the certificate presented by kubectl. The basic check is equivalent to:
$ openssl verify -CAfile /etc/kubernetes/pki/ca.crt kubectl.crt
kubectl.crt: OK
Besides authenticating the identity, the apiserver also extracts the information needed by the authorization stage. View the certificate contents in text form:
$ openssl x509 -in kubectl.crt -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4736260165981664452 (0x41ba9386f52b74c4)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Feb 10 07:33:39 2020 GMT
Not After : Feb 9 07:33:40 2021 GMT
Subject: O=system:masters, CN=kubernetes-admin
...
After authentication succeeds, the CN (Common Name) specified when the certificate was signed, kubernetes-admin, is extracted as the request's User Name, and the O (Organization) field is extracted as the group the requesting user belongs to, group = system:masters. Both are then passed on to the authorization modules.
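The mapping from certificate subject to request identity can be sketched as a small parser over the openssl-style subject string. This is a simplification, assuming a flat "O=..., CN=..." format; real distinguished names can contain escaping and repeated fields:

```python
def identity_from_subject(subject: str) -> dict:
    """Map an 'O=..., CN=...' subject to the user/groups the apiserver derives."""
    fields = {}
    for part in subject.split(","):
        key, _, value = part.strip().partition("=")
        fields.setdefault(key, []).append(value)
    return {
        "user": fields["CN"][0],        # Common Name -> user name
        "groups": fields.get("O", []),  # Organization(s) -> groups
    }

print(identity_from_subject("O=system:masters, CN=kubernetes-admin"))
# {'user': 'kubernetes-admin', 'groups': ['system:masters']}
```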
During kubeadm init, when the cluster is first bootstrapped, many default RBAC rules are created. In the official k8s RBAC documentation we can see a list of default clusterroles:
The first of them, the cluster-admin cluster role binding, is bound to the system:masters group, which matches exactly the identity information passed along from the authentication stage. Following the cluster-admin clusterrolebinding that corresponds to the system:masters group, the full picture emerges.
Let's inspect this binding:
$ kubectl describe clusterrolebinding cluster-admin
Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:masters
We can see that a clusterrolebinding named cluster-admin binds the cluster-admin ClusterRole to the system:masters Group, granting every user belonging to the system:masters Group the permissions held by the cluster-admin role.
Next, look at the concrete permissions of the cluster-admin role:
$ kubectl describe clusterrole cluster-admin
Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
*.* [] [] [*]
[*] [] [*]
The non-resource URL rules cover endpoints such as the cluster health check.
RBAC
Role-Based Access Control. It is enabled by adding --authorization-mode=RBAC to the apiserver startup arguments; clusters installed by kubeadm have it enabled by default. See the official introduction.
Verify that it is enabled:
# on the master node, inspect the apiserver process
$ ps aux |grep apiserver
RBAC introduces 4 resource types:
- Role
  A Role can only grant access within a single namespace.

```yaml
## Example: define a role named pod-reader that can read pods in the default namespace
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
## apiGroups: "", "apps", "autoscaling", "batch"...   see: kubectl api-versions
## resources: "services", "pods", "deployments"...    see: kubectl api-resources
## verbs: "get", "list", "watch", "create", "update", "patch", "delete", "exec"
## https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/
```
- ClusterRole
  A ClusterRole can grant the same permissions as a Role, but it is cluster-scoped.

```yaml
## Define a cluster role named secret-reader that can read secret resources in all namespaces
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
```
- RoleBinding
  Grants the permissions defined in a role to users and groups. A RoleBinding contains subjects (users, groups, or service accounts) and a reference to the role being granted. Use a RoleBinding for authorization within a namespace and a ClusterRoleBinding for cluster-wide authorization.

```yaml
## Define a role binding that grants the pod-reader role to the User jane,
## allowing jane to read all pod data in the default namespace
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User # can be User, Group, or ServiceAccount
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role # can be Role or ClusterRole; if ClusterRole, the permissions are still limited to the rolebinding's namespace
  name: pod-reader # match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
```

Note: a rolebinding can bind either a role or a clusterrole. When it binds a clusterrole, the subject's permissions are still confined to the namespace of the rolebinding; to grant access across namespaces, use a clusterrolebinding.

```yaml
## Define a role binding that binds the user dave to the secret-reader cluster role.
## Although secret-reader is a cluster role, because it is bound via a rolebinding,
## dave's permissions are restricted to the development namespace.
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "dave" to read secrets in the "development" namespace.
# You need to already have a ClusterRole named "secret-reader".
kind: RoleBinding
metadata:
  name: read-secrets
  # The namespace of the RoleBinding determines where the permissions are granted.
  # This only grants permissions within the "development" namespace.
  namespace: development
subjects:
- kind: User
  name: dave # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
- kind: ServiceAccount
  name: dave # Name is case sensitive
  namespace: luffy
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```

Consider a scenario: multiple namespaces in the cluster are assigned to different administrators, and each namespace needs the same permissions. You can then define a single clusterrole and use one rolebinding per namespace to grant it to each administrator; otherwise you would have to define a Role in every namespace and do a rolebinding for each one.
- ClusterRoleBinding
  Grants permissions across namespaces.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group
# to read secrets in any namespace.
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: Group
  name: manager # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```
kubelet authentication and authorization
Inspect the kubelet process:
$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sun 2020-07-05 19:33:36 EDT; 1 day 12h ago
Docs: https://kubernetes.io/docs/
Main PID: 10622 (kubelet)
Tasks: 24
Memory: 60.5M
CGroup: /system.slice/kubelet.service
└─851 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf
Look at /etc/kubernetes/kubelet.conf and decode the certificate:
$ echo xxxxx |base64 -d >kubelet.crt
$ openssl x509 -in kubelet.crt -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 9059794385454520113 (0x7dbadafe23185731)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Feb 10 07:33:39 2020 GMT
Not After : Feb 9 07:33:40 2021 GMT
Subject: O=system:nodes, CN=system:node:master-1
This gives us the content we expect:
Subject: O=system:nodes, CN=system:node:master-1
We know that k8s uses O as the Group of the request, so if any permissions are bound to this group, they must show up in some clusterrolebinding definition. Let's look for clusterrolebindings bound to the system:nodes group:
$ kubectl get clusterrolebinding|awk 'NR>1{print $1}'|xargs kubectl get clusterrolebinding -oyaml|grep -n10 system:nodes
98- roleRef:
99- apiGroup: rbac.authorization.k8s.io
100- kind: ClusterRole
101- name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
102- subjects:
103- - apiGroup: rbac.authorization.k8s.io
104- kind: Group
105: name: system:nodes
106-- apiVersion: rbac.authorization.k8s.io/v1
107- kind: ClusterRoleBinding
108- metadata:
109- creationTimestamp: "2020-02-10T07:34:02Z"
110- name: kubeadm:node-proxier
111- resourceVersion: "213"
112- selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubeadm%3Anode-proxier
$ kubectl describe clusterrole system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
Name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
certificatesigningrequests.certificates.k8s.io/selfnodeclient [] [] [create]
The result is a little unexpected: apart from system:certificates.k8s.io:certificatesigningrequests:selfnodeclient, no system-related rolebindings turn up, which does not match our understanding. Searching further, we find the following:
Default ClusterRole | Default ClusterRoleBinding | Description |
---|---|---|
system:kube-scheduler | system:kube-scheduler user | Allows access to the resources required by the scheduler component. |
system:volume-scheduler | system:kube-scheduler user | Allows access to the volume resources required by the kube-scheduler component. |
system:kube-controller-manager | system:kube-controller-manager user | Allows access to the resources required by the controller manager component. The permissions required by individual controllers are detailed in the controller roles. |
system:node | None | Allows access to resources required by the kubelet, including read access to all secrets, and write access to all pod status objects. You should use the Node authorizer and NodeRestriction admission plugin instead of the system:node role, and allow granting API access to kubelets based on the Pods scheduled to run on them. The system:node role only exists for compatibility with Kubernetes clusters upgraded from versions prior to v1.8. |
system:node-proxier | system:kube-proxy user | Allows access to the resources required by the kube-proxy component. |
Roughly speaking: the system:node role used to be defined so that the kubelet could access the resources it needs, including read access to all secrets and write access to pod status objects. Since v1.8, it is recommended to use the Node authorizer and NodeRestriction admission plugin instead of this role.
We are currently running 1.16; check the authorization configuration:
$ ps axu|grep apiserver
kube-apiserver --authorization-mode=Node,RBAC --enable-admission-plugins=NodeRestriction
Take a look at the official introduction to the Node authorizer:
Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.
In future releases, the node authorizer may add or remove permissions to ensure kubelets have the minimal set of permissions required to operate correctly.
In order to be authorized by the Node authorizer, kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>
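The credential requirement above can be expressed as a small predicate. A sketch only; the real Node authorizer additionally checks whether the requested object is related to the requesting node:

```python
def is_node_identity(user: str, groups: list) -> bool:
    """Check the credential shape the Node authorizer requires of kubelets."""
    return "system:nodes" in groups and user.startswith("system:node:")

print(is_node_identity("system:node:master-1", ["system:nodes"]))  # True
print(is_node_identity("kubernetes-admin", ["system:masters"]))    # False
```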
Service Account and K8S API calls
As mentioned earlier, authentication can be done with certificates or with a ServiceAccount. Most of the time, when doing secondary development on top of k8s, we choose the ServiceAccount + RBAC approach. How did we do it when we accessed the dashboard earlier?
## Create a serviceaccount named admin and grant it the permissions of the cluster-admin cluster role
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin
namespace: kubernetes-dashboard
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: admin
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: admin
namespace: kubernetes-dashboard
Let's take a look:
$ kubectl -n kubernetes-dashboard get sa admin -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2020-04-01T11:59:21Z"
name: admin
namespace: kubernetes-dashboard
resourceVersion: "1988878"
selfLink: /api/v1/namespaces/kubernetes-dashboard/serviceaccounts/admin
uid: 639ecc3e-74d9-11ea-a59b-000c29dfd73f
secrets:
- name: admin-token-lfsrf
Note that a secret named admin-token-lfsrf is bound to the serviceaccount by default. Inspect this secret:
$ kubectl -n kubernetes-dashboard describe secret admin-token-lfsrf
Name: admin-token-lfsrf
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin
kubernetes.io/service-account.uid: 639ecc3e-74d9-11ea-a59b-000c29dfd73f
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 4 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZW1vIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImFkbWluLXRva2VuLWxmc3JmIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjM5ZWNjM2UtNzRkOS0xMWVhLWE1OWItMDAwYzI5ZGZkNzNmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlbW86YWRtaW4ifQ.ffGCU4L5LxTsMx3NcNixpjT6nLBi-pmstb4I-W61nLOzNaMmYSEIwAaugKMzNR-2VwM14WbuG04dOeO67niJeP6n8-ALkl-vineoYCsUjrzJ09qpM3TNUPatHFqyjcqJ87h4VKZEqk2qCCmLxB6AGbEHpVFkoge40vHs56cIymFGZLe53JZkhu3pwYuS4jpXytV30Ad-HwmQDUu_Xqcifni6tDYPCfKz2CZlcOfwqHeGIHJjDGVBKqhEeo8PhStoofBU6Y4OjObP7HGuTY-Foo4QindNnpp0QU6vSb7kiOiQ4twpayybH8PTf73dtdFt46UF6mGjskWgevgolvmO8A
Demonstrate a role's permissions:
$ cat test-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: test
namespace: kubernetes-dashboard
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: test
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: test
namespace: kubernetes-dashboard
curl demonstration:
$ curl -k -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6InhXcmtaSG5ZODF1TVJ6dUcycnRLT2c4U3ZncVdoVjlLaVRxNG1wZ0pqVmcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1xNXBueiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImViZDg2ODZjLWZkYzAtNDRlZC04NmZlLTY5ZmE0ZTE1YjBmMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.iEIVMWg2mHPD88GQ2i4uc_60K4o17e39tN0VI_Q_s3TrRS8hmpi0pkEaN88igEKZm95Qf1qcN9J5W5eqOmcK2SN83Dd9dyGAGxuNAdEwi0i73weFHHsjDqokl9_4RGbHT5lRY46BbIGADIphcTeVbCggI6T_V9zBbtl8dcmsd-lD_6c6uC2INtPyIfz1FplynkjEVLapp_45aXZ9IMy76ljNSA8Uc061Uys6PD3IXsUD5JJfdm7lAt0F7rn9SdX1q10F2lIHYCMcCcfEpLr4Vkymxb4IU4RCR8BsMOPIO_yfRVeYZkG4gU2C47KwxpLsJRrTUcUXJktSEPdeYYXf9w" https://192.168.136.10:6443/api/v1/namespaces/luffy/pods?limit=500
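The curl call above can also be reproduced with Python's standard library. A minimal sketch; the apiserver address, CA file path and token are placeholders to substitute with your cluster's values:

```python
import json
import ssl
import urllib.request

APISERVER = "https://192.168.136.10:6443"  # placeholder: your apiserver address

def build_request(path: str, token: str) -> urllib.request.Request:
    """Build an API request authenticated with a ServiceAccount bearer token."""
    return urllib.request.Request(
        APISERVER + path,
        headers={"Authorization": "Bearer " + token},
    )

def list_pods(namespace: str, token: str, cafile: str) -> dict:
    req = build_request("/api/v1/namespaces/%s/pods?limit=500" % namespace, token)
    # Verify the apiserver's serving certificate against the cluster CA
    ctx = ssl.create_default_context(cafile=cafile)
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.loads(resp.read())

req = build_request("/api/v1/namespaces/luffy/pods?limit=500", "<token>")
print(req.get_header("Authorization"))  # Bearer <token>
```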
You can add