ResourceQuota and LimitRange: A Hands-On Guide
Goal: control resource usage within a specific namespace, and ultimately achieve fair use of the cluster and control over cost.
The functionality we need to implement is as follows:
- Limit the compute resources used by running Pods
- Limit the number of PersistentVolumeClaims to control access to storage
- Limit the number of LoadBalancer services to control cost
- Prevent abuse of NodePorts
- Provide default compute resource Requests so that the system can make better scheduling decisions
1. Create a namespace
[root@t71 quota-example]# vim namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: quota-example
[root@t71 quota-example]# kubectl create -f namespace.yaml
namespace/quota-example created
2. Set a resource quota that limits object counts
[root@t71 quota-example]# vim object-counts.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    persistentvolumeclaims: "2"    # PersistentVolumeClaims
    services.loadbalancers: "2"    # LoadBalancer services
    services.nodeports: "0"        # NodePort services
[root@t71 quota-example]# kubectl create -f object-counts.yaml --namespace=quota-example
resourcequota/object-counts created
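To confirm the quota was registered, you can describe it; nothing has been consumed yet, so every Used column should read 0 (output illustrative):
[root@t71 quota-example]# kubectl describe quota object-counts --namespace=quota-example
Name:                   object-counts
Namespace:              quota-example
Resource                Used  Hard
--------                ----  ----
persistentvolumeclaims  0     2
services.loadbalancers  0     2
services.nodeports      0     0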
3. Set a resource quota that limits compute resources
[root@t71 quota-example]# vim compute-resources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "4"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
[root@t71 quota-example]# kubectl create -f compute-resources.yaml --namespace=quota-example
resourcequota/compute-resources created
The quota system automatically prevents this namespace from holding more than 4 pods in a non-terminal state at the same time. Because this quota caps the total CPU and memory Limits and Requests, it also forces every container in the namespace to explicitly define its CPU and memory Limits and Requests; the sketch below shows the rejection you get without them.
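As a quick sanity check, here is a minimal sketch of that enforcement; the pod name no-resources and the file name pod.yaml are hypothetical, and the exact error text may vary by Kubernetes version:
[root@t71 quota-example]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-resources
spec:
  containers:
  - name: centos
    image: centos:7.5.1804
    command: ["sleep", "3600"]
[root@t71 quota-example]# kubectl create -f pod.yaml --namespace=quota-example
Error from server (Forbidden): error when creating "pod.yaml": pods "no-resources" is forbidden: failed quota: compute-resources: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
Step 4 removes this friction by supplying defaults.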
4. Configure default Requests and Limits
Use a LimitRange to provide default resource settings for all Pods in the namespace.
[root@t71 quota-example]# vim limits.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - default:
      cpu: 200m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 256Mi
    type: Container
[root@t71 quota-example]# kubectl create -f limits.yaml --namespace=quota-example
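With the LimitRange in place, the no-resources pod from the earlier sketch would now be admitted, because the defaults are injected at admission time. You can inspect the configured defaults with (output illustrative):
[root@t71 quota-example]# kubectl describe limitrange limits --namespace=quota-example
Name:       limits
Namespace:  quota-example
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    100m             200m           -
Container   memory    -    -    256Mi            512Mi          -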
5. Specify the scope of a resource quota
Suppose we do not want to configure default compute quotas for a namespace, but instead want to cap the total number of BestEffort-QoS Pods running in it. For example, part of the cluster's resources can be reserved for services whose QoS is not BestEffort, while idle resources run BestEffort services; this prevents a flood of BestEffort Pods from exhausting all of the cluster's resources. It can be implemented by creating two ResourceQuota objects.
- 5.1 Create a namespace named quota-scopes:
[root@t71 quota-example]# kubectl create namespace quota-scopes
namespace/quota-scopes created
- 5.2 Create a ResourceQuota named best-effort with scope BestEffort:
[root@t71 quota-example]# vim best-effort.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: best-effort
spec:
  hard:
    pods: "10"
  scopes:
  - BestEffort
[root@t71 quota-example]# kubectl create -f best-effort.yaml --namespace=quota-scopes
resourcequota/best-effort created
- 5.3 Then create a ResourceQuota named not-best-effort with scope NotBestEffort:
[root@t71 quota-example]# vim not-best-effort.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: not-best-effort
spec:
  hard:
    pods: "4"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
  scopes:
  - NotBestEffort
[root@t71 quota-example]# kubectl create -f not-best-effort.yaml --namespace=quota-scopes
resourcequota/not-best-effort created
- 5.4 View the newly created quotas:
[root@t71 quota-example]# kubectl get quota --namespace=quota-scopes
NAME              CREATED AT
best-effort       2019-04-02T11:27:33Z
not-best-effort   2019-04-02T11:31:07Z
[root@t71 quota-example]# kubectl describe quota --namespace=quota-scopes
Name:       best-effort
Namespace:  quota-scopes
Scopes:     BestEffort
 * Matches all pods that do not have resource requirements set. These pods have a best effort quality of service.
Resource  Used  Hard
--------  ----  ----
pods      0     10

Name:       not-best-effort
Namespace:  quota-scopes
Scopes:     NotBestEffort
 * Matches all pods that have at least one resource requirement set. These pods have a burstable or guaranteed quality of service.
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     2
limits.memory    0     2Gi
pods             0     4
requests.cpu     0     1
requests.memory  0     1Gi
- 5.5 Create two Deployments
- 5.5.1 quota-best-effort.yaml
[root@t71 quota-example]# vim quota-best-effort.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quota-deploy
  namespace: quota-scopes
spec:
  replicas: 8
  selector:
    matchLabels:
      app: centos
  template:
    metadata:
      labels:
        app: centos
    spec:
      containers:
      - name: centos
        image: centos:7.5.1804
        command: ["/usr/sbin/init"]
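This manifest declares no resources section at all, so its pods get the BestEffort QoS class and count only against the best-effort quota (quota-scopes has no LimitRange, so no defaults are injected). Apply it with:
[root@t71 quota-example]# kubectl create -f quota-best-effort.yaml
deployment.apps/quota-deploy created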
- 5.5.2 quota-not-best-effort.yaml
[root@t71 quota-example]# vim quota-not-best-effort.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quota-deploy-not
  namespace: quota-scopes
spec:
  replicas: 2
  selector:
    matchLabels:
      app: centos
  template:
    metadata:
      labels:
        app: centos
    spec:
      containers:
      - name: centos
        image: centos:7.5.1804
        command: ["/usr/sbin/init"]
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:
            cpu: 200m
            memory: 512Mi
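Here every container declares both requests and limits, with requests lower than limits, so the pods land in the Burstable QoS class and count against the not-best-effort quota. Apply it with:
[root@t71 quota-example]# kubectl create -f quota-not-best-effort.yaml
deployment.apps/quota-deploy-not created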
Describe the quotas again now that both Deployments are running:
[root@t71 quota-example]# kubectl describe quota --namespace=quota-scopes
Name:       best-effort
Namespace:  quota-scopes
Scopes:     BestEffort
 * Matches all pods that do not have resource requirements set. These pods have a best effort quality of service.
Resource  Used  Hard
--------  ----  ----
pods      8     10

Name:       not-best-effort
Namespace:  quota-scopes
Scopes:     NotBestEffort
 * Matches all pods that have at least one resource requirement set. These pods have a burstable or guaranteed quality of service.
Resource         Used   Hard
--------         ----   ----
limits.cpu       400m   2
limits.memory    1Gi    2Gi
pods             2      4
requests.cpu     200m   1
requests.memory  512Mi  1Gi
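If you now scale quota-deploy past its 10-pod BestEffort cap, kubectl scale itself succeeds, but the extra replicas are never created: the quota admission failure is recorded in the ReplicaSet's events, and Used stays pinned at the Hard limit. A sketch (output illustrative):
[root@t71 quota-example]# kubectl scale deployment quota-deploy --replicas=12 --namespace=quota-scopes
deployment.apps/quota-deploy scaled
[root@t71 quota-example]# kubectl describe quota best-effort --namespace=quota-scopes
Name:       best-effort
Namespace:  quota-scopes
Scopes:     BestEffort
 * Matches all pods that do not have resource requirements set. These pods have a best effort quality of service.
Resource  Used  Hard
--------  ----  ----
pods      10    10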
Resource quota scopes provide a mechanism for partitioning sets of resources. This mechanism lets cluster administrators monitor and limit, with much less effort, how different kinds of objects consume each class of resource, while offering greater flexibility and convenience in allocating and limiting resources.
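BestEffort and NotBestEffort are not the only scopes: Terminating and NotTerminating (matching pods with and without activeDeadlineSeconds set) work the same way. A minimal sketch, using a hypothetical name terminating-quota, that caps short-lived pods:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: terminating-quota
spec:
  hard:
    pods: "2"
  scopes:
  - Terminating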