Pod yaml
Node labels: pin a pod to specific nodes with the node selector (nodeSelector).
Label the node:
[root@k8s-master1 data]# kubectl label node k8s-node1 app=mynode
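A quick check that the label landed (a sketch, using the same node name as above):
[root@k8s-master1 data]# kubectl get node k8s-node1 --show-labels | grep app=mynode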
[root@k8s-master1 data]# cat pod-labes.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: web
  name: myapp
  namespace: default
spec:
  nodeSelector:
    app: mynode
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        cpu: 500m
        memory: 1500Mi
      limits:
        cpu: 500m
        memory: 1500Mi
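Apply it and check placement; the pod should land on k8s-node1 (a sketch with the file and names above):
[root@k8s-master1 data]# kubectl apply -f pod-labes.yaml
[root@k8s-master1 data]# kubectl get pod myapp -o wide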
nodeAffinity: node affinity is similar to nodeSelector; it uses node labels to constrain which nodes a pod can be scheduled onto.
Compared with nodeSelector:
- Richer matching logic, not just exact string equality
- Scheduling rules split into hard and soft policies
- Hard (required): must be satisfied
- Soft (preferred): satisfied if possible, but not guaranteed
- Operators: In, NotIn, Exists, DoesNotExist, Gt, Lt (see the snippet after this list)
- Affinity: In
- Anti-affinity: NotIn, DoesNotExist
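A minimal matchExpressions sketch of the less common operators (the disk and cpu-count labels are hypothetical):
- matchExpressions:
  - key: disk        # Exists only checks that the key is present; no values list
    operator: Exists
  - key: cpu-count   # Gt/Lt compare the label value as an integer
    operator: Gt
    values:
    - "4"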
Policies:
requiredDuringSchedulingIgnoredDuringExecution (hard policy): the node must satisfy the rule.
Example: the pod must run on a node labeled app=mynode.
[root@k8s-master1 data]# cat pod-labes-nodeAffinity-required.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: web-required
  name: myapp-required
  namespace: default
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: app
            operator: In   # the operator
            values:
            - mynode
  containers:
  - image: nginx
    name: nginx-required
    resources:
      requests:
        cpu: 500m
        memory: 1500Mi
      limits:
        cpu: 500m
        memory: 1500Mi
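Apply and verify; if no node carries app=mynode, the pod stays Pending (a sketch with the names above):
[root@k8s-master1 data]# kubectl apply -f pod-labes-nodeAffinity-required.yaml
[root@k8s-master1 data]# kubectl get pod myapp-required -o wide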
preferredDuringSchedulingIgnoredDuringExecution (soft policy): not mandatory.
Example: prefer nodes labeled group=ai for this pod; if no node matches that label, the scheduler places it automatically.
[root@k8s-master1 data]# cat pod-labes-nodeAffinity-preferred.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: web-preferred
  name: myapp-preferred
  namespace: default
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1   # weight 1-100; the higher the value, the more likely the pod lands on a matching node
        preference:
          matchExpressions:
          - key: group
            operator: In
            values:
            - ai
  containers:
  - image: nginx
    name: nginx-preferred
    resources:
      requests:
        cpu: 500m
        memory: 1500Mi
      limits:
        cpu: 500m
        memory: 1500Mi
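To see the preference take effect, label a node group=ai first (picking k8s-node2 here is an arbitrary example), then apply:
[root@k8s-master1 data]# kubectl label node k8s-node2 group=ai
[root@k8s-master1 data]# kubectl apply -f pod-labes-nodeAffinity-preferred.yaml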
Assigning roles (ROLES) to nodes via labels:
Label the master node as master and the worker nodes as node:
kubectl label node k8s-master1 node-role.kubernetes.io/master=
kubectl label node k8s-node1 node-role.kubernetes.io/node=
kubectl label node k8s-node2 node-role.kubernetes.io/node=
To remove a node label, just change the trailing = to -:
kubectl label node k8s-master1 node-role.kubernetes.io/master-
kubectl label node k8s-node1 node-role.kubernetes.io/node-
kubectl label node k8s-node2 node-role.kubernetes.io/node-
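Either way, the ROLES column of kubectl get nodes reflects the current node-role.kubernetes.io/* labels, so a quick check is:
[root@k8s-master1 data]# kubectl get nodes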
Taints: keep pods away from particular nodes
Use cases:
- Dedicated nodes
- Nodes with special hardware
- Taint-based eviction
Set a taint:
kubectl taint node <nodename> key=value:[effect]
Example: [root@k8s-master1 data]# kubectl taint node k8s-node1 gpu=node:NoExecute
Remove a taint:
kubectl taint node <nodename> key=value:[effect]-
Example: [root@k8s-master1 data]# kubectl taint node k8s-node1 gpu=node:NoExecute-
View taints:
kubectl describe node <nodename> | grep Taint
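Alternatively, dump a node's taints as structured output (a sketch; jsonpath prints the raw taints array):
[root@k8s-master1 data]# kubectl get node k8s-node1 -o jsonpath='{.spec.taints}'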
The effect field takes one of:
NoSchedule: pods will definitely not be scheduled here
PreferNoSchedule: try not to schedule pods here
NoExecute: not only blocks scheduling, but also evicts pods already running on the node
Tolerations allow a pod to be scheduled onto nodes with matching taints. Note: a toleration only permits placement, it does not force it.
Example: allow this pod to run on a node tainted gpu=node:NoExecute.
[root@k8s-master1 data]# cat pod-labes-taint.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: web-taint
  name: myapp-taint
  namespace: default
spec:
  tolerations:
  - key: gpu              # the taint's key, e.g. the gpu taint set earlier
    operator: "Equal"     # matches the = used when setting the taint
    value: "node"         # the taint's value
    effect: "NoExecute"   # the taint's effect
  containers:
  - image: nginx
    name: nginx-taint
    resources:
      requests:
        cpu: 500m
        memory: 1500Mi
      limits:
        cpu: 500m
        memory: 1500Mi
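Apply and confirm the pod may now land on the tainted node (a sketch with the names above; the scheduler can still pick another node):
[root@k8s-master1 data]# kubectl apply -f pod-labes-taint.yaml
[root@k8s-master1 data]# kubectl get pod myapp-taint -o wide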
Create a pod on a named node with nodeName:
[root@k8s-master1 data]# cat pod-nodename.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: web-nodename
  name: myapp-nodename
  namespace: default
spec:
  nodeName: k8s-node2
  containers:
  - image: nginx
    name: nginx-nodename
    resources:
      requests:
        cpu: 500m
        memory: 1500Mi
      limits:
        cpu: 500m
        memory: 1500Mi
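nodeName bypasses the scheduler entirely, so the pod goes straight to k8s-node2. Verify with:
[root@k8s-master1 data]# kubectl get pod myapp-nodename -o wide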
Deployment yaml
The relationship between Pods and Controllers
- Controllers are objects that manage and run containers on the cluster
- They are associated with Pods through a label selector
- Controllers give Pods operational features: scaling, rolling upgrades, and so on
Deployment features and use cases
- Deploys stateless applications
- Manages Pods and ReplicaSets
- Provides rollout, replica control, rolling upgrades, and rollback
- Offers declarative updates, e.g. updating just the image
(Use cases: web services, microservices)
Deploy an nginx pod with a Deployment:
[root@k8s-master1 data]# cat deployment-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - name: nginx-deploy
        image: nginx
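Apply and check the result (a sketch with the names above):
[root@k8s-master1 data]# kubectl apply -f deployment-nginx.yaml
[root@k8s-master1 data]# kubectl get deployment nginx-deployment
[root@k8s-master1 data]# kubectl get pod -l app=nginx-deployment -o wide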
Publish the pods just created through a Service with the following command:
kubectl expose --name nginx-deployment-service deployment nginx-deployment --port=80 --target-port=80 --type=NodePort
--name nginx-deployment-service  the Service's name
--port 80                        the port the Service exposes
--target-port 80                 the pod's port
--type NodePort                  use the NodePort type; the node maps a randomly assigned port to the Service's port 80
Check the Service; the app can then be reached at any node IP on port 32302:
[root@k8s-master1 data]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-deployment-service NodePort 10.0.0.19 <none> 80:32302/TCP 2m43s
Upgrade, rollback, delete
Command-line operations:
Upgrade:
kubectl set image deployment/nginx-deployment nginx-deploy=nginx:1.15
kubectl rollout status deployment/nginx-deployment   # watch the rollout status
Rollback:
kubectl rollout history deployment/nginx-deployment  # list rollout revisions
kubectl rollout undo deployment/nginx-deployment     # roll back to the previous revision by default
kubectl rollout undo deployment/nginx-deployment --to-revision=1  # roll back to a specific revision
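To see what a revision contains before rolling back (a sketch; --revision prints the pod template recorded for that revision):
kubectl rollout history deployment/nginx-deployment --revision=1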
If the Deployment is managed through a YAML file, just edit the file and run kubectl apply.
Service yaml
Why Services exist
- Keep track of Pods (service discovery)
- Define an access policy for a group of Pods (load balancing)
Relationship with Pods
- Associated through a label selector
- A Service load-balances across its Pods (TCP/IP layer 4)
Three types
- ClusterIP: for access inside the cluster
- NodePort: exposes the application externally
- LoadBalancer: exposes the application externally via a public cloud load balancer
ClusterIP type
A standard Service YAML file (the default type is ClusterIP):
[root@k8s-master1 data]# cat server-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
[root@k8s-master1 data]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web-service ClusterIP 10.0.0.189 <none> 80/TCP 113s
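A quick smoke test (a sketch; 10.0.0.189 is the CLUSTER-IP shown above, and ClusterIP addresses are normally reachable from cluster nodes):
[root@k8s-master1 data]# curl 10.0.0.189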
NodePort type YAML:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web-service-nodeport
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 32767   # pin the node port; if omitted, the system assigns one automatically
  selector:
    app: web
  type: NodePort      # specify the type; defaults to ClusterIP if omitted
[root@k8s-master1 data]# kubectl get svc web-service-nodeport -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
web-service-nodeport NodePort 10.0.0.76 <none> 80:32767/TCP 3m10s app=web
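Every node now serves the application on port 32767 (a sketch; substitute a real node address for <node-ip>):
curl http://<node-ip>:32767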
Ingress configuration
The relationship between Pods and Ingress
- Associated through a Service
- An Ingress Controller implements load balancing for the Pods
- Supports TCP/IP layer 4 and HTTP layer 7
Ingress workflow:
- Deploy an ingress controller
- Create ingress rules
There are many ingress controller implementations; here we use the officially maintained NGINX controller:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml
vim deploy.yaml
Replace the image:
k8s.gcr.io/ingress-nginx/controller:v0.41.2@sha256:1f4f402b9c14f3ae92b11ada1dfe9893a88f0faeb0b2f4b903e2c67a0c3bf0de
## replace with
registry.cn-beijing.aliyuncs.com/lingshiedu/ingress-nginx-controller:0.41.2
## k8s.gcr.io images cannot be pulled from inside China, so use the Aliyun mirror instead
Expose the host network:
# recommended: put the controller directly on the host network with hostNetwork: true
kubectl apply -f deploy.yaml
Check the ingress-controller pod:
kubectl get pod -n ingress-nginx
Configure an ingress rule:
[root@k8s-master1 data]#cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myweb-ingress
spec:
  rules:
  - host: example.ctnrs.com   # the domain, like nginx's server_name
    http:
      paths:
      - path: /               # like nginx's location
        backend:              # the backend
          serviceName: web    # the target Service's name
          servicePort: 80     # the Service's port
Applying it fails with the following error:
[root@k8s-master1 data]# kubectl apply -f ingress.yaml
Error from server (InternalError): error when creating "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: context deadline exceeded
failurePolicy defines how unrecognized errors and timeout errors from the admission webhook are handled. The allowed values are Ignore and Fail.
Ignore means an error calling the webhook is ignored and the API request is allowed to continue. Fail means an error calling the webhook causes admission to fail and the API request to be rejected. Which resources are handled by which admission webhooks can be configured dynamically via ValidatingWebhookConfiguration or MutatingWebhookConfiguration. Set the policy to Ignore:
[root@k8s-master1 data]# kubectl get ValidatingWebhookConfiguration/ingress-nginx-admission -n ingress-nginx
NAME WEBHOOKS AGE
ingress-nginx-admission 1 5h22m
[root@k8s-master1 data]# kubectl edit ValidatingWebhookConfiguration/ingress-nginx-admission -n ingress-nginx
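In the editor, change failurePolicy: Fail to failurePolicy: Ignore under the webhooks entry, then re-apply the rule and test it (a sketch; <node-ip> is any node running the controller, and the Host header must match the rule):
[root@k8s-master1 data]# kubectl apply -f ingress.yaml
[root@k8s-master1 data]# curl -H 'Host: example.ctnrs.com' http://<node-ip>/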