K8S Service Discovery
- Service discovery is the process by which services (applications) locate one another.
- Service discovery is not unique to the cloud-computing era; the traditional monolithic-architecture era used it too. It becomes even more necessary in the following scenarios:
  - Services (applications) are highly dynamic
  - Services (applications) are updated and released frequently
  - Services (applications) support auto-scaling
- In a K8S cluster, Pod IPs change constantly. How do we stay stable in the face of that change?
  - The Service resource is abstracted out; through label selectors it manages a group of Pods
  - A cluster network is abstracted out; a relatively fixed "cluster IP" gives each service a stable access point
- So how do we automatically associate a Service resource's "name" with its "cluster IP", so that services are discovered automatically by the cluster?
  - Consider the traditional DNS model: hdss7-21.host.com → 10.4.7.21
  - Can we build the same model inside K8S: Nginx-ds → 192.168.0.5
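The idea behind that model can be sketched in a few lines of Python (a toy illustration, not a real DNS server): Pod IPs churn, but a lookup keyed by a stable service name always returns the current cluster IP.

```python
# Toy sketch of the service-discovery idea: clients remember only a
# stable service name; the registry maps it to a relatively fixed
# cluster IP, no matter how often the Pods behind it change.
registry = {}

def register(service, cluster_ip):
    """Bind a service name to its cluster IP (the job CoreDNS automates)."""
    registry[service] = cluster_ip

def resolve(service):
    """Clients look up by name instead of hard-coding Pod IPs."""
    return registry.get(service)

register("nginx-ds", "192.168.0.5")
print(resolve("nginx-ds"))  # 192.168.0.5
```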
How K8S does service discovery — DNS
Plugins (software) that implement DNS inside K8S:
- kube-dns — kubernetes-v1.2 through kubernetes-v1.10
- CoreDNS — kubernetes-v1.11 to the present; it replaced kube-dns as the default DNS plugin
Note:
- DNS in K8S is not a cure-all! It should only be responsible for automatically maintaining the mapping "service name" → cluster IP
K8S service-discovery plugin — CoreDNS
Deploy an HTTP service for the cluster's internal resource manifests
On the ops host hdss7-200.host.com, configure an Nginx virtual host that provides a unified access point for the cluster's resource manifests.
- Configure Nginx
[root@hdss7-200 ~]# cd /etc/nginx/conf.d/
[root@hdss7-200 conf.d]# vim /etc/nginx/conf.d/k8s-yaml.od.com.conf
server {
    listen       80;
    server_name  k8s-yaml.od.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /data/k8s-yaml;
    }
}
Create the directory and check the configuration
[root@hdss7-200 conf.d]# mkdir /data/k8s-yaml
[root@hdss7-200 conf.d]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@hdss7-200 conf.d]# nginx -s reload
[root@hdss7-200 conf.d]# cd /data/k8s-yaml/
On hdss7-11
[root@hdss7-11 ~]# vim /var/named/od.com.zone
Add the following records (remember to bump the serial):
2020080103; serial
k8s-yaml A 10.4.7.200
Restart the named service and verify:
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A k8s-yaml.od.com @10.4.7.11 +short
10.4.7.200
Back on hdss7-200, create a coredns directory
[root@hdss7-200 k8s-yaml]# mkdir coredns
Pull the coredns image, then tag and push it to the local registry
[root@hdss7-200 conf.d]# docker pull docker.io/coredns/coredns:1.6.1
[root@hdss7-200 conf.d]# docker images | grep coredns
coredns/coredns 1.6.1 c0f6e815079e 12 months ago 42.2MB
[root@hdss7-200 conf.d]# docker tag c0f6e815079e harbor.od.com/public/coredns:v1.6.1
[root@hdss7-200 conf.d]# docker push !$
docker push harbor.od.com/public/coredns:v1.6.1
The push refers to repository [harbor.od.com/public/coredns]
da1ec456edc8: Pushed
225df95e717c: Pushed
v1.6.1: digest: sha256:c7bf0ce4123212c87db74050d4cbab77d8f7e0b49c041e894a35ef15827cf938 size: 739
Four resource manifests are needed:
- rbac.yaml
- cm.yaml
- dp.yaml
- svc.yaml
rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        ready
        kubernetes cluster.local 192.168.0.0/16  # serve the cluster zone; Service network is 192.168.0.0/16
        forward . 10.4.7.11                      # everything outside the cluster zone goes to the self-built upstream DNS
        cache 30                                 # cache answers for 30 seconds
        loop
        reload                                   # pick up Corefile changes without a restart
        loadbalance
    }
dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.od.com/public/coredns:v1.6.1
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 192.168.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
On hdss7-21
Apply the declarative resource manifests directly from their HTTP URLs:
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/cm.yaml
configmap/coredns created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/dp.yaml
deployment.apps/coredns created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yaml
service/coredns created
Check the Pod and related resources
[root@hdss7-21 ~]# kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/coredns-6b6c4f9648-ttfg8 1/1 Running 0 2m3s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/coredns ClusterIP 192.168.0.2 <none> 53/UDP,53/TCP,9153/TCP 105s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 1/1 1 1 2m3s
NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-6b6c4f9648 1 1 1 2m3s
Check the kubelet startup script
[root@hdss7-21 ~]# cat /opt/kubernetes/server/bin/kubelet.sh
#!/bin/sh
./kubelet \
..........
--cluster-dns 192.168.0.2 \
..........
As you can see, the cluster DNS address is already fixed: it is 192.168.0.2, the unified DNS entry point for this cluster.
[root@hdss7-21 ~]# dig -t A www.baidu.com @192.168.0.2 +short
www.a.shifen.com.
180.101.49.12
180.101.49.11
[root@hdss7-21 ~]# dig -t A hdss7-21.host.com @192.168.0.2 +short
10.4.7.21
This works because the ConfigMap specifies forward to 10.4.7.11: our self-built DNS acts as CoreDNS's upstream DNS.
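The split seen in the two dig queries above can be modelled simply: CoreDNS answers names under its own zone (cluster.local) itself and forwards everything else to the upstream at 10.4.7.11. A minimal sketch, with made-up record tables purely for illustration:

```python
CLUSTER_ZONE = "cluster.local"

# Illustrative in-cluster records that CoreDNS itself would serve.
cluster_records = {
    "nginx-dp.kube-public.svc.cluster.local": "192.168.188.157",
}

# Illustrative answers from the upstream DNS (10.4.7.11 in this setup).
upstream_records = {
    "hdss7-21.host.com": "10.4.7.21",
}

def resolve(name):
    """Answer cluster-zone names locally; forward the rest upstream."""
    name = name.rstrip(".")
    if name == CLUSTER_ZONE or name.endswith("." + CLUSTER_ZONE):
        return cluster_records.get(name)   # authoritative for the zone
    return upstream_records.get(name)      # forward . 10.4.7.11
```

A query for hdss7-21.host.com falls outside cluster.local and is forwarded, matching the dig result above.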
[root@hdss7-21 ~]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 192.168.0.1 <none> 443/TCP 18d <none>
[root@hdss7-21 ~]# kubectl create deployment nginx-dp --image=harbor.od.com/public/nginx:v1.7.9 -n kube-public
deployment.apps/nginx-dp created
[root@hdss7-21 ~]# kubectl expose deployment nginx-dp --port=80 -n kube-public
service/nginx-dp exposed
[root@hdss7-21 ~]# kubectl get pods -n kube-public
NAME READY STATUS RESTARTS AGE
nginx-dp-5dfc689474-vjpp8 1/1 Running 0 9s
[root@hdss7-21 ~]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 192.168.0.1 <none> 443/TCP 19d <none>
[root@hdss7-21 ~]# kubectl get svc -n kube-public
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-dp ClusterIP 192.168.188.157 <none> 80/TCP 3m52s
[root@hdss7-21 ~]# dig -t A nginx-dp @192.168.0.2 +short
No answer is returned!
To query a Service name through CoreDNS you have to use the FQDN, appending the namespace and cluster domain:
[root@hdss7-21 ~]# dig -t A nginx-dp.kube-public.svc.cluster.local. @192.168.0.2 +short
192.168.188.157
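The fully qualified name follows a fixed pattern, <service>.<namespace>.svc.<zone>. A small helper to build it (the zone is cluster.local, as set in the Corefile):

```python
def service_fqdn(service, namespace, zone="cluster.local"):
    """Build the canonical in-cluster DNS name for a Service.

    The trailing dot marks the name as fully qualified, so the
    resolver will not try to append any search domains.
    """
    return f"{service}.{namespace}.svc.{zone}."

print(service_fqdn("nginx-dp", "kube-public"))
# nginx-dp.kube-public.svc.cluster.local.
```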
[root@hdss7-21 ~]# kubectl get pod -n kube-public -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-dp-5dfc689474-vjpp8 1/1 Running 0 10m 172.7.22.3 hdss7-22.host.com <none> <none>
Enter a container
[root@hdss7-21 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ds-gwswr 1/1 Running 0 4h55m 172.7.22.2 hdss7-22.host.com <none> <none>
nginx-ds-jh2x5 1/1 Running 0 4h55m 172.7.21.2 hdss7-21.host.com <none> <none>
[root@hdss7-21 ~]# kubectl exec -it nginx-ds-jh2x5 /bin/bash
root@nginx-ds-jh2x5:/# curl 192.168.188.157
<!DOCTYPE html>
...........
Access it via the namespace-qualified name
root@nginx-ds-jh2x5:/# curl nginx-dp.kube-public.svc.cluster.local
<!DOCTYPE html>
...........
The Pod sits in the default namespace while the Service is in the kube-public namespace; inside the cluster, the Service can still be reached with the shortened name nginx-dp.kube-public when curling it:
root@nginx-ds-jh2x5:/# curl nginx-dp.kube-public
<!DOCTYPE html>
...........
Why does that work? Look at a configuration file that was set up earlier.
root@nginx-ds-jh2x5:/# cat /etc/resolv.conf
nameserver 192.168.0.2
search default.svc.cluster.local svc.cluster.local cluster.local host.com
options ndots:5
As you can see, default.svc.cluster.local, svc.cluster.local and cluster.local have all been added as search domains for the svc names.
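The short name nginx-dp.kube-public works because of the resolver's search/ndots rule: a name with fewer than ndots dots is first tried with each search domain appended, and only afterwards as-is. A sketch of that expansion order (following the glibc resolver's behaviour):

```python
def candidate_names(name, search, ndots=5):
    """Return query names in the order a glibc-style resolver tries them."""
    if name.endswith("."):
        return [name]                 # already fully qualified: no expansion
    absolute = name + "."
    if name.count(".") >= ndots:
        # Enough dots: try the name as-is first, then the search list.
        return [absolute] + [f"{name}.{d}." for d in search]
    # Few dots: try every search domain first, the bare name last.
    return [f"{name}.{d}." for d in search] + [absolute]

search = ["default.svc.cluster.local", "svc.cluster.local",
          "cluster.local", "host.com"]
for q in candidate_names("nginx-dp.kube-public", search):
    print(q)
```

For nginx-dp.kube-public the first candidate (under default.svc.cluster.local) gets NXDOMAIN, and the second, nginx-dp.kube-public.svc.cluster.local., matches the Service — which is exactly why the short form resolves.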
The most important thing about installing CoreDNS into the K8S cluster is that it automatically associates a Service's name with its cluster IP. That automatic association is what implements so-called service discovery. (You only ever need to remember the Service's name.)