High-Availability Kubernetes Cluster - 11. Deploying kube-dns


References:

  1. GitHub overview: https://github.com/kubernetes/dns
  2. GitHub yaml files and examples: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
  3. DNS for Services and Pods: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
  4. Configure stub domain and upstream DNS servers: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/

Kube-DNS resolves service names to ClusterIPs cluster-wide so that services can be reached by name, providing the basic service-discovery mechanism.

I. Environment

1. Base environment

Component     Version    Remark
kubernetes    v1.9.2
KubeDNS       v1.14.8    Same service-discovery mechanism as SkyDNS

2. How it works

  1. Kube-DNS is deployed into the kubernetes cluster as a Pod;
  2. Kube-DNS wraps and optimizes SkyDNS, going from 4 containers down to 3;
  3. kubedns container: built on skydns; watches the k8s Service resources and updates DNS records; replaces etcd with an in-memory TreeCache data structure that stores the DNS records and implements SkyDNS's Backend interface; embeds SkyDNS to serve DNS queries for dnsmasq;
  4. dnsmasq container: serves DNS queries for the cluster, i.e. a lightweight DNS server; uses kubedns as its upstream; provides a DNS cache, reducing the load on kubedns and improving performance;
  5. sidecar container: the health-monitoring module, which also exposes metrics; periodically checks the health of kubedns and dnsmasq; provides an HTTP API for k8s liveness probes.
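The suffix-based forwarding that dnsmasq performs for kubedns (steps 3 and 4 above) can be sketched as a simple suffix match. This is an illustrative simplification, not dnsmasq's actual code; real dnsmasq does longest-suffix matching over its configured `--server=/suffix/addr` list:

```shell
#!/bin/sh
# Illustrative sketch of dnsmasq's --server=/suffix/addr routing used by
# kube-dns (simplified; not the real dnsmasq matching logic).
route_query() {
  case "$1" in
    *.cluster.local|cluster.local) echo "127.0.0.1#10053" ;;  # kubedns
    *.in-addr.arpa|*.ip6.arpa)     echo "127.0.0.1#10053" ;;  # reverse lookups -> kubedns
    *)                             echo "upstream" ;;          # node resolv.conf / upstream servers
  esac
}

route_query kubernetes.default.svc.cluster.local   # -> 127.0.0.1#10053
route_query www.example.com                        # -> upstream
```

This mirrors the three `--server=` flags that appear later in the dnsmasq container's args: cluster names and reverse zones go to kubedns on port 10053, everything else goes upstream.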

II. Deploying Kube-DNS

Kubernetes supports running kube-dns as a cluster add-on: Kubernetes schedules a DNS Pod and Service in the cluster.

1. Prepare the images

When kubernetes deploys a Pod, image pulls can time out; to avoid this, pull the required images to all relevant nodes in advance (as in this lab), or set up a local registry.

  1. The base environment already has a registry mirror configured; see: http://www.cnblogs.com/netonline/p/7420188.html
  2. Images that must be pulled from gcr.io have been rebuilt on a personal Docker Hub account via its "Create Auto-Build GitHub" feature (Docker Hub builds the image from a Dockerfile on GitHub), so they can be pulled directly.
# Base pause image shared by the namespaces inside a Pod;
# the pause image is specified in the kubelet startup parameters, so retag it after pulling
[root@kubenode1 ~]# docker pull netonline/pause-amd64:3.0
[root@kubenode1 ~]# docker tag netonline/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
[root@kubenode1 ~]# docker images

# kubedns
[root@kubenode1 ~]# docker pull netonline/k8s-dns-kube-dns-amd64:1.14.8

# dnsmasq-nanny
[root@kubenode1 ~]# docker pull netonline/k8s-dns-dnsmasq-nanny-amd64:1.14.8

# sidecar
[root@kubenode1 ~]# docker pull netonline/k8s-dns-sidecar-amd64:1.14.8
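If more gcr.io images need the same pull-and-retag treatment as the pause image above, the naming step can be scripted. The `netonline/*` mirror names come from this article; adjust them for your own registry. The loop below only echoes the docker commands so they can be reviewed before running:

```shell
#!/bin/sh
# Map a Docker Hub mirror name back to the original gcr.io/google_containers
# name (as done manually for pause-amd64 above), then echo the pull/tag
# commands for review.
mirror_to_gcr() {
  echo "gcr.io/google_containers/${1#netonline/}"
}

for img in netonline/pause-amd64:3.0; do
  echo "docker pull $img"
  echo "docker tag $img $(mirror_to_gcr "$img")"
done
```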

2. Download the kube-dns template

# https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
[root@kubenode1 ~]# mkdir -p /usr/local/src/yaml/kubedns
[root@kubenode1 ~]# cd /usr/local/src/yaml/kubedns
[root@kubenode1 kubedns]# wget -O kube-dns.yaml https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kube-dns/kube-dns.yaml.base

3. Configure the kube-dns Service

# kube-dns puts four resources - Service, ServiceAccount, ConfigMap, and Deployment - in a single yaml file; the following sections modify each part in turn;
# writing Pod yaml files is not covered here; see other material, e.g. "The Definitive Guide to Kubernetes";
# the modified kube-dns.yaml: https://github.com/Netonline2016/kubernetes/blob/master/addons/kubedns/kube-dns.yaml

# clusterIP must match the kubelet startup parameter --cluster-dns; reserve one address in the service CIDR as the DNS address
[root@kubenode1 kubedns]# vim kube-dns.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 169.169.0.11
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

4. Configure the kube-dns ServiceAccount

# The kube-dns ServiceAccount needs no changes: the cluster's predefined ClusterRoleBinding system:kube-dns already binds the ServiceAccount kube-dns in the kube-system namespace (where system services usually live) to the predefined ClusterRole system:kube-dns, which grants access to the kube-apiserver DNS APIs.
# For RBAC authorization see: https://blog.frognew.com/2017/04/kubernetes-1.6-rbac.html
[root@kubenode1 ~]# kubectl get clusterrolebinding system:kube-dns -o yaml

[root@kubenode1 ~]# kubectl get clusterrole system:kube-dns -o yaml
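The first command above should show output with roughly the following shape (abridged; annotations and field order vary by cluster version). The heredoc only writes the expected shape to a file for comparison; the real object already exists in the cluster:

```shell
#!/bin/sh
# Expected (abridged) shape of the predefined ClusterRoleBinding; written
# to a local file purely for inspection/diffing against kubectl output.
cat > clusterrolebinding.expected.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-dns
subjects:
- kind: ServiceAccount
  name: kube-dns
  namespace: kube-system
EOF
```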

5. Configure the kube-dns ConfigMap

Typical uses of a ConfigMap are:

  1. generating environment variables inside a container;
  2. setting startup parameters for the container's command (via environment variables);
  3. mounting it as a file or directory inside the container via a volume.

No changes are needed just to verify kube-dns; to customize stub domains and upstream DNS servers, modify the ConfigMap as described in section IV.
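As a side note, the three ConfigMap usages above can be illustrated with a minimal, hypothetical ConfigMap (the names here are made up for illustration and are not part of kube-dns):

```shell
#!/bin/sh
# Hypothetical demo ConfigMap illustrating the typical usages: one key
# consumed as an env var, one key mounted as a file.
cat > demo-configmap.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  LOG_LEVEL: "2"      # consumed via env.valueFrom.configMapKeyRef
  app.conf: |         # mounted as a file via a configMap volume
    cache-size=1000
EOF
```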

6. Configure the kube-dns Deployment

# Lines 97, 148, and 187 of the template: the startup images of the three containers;
# lines 127, 168, 200, and 201: the domain, which must match the kubelet startup parameter "--cluster-domain"; note the trailing "." after "cluster.local."
[root@kubenode1 kubedns]# vim kube-dns.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: netonline/k8s-dns-kube-dns-amd64:1.14.8
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: netonline/k8s-dns-dnsmasq-nanny-amd64:1.14.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local./127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: netonline/k8s-dns-sidecar-amd64:1.14.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

7. Start kube-dns

[root@kubenode1 ~]# cd /usr/local/src/yaml/kubedns/
[root@kubenode1 kubedns]# kubectl create -f kube-dns.yaml

III. Verifying Kube-DNS

1. kube-dns Deployment, Service, and Pod

# All 3 containers of the kube-dns Pod are "Ready"; the service, deployment, etc. also started normally
[root@kubenode1 kubedns]# kubectl get pod -n kube-system -o wide
[root@kubenode1 kubedns]# kubectl get service -n kube-system -o wide
[root@kubenode1 kubedns]# kubectl get deployment -n kube-system -o wide

2. kube-dns queries

# pull the test image
[root@kubenode1 ~]# docker pull radial/busyboxplus:curl

# start a test Pod and enter its container
[root@kubenode1 ~]# kubectl run curl --image=radial/busyboxplus:curl -i --tty

# Inside the Pod container, check /etc/resolv.conf: the DNS settings have been written to the file;
# nslookup can resolve the IPs of the cluster's services
[ root@curl-545bbf5f9c-hxml9:/ ]$ cat /etc/resolv.conf 
[ root@curl-545bbf5f9c-hxml9:/ ]$ nslookup kubernetes.default
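For reference, the Pod's /etc/resolv.conf should look roughly like the file written below: the nameserver is the clusterIP chosen earlier (169.169.0.11), and the search suffixes plus `ndots:5` are standard kubelet behavior for the default namespace:

```shell
#!/bin/sh
# Expected shape of a Pod's /etc/resolv.conf with this article's settings,
# written to a local file for reference only.
cat > resolv.conf.expected <<'EOF'
nameserver 169.169.0.11
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
EOF
# Short names such as "kubernetes.default" resolve because the search
# suffixes are appended before the query is sent to kube-dns.
```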

IV. Custom DNS and Upstream DNS Servers

Since kubernetes v1.6, users can configure private DNS zones (usually called stub domains) and external upstream name servers inside the cluster.

1. How it works

  1. A Pod definition supports two DNS policies, Default and ClusterFirst; dnsPolicy defaults to ClusterFirst. With dnsPolicy set to Default, the name-resolution configuration is inherited entirely from the node the Pod runs on (/etc/resolv.conf);
  2. when a Pod's dnsPolicy is ClusterFirst, DNS queries are first sent to kube-dns's DNS caching layer;
  3. the caching layer inspects the name's suffix and forwards the query to the cluster's own DNS server, a custom stub domain, or an upstream name server accordingly.
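With the stub domain and upstream servers configured later in this section (out.kubernetes → 172.20.1.201, upstream 114.114.114.114), the decision in step 3 can be sketched as follows; again, an illustrative simplification rather than the actual forwarding code:

```shell
#!/bin/sh
# Sketch of the cache-layer decision once a stub domain and upstream
# servers are configured (values from this section; simplified).
resolve_target() {
  case "$1" in
    *.cluster.local|cluster.local) echo "kubedns" ;;          # cluster's own DNS
    *.out.kubernetes)              echo "172.20.1.201" ;;     # stub domain
    *)                             echo "114.114.114.114" ;;  # upstream
  esac
}

resolve_target server.out.kubernetes   # -> 172.20.1.201
```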

2. Customizing DNS

# A cluster administrator can specify custom stub domains and upstream DNS servers with a ConfigMap;
[root@kubenode1 ~]# cd /usr/local/src/yaml/kubedns/

# modify the ConfigMap part of the kube-dns.yaml template directly
# stubDomains: optional; stub-domain definitions in json format; the key is a DNS suffix and the value is a json array of DNS server addresses; the target name server may be a kubernetes service name; separate multiple custom DNS records with ","
# upstreamNameservers: a json array of DNS addresses, at most 3; if set, it overrides the values inherited from the node's name-resolution settings (/etc/resolv.conf)
[root@kubenode1 kubedns]# vim kube-dns.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  stubDomains: |
    {"out.kubernetes": ["172.20.1.201"]}
  upstreamNameservers: |
    ["114.114.114.114", "223.5.5.5"]

3. Recreate the kube-dns ConfigMap

# delete the old kube-dns first, then create the new one;
# alternatively, delete only the ConfigMap of the old kube-dns and create just the new ConfigMap
[root@kubenode1 kubedns]# kubectl delete -f kube-dns.yaml
[root@kubenode1 kubedns]# kubectl create -f kube-dns.yaml
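Per the note above, the ConfigMap can also be (re)created on its own without touching the rest of the add-on. Write just the ConfigMap to its own file and apply it; the kubectl step is shown as a comment since it needs the running cluster:

```shell
#!/bin/sh
# Stand-alone ConfigMap with the stub domain and upstream servers from
# this article, so it can be applied without recreating all of kube-dns.
cat > kube-dns-configmap.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  stubDomains: |
    {"out.kubernetes": ["172.20.1.201"]}
  upstreamNameservers: |
    ["114.114.114.114", "223.5.5.5"]
EOF
# kubectl apply -f kube-dns-configmap.yaml
```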

# check the dnsmasq log: the stub domain and upstream servers have taken effect;
# the kubedns and sidecar logs also show output confirming the stub domain and upstream servers are in effect
[root@kubenode1 kubedns]# kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq

4. Set up the custom DNS server

# install the dnsmasq service on 172.20.1.201, the stub-domain server defined in the ConfigMap
[root@hanode01 ~]# yum install dnsmasq -y

# create the custom DNS records file
[root@hanode01 ~]# echo "192.168.100.11 server.out.kubernetes" > /tmp/hosts

# start the DNS service;
# -q: log queries;
# -d: debug mode, run in the foreground to watch the log output;
# -h: do not use /etc/hosts;
# -R: do not use /etc/resolv.conf;
# -H: use the custom DNS records file;
# the startup log warns that no upstream DNS servers are configured, and shows the custom records file being read
[root@hanode01 ~]# dnsmasq -q -d -h -R -H /tmp/hosts

# open udp port 53 in iptables
[root@hanode01 ~]# iptables -I INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT
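What dnsmasq does with the `-H` records file can be mimicked for a quick sanity check of the file's format before starting the service. The awk lookup below is illustrative only, not dnsmasq itself:

```shell
#!/bin/sh
# Look up a name in a hosts-format file, mimicking how dnsmasq -H serves
# its additional-hosts records (illustrative only).
printf '192.168.100.11 server.out.kubernetes\n' > hosts.example

lookup() { awk -v name="$2" '$2 == name { print $1 }' "$1"; }

lookup hosts.example server.out.kubernetes   # -> 192.168.100.11
```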

5. Start a test Pod

# pull the image
[root@kubenode1 ~]# docker pull busybox

# write the Pod yaml file;
# dnsPolicy is set to ClusterFirst, which is also the default
[root@kubenode1 ~]# touch dnstest.yaml
[root@kubenode1 ~]# vim dnstest.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dnstest
  namespace: default
spec:
  dnsPolicy: ClusterFirst
  containers:
  - name: busybox
    image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

# create the Pod
[root@kubenode1 ~]# kubectl create -f dnstest.yaml

6. Verify the custom DNS configuration

# nslookup for server.out.kubernetes returns the predefined IP address
[root@kubenode1 ~]# kubectl exec -it dnstest -- nslookup server.out.kubernetes

Watch the output of the dnsmasq service on the stub-domain server 172.20.1.201: the kube node 172.30.200.23 (the node hosting the Pod; flannel network, SNAT out of the node) queries server.out.kubernetes, and dnsmasq returns the predefined host address.

