1. Build the http.server image and push it to Docker Hub. The one I pushed is chengfengsunce/httpserver:0.0.1.
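The image's source and Dockerfile are not shown in this post; as a purely hypothetical sketch (names, base images, and paths are all assumptions), a build for a small Go httpserver might look like this:

```dockerfile
# hypothetical sketch -- the actual Dockerfile is not shown in this post
FROM golang:1.16 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /httpserver .

FROM alpine:3.13
COPY --from=build /httpserver /httpserver
EXPOSE 8080
ENTRYPOINT ["/httpserver"]

# build and push:
#   docker build -t chengfengsunce/httpserver:0.0.1 .
#   docker push chengfengsunce/httpserver:0.0.1
```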
2. Create the Pod resources through a Deployment. The deployment.yaml is as follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: httpserver
  name: httpserver
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: httpserver
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: httpserver
    spec:
      containers:
        - name: httpserver
          image: chengfengsunce/httpserver:0.0.1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 200m
              memory: 100Mi
            requests:
              cpu: 20m
              memory: 20Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: cloudnative
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
```
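Both probes above issue GET /healthz against port 8080, so the container has to serve that path. A minimal sketch of such a handler, assuming a Python server (the actual image's code is not shown in this post):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            # kubelet treats any 2xx/3xx response as a passing probe
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

def make_server(port: int = 8080) -> HTTPServer:
    # port 8080 matches the httpGet probe port in the deployment above
    return HTTPServer(("0.0.0.0", port), HealthHandler)

# make_server().serve_forever()  # blocks; this is what would run in the container
```

If this handler is slow or returns a 5xx three times in a row (failureThreshold: 3), the liveness probe restarts the container, while the readiness probe merely removes it from Service endpoints.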
Deploy it with the YAML above by running kubectl create -f deployment.yaml, then check the Deployment information:
```
root@iZbp12lq02mc4cz0rt0ce9Z:/homework# k get deployment
```
3. Deploy the Service proxy
```yaml
apiVersion: v1
kind: Service             # resource type
metadata:
  name: product-server    # the name acts like a service name in docker-compose
spec:
  ports:
    - name: psvc
      port: 80            # service port, reachable only from inside the cluster
      targetPort: 80      # container port
    - name: grpc
      port: 81
      targetPort: 81
  selector:
    app: product-server   # matches the pod labels in the deployment; a many-to-many relation
  type: ClusterIP         # internal network
```
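The selector works by plain label matching: the Service targets every pod whose labels contain all of the selector's key/value pairs. A small illustration with hypothetical pod data (this is the matching rule, not the real API):

```python
def matches(selector: dict, pod_labels: dict) -> bool:
    """A Service selects a pod when every selector pair appears in the pod's labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"app": "product-server"}
pods = [
    {"name": "product-server-2q7zx", "labels": {"app": "product-server"}},
    {"name": "mssql-xzxc2", "labels": {"app": "mssql"}},
]
# only label-matched pods become endpoints of the Service
endpoints = [p["name"] for p in pods if matches(selector, p["labels"])]
print(endpoints)  # -> ['product-server-2q7zx']
```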
Run kubectl create -f <filename>.yaml, then view the Service information:
```
root@kubernetes-master:/usr/local/k8s-test01/product# kubectl get service
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                          AGE
kubernetes       ClusterIP      10.96.0.1       <none>        443/TCP                          3d15h
mssql            LoadBalancer   10.104.37.195   <pending>     1433:31227/TCP                   26h
product-server   ClusterIP      10.107.6.57     <none>        80/TCP,81/TCP                    6h10m
rabmq            NodePort       10.102.48.159   <none>        15672:31567/TCP,5672:30567/TCP   13h
root@kubernetes-master:/usr/local/k8s-test01/product#
```
The Deployment and Service configurations can live in a single YAML file, separated by ---; they are split here only for readability.
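For reference, combining them is just a matter of the YAML document separator (specs elided; the names mirror the manifests above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-server
# ... deployment spec ...
---
apiVersion: v1
kind: Service
metadata:
  name: product-server
# ... service spec ...
```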
Good, the product service is now deployed. But it cannot be reached from outside the cluster; we need an entry gateway. Here we start with the ingress-nginx-controller.
Deploying Ingress
1. Install ingress-nginx. I used the latest version, 0.30.0. Annoyingly, the download URL is blocked here, but I eventually found the install manifest in the GitHub repository; it is really just a single YAML file.
```yaml
# other resources ...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          # ......
```
Install ingress-nginx by running kubectl create -f ingress-nginx.yaml, then check the result:
```
root@kubernetes-master:/usr/local/k8s-test01/product# kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-77db54fc46-tx6pf   1/1     Running   5          40h
root@kubernetes-master:/usr/local/k8s-test01/product#
```
Next, configure and publish the ingress-nginx gateway service.
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-web
  annotations: # extension info; authentication/authorization settings can go here
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  # routing rules
  rules:
    # host name; must be a domain, mapped in the host machine's hosts file
    - host: product.com
      http:
        paths:
          - path: /
            backend:
              # backend Service name, matching the Service deployed above
              serviceName: product-server
              # backend Service port, matching the Service deployed above
              servicePort: 80
          - path: /grpc
            backend:
              serviceName: product-server
              servicePort: 81
```
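An aside on the rewrite-target: / annotation above: it rewrites the matched path prefix before the request reaches the backend, so a request to product.com/grpc arrives at the Service as /. A rough, hypothetical Python illustration of that rewrite (not how nginx actually implements it):

```python
def rewrite_path(request_path: str, matched_prefix: str, target: str = "/") -> str:
    """Replace the matched ingress path prefix with the rewrite target."""
    if request_path.startswith(matched_prefix):
        remainder = request_path[len(matched_prefix):].lstrip("/")
        return target.rstrip("/") + "/" + remainder if remainder else target
    return request_path

print(rewrite_path("/grpc/orders", "/grpc"))  # -> /orders
print(rewrite_path("/grpc", "/grpc"))         # -> /
```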
Run kubectl create -f productservice-ingress.yaml to deploy it, then look at the gateway information:
```
root@kubernetes-master:/usr/local/k8s-test01/product# kubectl get ingress
NAME        CLASS    HOSTS         ADDRESS   PORTS   AGE
nginx-web   <none>   product.com             80      8h
```
I won't paste the mssql and rabbitmq manifests here. Next, let's hit the product.com domain and see whether the deployment succeeded.


Let's also test a GET endpoint and see whether it gets through (screenshots omitted).


That wraps up the service deployment. A quick summary:
1. This is a test environment, so neither the master nor the ingress is highly available; I'll add HA later when I have time and update this post.
2. External requests reach the ingress-nginx domain (in production this domain would of course point at a public address). Ingress handles authentication and authorization, and routes legitimate requests by path to the matching backend Service. If one ingress instance dies, keepalived floats the VIP over to another slave node, which is what gives ingress its high availability.
3. The Service proxy forwards each request, roughly at random, to a container in one of the label-matched pods. If a node dies or a pod exits abnormally (i.e. with a non-zero exit code), the Deployment spins up a replacement pod. Let's run an experiment: delete one pod and see what happens.
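The self-healing we are about to observe comes from the Deployment (via its ReplicaSet) continuously reconciling desired versus actual replicas. A toy sketch of that idea, nothing like the real controller code:

```python
import uuid

def reconcile(desired_replicas: int, running_pods: list) -> list:
    """Toy reconciliation loop: create pods until the actual count
    matches the desired replica count (the scale-down side is omitted)."""
    pods = list(running_pods)
    while len(pods) < desired_replicas:
        # a real controller would create a Pod via the API server;
        # here we just invent a suffixed name the way a ReplicaSet does
        pods.append(f"product-server-{uuid.uuid4().hex[:5]}")
    return pods

running = ["product-server-2q7zx", "product-server-ppmhx"]
running = [p for p in running if p != "product-server-ppmhx"]  # simulate kubectl delete
running = reconcile(2, running)
print(len(running))  # -> 2
```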
```
root@kubernetes-master:/usr/local/k8s-test01/product# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
mssql-59bd4dc6df-xzxc2            1/1     Running   5          27h
product-server-599cfd85cc-2q7zx   1/1     Running   0          6s
product-server-599cfd85cc-ppmhx   1/1     Running   0          6h51m
rabmq-7c9748f876-9msjg            1/1     Running   0          14h
rabmq-7c9748f876-lggh6            1/1     Running   0          14h
```
First we list all pods in the default namespace, then delete one and see whether it gets recreated. Run the delete command:
```
kubectl delete pods product-server-599cfd85cc-ppmhx
```
Then immediately check the pods again (you have to be quick!):
```
root@kubernetes-master:/usr/local/k8s-test01/product# kubectl get pods
NAME                              READY   STATUS        RESTARTS   AGE
mssql-59bd4dc6df-xzxc2            1/1     Running       5          27h
product-server-599cfd85cc-2q7zx   1/1     Running       0          2m8s
product-server-599cfd85cc-9s497   1/1     Running       0          13s
product-server-599cfd85cc-ppmhx   0/1     Terminating   0          6h53m
rabmq-7c9748f876-9msjg            1/1     Running       0          14h
rabmq-7c9748f876-lggh6            1/1     Running       0          14h
```
See that? The ppmhx pod is terminating, and a new pod was created right away.