k8s Volumes


Introduction

A volume is a shared directory inside a Pod that multiple containers can access. The volume concept in Kubernetes is similar in purpose to Docker's volumes, but the two are not equivalent. First, a Kubernetes volume is defined on the Pod and then mounted by the Pod's containers at specific paths. Second, a Kubernetes volume shares its lifecycle with the Pod, not with any single container: when a container terminates or restarts, the data in the volume is not lost. Finally, volumes support many backend types, including advanced distributed filesystems such as GlusterFS and Ceph.
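
To make the wiring concrete: a volume is declared once under the Pod spec, then referenced by name from every container that mounts it. A minimal sketch (the names here are illustrative, not from the examples below):

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo              # hypothetical Pod name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared-data          # must match a name under spec.volumes
      mountPath: /usr/share/nginx/html
  volumes:
  - name: shared-data            # defined at the Pod level, shared by all containers
    emptyDir: {}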

 

emptyDir

An emptyDir volume is created when the Pod is assigned to a node. As the name suggests, its initial content is empty, and there is no need to specify a corresponding directory on the host, because Kubernetes allocates one automatically. When the Pod is removed from the node, the data in the emptyDir is deleted permanently. Typical uses of emptyDir include:

  • Scratch space, such as temporary directories an application needs at runtime that do not have to be kept permanently
  • A temporary directory for intermediate checkpoints of a long-running task
  • A directory from which one container reads data produced by another (a directory shared by multiple containers)

emptyDir is simple to use. In most cases we first declare a volume on the Pod, then reference that volume in each container and mount it at some directory inside the container. For example, define two containers in one Pod, one running nginx and one running busybox, then define a shared volume on the Pod; both containers should see its contents. The topology:

[Figure: two containers in one Pod sharing an emptyDir volume]

Note: the shared volume's name must be identical in every container's volumeMounts and in the Pod-level volumes entry.

[root@master ~]# cat test.yaml 
apiVersion: v1
kind: Service
metadata:
  name: serivce-mynginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: mynginx
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy
  namespace: default
spec:
  replicas: 1
  selector: 
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: mynginx
        image: lizhaoqwe/nginx:v1
        volumeMounts:
        - mountPath: /usr/share/nginx/html/
          name: share
        ports:
        - name: nginx
          containerPort: 80
      - name: busybox
        image: busybox
        command:
        - "/bin/sh"
        - "-c"
        - "sleep 4444"
        volumeMounts:
        - mountPath: /data/
          name: share
      volumes:
      - name: share
        emptyDir: {}

 

Create the Pod

[root@master ~]# kubectl create -f test.yaml

Check the Pod

[root@master ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
deploy-5cd657dd46-sx287   2/2     Running   0          2m1s

Check the Service

[root@master ~]# kubectl get svc
NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes        ClusterIP   10.96.0.1      <none>        443/TCP        6d10h
serivce-mynginx   NodePort    10.99.110.43   <none>        80:30080/TCP   2m27s

Exec into the busybox container and create an index.html:

[root@master ~]# kubectl exec -it deploy-5cd657dd46-sx287 -c busybox -- /bin/sh

Inside the container:
/data # cd /data
/data # echo "fengzi" > index.html
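
You can also check from the master without opening an interactive shell; assuming the pod name from the output above, something like:

[root@master ~]# kubectl exec deploy-5cd657dd46-sx287 -c mynginx -- cat /usr/share/nginx/html/index.html
fengzi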

Open a browser to verify:

[Screenshot: the page returns "fengzi"]

Now check inside the nginx container whether the index.html file is there:

[root@master ~]# kubectl exec -it deploy-5cd657dd46-sx287 -c mynginx -- /bin/sh
Inside the container:
# cd /usr/share/nginx/html
# ls -ltr
total 4
-rw-r--r-- 1 root root 7 Sep  9 17:06 index.html

OK, the file we wrote in busybox is visible to nginx!
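
As an aside, an emptyDir can be backed by memory (tmpfs) instead of node disk, which suits small scratch data; a sketch of the relevant fields:

      volumes:
      - name: share
        emptyDir:
          medium: Memory         # back the volume with tmpfs rather than node disk
          sizeLimit: 64Mi        # optional cap on the volume's size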

 

hostPath

A hostPath volume mounts a file or directory from the host into the Pod. It is typically useful in the following cases:

  • When log files produced by a containerized application must be kept permanently, they can be stored on the host's high-speed filesystem
  • When a containerized application needs access to the internal data structures of the Docker engine on the host, hostPath can point at the host's /var/lib/docker directory so the application can read Docker's filesystem directly

When using this type of volume, note the following:

  • Pods with identical configuration scheduled to different nodes may see different results through the volume, because the directories and files on each host differ
  • If resource quotas are in use, Kubernetes cannot account for the host resources consumed through a hostPath

 

hostPath volume architecture:

[Figure: hostPath volume architecture]

Now let's define a hostPath volume and see the effect:

[root@master ~]# cat test.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy
  namespace: default
spec:
  selector:
    app: mynginx
  type: NodePort
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 31111

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      name: web
      labels:
        app: mynginx
    spec:
      containers:
      - name: mycontainer
        image: lizhaoqwe/nginx:v1
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: persistent-storage
        ports:
        - containerPort: 80
      volumes:
      - name: persistent-storage
        hostPath:
          type: DirectoryOrCreate
          path: /mydata

Pay attention to the type field under hostPath; the built-in help says:

[root@master data]# kubectl explain deploy.spec.template.spec.volumes.hostPath.type
KIND:     Deployment
VERSION:  extensions/v1beta1

FIELD:    type <string>

DESCRIPTION:
     Type for HostPath Volume Defaults to "" More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

The help text itself doesn't say much, but it points to a reference page. That page lists quite a few possible values for type (summarized below); we use the first one, DirectoryOrCreate.
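
For reference, the values the documentation lists for hostPath.type are roughly these:

  • "" (empty string, the default): no check is performed before mounting the volume
  • DirectoryOrCreate: if nothing exists at the path, an empty directory (mode 0755) is created
  • Directory: a directory must already exist at the path
  • FileOrCreate: if nothing exists at the path, an empty file (mode 0644) is created
  • File: a file must already exist at the path
  • Socket: a UNIX socket must exist at the path
  • CharDevice: a character device must exist at the path
  • BlockDevice: a block device must exist at the path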
Apply the YAML file

[root@master ~]# kubectl create -f test.yaml 
service/nginx-deploy created
deployment.apps/mydeploy created

Then check on both nodes whether the /mydata directory exists.

[Screenshot: /mydata created on both nodes]

The mydata directory exists on both sides. Next, write something into it on each node.

[Screenshot: an index.html written under /mydata on each node]

With content written on both nodes, we can verify:

[Screenshot: browser responses alternating between the two nodes' pages]

Access works, and requests are load-balanced across the replicas.
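
A quick way to watch the load balancing from a terminal instead of a browser (the node IP is illustrative; use any node's address plus the NodePort from the Service):

[root@master ~]# for i in 1 2 3 4; do curl -s 192.168.254.12:31111; done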

 

NFS

NFS should already be familiar, so I won't explain what NFS is; I'll only show how to mount an NFS filesystem in a k8s cluster.

The architecture of a volume mounted from an NFS filesystem:

[Figure: NFS-backed volume architecture]

Start another VM outside the cluster and install the nfs-utils package.

Note: nfs-utils must also be installed on every node in the cluster, otherwise mounting will fail!

[root@master mnt]# yum install nfs-utils
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.cn99.com
 * extras: mirrors.cn99.com
 * updates: mirrors.cn99.com
Package 1:nfs-utils-1.3.0-0.61.el7.x86_64 already installed and latest version
Nothing to do

Edit /etc/exports and add the following:

[root@localhost share]# vim /etc/exports
    /share  192.168.254.0/24(insecure,rw,no_root_squash)

Restart the NFS service

[root@localhost share]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service

Write an index.html with some content into /share:

[root@localhost share]# echo "nfs server" > /share/index.html
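
Before wiring this into Kubernetes, it's worth confirming the export is visible from a cluster node (which also proves nfs-utils is installed there); a quick check along these lines:

[root@master ~]# showmount -e 192.168.254.11
Export list for 192.168.254.11:
/share 192.168.254.0/24
[root@master ~]# mount -t nfs 192.168.254.11:/share /mnt && cat /mnt/index.html && umount /mnt
nfs server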

On the master node of the Kubernetes cluster, create the YAML file:

[root@master ~]# cat test.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy
  namespace: default
spec:
  selector:
    app: mynginx
  type: NodePort
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 31111

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      name: web
      labels:
        app: mynginx
    spec:
      containers:
      - name: mycontainer
        image: lizhaoqwe/nginx:v1
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nfs
        ports:
        - containerPort: 80
      volumes:
      - name: nfs
        nfs:
          server: 192.168.254.11       # NFS server address
          path: /share                 # directory exported by the NFS server

Apply the YAML file

[root@master ~]# kubectl create -f test.yaml               
service/nginx-deploy created
deployment.apps/mydeploy created

Verify:

[Screenshot: browser shows "nfs server"]

OK, no problem!!!

pvc

The volumes so far are defined on the Pod itself and are part of the compute resource, whereas network storage really exists as a resource independent of compute. With virtual machines, for example, we usually define networked storage first, then carve a disk out of it and attach it to the VM. PersistentVolume (PV) and the associated PersistentVolumeClaim (PVC) serve a similar purpose.

A PV can be understood as a piece of storage corresponding to some networked storage in the Kubernetes cluster. It is similar to a volume, with these differences:

  • A PV can only be network storage; it does not belong to any node, but it can be accessed from every node
  • A PV is not defined on a Pod; it is defined independently of any Pod

[Figure: PV and PVC architecture]

On the NFS server, add exports for the NFS volumes and restart the service:

[root@localhost ~]# cat /etc/exports
/share_v1  192.168.254.0/24(insecure,rw,no_root_squash)
/share_v2  192.168.254.0/24(insecure,rw,no_root_squash)
/share_v3  192.168.254.0/24(insecure,rw,no_root_squash)
/share_v4  192.168.254.0/24(insecure,rw,no_root_squash)
/share_v5  192.168.254.0/24(insecure,rw,no_root_squash)
[root@localhost ~]# service nfs restart

Create the corresponding directories on the NFS server:

[root@localhost /]# mkdir /share_v{1,2,3,4,5}

 

On the master node of the cluster, create the PVs. I create five PVs here, matching the five directories exported by the NFS server:

[root@master ~]# cat createpv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  nfs:                        # storage type
    path: /share_v1           # directory on the NFS server to mount
    server: 192.168.254.11    # NFS server address; a domain name also works if it resolves
  accessModes:                # access modes: ReadWriteMany = read-write, many nodes may mount;
  - ReadWriteMany             # ReadWriteOnce = read-write, only one node may mount;
  - ReadWriteOnce             # ReadOnlyMany = read-only, many nodes may mount
  capacity:                   # storage capacity
    storage: 10Gi             # this PV provides 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
spec:
  nfs:
    path: /share_v2
    server: 192.168.254.11
  accessModes: 
  - ReadWriteMany
  capacity:
    storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03
spec:
  nfs:
    path: /share_v3
    server: 192.168.254.11
  accessModes: 
  - ReadWriteMany
  - ReadWriteOnce
  capacity:
    storage: 30Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv04
spec:
  nfs:
    path: /share_v4
    server: 192.168.254.11
  accessModes: 
  - ReadWriteMany
  - ReadWriteOnce
  capacity:
    storage: 40Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv05
spec:
  nfs:
    path: /share_v5
    server: 192.168.254.11
  accessModes: 
  - ReadWriteMany
  - ReadWriteOnce
  capacity:
    storage: 50Gi

 

Apply the YAML file

[root@master ~]# kubectl create -f createpv.yaml 
persistentvolume/pv01 created
persistentvolume/pv02 created
persistentvolume/pv03 created
persistentvolume/pv04 created
persistentvolume/pv05 created

Check the PVs

[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01   10Gi       RWO,RWX        Retain           Available                                   5m10s
pv02   20Gi       RWX            Retain           Available                                   5m10s
pv03   30Gi       RWO,RWX        Retain           Available                                   5m9s
pv04   40Gi       RWO,RWX        Retain           Available                                   5m9s
pv05   50Gi       RWO,RWX        Retain           Available                                   5m9s


A quick explanation:
ACCESS MODES:
  RWO: ReadWriteOnce
  RWX: ReadWriteMany
  ROX: ReadOnlyMany
RECLAIM POLICY:
  Retain:  when its PVC is released, the PV and the data on it are kept and will not be bound by another PVC
  Recycle: the PV is kept but its data is scrubbed
  Delete:  the released PV and the backing storage volume are deleted
STATUS:
  Available: free, not bound to any PVC
  Bound:     bound to a PVC
  Released:  the bound PVC has been deleted, but the resource has not yet been reclaimed by the cluster
  Failed:    automatic reclamation of the PV failed
CLAIM:
  which PVC the PV is bound to, in the form NAMESPACE/PVC_NAME
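
The Retain policy shown above is the default for manually created PVs; it can also be set explicitly in the PV spec, for example:

spec:
  persistentVolumeReclaimPolicy: Retain   # or Recycle / Delete, per the table above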

With PVs in place, we can create a PVC:

[root@master ~]# cat test.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy
  namespace: default
spec:
  selector:
    app: mynginx
  type: NodePort
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 31111

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      name: web
      labels:
        app: mynginx
    spec:
      containers:
      - name: mycontainer
        image: nginx
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: html
        ports:
        - containerPort: 80
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: mypvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

Apply the YAML file

[root@master ~]# kubectl create -f test.yaml 
service/nginx-deploy created
deployment.apps/mydeploy created
persistentvolumeclaim/mypvc created

Check the PVs again: the PVC has been bound to pv02

[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv01   10Gi       RWO,RWX        Retain           Available                                           22m
pv02   20Gi       RWX            Retain           Bound       default/mypvc                           22m
pv03   30Gi       RWO,RWX        Retain           Available                                           22m
pv04   40Gi       RWO,RWX        Retain           Available                                           22m
pv05   50Gi       RWO,RWX        Retain           Available                                           22m

Check the PVC

[root@master ~]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv02     20Gi       RWX                           113s
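
The claim requested 5Gi with ReadWriteMany, and the controller bound it to a PV whose capacity and access modes satisfy the request. If you want to script this kind of check, the bound PV name can be extracted with jsonpath, e.g.:

[root@master ~]# kubectl get pvc mypvc -o jsonpath='{.spec.volumeName}'
pv02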

 

Verify

On the NFS server, go to the directory backing the bound PV (pv02 maps to /share_v2) and run:

[root@localhost share_v2]# echo 'test pvc' > index.html

Then open a browser:

[Screenshot: browser shows "test pvc"]

OK, no problem.

configMap 

A best practice for application deployment is to separate the application's configuration from the program itself. This makes the application easier to reuse, and different configurations enable more flexible behavior. Once an application is packaged as a container image, configuration can be injected at container creation time through environment variables or mounted files; but in a large container cluster, configuring many containers in different ways becomes very complex. Kubernetes 1.2 therefore introduced a unified application-configuration mechanism: ConfigMap.

Typical ways a container consumes a ConfigMap:

  • Generated as environment variables inside the container
  • Used as arguments to the container's startup command (set via environment variables)
  • Mounted as files or directories inside the container through a volume
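
The example below creates the ConfigMap imperatively with kubectl; for completeness, the same object written declaratively would look roughly like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-var
  namespace: default
data:
  nginx_port: "80"               # ConfigMap values are always strings
  nginx_server: 192.168.254.13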

 

Injecting ConfigMap variables into a Pod

For example, use a ConfigMap to create two variables: nginx_port=80 and nginx_server=192.168.254.13.

[root@master ~]# kubectl create configmap nginx-var --from-literal=nginx_port=80 --from-literal=nginx_server=192.168.254.13
configmap/nginx-var created

Check the ConfigMap

[root@master ~]# kubectl get cm
NAME        DATA   AGE
nginx-var   2      5s


[root@master ~]# kubectl describe cm nginx-var
Name:         nginx-var
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
nginx_port:
----
80
nginx_server:
----
192.168.254.13
Events:  <none>

Now create the Pod and inject these two values as environment variables:

[root@master ~]# cat test2.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service-nginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: web
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: nginx
          containerPort: 80
        volumeMounts:
          - name: html
            mountPath: /usr/share/nginx/html/
        env:
        - name: TEST_PORT
          valueFrom:
            configMapKeyRef:
              name: nginx-var
              key: nginx_port
        - name: TEST_HOST
          valueFrom:
            configMapKeyRef:
              name: nginx-var
              key: nginx_server
      volumes:
      - name: html
        emptyDir: {}

Apply the manifest

[root@master ~]# kubectl create -f test2.yaml 
service/service-nginx created

Check the Pods

[root@master ~]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
mydeploy-d975ff774-fzv7g   1/1     Running   0          19s
mydeploy-d975ff774-nmmqt   1/1     Running   0          19s

Exec into a container and inspect the environment variables

[root@master ~]# kubectl exec -it mydeploy-d975ff774-fzv7g -- /bin/sh


# printenv
SERVICE_NGINX_PORT_80_TCP_PORT=80
KUBERNETES_PORT=tcp://10.96.0.1:443
SERVICE_NGINX_PORT_80_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT=443
HOSTNAME=mydeploy-d975ff774-fzv7g
SERVICE_NGINX_SERVICE_PORT_NGINX=80
HOME=/root
PKG_RELEASE=1~buster
SERVICE_NGINX_PORT_80_TCP=tcp://10.99.184.186:80
TEST_HOST=192.168.254.13
TEST_PORT=80
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
NGINX_VERSION=1.17.3
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
NJS_VERSION=0.3.5
KUBERNETES_PORT_443_TCP_PROTO=tcp
SERVICE_NGINX_SERVICE_HOST=10.99.184.186
SERVICE_NGINX_PORT=tcp://10.99.184.186:80
SERVICE_NGINX_SERVICE_PORT=80
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
SERVICE_NGINX_PORT_80_TCP_ADDR=10.99.184.186

The ConfigMap values have been injected into the container's environment (see TEST_HOST and TEST_PORT above).

Note: with this environment-variable injection, changes made to the ConfigMap after the Pod starts have no effect on the Pod. If the ConfigMap is mounted as a volume instead, updates do propagate to the Pod in near-real time. Keep this distinction in mind.
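
As a side note, if you want every key of the ConfigMap exposed as an environment variable without listing each one, envFrom does this in a single block (with the same caveat: values are captured only at container start); a sketch:

        envFrom:
        - configMapRef:
            name: nginx-var      # imports nginx_port and nginx_server as-is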

Mounting a ConfigMap into a Pod as a volume

As mentioned above, ConfigMap values injected as environment variables reach the Pod, but later edits to the ConfigMap do not update it. If you want the configuration inside the Pod to track the ConfigMap in real time, mount it as a volume:

[root@master ~]# cat test2.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service-nginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: web
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: nginx
          containerPort: 80
        volumeMounts:
          - name: html-config
            mountPath: /nginx/vars/
            readOnly: true
      volumes:
      - name: html-config
        configMap:
          name: nginx-var

Apply the YAML file

[root@master ~]# kubectl create -f test2.yaml 
service/service-nginx created
deployment.apps/mydeploy created

Check the Pods

[root@master ~]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-6f6b6c8d9d-pfzjs   1/1     Running   0          90s
mydeploy-6f6b6c8d9d-r9rz4   1/1     Running   0          90s

Exec into one of the containers

[root@master ~]# kubectl exec -it mydeploy-6f6b6c8d9d-pfzjs -- /bin/bash

Inside the container, inspect the files backed by the ConfigMap

root@mydeploy-6f6b6c8d9d-pfzjs:/# cd /nginx/vars
root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars# ls
nginx_port  nginx_server
root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars# cat nginx_port 
80
root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars# 

Edit the ConfigMap and change the port from 80 to 8080

[root@master ~]# kubectl edit cm nginx-var
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  nginx_port: "8080"
  nginx_server: 192.168.254.13
kind: ConfigMap
metadata:
  creationTimestamp: "2019-09-13T14:22:20Z"
  name: nginx-var
  namespace: default
  resourceVersion: "248779"
  selfLink: /api/v1/namespaces/default/configmaps/nginx-var
  uid: dfce8730-f028-4c57-b497-89b8f1854630

A little while after the edit, the value in the file has been updated to 8080 (volume-backed ConfigMaps are refreshed on the kubelet's periodic sync, so expect a short delay):

root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars# cat nginx_port 
8080
root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars# 

Injecting a configuration file into a Pod with ConfigMap

Taking an nginx configuration file as the example: prepare the config file on the host, create a ConfigMap from it, and inject it into the container through the ConfigMap.

Create the nginx configuration file

[root@master ~]# vim www.conf 
server {
    server_name 192.168.254.13;
    listen 80;
    root /data/web/html/;
}

Create the ConfigMap

[root@master ~]# kubectl create configmap nginx-config --from-file=/root/www.conf 
configmap/nginx-config created

Check the ConfigMaps

[root@master ~]# kubectl get cm
NAME           DATA   AGE
nginx-config   1      3m3s
nginx-var      2      63m

Create the Pod and mount the ConfigMap as a volume

[root@master ~]# cat test2.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service-nginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: web
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: nginx
          containerPort: 80
        volumeMounts:
          - name: html-config
            mountPath: /etc/nginx/conf.d/
            readOnly: true
      volumes:
      - name: html-config
        configMap:
          name: nginx-config

Start the containers; nginx loads the ConfigMap-provided configuration at startup

[root@master ~]# kubectl create -f test2.yaml 
service/service-nginx created
deployment.apps/mydeploy created

Check the Pod

[root@master ~]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
mydeploy-fd46f76d6-jkq52   1/1     Running   0          22s   10.244.1.46   node1   <none>           <none>

Request the page served by the container: port 80 works, port 8888 does not


[root@master ~]# curl 10.244.1.46
this is test web


[root@master ~]# curl 10.244.1.46:8888
curl: (7) Failed connect to 10.244.1.46:8888; Connection refused


Next, edit the ConfigMap and change port 80 to 8888

[root@master ~]# kubectl edit cm nginx-config
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  www.conf: |
    server {
        server_name 192.168.254.13;
        listen 8888;
        root /data/web/html/;
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2019-09-13T15:22:22Z"
  name: nginx-config
  namespace: default
  resourceVersion: "252615"
  selfLink: /api/v1/namespaces/default/configmaps/nginx-config
  uid: f1881f87-5a91-4b8e-ab39-11a2f45733c2

Exec into the container and check the config file; it has been updated:

root@mydeploy-fd46f76d6-jkq52:/usr/bin# cat /etc/nginx/conf.d/www.conf 
server {
    server_name 192.168.254.13;
    listen 8888;
    root /data/web/html/;
}

Testing again still fails on 8888: the config file has changed, but nginx has not reloaded it. Reload it manually for now; this can be automated with a script later (a sketch follows at the end of this section).

[root@master ~]# curl 10.244.1.46
this is test web
[root@master ~]# curl 10.244.1.46:8888
curl: (7) Failed connect to 10.244.1.46:8888; Connection refused

Manually reload the configuration inside the container

root@mydeploy-fd46f76d6-jkq52:/usr/bin# nginx -s reload
2019/09/13 16:04:12 [notice] 34#34: signal process started

Test again: port 80 is now refused, while the new port 8888 responds

[root@master ~]# curl 10.244.1.46
curl: (7) Failed connect to 10.244.1.46:80; Connection refused
[root@master ~]# curl 10.244.1.46:8888
this is test web
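
As for automating that reload, one simple approach is a loop inside the container (or a sidecar sharing the volume) that polls the mounted file and signals nginx when it changes. A minimal sketch, assuming md5sum and awk are available in the image (the script name is hypothetical):

#!/bin/sh
# poll-reload.sh: reload nginx whenever the ConfigMap-backed file changes
CONF=/etc/nginx/conf.d/www.conf
last=$(md5sum "$CONF" | awk '{print $1}')
while sleep 5; do
  cur=$(md5sum "$CONF" | awk '{print $1}')
  if [ "$cur" != "$last" ]; then
    nginx -s reload              # pick up the updated configuration
    last=$cur
  fi
done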

Done!!

