1. Provide configuration files for nginx and MySQL with a ConfigMap, and verify
1.1 nginx-configmap.yml example
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default: |
    server {
       listen       80;
       server_name  www.mysite.com;
       index        index.html;

       location / {
           root /data/nginx/html;
           if (!-e $request_filename) {
               rewrite ^/(.*) /index.html last;
           }
       }
    }

---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data/nginx/html
          name: nginx-static-dir
        - name: nginx-config
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: nginx-static-dir
        hostPath:
          path: /data/nginx/linux39
      - name: nginx-config
        configMap:
          name: nginx-config
          items:
          - key: default
            path: mysite.conf

---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30019
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
Create the pod and verify:
root@deploy:~# kubectl apply -f nginx_configmap.yml
configmap/nginx-config created
deployment.apps/nginx-deployment created
service/ng-deploy-80 created
root@deploy:~# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6b86dd48c8-m2bfm   1/1     Running   0          28s
root@nginx-deployment-6b86dd48c8-m2bfm:/# more /etc/nginx/conf.d/mysite.conf
server {
   listen       80;
   server_name  www.mysite.com;
   index        index.html;

   location / {
       root /data/nginx/html;
       if (!-e $request_filename) {
           rewrite ^/(.*) /index.html last;
       }
   }
}
root@nginx-deployment-6b86dd48c8-m2bfm:/# echo hello > /data/nginx/html/hello.txt

Find the node the pod runs on, then check that node's hostPath directory:
root@deploy:~# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
nginx-deployment-6b86dd48c8-m2bfm   1/1     Running   0          10m   10.200.255.206   172.31.7.112   <none>           <none>
root@k8s-node2:~# cat /data/nginx/linux39/hello.txt
hello

Access the nginx Service to verify:
root@deploy:~# kubectl get svc -A -o wide
NAMESPACE   NAME           TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
default     ng-deploy-80   NodePort   10.100.46.19   <none>        81:30019/TCP   22m   app=ng-deploy-80
root@deploy:~# curl http://172.31.7.112:30019/hello.txt
hello
root@deploy:~#
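Note that a ConfigMap mounted as a volume is refreshed inside the pod automatically (after a kubelet sync delay), but nginx only re-reads its configuration on reload. A minimal sketch of the edit-and-reload cycle, assuming the object names above:

root@deploy:~# kubectl edit configmap nginx-config     # e.g. change server_name
root@deploy:~# kubectl exec -it nginx-deployment-6b86dd48c8-m2bfm -- nginx -s reload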
1.2 mysql-configmap.yml example
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-password-config
data:
  password: xm123456

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: harbor.magedu.net/magedu/mysql:5.6.46
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: mysql-password-config
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-data-dir
      volumes:
      - name: mysql-data-dir
        hostPath:
          path: /data/xiaoma_app1/mysql

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: mysql-service-label
  name: mysql-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 43306
  selector:
    app: mysql
Create the pod and verify:
root@deploy:~# kubectl apply -f mysql_configmap.yml
configmap/mysql-password-config created
deployment.apps/mysql created
service/mysql-service created
root@deploy:~# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
mysql-854d557b5d-lbhml   1/1     Running   0          51s
root@deploy:~# kubectl get svc -o wide
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE    SELECTOR
mysql-service   NodePort   10.100.11.35   <none>        3306:43306/TCP   115s   app=mysql

Connect to MySQL to verify:
root@deploy:~# telnet 172.31.7.112 43306
Trying 172.31.7.112...
Connected to 172.31.7.112.
Escape character is '^]'.
root@deploy:~# apt install mariadb-client
root@deploy:~# mysql -h 172.31.7.112 -P 43306 -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.46 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> exit
Bye
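The manifest's own comment ("Use secret in real usage") points at the safer pattern, since ConfigMap values are stored in plain text. A minimal sketch with the same password value; the Secret name mysql-password is an assumption:

root@deploy:~# kubectl create secret generic mysql-password --from-literal=password=xm123456

        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-password
              key: password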
2. Summary of PV/PVC characteristics
PV/PVC decouple pods from storage: when the storage changes we do not have to modify the pods, and storage management is separated from application permissions.
- A PersistentVolume (PV) is a piece of network storage provisioned by an administrator; like a node, it is a cluster resource. A PV is a volume plugin, but its lifecycle is independent of any pod that uses it, and the API object captures the details of the storage implementation, e.g. NFS, iSCSI, or a cloud provider's storage system. A PV is an administrator-defined description of storage; it is a global resource (it belongs to no namespace) that records the storage type, size, and access modes, and its lifecycle is independent of pods: destroying a pod that uses a PV has no effect on the PV.
- A PersistentVolumeClaim (PVC) is a user's request for storage. It is analogous to a pod: pods consume node resources, PVCs consume PV (storage) resources. Pods request specific levels of resources (CPU and memory); PVCs request a specific size and access mode (for example read-write-once or read-only-many). A PVC is a namespaced resource.
A PV is an abstraction of the underlying network storage: it defines network storage as a storage resource, so that one large pool can be carved into pieces and handed to different workloads.
A PVC is a claim on PV resources, just as a pod consumes node resources: the pod writes its data through the PVC to the PV, and the PV in turn persists it to the backend storage.
2.1 PersistentVolume parameters
#kubectl explain PersistentVolume
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-datadir-pv-1
spec:
  capacity:             # size of this PV, kubectl explain PersistentVolume.spec.capacity
    storage: 10Gi
  accessModes:          # access modes, kubectl explain PersistentVolume.spec.accessModes
  - ReadWriteOnce       # the PV can be mounted read-write by a single node (RWO)
  # - ReadOnlyMany      # the PV can be mounted by many nodes, read-only (ROX)
  # - ReadWriteMany     # the PV can be mounted read-write by many nodes (RWX); the most common case
  nfs:
    path: /data/k8sdata/magedu/redis-datadir-1
    server: 172.31.7.109

#persistentVolumeReclaimPolicy
#Reclaim policy: what happens to an already-created volume when it is released.
#kubectl explain PersistentVolume.spec.persistentVolumeReclaimPolicy
#  Retain   keep the volume as-is after the PV is deleted; an administrator must clean it up manually (recommended)
#  Recycle  reclaim the space, i.e. delete all data on the volume (including directories and hidden files); currently only NFS and hostPath support this
#  Delete   delete the volume automatically

#volumeMode
#volume mode, kubectl explain PersistentVolume.spec.volumeMode
#whether the volume is consumed as a raw block device or with a filesystem; the default is filesystem

#mountOptions
#list of extra mount options, for finer-grained control
#  ro
#  etc.
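For completeness, the reclaim policy is set as a field on the PV spec itself; a minimal fragment using the Retain policy recommended above:

spec:
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain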
2.2 PersistentVolumeClaim parameters
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-datadir-pvc-1
  namespace: magedu
spec:
  volumeName: redis-datadir-pv-1
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

#kubectl explain PersistentVolumeClaim
#accessModes:          # PVC access modes, kubectl explain PersistentVolumeClaim.spec.accessModes
#- ReadWriteOnce       # the PVC can be mounted read-write by a single node (RWO)
#- ReadOnlyMany        # the PVC can be mounted by many nodes, read-only (ROX)
#- ReadWriteMany       # the PVC can be mounted read-write by many nodes (RWX)
#resources:            # size of the storage the PVC requests
#selector:             # label selector for the PV to bind
#  matchLabels         # match by label name
#  matchExpressions    # match by expression
#volumeName:           # name of the PV to bind
#volumeMode:           # whether the PVC is consumed as a block device or with a filesystem; the default is filesystem
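And a minimal sketch of a pod consuming this PVC; the pod name, image, and mount path are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: redis-pvc-demo        # hypothetical name
  namespace: magedu
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: redis-data
      mountPath: /data
  volumes:
  - name: redis-data
    persistentVolumeClaim:
      claimName: redis-datadir-pvc-1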
3. Run zookeeper, nginx, and other services on PV/PVC
3.1 Configure the zookeeper service
3.1.1 Build the zookeeper image
(1) File preparation
root@deploy:/tdq/k8s-data/dockerfile/web/magedu# tree zookeeper/
zookeeper/
├── Dockerfile
├── KEYS
├── bin
│   └── zkReady.sh
├── build-command.sh
├── conf
│   ├── log4j.properties
│   └── zoo.cfg
├── entrypoint.sh
├── repositories
├── zookeeper-3.12-Dockerfile.tar.gz
├── zookeeper-3.4.14.tar.gz
└── zookeeper-3.4.14.tar.gz.asc
(2) Dockerfile example 1
#FROM harbor-linux38.local.com/linux38/slim_java:8
FROM harbor.magedu.net/baseimages/slim_java:8

ENV ZK_VERSION 3.4.14

ADD repositories /etc/apk/repositories

# Download Zookeeper
COPY zookeeper-3.4.14.tar.gz /tmp/zk.tgz
COPY zookeeper-3.4.14.tar.gz.asc /tmp/zk.tgz.asc
COPY KEYS /tmp/KEYS

RUN apk add --no-cache --virtual .build-deps \
      ca-certificates \
      gnupg \
      tar \
      wget && \
    #
    # Install dependencies
    apk add --no-cache \
      bash && \
    #
    # Verify the signature
    export GNUPGHOME="$(mktemp -d)" && \
    gpg -q --batch --import /tmp/KEYS && \
    gpg -q --batch --no-auto-key-retrieve --verify /tmp/zk.tgz.asc /tmp/zk.tgz && \
    #
    # Set up directories
    #
    mkdir -p /zookeeper/data /zookeeper/wal /zookeeper/log && \
    #
    # Install
    tar -x -C /zookeeper --strip-components=1 --no-same-owner -f /tmp/zk.tgz && \
    #
    # Slim down
    cd /zookeeper && \
    cp dist-maven/zookeeper-${ZK_VERSION}.jar . && \
    rm -rf \
      *.txt \
      *.xml \
      bin/README.txt \
      bin/*.cmd \
      conf/* \
      contrib \
      dist-maven \
      docs \
      lib/*.txt \
      lib/cobertura \
      lib/jdiff \
      recipes \
      src \
      zookeeper-*.asc \
      zookeeper-*.md5 \
      zookeeper-*.sha1 && \
    #
    # Clean up
    apk del .build-deps && \
    rm -rf /tmp/* "$GNUPGHOME"

COPY conf /zookeeper/conf/
COPY bin/zkReady.sh /zookeeper/bin/
COPY entrypoint.sh /

ENV PATH=/zookeeper/bin:${PATH} \
    ZOO_LOG_DIR=/zookeeper/log \
    ZOO_LOG4J_PROP="INFO, CONSOLE, ROLLINGFILE" \
    JMXPORT=9010

ENTRYPOINT [ "/entrypoint.sh" ]

CMD [ "zkServer.sh", "start-foreground" ]

EXPOSE 2181 2888 3888 9010
(3) entrypoint.sh example
#!/bin/bash

echo ${MYID:-1} > /zookeeper/data/myid

if [ -n "$SERVERS" ]; then
    IFS=\, read -a servers <<<"$SERVERS"
    for i in "${!servers[@]}"; do
        printf "\nserver.%i=%s:2888:3888" "$((1 + $i))" "${servers[$i]}" >> /zookeeper/conf/zoo.cfg
    done
fi

cd /zookeeper
exec "$@"
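With SERVERS=zookeeper1,zookeeper2,zookeeper3 and MYID=1..3, as set in the Deployments in 3.1.5, the loop appends one member line per server to zoo.cfg:

server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888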
(4) repositories example
http://mirrors.aliyun.com/alpine/v3.6/main
http://mirrors.aliyun.com/alpine/v3.6/community
(5) build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.magedu.net/magedu/zookeeper:${TAG} .
sleep 1
docker push harbor.magedu.net/magedu/zookeeper:${TAG}
(6) Other files
root@deploy:/tdq/k8s-data/dockerfile/web/magedu# cat zookeeper/bin/zkReady.sh
#!/bin/bash
/zookeeper/bin/zkServer.sh status | egrep 'Mode: (standalone|leading|following|observing)'

root@deploy:/tdq/k8s-data/dockerfile/web/magedu# cat zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/data
dataLogDir=/zookeeper/wal
#snapCount=100000
autopurge.purgeInterval=1
clientPort=2181
quorumListenOnAllIPs=true

root@deploy:/tdq/k8s-data/dockerfile/web/magedu# cat zookeeper/conf/log4j.properties
# Define some default values that can be overridden by system properties
zookeeper.root.logger=INFO, CONSOLE, ROLLINGFILE
zookeeper.console.threshold=INFO
zookeeper.log.dir=/zookeeper/log
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=INFO
zookeeper.tracelog.dir=/zookeeper/log
zookeeper.tracelog.file=zookeeper_trace.log

#
# ZooKeeper Logging Configuration
#

# Format is "<default threshold> (, <appender>)+

# DEFAULT: console appender only
log4j.rootLogger=${zookeeper.root.logger}

# Example with rolling log file
#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE

# Example with rolling log file and tracing
#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

#
# Log INFO level and above messages to the console
#
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add ROLLINGFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}

# Max log file size of 10MB
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# uncomment the next line to limit number of backup files
log4j.appender.ROLLINGFILE.MaxBackupIndex=5

log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add TRACEFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=${zookeeper.tracelog.dir}/${zookeeper.tracelog.file}

log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n
root@deploy:/tdq/k8s-data/dockerfile/web/magedu#
(7) Build the zookeeper:v3.4.14 image
1> Push slim_java:8 to the local Harbor:
root@deploy:~# docker pull elevy/slim_java:8
root@deploy:~# docker tag elevy/slim_java:8 harbor.s209.com/baseimages/slim_java:8
root@deploy:~# docker push harbor.s209.com/baseimages/slim_java:8

2> Build the zookeeper:v3.4.14 image and push it to the Harbor project:
root@deploy:~# cd /tdq/k8s-data/dockerfile/web/magedu/zookeeper
root@deploy:/tdq/k8s-data/dockerfile/web/magedu/zookeeper# bash build-command.sh v3.4.14
Successfully tagged harbor.s209.com/magedu/zookeeper:v3.4.14
3.1.2 Configure the NFS server
1> Install nfs-server on the HA node and create and export the shared directories:
root@k8s-ha1:~# apt-get update
root@k8s-ha1:~# apt-get install nfs-server
root@k8s-ha1:~# mkdir /data/k8sdata -pv
root@k8s-ha1:~# mkdir /data/k8sdata/magedu/zookeeper-datadir-1 -pv
root@k8s-ha1:~# mkdir /data/k8sdata/magedu/zookeeper-datadir-2
root@k8s-ha1:~# mkdir /data/k8sdata/magedu/zookeeper-datadir-3
root@k8s-ha1:~# vi /etc/exports
/data/k8sdata *(rw,no_root_squash)
root@k8s-ha1:~# systemctl restart nfs-server.service

2> Verify the NFS export from a client:
root@deploy:~# apt install nfs-common
root@deploy:~# showmount -e 172.31.7.109
Export list for 172.31.7.109:
/data/k8sdata *
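As an optional extra check, the export can be test-mounted from any node before the PVs are created; a sketch, assuming /mnt is free:

root@deploy:~# mount -t nfs 172.31.7.109:/data/k8sdata /mnt
root@deploy:~# df -h /mnt && umount /mnt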
3.1.3 Create the zookeeper PVs
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 172.31.7.109
    path: /data/k8sdata/magedu/zookeeper-datadir-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-2
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 172.31.7.109
    path: /data/k8sdata/magedu/zookeeper-datadir-2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-3
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 172.31.7.109
    path: /data/k8sdata/magedu/zookeeper-datadir-3
3.1.4 Create the zookeeper PVCs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-1
  namespace: magedu
spec:
  accessModes:
  - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-1
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-2
  namespace: magedu
spec:
  accessModes:
  - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-2
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-3
  namespace: magedu
spec:
  accessModes:
  - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-3
  resources:
    requests:
      storage: 10Gi
3.1.5 Create the zookeeper pods
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: magedu
spec:
  ports:
  - name: client
    port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper1
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: client
    port: 2181
    nodePort: 42181
  - name: followers
    port: 2888
  - name: election
    port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper2
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: client
    port: 2181
    nodePort: 42182
  - name: followers
    port: 2888
  - name: election
    port: 3888
  selector:
    app: zookeeper
    server-id: "2"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper3
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: client
    port: 2181
    nodePort: 42183
  - name: followers
    port: 2888
  - name: election
    port: 3888
  selector:
    app: zookeeper
    server-id: "3"
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper1
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      containers:
      - name: server
        image: harbor.magedu.net/magedu/zookeeper:1mj8iugs-20211010_114312
        imagePullPolicy: Always
        env:
        - name: MYID
          value: "1"
        - name: SERVERS
          value: "zookeeper1,zookeeper2,zookeeper3"
        - name: JVMFLAGS
          value: "-Xmx2G"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        volumeMounts:
        - mountPath: "/zookeeper/data"
          name: zookeeper-datadir-pvc-1
      volumes:
      - name: data
        emptyDir: {}
      - name: wal
        emptyDir:
          medium: Memory
      - name: zookeeper-datadir-pvc-1
        persistentVolumeClaim:
          claimName: zookeeper-datadir-pvc-1
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper2
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "2"
    spec:
      containers:
      - name: server
        image: harbor.magedu.net/magedu/zookeeper:1mj8iugs-20211010_114312
        imagePullPolicy: Always
        env:
        - name: MYID
          value: "2"
        - name: SERVERS
          value: "zookeeper1,zookeeper2,zookeeper3"
        - name: JVMFLAGS
          value: "-Xmx2G"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        volumeMounts:
        - mountPath: "/zookeeper/data"
          name: zookeeper-datadir-pvc-2
      volumes:
      - name: data
        emptyDir: {}
      - name: wal
        emptyDir:
          medium: Memory
      - name: zookeeper-datadir-pvc-2
        persistentVolumeClaim:
          claimName: zookeeper-datadir-pvc-2
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper3
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "3"
    spec:
      containers:
      - name: server
        image: harbor.magedu.net/magedu/zookeeper:1mj8iugs-20211010_114312
        imagePullPolicy: Always
        env:
        - name: MYID
          value: "3"
        - name: SERVERS
          value: "zookeeper1,zookeeper2,zookeeper3"
        - name: JVMFLAGS
          value: "-Xmx2G"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        volumeMounts:
        - mountPath: "/zookeeper/data"
          name: zookeeper-datadir-pvc-3
      volumes:
      - name: data
        emptyDir: {}
      - name: wal
        emptyDir:
          medium: Memory
      - name: zookeeper-datadir-pvc-3
        persistentVolumeClaim:
          claimName: zookeeper-datadir-pvc-3
Commands to create the PVs, PVCs, and pods:
1> Create the PVs:
root@deploy:/tdq/k8s-data/yaml/magedu/zookeeper/pv# kubectl apply -f zookeeper-persistentvolume.yaml

2> Create the namespace, then create the PVCs:
root@deploy:/tdq/k8s-data/yaml/magedu/zookeeper/pv# kubectl apply -f ../../../namespaces/magedu-ns.yaml
namespace/magedu created
root@deploy:/tdq/k8s-data/yaml/magedu/zookeeper/pv# kubectl apply -f zookeeper-persistentvolumeclaim.yaml

3> Create the zookeeper pods:
root@deploy:/tdq/k8s-data/yaml/magedu/zookeeper# kubectl apply -f zookeeper.yaml

4> Check PV, PVC, pod, and svc status:
root@deploy:/tdq/k8s-data/yaml/magedu/zookeeper# kubectl get pv
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
zookeeper-datadir-pv-1   20Gi       RWO            Retain           Bound    magedu/zookeeper-datadir-pvc-1                           4m53s
zookeeper-datadir-pv-2   20Gi       RWO            Retain           Bound    magedu/zookeeper-datadir-pvc-2                           4m53s
zookeeper-datadir-pv-3   20Gi       RWO            Retain           Bound    magedu/zookeeper-datadir-pvc-3                           4m53s
root@deploy:/tdq/k8s-data/yaml/magedu/zookeeper# kubectl get pvc
No resources found in default namespace.
root@deploy:/tdq/k8s-data/yaml/magedu/zookeeper# kubectl get pvc -n magedu
NAME                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zookeeper-datadir-pvc-1   Bound    zookeeper-datadir-pv-1   20Gi       RWO                           4m13s
zookeeper-datadir-pvc-2   Bound    zookeeper-datadir-pv-2   20Gi       RWO                           4m13s
zookeeper-datadir-pvc-3   Bound    zookeeper-datadir-pv-3   20Gi       RWO                           4m13s
root@deploy:/tdq/k8s-data/yaml/magedu/zookeeper# kubectl get pod -n magedu
NAME                          READY   STATUS    RESTARTS   AGE
zookeeper1-6c4fdf9765-575bp   1/1     Running   0          10s
zookeeper2-bb6dc6f78-fbc5w    1/1     Running   0          10s
zookeeper3-75fb555875-4f5p7   1/1     Running   0          10s
root@deploy:/tdq/k8s-data/yaml/magedu/zookeeper# kubectl get svc -n magedu
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
zookeeper    ClusterIP   10.100.206.138   <none>        2181/TCP                                       2m41s
zookeeper1   NodePort    10.100.67.217    <none>        2181:42181/TCP,2888:33144/TCP,3888:56472/TCP   2m41s
zookeeper2   NodePort    10.100.125.93    <none>        2181:42182/TCP,2888:49376/TCP,3888:64393/TCP   2m41s
zookeeper3   NodePort    10.100.38.215    <none>        2181:42183/TCP,2888:56291/TCP,3888:53783/TCP   2m41s

5> Check the files written to NFS:
root@k8s-ha1:~# tree /data/k8sdata/magedu/
/data/k8sdata/magedu/
├── zookeeper-datadir-1
│   ├── myid
│   └── version-2
│       ├── acceptedEpoch
│       └── currentEpoch
├── zookeeper-datadir-2
│   ├── myid
│   └── version-2
│       ├── acceptedEpoch
│       ├── currentEpoch
│       └── snapshot.100000000
└── zookeeper-datadir-3
    ├── myid
    └── version-2
        ├── acceptedEpoch
        └── currentEpoch

6> Check the zookeeper cluster state by running the status command inside each container (e.g. via the Dashboard):
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower

Across the three containers the reported modes are: follower, follower, leader.
3.1.6 Verify access to the zookeeper service from outside the cluster
Connect with the ZooInspector GUI tool through any node's NodePort (42181/42182/42183) to test; a command-line alternative is sketched below.
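Without a GUI, the same check can be done with ZooKeeper's four-letter-word commands, which are enabled by default in 3.4.x; a sketch assuming node 172.31.7.112 and nc installed:

root@deploy:~# echo ruok | nc 172.31.7.112 42181    # expect "imok"
root@deploy:~# echo stat | nc 172.31.7.112 42181    # shows Mode: leader/follower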
3.2 Configure the nginx service
3.2.1 Build the nginx image
(1) Dockerfile example 1 (built on the CentOS base image)
#Nginx Base Image
FROM harbor.magedu.net/baseimages/magedu-centos-base:7.8.2003

MAINTAINER zhangshijie@magedu.net

RUN yum install -y vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop
ADD nginx-1.18.0.tar.gz /usr/local/src/
RUN cd /usr/local/src/nginx-1.18.0 && ./configure && make && make install && ln -sv
Build script (for the CentOS base image):
#!/bin/bash
docker build -t harbor.magedu.net/baseimages/magedu-centos-base:7.8.2003 .
docker push harbor.magedu.net/baseimages/magedu-centos-base:7.8.2003
(2) nginx image example 2
root@deploy:/tdq/k8s-data/dockerfile/web/magedu# tree nginx/
nginx/
├── Dockerfile
├── app1.tar.gz
├── build-command.sh
├── index.html
├── nginx.conf
└── webapp
    └── index.html
Dockerfile example 2
#Nginx 1.18.0
FROM harbor.magedu.local/pub-images/nginx-base:v1.18.0

ADD nginx.conf /usr/local/nginx/conf/nginx.conf
ADD app1.tar.gz /usr/local/nginx/html/webapp/
ADD index.html /usr/local/nginx/html/index.html

# mount points for static resources
RUN mkdir -p /usr/local/nginx/html/webapp/static /usr/local/nginx/html/webapp/images

EXPOSE 80 443

CMD ["nginx"]
nginx.conf example
user  nginx nginx;
worker_processes  auto;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;
daemon off;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    upstream tomcat_webserver {
        server magedu-tomcat-app1-service.magedu.svc.magedu.local:80;
    }

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        location /webapp {
            root   html;
            index  index.html index.htm;
        }

        location /myapp {
            proxy_pass  http://tomcat_webserver;
            proxy_set_header  Host $host;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  X-Real-IP $remote_addr;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
build-command.sh example
#!/bin/bash
TAG=$1
docker build -t harbor.magedu.net/magedu/nginx-web1:${TAG} .
echo "Image build finished, pushing to Harbor"
sleep 1
docker push harbor.magedu.net/magedu/nginx-web1:${TAG}
echo "Image pushed to Harbor"
3.2.2 Create the nginx PV
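A minimal NFS-backed PV sketch in the style of the zookeeper PVs in 3.1.3; the export path /data/k8sdata/magedu/nginx-datadir is an assumption and must exist on the NFS server first:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-datadir-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 172.31.7.109
    path: /data/k8sdata/magedu/nginx-datadir   # assumed export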
3.2.3 Create the nginx PVC
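A matching PVC sketch, bound explicitly to the PV above:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-datadir-pvc-1
  namespace: magedu
spec:
  volumeName: nginx-datadir-pv-1
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi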
3.2.4 Create the nginx pod
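A minimal Deployment sketch mounting the PVC; the object names and mount path are illustrative assumptions:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-pvc-demo        # hypothetical name
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-pvc-demo
  template:
    metadata:
      labels:
        app: nginx-pvc-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-datadir
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-datadir
        persistentVolumeClaim:
          claimName: nginx-datadir-pvc-1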
4. Custom images: run nginx and tomcat with PV/PVC/NFS for static/dynamic separation
Services call each other through their Service objects: nginx reaches tomcat through the tomcat Service.
4.1 Build the tomcat-base and tomcat-app1 images
(1) tomcat-base image file preparation:
root@deploy:/tdq/k8s-data/dockerfile/web# tree pub-images/tomcat-base-8.5.43/
pub-images/tomcat-base-8.5.43/
├── Dockerfile
├── apache-tomcat-8.5.43.tar.gz
└── build-command.sh
(2) tomcat-base Dockerfile:
#Tomcat 8.5.43 base image
FROM harbor.magedu.net/pub-images/jdk-base:v8.212

MAINTAINER zhangshijie "zhangshijie@magedu.net"

RUN mkdir /apps /data/tomcat/webapps /data/tomcat/logs -pv
ADD apache-tomcat-8.5.43.tar.gz /apps
RUN useradd tomcat -u 2022 && ln -sv /apps/apache-tomcat-8.5.43 /apps/tomcat && chown -R tomcat.tomcat /apps /data
(3) tomcat-base build-command.sh
#!/bin/bash
docker build -t harbor.magedu.net/pub-images/tomcat-base:v8.5.43 .
sleep 3
docker push harbor.magedu.net/pub-images/tomcat-base:v8.5.43
(4) tomcat-app1 image file preparation:
root@deploy:/tdq/k8s-data/dockerfile/web/magedu# tree tomcat-app1/
tomcat-app1/
├── Dockerfile
├── app1.tar.gz
├── build-command.sh
├── catalina.sh
├── filebeat-7.5.1-x86_64.rpm
├── filebeat.yml
├── myapp
│   └── index.html
├── run_tomcat.sh
└── server.xml
(5) tomcat-app1 Dockerfile:
#tomcat web1
FROM harbor.magedu.net/pub-images/tomcat-base:v8.5.43

ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
#ADD myapp/* /data/tomcat/webapps/myapp/
ADD app1.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
#ADD filebeat.yml /etc/filebeat/filebeat.yml
RUN chown -R nginx.nginx /data/ /apps/
#ADD filebeat-7.5.1-x86_64.rpm /tmp/
#RUN cd /tmp && yum localinstall -y filebeat-7.5.1-amd64.deb

EXPOSE 8080 8443

CMD ["/apps/tomcat/bin/run_tomcat.sh"]
(6) tomcat-app1 build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.magedu.net/magedu/tomcat-app1:${TAG} .
sleep 3
docker push harbor.magedu.net/magedu/tomcat-app1:${TAG}
(7) Other tomcat-app1 files:
catalina.sh
#!/bin/sh

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# -----------------------------------------------------------------------------
# Control Script for the CATALINA Server
#
# Environment Variable Prerequisites
#
#   Do not set the variables in this script. Instead put them into a script
#   setenv.sh in CATALINA_BASE/bin to keep your customizations separate.
#
#   CATALINA_HOME   May point at your Catalina "build" directory.
#
#   CATALINA_BASE   (Optional) Base directory for resolving dynamic portions
#                   of a Catalina installation.  If not present, resolves to
#                   the same directory that CATALINA_HOME points to.
#
#   CATALINA_OUT    (Optional) Full path to a file where stdout and stderr
#                   will be redirected.
#                   Default is $CATALINA_BASE/logs/catalina.out
#
#   CATALINA_OPTS   (Optional) Java runtime options used when the "start",
#                   "run" or "debug" command is executed.
#                   Include here and not in JAVA_OPTS all options, that should
#                   only be used by Tomcat itself, not by the stop process,
#                   the version command etc.
#                   Examples are heap size, GC logging, JMX ports etc.
#
#   CATALINA_TMPDIR (Optional) Directory path location of temporary directory
#                   the JVM should use (java.io.tmpdir).  Defaults to
#                   $CATALINA_BASE/temp.
#
#   JAVA_HOME       Must point at your Java Development Kit installation.
#                   Required to run the with the "debug" argument.
#
#   JRE_HOME        Must point at your Java Runtime installation.
#                   Defaults to JAVA_HOME if empty. If JRE_HOME and JAVA_HOME
#                   are both set, JRE_HOME is used.
#
#   JAVA_OPTS       (Optional) Java runtime options used when any command
#                   is executed.
#                   Include here and not in CATALINA_OPTS all options, that
#                   should be used by Tomcat and also by the stop process,
#                   the version command etc.
#                   Most options should go into CATALINA_OPTS.
#
#   JAVA_ENDORSED_DIRS (Optional) Lists of of colon separated directories
#                   containing some jars in order to allow replacement of APIs
#                   created outside of the JCP (i.e. DOM and SAX from W3C).
#                   It can also be used to update the XML parser implementation.
#                   Note that Java 9 no longer supports this feature.
#                   Defaults to $CATALINA_HOME/endorsed.
#
#   JPDA_TRANSPORT  (Optional) JPDA transport used when the "jpda start"
#                   command is executed. The default is "dt_socket".
#
#   JPDA_ADDRESS    (Optional) Java runtime options used when the "jpda start"
#                   command is executed. The default is localhost:8000.
#
#   JPDA_SUSPEND    (Optional) Java runtime options used when the "jpda start"
#                   command is executed. Specifies whether JVM should suspend
#                   execution immediately after startup. Default is "n".
#
#   JPDA_OPTS       (Optional) Java runtime options used when the "jpda start"
#                   command is executed. If used, JPDA_TRANSPORT, JPDA_ADDRESS,
#                   and JPDA_SUSPEND are ignored. Thus, all required jpda
#                   options MUST be specified. The default is:
#
#                   -agentlib:jdwp=transport=$JPDA_TRANSPORT,
#                       address=$JPDA_ADDRESS,server=y,suspend=$JPDA_SUSPEND
#
#   JSSE_OPTS       (Optional) Java runtime options used to control the TLS
#                   implementation when JSSE is used. Default is:
#                   "-Djdk.tls.ephemeralDHKeySize=2048"
#
#   CATALINA_PID    (Optional) Path of the file which should contains the pid
#                   of the catalina startup java process, when start (fork) is
#                   used
#
#   LOGGING_CONFIG  (Optional) Override Tomcat's logging config file
#                   Example (all one line)
#                   LOGGING_CONFIG="-Djava.util.logging.config.file=$CATALINA_BASE/conf/logging.properties"
#
#   LOGGING_MANAGER (Optional) Override Tomcat's logging manager
#                   Example (all one line)
#                   LOGGING_MANAGER="-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager"
#
#   USE_NOHUP       (Optional) If set to the string true the start command will
#                   use nohup so that the Tomcat process will ignore any hangup
#                   signals. Default is "false" unless running on HP-UX in which
#                   case the default is "true"
# -----------------------------------------------------------------------------

JAVA_OPTS="-server -Xms1g -Xmx1g -Xss512k -Xmn1g -XX:CMSInitiatingOccupancyFraction=65 -XX:+UseFastAccessorMethods -XX:+AggressiveOpts -XX:+UseBiasedLocking -XX:+DisableExplicitGC -XX:MaxTenuringThreshold=10 -XX:NewSize=2048M -XX:MaxNewSize=2048M -XX:NewRatio=2 -XX:PermSize=128m -XX:MaxPermSize=512m -XX:CMSFullGCsBeforeCompaction=5 -XX:+ExplicitGCInvokesConcurrent -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled"

# OS specific support.  $var _must_ be set to either true or false.
cygwin=false
darwin=false
os400=false
hpux=false
case "`uname`" in
CYGWIN*) cygwin=true;;
Darwin*) darwin=true;;
OS400*) os400=true;;
HP-UX*) hpux=true;;
esac

# resolve links - $0 may be a softlink
PRG="$0"

while [ -h "$PRG" ]; do
  ls=`ls -ld "$PRG"`
  link=`expr "$ls" : '.*-> \(.*\)$'`
  if expr "$link" : '/.*' > /dev/null; then
    PRG="$link"
  else
    PRG=`dirname "$PRG"`/"$link"
  fi
done

# Get standard environment variables
PRGDIR=`dirname "$PRG"`

# Only set CATALINA_HOME if not already set
[ -z "$CATALINA_HOME" ] && CATALINA_HOME=`cd "$PRGDIR/.." >/dev/null; pwd`

# Copy CATALINA_BASE from CATALINA_HOME if not already set
[ -z "$CATALINA_BASE" ] && CATALINA_BASE="$CATALINA_HOME"

# Ensure that any user defined CLASSPATH variables are not used on startup,
# but allow them to be specified in setenv.sh, in rare case when it is needed.
CLASSPATH=

if [ -r "$CATALINA_BASE/bin/setenv.sh" ]; then
  . "$CATALINA_BASE/bin/setenv.sh"
elif [ -r "$CATALINA_HOME/bin/setenv.sh" ]; then
  . "$CATALINA_HOME/bin/setenv.sh"
fi

# For Cygwin, ensure paths are in UNIX format before anything is touched
if $cygwin; then
  [ -n "$JAVA_HOME" ] && JAVA_HOME=`cygpath --unix "$JAVA_HOME"`
  [ -n "$JRE_HOME" ] && JRE_HOME=`cygpath --unix "$JRE_HOME"`
  [ -n "$CATALINA_HOME" ] && CATALINA_HOME=`cygpath --unix "$CATALINA_HOME"`
  [ -n "$CATALINA_BASE" ] && CATALINA_BASE=`cygpath --unix "$CATALINA_BASE"`
  [ -n "$CLASSPATH" ] && CLASSPATH=`cygpath --path --unix "$CLASSPATH"`
fi

# Ensure that neither CATALINA_HOME nor CATALINA_BASE contains a colon
# as this is used as the separator in the classpath and Java provides no
# mechanism for escaping if the same character appears in the path.
case $CATALINA_HOME in
  *:*) echo "Using CATALINA_HOME:   $CATALINA_HOME";
       echo "Unable to start as CATALINA_HOME contains a colon (:) character";
       exit 1;
esac
case $CATALINA_BASE in
  *:*) echo "Using CATALINA_BASE:   $CATALINA_BASE";
       echo "Unable to start as CATALINA_BASE contains a colon (:) character";
       exit 1;
esac

# For OS400
if $os400; then
  # Set job priority to standard for interactive (interactive - 6) by using
  # the interactive priority - 6, the helper threads that respond to requests
  # will be running at the same priority as interactive jobs.
  COMMAND='chgjob job('$JOBNAME') runpty(6)'
  system $COMMAND

  # Enable multi threading
  export QIBM_MULTI_THREADED=Y
fi

# Get standard Java environment variables
if $os400; then
  # -r will Only work on the os400 if the files are:
  # 1. owned by the user
  # 2. owned by the PRIMARY group of the user
  # this will not work if the user belongs in secondary groups
  . "$CATALINA_HOME"/bin/setclasspath.sh
else
  if [ -r "$CATALINA_HOME"/bin/setclasspath.sh ]; then
    . "$CATALINA_HOME"/bin/setclasspath.sh
  else
    echo "Cannot find $CATALINA_HOME/bin/setclasspath.sh"
    echo "This file is needed to run this program"
    exit 1
  fi
fi

# Add on extra jar files to CLASSPATH
if [ ! -z "$CLASSPATH" ] ; then
  CLASSPATH="$CLASSPATH":
fi
CLASSPATH="$CLASSPATH""$CATALINA_HOME"/bin/bootstrap.jar

if [ -z "$CATALINA_OUT" ] ; then
  CATALINA_OUT="$CATALINA_BASE"/logs/catalina.out
fi

if [ -z "$CATALINA_TMPDIR" ] ; then
  # Define the java.io.tmpdir to use for Catalina
  CATALINA_TMPDIR="$CATALINA_BASE"/temp
fi

# Add tomcat-juli.jar to classpath
# tomcat-juli.jar can be over-ridden per instance
if [ -r "$CATALINA_BASE/bin/tomcat-juli.jar" ] ; then
  CLASSPATH=$CLASSPATH:$CATALINA_BASE/bin/tomcat-juli.jar
else
  CLASSPATH=$CLASSPATH:$CATALINA_HOME/bin/tomcat-juli.jar
fi

# Bugzilla 37848: When no TTY is available, don't output to console
have_tty=0
if [ "`tty`" != "not a tty" ]; then
    have_tty=1
fi

# For Cygwin, switch paths to Windows format before running java
if $cygwin; then
  JAVA_HOME=`cygpath --absolute --windows "$JAVA_HOME"`
  JRE_HOME=`cygpath --absolute --windows "$JRE_HOME"`
  CATALINA_HOME=`cygpath --absolute --windows "$CATALINA_HOME"`
  CATALINA_BASE=`cygpath --absolute --windows "$CATALINA_BASE"`
  CATALINA_TMPDIR=`cygpath --absolute --windows "$CATALINA_TMPDIR"`
  CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
  JAVA_ENDORSED_DIRS=`cygpath --path --windows "$JAVA_ENDORSED_DIRS"`
fi

if [ -z "$JSSE_OPTS" ] ; then
  JSSE_OPTS="-Djdk.tls.ephemeralDHKeySize=2048"
fi
JAVA_OPTS="$JAVA_OPTS $JSSE_OPTS"

# Register custom URL handlers
# Do this here so custom URL handles (specifically 'war:...') can be used in the security policy
JAVA_OPTS="$JAVA_OPTS -Djava.protocol.handler.pkgs=org.apache.catalina.webresources"

# Set juli LogManager config file if it is present and an override has not been issued
if [ -z "$LOGGING_CONFIG" ]; then
  if [ -r "$CATALINA_BASE"/conf/logging.properties ]; then
    LOGGING_CONFIG="-Djava.util.logging.config.file=$CATALINA_BASE/conf/logging.properties"
  else
    # Bugzilla 45585
    LOGGING_CONFIG="-Dnop"
  fi
fi

if [ -z "$LOGGING_MANAGER" ]; then
  LOGGING_MANAGER="-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager"
fi

# Java 9 no longer supports the java.endorsed.dirs
# system property. Only try to use it if
# JAVA_ENDORSED_DIRS was explicitly set
# or CATALINA_HOME/endorsed exists.
ENDORSED_PROP=ignore.endorsed.dirs
if [ -n "$JAVA_ENDORSED_DIRS" ]; then
    ENDORSED_PROP=java.endorsed.dirs
fi
if [ -d "$CATALINA_HOME/endorsed" ]; then
    ENDORSED_PROP=java.endorsed.dirs
fi

# Uncomment the following line to make the umask available when using the
# org.apache.catalina.security.SecurityListener
#JAVA_OPTS="$JAVA_OPTS -Dorg.apache.catalina.security.SecurityListener.UMASK=`umask`"

if [ -z "$USE_NOHUP" ]; then
    if $hpux; then
        USE_NOHUP="true"
    else
        USE_NOHUP="false"
    fi
fi
unset _NOHUP
if [ "$USE_NOHUP" = "true" ]; then
    _NOHUP=nohup
fi

# Add the JAVA 9 specific start-up parameters required by Tomcat
JDK_JAVA_OPTIONS="$JDK_JAVA_OPTIONS --add-opens=java.base/java.lang=ALL-UNNAMED"
JDK_JAVA_OPTIONS="$JDK_JAVA_OPTIONS --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED"
export JDK_JAVA_OPTIONS

# ----- Execute The Requested Command -----------------------------------------

# Bugzilla 37848: only output this if we have a TTY
if [ $have_tty -eq 1 ]; then
  echo "Using CATALINA_BASE:   $CATALINA_BASE"
  echo "Using CATALINA_HOME:   $CATALINA_HOME"
  echo "Using CATALINA_TMPDIR: $CATALINA_TMPDIR"
  if [ "$1" = "debug" ] ; then
    echo "Using JAVA_HOME:       $JAVA_HOME"
  else
    echo "Using JRE_HOME:        $JRE_HOME"
  fi
  echo "Using CLASSPATH:       $CLASSPATH"
  if [ ! -z "$CATALINA_PID" ]; then
    echo "Using CATALINA_PID:    $CATALINA_PID"
  fi
fi

if [ "$1" = "jpda" ] ; then
  if [ -z "$JPDA_TRANSPORT" ]; then
    JPDA_TRANSPORT="dt_socket"
  fi
  if [ -z "$JPDA_ADDRESS" ]; then
    JPDA_ADDRESS="localhost:8000"
  fi
  if [ -z "$JPDA_SUSPEND" ]; then
    JPDA_SUSPEND="n"
  fi
  if [ -z "$JPDA_OPTS" ]; then
    JPDA_OPTS="-agentlib:jdwp=transport=$JPDA_TRANSPORT,address=$JPDA_ADDRESS,server=y,suspend=$JPDA_SUSPEND"
  fi
  CATALINA_OPTS="$JPDA_OPTS $CATALINA_OPTS"
  shift
fi

if [ "$1" = "debug" ] ; then
  if $os400; then
    echo "Debug command not available on OS400"
    exit 1
  else
    shift
    if [ "$1" = "-security" ] ; then
      if [ $have_tty -eq 1 ]; then
        echo "Using Security Manager"
      fi
      shift
      exec "$_RUNJDB" "$LOGGING_CONFIG" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
        -D$ENDORSED_PROP="$JAVA_ENDORSED_DIRS" \
        -classpath "$CLASSPATH" \
        -sourcepath "$CATALINA_HOME"/../../java \
        -Djava.security.manager \
        -Djava.security.policy=="$CATALINA_BASE"/conf/catalina.policy \
        -Dcatalina.base="$CATALINA_BASE" \
        -Dcatalina.home="$CATALINA_HOME" \
        -Djava.io.tmpdir="$CATALINA_TMPDIR" \
        org.apache.catalina.startup.Bootstrap "$@" start
    else
      exec "$_RUNJDB" "$LOGGING_CONFIG" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
        -D$ENDORSED_PROP="$JAVA_ENDORSED_DIRS" \
        -classpath "$CLASSPATH" \
        -sourcepath "$CATALINA_HOME"/../../java \
        -Dcatalina.base="$CATALINA_BASE" \
        -Dcatalina.home="$CATALINA_HOME" \
        -Djava.io.tmpdir="$CATALINA_TMPDIR" \
        org.apache.catalina.startup.Bootstrap "$@" start
    fi
  fi

elif [ "$1" = "run" ]; then

  shift
  if [ "$1" = "-security" ] ; then
    if [ $have_tty -eq 1 ]; then
      echo "Using Security Manager"
    fi
    shift
    eval exec "\"$_RUNJAVA\"" "\"$LOGGING_CONFIG\"" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
      -D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
      -classpath "\"$CLASSPATH\"" \
      -Djava.security.manager \
      -Djava.security.policy=="\"$CATALINA_BASE/conf/catalina.policy\"" \
      -Dcatalina.base="\"$CATALINA_BASE\"" \
      -Dcatalina.home="\"$CATALINA_HOME\"" \
      -Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
      org.apache.catalina.startup.Bootstrap "$@" start
  else
    eval exec "\"$_RUNJAVA\"" "\"$LOGGING_CONFIG\"" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
      -D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
      -classpath "\"$CLASSPATH\"" \
      -Dcatalina.base="\"$CATALINA_BASE\"" \
      -Dcatalina.home="\"$CATALINA_HOME\"" \
      -Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
      org.apache.catalina.startup.Bootstrap "$@" start
  fi

elif [ "$1" = "start" ] ; then

  if [ ! -z "$CATALINA_PID" ]; then
    if [ -f "$CATALINA_PID" ]; then
      if [ -s "$CATALINA_PID" ]; then
        echo "Existing PID file found during start."
        if [ -r "$CATALINA_PID" ]; then
          PID=`cat "$CATALINA_PID"`
          ps -p $PID >/dev/null 2>&1
          if [ $? -eq 0 ] ; then
            echo "Tomcat appears to still be running with PID $PID. Start aborted."
            echo "If the following process is not a Tomcat process, remove the PID file and try again:"
            ps -f -p $PID
            exit 1
          else
            echo "Removing/clearing stale PID file."
            rm -f "$CATALINA_PID" >/dev/null 2>&1
            if [ $? != 0 ]; then
              if [ -w "$CATALINA_PID" ]; then
                cat /dev/null > "$CATALINA_PID"
              else
                echo "Unable to remove or clear stale PID file. Start aborted."
                exit 1
              fi
            fi
          fi
        else
          echo "Unable to read PID file. Start aborted."
          exit 1
        fi
      else
        rm -f "$CATALINA_PID" >/dev/null 2>&1
        if [ $? != 0 ]; then
          if [ ! -w "$CATALINA_PID" ]; then
            echo "Unable to remove or write to empty PID file. Start aborted."
            exit 1
          fi
        fi
      fi
    fi
  fi

  shift
  touch "$CATALINA_OUT"
  if [ "$1" = "-security" ] ; then
    if [ $have_tty -eq 1 ]; then
      echo "Using Security Manager"
    fi
    shift
    eval $_NOHUP "\"$_RUNJAVA\"" "\"$LOGGING_CONFIG\"" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
      -D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
      -classpath "\"$CLASSPATH\"" \
      -Djava.security.manager \
      -Djava.security.policy=="\"$CATALINA_BASE/conf/catalina.policy\"" \
      -Dcatalina.base="\"$CATALINA_BASE\"" \
      -Dcatalina.home="\"$CATALINA_HOME\"" \
      -Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
      org.apache.catalina.startup.Bootstrap "$@" start \
      >> "$CATALINA_OUT" 2>&1 "&"
  else
    eval $_NOHUP "\"$_RUNJAVA\"" "\"$LOGGING_CONFIG\"" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
      -D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
      -classpath "\"$CLASSPATH\"" \
      -Dcatalina.base="\"$CATALINA_BASE\"" \
      -Dcatalina.home="\"$CATALINA_HOME\"" \
      -Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
      org.apache.catalina.startup.Bootstrap "$@" start \
      >> "$CATALINA_OUT" 2>&1 "&"
  fi

  if [ ! -z "$CATALINA_PID" ]; then
    echo $! > "$CATALINA_PID"
  fi

  echo "Tomcat started."

elif [ "$1" = "stop" ] ; then

  shift

  SLEEP=5
  if [ ! -z "$1" ]; then
    echo $1 | grep "[^0-9]" >/dev/null 2>&1
    if [ $? -gt 0 ]; then
      SLEEP=$1
      shift
    fi
  fi

  FORCE=0
  if [ "$1" = "-force" ]; then
    shift
    FORCE=1
  fi

  if [ ! -z "$CATALINA_PID" ]; then
    if [ -f "$CATALINA_PID" ]; then
      if [ -s "$CATALINA_PID" ]; then
        kill -0 `cat "$CATALINA_PID"` >/dev/null 2>&1
        if [ $? -gt 0 ]; then
          echo "PID file found but no matching process was found. Stop aborted."
          exit 1
        fi
      else
        echo "PID file is empty and has been ignored."
      fi
    else
      echo "\$CATALINA_PID was set but the specified file does not exist. Is Tomcat running? Stop aborted."
      exit 1
    fi
  fi

  eval "\"$_RUNJAVA\"" $LOGGING_MANAGER $JAVA_OPTS \
    -D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
    -classpath "\"$CLASSPATH\"" \
    -Dcatalina.base="\"$CATALINA_BASE\"" \
    -Dcatalina.home="\"$CATALINA_HOME\"" \
    -Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
    org.apache.catalina.startup.Bootstrap "$@" stop

  # stop failed. Shutdown port disabled? Try a normal kill.
  if [ $? != 0 ]; then
    if [ ! -z "$CATALINA_PID" ]; then
      echo "The stop command failed. Attempting to signal the process to stop through OS signal."
      kill -15 `cat "$CATALINA_PID"` >/dev/null 2>&1
    fi
  fi

  if [ ! -z "$CATALINA_PID" ]; then
    if [ -f "$CATALINA_PID" ]; then
      while [ $SLEEP -ge 0 ]; do
        kill -0 `cat "$CATALINA_PID"` >/dev/null 2>&1
        if [ $? -gt 0 ]; then
          rm -f "$CATALINA_PID" >/dev/null 2>&1
          if [ $? != 0 ]; then
            if [ -w "$CATALINA_PID" ]; then
              cat /dev/null > "$CATALINA_PID"
              # If Tomcat has stopped don't try and force a stop with an empty PID file
              FORCE=0
            else
              echo "The PID file could not be removed or cleared."
            fi
          fi
          echo "Tomcat stopped."
          break
        fi
        if [ $SLEEP -gt 0 ]; then
          sleep 1
        fi
        if [ $SLEEP -eq 0 ]; then
          echo "Tomcat did not stop in time."
          if [ $FORCE -eq 0 ]; then
            echo "PID file was not removed."
          fi
          echo "To aid diagnostics a thread dump has been written to standard out."
          kill -3 `cat "$CATALINA_PID"`
        fi
        SLEEP=`expr $SLEEP - 1 `
      done
    fi
  fi

  KILL_SLEEP_INTERVAL=5
  if [ $FORCE -eq 1 ]; then
    if [ -z "$CATALINA_PID" ]; then
      echo "Kill failed: \$CATALINA_PID not set"
    else
      if [ -f "$CATALINA_PID" ]; then
        PID=`cat "$CATALINA_PID"`
        echo "Killing Tomcat with the PID: $PID"
        kill -9 $PID
        while [ $KILL_SLEEP_INTERVAL -ge 0 ]; do
          kill -0 `cat "$CATALINA_PID"` >/dev/null 2>&1
          if [ $? -gt 0 ]; then
            rm -f "$CATALINA_PID" >/dev/null 2>&1
            if [ $? != 0 ]; then
              if [ -w "$CATALINA_PID" ]; then
                cat /dev/null > "$CATALINA_PID"
              else
                echo "The PID file could not be removed."
              fi
            fi
            echo "The Tomcat process has been killed."
            break
          fi
          if [ $KILL_SLEEP_INTERVAL -gt 0 ]; then
            sleep 1
          fi
          KILL_SLEEP_INTERVAL=`expr $KILL_SLEEP_INTERVAL - 1 `
        done
        if [ $KILL_SLEEP_INTERVAL -lt 0 ]; then
          echo "Tomcat has not been killed completely yet. The process might be waiting on some system call or might be UNINTERRUPTIBLE."
        fi
      fi
    fi
  fi

elif [ "$1" = "configtest" ] ; then

  eval "\"$_RUNJAVA\"" $LOGGING_MANAGER $JAVA_OPTS \
    -D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
    -classpath "\"$CLASSPATH\"" \
    -Dcatalina.base="\"$CATALINA_BASE\"" \
    -Dcatalina.home="\"$CATALINA_HOME\"" \
    -Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
    org.apache.catalina.startup.Bootstrap configtest
  result=$?
  if [ $result -ne 0 ]; then
    echo "Configuration error detected!"
  fi
  exit $result

elif [ "$1" = "version" ] ; then

  "$_RUNJAVA" \
    -classpath "$CATALINA_HOME/lib/catalina.jar" \
    org.apache.catalina.util.ServerInfo

else

  echo "Usage: catalina.sh ( commands ... )"
  echo "commands:"
  if $os400; then
    echo "  debug             Start Catalina in a debugger (not available on OS400)"
    echo "  debug -security   Debug Catalina with a security manager (not available on OS400)"
  else
    echo "  debug             Start Catalina in a debugger"
    echo "  debug -security   Debug Catalina with a security manager"
  fi
  echo "  jpda start        Start Catalina under JPDA debugger"
  echo "  run               Start Catalina in the current window"
  echo "  run -security     Start in the current window with security manager"
  echo "  start             Start Catalina in a separate window"
  echo "  start -security   Start in a separate window with security manager"
  echo "  stop              Stop Catalina, waiting up to 5 seconds for the process to end"
  echo "  stop n            Stop Catalina, waiting up to n seconds for the process to end"
  echo "  stop -force       Stop Catalina, wait up to 5 seconds and then use kill -KILL if still running"
  echo "  stop n -force     Stop Catalina, wait up to n seconds and then use kill -KILL if still running"
  echo "  configtest        Run a basic syntax check on server.xml - check exit code for result"
  echo "  version           What version of tomcat are you running?"
  echo "Note: Waiting for the process to end and use of the -force option require that \$CATALINA_PID is defined"
  exit 1

fi
server.xml
<?xml version='1.0' encoding='utf-8'?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!-- Note:  A "Server" is not itself a "Container", so you may not
     define subcomponents such as "Valves" at this level.
     Documentation at /docs/config/server.html
 -->
<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <!-- Security listener. Documentation at /docs/config/listeners.html
  <Listener className="org.apache.catalina.security.SecurityListener" />
  -->
  <!--APR library loader. Documentation at /docs/apr.html -->
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <!-- Prevent memory leaks due to use of particular java/javax APIs-->
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <!-- Global JNDI resources
       Documentation at /docs/jndi-resources-howto.html
  -->
  <GlobalNamingResources>
    <!-- Editable user database that can also be used by
         UserDatabaseRealm to authenticate users
    -->
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>

  <!-- A "Service" is a collection of one or more "Connectors" that share
       a single "Container" Note:  A "Service" is not itself a "Container",
       so you may not define subcomponents such as "Valves" at this level.
       Documentation at /docs/config/service.html
   -->
  <Service name="Catalina">

    <!--The connectors can use a shared executor, you can define one or more named thread pools-->
    <!--
    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
        maxThreads="150" minSpareThreads="4"/>
    -->

    <!-- A "Connector" represents an endpoint by which requests are received
         and responses are returned. Documentation at :
         Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
         Java AJP  Connector: /docs/config/ajp.html
         APR (HTTP/AJP) Connector: /docs/apr.html
         Define a non-SSL/TLS HTTP/1.1 Connector on port 8080
    -->
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <!-- A "Connector" using the shared thread pool-->
    <!--
    <Connector executor="tomcatThreadPool"
               port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    -->
    <!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443
         This connector uses the NIO implementation that requires the JSSE
         style configuration. When using the APR/native implementation, the
         OpenSSL style configuration is required as described in the APR/native
         documentation -->
    <!--
    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
               clientAuth="false" sslProtocol="TLS" />
    -->

    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

    <!-- An Engine represents the entry point (within Catalina) that processes
         every request.  The Engine implementation for Tomcat stand alone
         analyzes the HTTP headers included with the request, and passes them
         on to the appropriate Host (virtual host).
         Documentation at /docs/config/engine.html -->

    <!-- You should set jvmRoute to support load-balancing via AJP ie :
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
    -->
    <Engine name="Catalina" defaultHost="localhost">

      <!--For clustering, please take a look at documentation at:
          /docs/cluster-howto.html  (simple how to)
          /docs/config/cluster.html (reference documentation) -->
      <!--
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
      -->

      <!-- Use the LockOutRealm to prevent attempts to guess user passwords
           via a brute-force attack -->
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <!-- This Realm uses the UserDatabase configured in the global JNDI
             resources under the key "UserDatabase".  Any edits
             that are performed against this UserDatabase are immediately
             available for use by the Realm.  -->
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>

      <Host name="localhost"  appBase="/data/tomcat/webapps"
            unpackWARs="false" autoDeploy="false">

        <!-- SingleSignOn valve, share authentication between web applications
             Documentation at: /docs/config/valve.html -->
        <!--
        <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
        -->

        <!-- Access log processes all example.
             Documentation at: /docs/config/valve.html
             Note: The pattern used is equivalent to using pattern="common" -->
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />

      </Host>
    </Engine>
  </Service>
</Server>
run_tomcat.sh
#!/bin/bash #echo "nameserver 223.6.6.6" > /etc/resolv.conf #echo "192.168.7.248 k8s-vip.example.com" >> /etc/hosts #/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat & su - nginx -c "/apps/tomcat/bin/catalina.sh start" tail -f /etc/hosts
filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/catalina.out
  fields:
    type: tomcat-catalina
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.redis:
  hosts: ["172.31.2.105:6379"]
  key: "k8s-magedu-app1"
  db: 1
  timeout: 5
  password: "123456"
4.2 Build the nginx image
The nginx pod reaches the tomcat pods through the tomcat Service; the upstream to tomcat is configured in nginx.conf, as excerpted below.
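The relevant fragment of the nginx.conf shown in full in 3.2.1: the upstream resolves the tomcat Service by its cluster DNS name. Note that this cluster uses the domain magedu.local rather than the default cluster.local:

upstream tomcat_webserver {
    server magedu-tomcat-app1-service.magedu.svc.magedu.local:80;
}

location /myapp {
    proxy_pass http://tomcat_webserver;
}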
4.3 Write the tomcat YAML file
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-deployment-label
  name: magedu-tomcat-app1-deployment
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magedu-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-selector
    spec:
      containers:
      - name: magedu-tomcat-app1-container
        image: harbor.magedu.net/magedu/tomcat-app1:v2
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
        volumeMounts:
        - name: magedu-images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: magedu-static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: magedu-images
        nfs:
          server: 172.31.7.109
          path: /data/k8sdata/magedu/images
      - name: magedu-static
        nfs:
          server: 172.31.7.109
          path: /data/k8sdata/magedu/static
      #nodeSelector:
      #  project: magedu
      #  app: tomcat
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-tomcat-app1-service-label
  name: magedu-tomcat-app1-service
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 40003
  selector:
    app: magedu-tomcat-app1-selector
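A quick deploy-and-verify pass; the manifest file name tomcat-app1.yaml and the node IP are assumptions:

root@deploy:~# kubectl apply -f tomcat-app1.yaml
root@deploy:~# kubectl get pod -n magedu -l app=magedu-tomcat-app1-selector
root@deploy:~# curl http://172.31.7.112:40003/myapp/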
4.4 Write the nginx YAML file
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-nginx-deployment-label
  name: magedu-nginx-deployment
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magedu-nginx-selector
  template:
    metadata:
      labels:
        app: magedu-nginx-selector
    spec:
      containers:
      - name: magedu-nginx-container
        image: harbor.magedu.net/magedu/nginx-web1:v3
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "20"
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - name: magedu-images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: magedu-static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: magedu-images
        nfs:
          server: 172.31.7.109
          path: /data/k8sdata/magedu/images
      - name: magedu-static
        nfs:
          server: 172.31.7.109
          path: /data/k8sdata/magedu/static
      #nodeSelector:
      #  group: magedu
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-nginx-service-label
  name: magedu-nginx-service
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 40002
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 40443
  selector:
    app: magedu-nginx-selector
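With both Deployments running, the static/dynamic split can be checked through the nginx NodePort: / and /webapp are served by nginx from the NFS-backed static directories, while /myapp is proxied to the tomcat Service. A sketch, assuming node 172.31.7.112:

root@deploy:~# curl http://172.31.7.112:40002/webapp/index.html   # static, served by nginx
root@deploy:~# curl http://172.31.7.112:40002/myapp/index.html    # dynamic, proxied to tomcat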
5. Persist and share data in k8s with Ceph RBD and CephFS
Two authentication methods:
- keyring file authentication
- k8s Secret authentication
5.1 Create the Ceph storage pool and RBD image
(1) Create a block storage pool
magedu@ceph-deploy:~/ceph-cluster$ ceph osd pool create shijie-rbd-pool1 64 64
pool 'shijie-rbd-pool1' created
(2) Enable the RBD application on the pool
magedu@ceph-deploy:~/ceph-cluster$ ceph osd pool application enable shijie-rbd-pool1 rbd
enabled application 'rbd' on pool 'shijie-rbd-pool1'
(3) Initialize the pool
magedu@ceph-deploy:~/ceph-cluster$ rbd pool init -p shijie-rbd-pool1
(4) Create an image (one image per service) and list the images:
magedu@ceph-deploy:~/ceph-cluster$ rbd create shijie-img-img1 --size 5G --pool shijie-rbd-pool1
magedu@ceph-deploy:~/ceph-cluster$ rbd ls --pool shijie-rbd-pool1
shijie-img-img1
(5) Create an image with specified features
magedu@ceph-deploy:~/ceph-cluster$ rbd create myimg2 --size 3G --pool shijie-rbd-pool1 --image-format 2 --image-feature layering
(only the layering feature is enabled, because the other features require a newer kernel)
magedu@ceph-deploy:~/ceph-cluster$ rbd ls --pool shijie-rbd-pool1
myimg2
shijie-img-img1
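The enabled feature set can be confirmed with rbd info:

magedu@ceph-deploy:~/ceph-cluster$ rbd info myimg2 --pool shijie-rbd-pool1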
(6) Create a regular Ceph user for k8s access and export its keyring file, as sketched below.
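A sketch of creating the user and exporting its keyring; the capability set here is an assumption, grant only what your cluster policy allows:

magedu@ceph-deploy:~/ceph-cluster$ ceph auth get-or-create client.magedu-shijie mon 'allow r' osd 'allow * pool=shijie-rbd-pool1'
magedu@ceph-deploy:~/ceph-cluster$ ceph auth get client.magedu-shijie -o ceph.client.magedu-shijie.keyring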
(7) Copy the authentication files to the k8s master and node hosts
(8) Verify Ceph permissions on the k8s master and node hosts (the Ceph clients):
ceph --id magedu-shijie -s
5.2 Install ceph-common on the clients
(1) Install ceph-common on the master and node hosts
(2) Verify Ceph permissions on the clients (from the k8s master and node hosts)
(3) Configure /etc/hosts on the k8s master and node hosts
A sketch of the three steps follows.
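# (1) install the client tooling on every master and node (match the cluster's Ceph release)
root@k8s-node1:~# apt install ceph-common
# (2) with ceph.conf and the keyring from 5.1 copied to /etc/ceph/, verify access
root@k8s-node1:~# ceph --id magedu-shijie -s
# (3) map the Ceph monitor addresses in /etc/hosts (the hostnames here are hypothetical)
172.31.6.101 ceph-mon1
172.31.6.102 ceph-mon2
172.31.6.103 ceph-mon3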
5.3 Mount RBD with a keyring file
├── case1-busybox-keyring.yaml
├── case2-nginx-keyring.yaml
case1-busybox-keyring.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: Always
    name: busybox
    #restartPolicy: Always
    volumeMounts:
    - name: rbd-data1
      mountPath: /data
  volumes:
  - name: rbd-data1
    rbd:
      monitors:
      - '172.31.6.101:6789'
      - '172.31.6.102:6789'
      - '172.31.6.103:6789'
      pool: shijie-rbd-pool1
      image: shijie-img-img1
      fsType: ext4
      readOnly: false
      user: magedu-shijie
      keyring: /etc/ceph/ceph.client.magedu-shijie.keyring
case2-nginx-keyring.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: rbd-data1
          mountPath: /data
      volumes:
      - name: rbd-data1
        rbd:
          monitors:
          - '172.31.6.101:6789'
          - '172.31.6.102:6789'
          - '172.31.6.103:6789'
          pool: shijie-rbd-pool1
          image: shijie-img-img1
          fsType: ext4
          readOnly: false
          user: magedu-shijie
          keyring: /etc/ceph/ceph.client.magedu-shijie.keyring
5.4 Mount RBD with a Secret
├── case3-secret-client-shijie.yaml
├── case4-nginx-secret.yaml
case3-secret-client-shijie.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-magedu-shijie
type: "kubernetes.io/rbd"
data:
  key: QVFDbm1HSmg2L0dCTGhBQWtXQlRUTmg2R1RHWGpreXFtdFo5RHc9PQo=
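The data.key value is the user's Ceph key, base64-encoded; a sketch of producing it:

magedu@ceph-deploy:~/ceph-cluster$ ceph auth print-key client.magedu-shijie | base64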
case4-nginx-secret.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: rbd-data1
          mountPath: /data
      volumes:
      - name: rbd-data1
        rbd:
          monitors:
          - '172.31.6.101:6789'
          - '172.31.6.102:6789'
          - '172.31.6.103:6789'
          pool: shijie-rbd-pool1
          image: shijie-img-img1
          fsType: ext4
          readOnly: false
          user: magedu-shijie
          secretRef:
            name: ceph-secret-magedu-shijie
5.5 Dynamic StorageClass: mount Ceph RBD with a Secret
├── case5-secret-admin.yaml
├── case6-ceph-storage-class.yaml
├── case7-mysql-pvc.yaml
├── case8-mysql-single.yaml
case5-secret-admin.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
type: "kubernetes.io/rbd"
data:
  key: QVFBM2RoZGhNZC9VQUJBQXIyU05wSitoY0sxZEQ1bDJIajVYTWc9PQo=
case6-ceph-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage-class-shijie
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # make this the default StorageClass
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.31.6.101:6789,172.31.6.102:6789,172.31.6.103:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: default
  pool: shijie-rbd-pool1
  userId: magedu-shijie
  userSecretName: ceph-secret-magedu-shijie
case7-mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ceph-storage-class-shijie
  resources:
    requests:
      storage: '5Gi'
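After applying, the claim should bind to a dynamically provisioned PV without any PV manifest being written by hand; a sketch:

root@deploy:~# kubectl apply -f case7-mysql-pvc.yaml
root@deploy:~# kubectl get pvc mysql-data-pvc          # expect STATUS Bound
root@deploy:~# kubectl get pv | grep mysql-data-pvc    # the auto-created PV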
case8-mysql-single.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: harbor.magedu.net/magedu/mysql:5.6.46
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: magedu123456
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-data-pvc
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: mysql-service-label
  name: mysql-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 43306
  selector:
    app: mysql
5.6 Test CephFS
├── case5-secret-admin.yaml
└── case9-nginx-cephfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: magedu-staticdata-cephfs
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: magedu-staticdata-cephfs
        cephfs:
          monitors:
          - '172.31.6.101:6789'
          - '172.31.6.102:6789'
          - '172.31.6.103:6789'
          path: /
          user: admin
          secretRef:
            name: ceph-secret-admin
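Because all three replicas mount the same CephFS path, a file written through one pod is visible in the others; a sketch with hypothetical pod names:

root@deploy:~# kubectl exec nginx-deployment-xxxx1 -- sh -c 'echo cephfs-test > /usr/share/nginx/html/index.html'
root@deploy:~# kubectl exec nginx-deployment-xxxx2 -- cat /usr/share/nginx/html/index.html   # expect: cephfs-test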
6. Implement probe-based health checks for pod traffic, and summarize the difference between readiness and liveness probes
6.1 Pod status

A pod's status.phase is one of: Pending (accepted by the cluster but not all containers have started, e.g. still scheduling or pulling images), Running (bound to a node with at least one container running), Succeeded (all containers exited successfully), Failed (all containers terminated and at least one failed), and Unknown (the pod state cannot be obtained, typically because its node is unreachable).
6.2 Probe types
- livenessProbe: checks whether the container is still alive; on failure the kubelet kills the container and restarts it according to the pod's restartPolicy. It does not affect Service endpoints directly.
- readinessProbe: checks whether the container is ready to serve requests; on failure the pod is removed from the endpoints of every matching Service (so no traffic is sent to it), but the container is not restarted.
nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
    #matchExpressions:
    #- {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.17.5
        ports:
        - containerPort: 80
        #readinessProbe:
        livenessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /index1.html
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 40012
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
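Since /index1.html does not exist in the nginx:1.17.5 image, this liveness probe keeps failing and the kubelet keeps restarting the container; this can be observed as follows:

root@deploy:~# kubectl get pod -l app=ng-deploy-80    # RESTARTS keeps increasing
root@deploy:~# kubectl describe pod <pod-name>        # events show the failed liveness probe and the restarts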
redis.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: redis-deploy-6379
    #matchExpressions:
    #- {key: app, operator: In, values: [redis-deploy-6379,ng-rs-81]}
  template:
    metadata:
      labels:
        app: redis-deploy-6379
    spec:
      containers:
      - name: redis-deploy-6379
        image: redis
        ports:
        - containerPort: 6379
        readinessProbe:
          exec:
            command:
            - /usr/local/bin/redis-cli
            - quit
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          exec:
            command:
            - /usr/local/bin/redis-cli
            - quit
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: redis-deploy-6379
spec:
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    nodePort: 40016
    protocol: TCP
  type: NodePort
  selector:
    app: redis-deploy-6379
tcp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
    #matchExpressions:
    #- {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.17.5
        ports:
        - containerPort: 80
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 40012
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80