Host configuration plan:
Hostname | OS version | Specs | Internal IP | External IP (simulated) | Description |
JumpSrv | CentOS 7.5 | 2C/4G/40G | 192.168.1.252 | 124.70.***.*** | Jump server, published to the public network |
registry | CentOS 7.5 | 2C/4G/40G | 192.168.1.100 | - | Image registry; images pulled from the public network are pushed here |
master | CentOS 7.5 | 2C/4G/40G | 192.168.1.21 | - | Kubernetes master node |
node-0001 | CentOS 7.5 | 2C/4G/40G | 192.168.1.31 | - | Worker node |
node-0002 | CentOS 7.5 | 2C/4G/40G | 192.168.1.32 | - | Worker node |
node-0003 | CentOS 7.5 | 2C/4G/40G | 192.168.1.33 | - | Worker node |
Architecture diagram:
Brief notes:
- An Ingress exposes services running in containers to clients outside the cluster. It is the API object that manages external access to the services in a cluster, and it can provide load balancing, SSL termination, and name-based virtual hosting. An Ingress controller (for example ingress-nginx) is required to satisfy an Ingress; creating only the Ingress resource has no effect.
- A Service behaves very much like a load balancer: it distributes traffic across the Pods selected by its label selector.
- The nginx container and the php container are deployed in the same Pod so that dynamic PHP pages can be served (containers in the same Pod share the hostname and the network namespace).
- A hostPath volume stores the log files on each node, an emptyDir volume stores cache files (also on the node where the Pod runs), and NFS combined with a PV/PVC stores the web page content.
- To make the architecture easier to learn and understand, the YAML files for the different parts are written separately in this article.
- The process of building the private image registry is omitted; the repositories it contains for this project are listed below:
[root@master ingress]# curl http://192.168.1.100:5000/v2/_catalog
{"repositories":["coredns","etcd","flannel","kube-apiserver","kube-controller-manager","kube-proxy","kube-scheduler","metrics-server","myos","nginx-ingress-controller","pause"]}
[root@master ingress]# curl http://192.168.1.100:5000/v2/myos/tags/list
{"name":"myos","tags":["php-fpm","httpd","nginx","v1804"]}
[root@master ingress]# curl http://192.168.1.100:5000/v2/nginx-ingress-controller/tags/list
{"name":"nginx-ingress-controller","tags":["0.30.0"]}
- How to obtain the Ingress controller image from the public network (a sketch of pushing it into the private registry follows this list):
docker pull registry.cn-beijing.aliyuncs.com/google_registry/nginx-ingress-controller:0.30.0
or:
wget https://github.com/kubernetes/ingress-nginx/archive/nginx-0.30.0.tar.gz
tar xf nginx-0.30.0.tar.gz
The deployment YAML is located in the extracted archive at: ingress-nginx-nginx-0.30.0/deploy/static/mandatory.yaml
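If the image is pulled from the public mirror, it still has to be re-tagged and pushed into the private registry at 192.168.1.100:5000 before the cluster can pull it. A minimal sketch, assuming the Docker daemon already trusts 192.168.1.100:5000 as an insecure registry:
docker tag registry.cn-beijing.aliyuncs.com/google_registry/nginx-ingress-controller:0.30.0 192.168.1.100:5000/nginx-ingress-controller:0.30.0
docker push 192.168.1.100:5000/nginx-ingress-controller:0.30.0
curl http://192.168.1.100:5000/v2/nginx-ingress-controller/tags/list    # the 0.30.0 tag should now be listed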
Steps:
1. Install the Ingress controller:
[root@master ingress]# curl http://192.168.1.100:5000/v2/nginx-ingress-controller/tags/list
{"name":"nginx-ingress-controller","tags":["0.30.0"]}
[root@master ~]# vim ingress/mandatory.yaml
221:    image: 192.168.1.100:5000/nginx-ingress-controller:0.30.0
[root@master ~]# kubectl apply -f ingress/mandatory.yaml
[root@master ~]# kubectl -n ingress-nginx get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-fc6766d7-ptppp   1/1     Running   0          47s
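It is also worth noting which node the controller Pod was scheduled onto, because that is the node published to the public network in step 10; a quick check (a sketch, the Pod name and node differ per environment):
[root@master ~]# kubectl -n ingress-nginx get pod -o wide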
2. Install NFS on registry, master, node-0001, node-0002, and node-0003, and share a directory over NFS from registry:
[root@registry ~]# yum install -y nfs-utils
[root@registry ~]# mkdir -m 777 /var/webroot
[root@registry ~]# vim /etc/exports
/var/webroot *(rw)
[root@registry ~]# systemctl enable --now nfs
#--------------------------------- every node needs the nfs-utils package ---------------------------------
[root@node-0001 ~]# yum install -y nfs-utils
#-----------------------------------------------------------------------------------------------------------
[root@node-0002 ~]# yum install -y nfs-utils
#-----------------------------------------------------------------------------------------------------------
[root@node-0003 ~]# yum install -y nfs-utils
#--------------------------------- test from any other node -------------------------------------------------
[root@master ~]# yum install -y nfs-utils
[root@master ~]# showmount -e 192.168.1.100
Export list for 192.168.1.100:
/var/webroot *
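Before wiring the export into a PV, it can be test-mounted by hand from one of the nodes to confirm that the *(rw) export really is writable; a short sketch using /mnt as a throw-away mount point:
[root@node-0001 ~]# mount -t nfs 192.168.1.100:/var/webroot /mnt
[root@node-0001 ~]# touch /mnt/write-test && rm -f /mnt/write-test
[root@node-0001 ~]# umount /mnt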
3. Create the PV:
[root@master ~]# vim mypv.yaml
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-nfs
spec:
  volumeMode: Filesystem
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.100
    path: /var/webroot
[root@master ~]# kubectl apply -f mypv.yaml
persistentvolume/pv-nfs created
[root@master ~]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      AGE
pv-nfs   30Gi       RWO,ROX,RWX    Retain           Available   3s
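The PV advertises all three access modes and 30Gi of capacity, and it stays Available until a claim whose requested mode and size it can satisfy binds to it. The NFS source details can be double-checked with (a sketch):
[root@master ~]# kubectl describe pv pv-nfs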
4. Create the PVC:
[root@master configmap]# vim mypvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 25Gi
[root@master configmap]# kubectl apply -f mypvc.yaml
[root@master configmap]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM
pv-nfs   30Gi       RWX            Retain           Bound    default/pvc-nfs
[root@master configmap]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    pv-nfs   30Gi       RWO,ROX,RWX                   27s
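One side effect of persistentVolumeReclaimPolicy: Retain is that deleting this PVC later leaves pv-nfs in the Released state (the data stays on the NFS export) rather than making it Available again. A hedged sketch of how it could be reclaimed by hand if that ever becomes necessary; do not run it while the site is in use:
[root@master ~]# kubectl delete pvc pvc-nfs
[root@master ~]# kubectl patch pv pv-nfs --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'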
5. Create the ConfigMap (built from the nginx configuration file):
[root@master ~]# vim /var/webconf/nginx.conf
... ...
    location ~ \.php$ {
        root            html;
        fastcgi_pass    127.0.0.1:9000;
        fastcgi_index   index.php;
        include         fastcgi.conf;
    }
... ...
[root@master ~]# kubectl create configmap nginx-conf --from-file=/var/webconf/nginx.conf
configmap/nginx-conf created
[root@master ~]# kubectl get configmaps
NAME         DATA   AGE
nginx-conf   1      8s
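The file is stored inside the ConfigMap under a single key named after the file, nginx.conf, which is exactly the key the Deployment below mounts via subPath: nginx.conf. The stored content can be inspected with (a sketch):
[root@master ~]# kubectl describe configmap nginx-conf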
6. Write the YAML file that deploys the nginx + php containers:
vim webnginx.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: webnginx
spec:
  selector:
    matchLabels:
      myapp: nginx
  replicas: 3
  template:
    metadata:
      labels:
        myapp: nginx
    spec:
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
      - name: cache-data
        emptyDir: {}
      - name: log-data
        hostPath:
          path: /var/weblog
          type: DirectoryOrCreate
      - name: website
        persistentVolumeClaim:
          claimName: pvc-nfs
      containers:
      - name: nginx
        image: 192.168.1.100:5000/myos:nginx
        volumeMounts:
        - name: nginx-conf
          subPath: nginx.conf
          mountPath: /usr/local/nginx/conf/nginx.conf
        - name: cache-data
          mountPath: /var/cache
        - name: log-data
          mountPath: /usr/local/nginx/logs
        - name: website
          mountPath: /usr/local/nginx/html
        ports:
        - protocol: TCP
          containerPort: 80
      - name: php-backend
        image: 192.168.1.100:5000/myos:php-fpm
        volumeMounts:
        - name: website
          mountPath: /usr/local/nginx/html
      restartPolicy: Always
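Once this Deployment is applied in step 9, the volume wiring can be spot-checked from inside a Pod; a sketch, where <webnginx-pod> is a placeholder for any Pod name reported by the first command:
[root@master ~]# kubectl get pod -l myapp=nginx -o wide
[root@master ~]# kubectl exec -it <webnginx-pod> -c nginx -- ls /usr/local/nginx/html
[root@master ~]# kubectl exec -it <webnginx-pod> -c php-backend -- ls /usr/local/nginx/html
Both containers should show the same listing, because they mount the same NFS-backed PVC; the nginx logs end up under /var/weblog on whichever node the Pod runs on.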
7. Write the Service file:
vim clusterip.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: myweb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    myapp: nginx
  type: ClusterIP
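The selector here must match the Pod template labels in webnginx.yaml (myapp: nginx); otherwise the Service has no endpoints and the Ingress will return 503. After step 9 this can be verified with (a sketch):
[root@master ~]# kubectl get service myweb
[root@master ~]# kubectl get endpoints myweb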
8. Create the Ingress resource file:
vim ingress-example.yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  backend:
    serviceName: myweb
    servicePort: 80
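Note that extensions/v1beta1 matches the 0.30.0 controller used here, but this Ingress API version was removed in Kubernetes 1.22. On a newer cluster the same single-backend Ingress would be written against networking.k8s.io/v1 instead; a rough sketch, not used in this walkthrough:
vim ingress-example-v1.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: myweb
      port:
        number: 80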
9. Apply the YAML files above (in practice they would be combined into a single file; they are kept separate here to make the architecture easier to learn):
kubectl apply -f webnginx.yaml
kubectl apply -f clusterip.yaml
kubectl apply -f ingress-example.yaml
10. Publish the node that runs the ingress-nginx Pod (node-0002) to the public network (bind an elastic public IP to it, or publish it through an ELB, and access can then be verified from the Internet).
[root@master ingress]# kubectl get ingresses
NAME     HOSTS   ADDRESS        PORTS   AGE
my-app   *       192.168.1.32   80      160m
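For an end-to-end check, drop a page onto the NFS export (it is the web root mounted by every nginx Pod) and request it through the address reported by the Ingress; a sketch, assuming port 80 on node-0002 is reachable from where curl is run:
[root@registry ~]# echo 'hello from nfs' > /var/webroot/index.html
[root@master ~]# curl http://192.168.1.32/
Requests ending in .php are passed by nginx to the php-fpm container over 127.0.0.1:9000, as configured in the nginx-conf ConfigMap.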