1: Installing a k8s cluster
1.1 k8s architecture
Besides the core components, there are some recommended add-ons:
Component | Description |
---|---|
kube-dns | Provides DNS for the entire cluster |
Ingress Controller | Provides an external entry point for services |
Heapster | Provides resource monitoring |
Dashboard | Provides a GUI |
Federation | Provides clusters spanning availability zones |
Fluentd-elasticsearch | Provides cluster log collection, storage, and querying |
1.2: Configure IP addresses, hostnames, and hosts resolution
10.0.0.11 k8s-master
10.0.0.12 k8s-node-1
10.0.0.13 k8s-node-2
All nodes need these hosts entries; a sketch for applying them follows.
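A minimal way to apply them on every node (the heredoc form is mine; the entries are from above):
cat >> /etc/hosts <<'EOF'
10.0.0.11 k8s-master
10.0.0.12 k8s-node-1
10.0.0.13 k8s-node-2
EOF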
1.3: Install etcd on the master node
yum install etcd -y
vim /etc/etcd/etcd.conf
Line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
Line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
systemctl start etcd.service
systemctl enable etcd.service
etcdctl set testdir/testkey0 0
etcdctl get testdir/testkey0
etcdctl -C http://10.0.0.11:2379 cluster-health
etcd natively supports clustering.
Exercise 1: Install and deploy a three-node etcd cluster (a config sketch follows).
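A minimal sketch of the static-cluster settings in /etc/etcd/etcd.conf, shown for the 10.0.0.11 member and using the three hosts above (the member names etcd-1/etcd-2/etcd-3 and the cluster token are illustrative; adjust ETCD_NAME and the advertise IPs on the other two nodes):
ETCD_NAME="etcd-1"
ETCD_LISTEN_PEER_URLS="http://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
ETCD_INITIAL_CLUSTER="etcd-1=http://10.0.0.11:2380,etcd-2=http://10.0.0.12:2380,etcd-3=http://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"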
1.4: Install kubernetes on the master node
yum install kubernetes-master.x86_64 -y
vim /etc/kubernetes/apiserver
Line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
Line 11: KUBE_API_PORT="--port=8080"
Line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"
Line 23: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
vim /etc/kubernetes/config
Line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
systemctl enable kube-apiserver.service
systemctl restart kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service
Check that the services came up correctly:
[root@k8s-master ~]# kubectl get componentstatus
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
1.5: Install kubernetes on the node machines
yum install kubernetes-node.x86_64 -y
vim /etc/kubernetes/config
Line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
vim /etc/kubernetes/kubelet
Line 5:  KUBELET_ADDRESS="--address=0.0.0.0"
Line 8:  KUBELET_PORT="--port=10250"
Line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.12"
Line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
Check on the master node:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS AGE
10.0.0.12 Ready 6m
10.0.0.13 Ready 3s
1.6: Configure the flannel network on all nodes
yum install flannel -y
sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld
## master node:
etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
yum install docker -y
systemctl enable flanneld.service
systemctl restart flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
## worker nodes:
systemctl enable flanneld.service
systemctl restart flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
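To confirm flannel took effect, a quick check of my own (not in the original notes): on each node, the flannel interface and docker0 should both sit inside the 172.16.0.0/16 range configured above (flannel0 is the interface the UDP backend creates):
ip addr show flannel0
ip addr show docker0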
1.7: Configure the master as an image registry
# all nodes
vim /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=10.0.0.11:5000'
systemctl restart docker
# master node
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
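A quick way to verify the registry is to push the nginx image that the rest of these notes use (a sketch; it assumes the image is pullable or already loaded locally):
docker pull nginx:1.13
docker tag nginx:1.13 10.0.0.11:5000/nginx:1.13
docker push 10.0.0.11:5000/nginx:1.13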
2: What is k8s, and what can it do?
k8s is a management tool for docker clusters.
2.1 Core features of k8s
Self-healing: restarts failed containers; replaces and reschedules containers when a node becomes unavailable; kills containers that fail a user-defined health check, and does not advertise a container to clients until it is ready to serve.
Auto-scaling: monitors container CPU load; if the average rises above 80%, the number of containers is increased; if it falls below 10%, the number is decreased.
Service discovery and load balancing: no need to modify your application to use an unfamiliar service discovery mechanism; Kubernetes gives containers their own IP addresses and gives a set of containers a single DNS name, and load-balances across them.
Rolling updates and one-click rollback: Kubernetes rolls out changes to an application or its configuration gradually while monitoring application health, ensuring it never kills all instances at once. If something goes wrong, Kubernetes rolls the change back for you, drawing on a growing ecosystem of deployment solutions.
2.2 History of k8s
2014: the docker container orchestration project is started
July 2015: kubernetes 1.0 is released and joins the CNCF
2016: kubernetes sees off its two rivals, docker swarm and mesos; version 1.2 ships
2017
2018: k8s graduates from the CNCF
2019: 1.13, 1.14, 1.15
CNCF: Cloud Native Computing Foundation
kubernetes (k8s): Greek for helmsman or pilot; the container orchestration space
Google brought 16 years of container experience from its Borg container management platform; kubernetes is Borg rebuilt in Golang.
2.3 Ways to install k8s
yum install: version 1.5; the easiest to get working and the best fit for learning
build from source: the hardest; can install the latest version
binary install: tedious steps; can install the latest version; automate with shell, ansible, or saltstack
kubeadm: the easiest installer; needs network access; can install the latest version
minikube: suited to developers trying out k8s; needs network access
2.4 k8s use cases
k8s is best suited to running microservice projects!
3: Commonly used k8s resources
3.1 Creating a pod resource
The main parts of a k8s YAML manifest:
apiVersion: v1   # API version
kind: Pod        # resource type
metadata:        # attributes
spec:            # details
k8s_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
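To create and inspect the pod (standard kubectl usage; the same operations are listed in section 3.2):
kubectl create -f k8s_pod.yaml
kubectl get pod -o wide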
A pod is made up of at least two containers: the pod infrastructure (pause) container plus the business container.
Pod manifest 2:
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
    - name: busybox
      image: 10.0.0.11:5000/busybox:latest
      command: ["sleep","10000"]
A pod is the smallest resource unit in k8s.
3.2 The ReplicationController resource
rc: keeps the specified number of pods alive at all times; an rc is tied to its pods through a label selector
Common operations on k8s resources:
kubectl create -f xxx.yaml
kubectl get pod|rc
kubectl describe pod nginx
kubectl delete pod nginx    # or: kubectl delete -f xxx.yaml
kubectl edit pod nginx
Create an rc:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: 10.0.0.11:5000/nginx:1.13
          ports:
            - containerPort: 80
Rolling update with an rc
Create a new nginx-rc1.15.yaml (a sketch follows after the rollback command below).
Upgrade:
kubectl rolling-update nginx -f nginx-rc1.15.yaml --update-period=10s
Roll back:
kubectl rolling-update nginx2 -f nginx-rc.yaml --update-period=1s
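A minimal sketch of nginx-rc1.15.yaml, derived from the rc above. rolling-update requires the new rc to use a different name and selector, so nginx2 and app: myweb2 are used here (nginx2 matches the rollback command; the label value is my choice):
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx2
spec:
  replicas: 5
  selector:
    app: myweb2
  template:
    metadata:
      labels:
        app: myweb2
    spec:
      containers:
        - name: myweb2
          image: 10.0.0.11:5000/nginx:1.15
          ports:
            - containerPort: 80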
3.3 The service resource
A service exposes pods' ports.
Create a service:
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort      # or ClusterIP
  ports:
    - port: 80        # cluster IP port
      nodePort: 30000 # node port
      targetPort: 80  # pod port
  selector:
    app: myweb2
Generate an svc on the command line: kubectl expose deployment nginx --type=NodePort --port=80
Change the replica count: kubectl scale rc nginx1 --replicas=2
Enter a container: kubectl exec -it nginx1-1frnf /bin/bash
Change the nodePort range:
vim /etc/kubernetes/apiserver
KUBE_API_ARGS="--service-node-port-range=3000-50000"
A service load-balances with iptables by default; from k8s 1.8 on, LVS (IPVS, layer-4 load balancing) is the recommended mode.
3.4 The deployment resource
An rc rolling update interrupts access through the service, because the replacement pods carry new labels that the service selector may no longer match; k8s introduced the deployment resource to fix this.
Create a deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: 10.0.0.11:5000/nginx:1.13
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 100m
Deployment upgrade and rollback
Create a deployment on the command line:
kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record
Upgrade the image on the command line:
kubectl set image deploy nginx nginx=10.0.0.11:5000/nginx:1.15
View the deployment's full revision history:
kubectl rollout history deployment nginx
Roll the deployment back to the previous revision:
kubectl rollout undo deployment nginx
Roll the deployment back to a specific revision:
kubectl rollout undo deployment nginx --to-revision=2
3.5 Exercise: tomcat + mysql
In k8s, containers reach one another through the service VIP!
The mysql rc and svc:
[root@k8s-master tomcat_daemon]# cat mysql-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: 10.0.0.11:5000/mysql:5.7
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: '123456'
The svc:
[root@k8s-master tomcat_daemon]# cat mysql-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql
The tomcat rc:
[root@k8s-master tomcat_daemon]# cat tomcat-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: 10.0.0.11:5000/tomcat-app:v2
          ports:
            - containerPort: 8080
          env:
            - name: MYSQL_SERVICE_HOST
              value: '10.254.36.202'
            - name: MYSQL_SERVICE_PORT
              value: '3306'
The tomcat svc:
[root@k8s-master tomcat_daemon]# cat tomcat-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30008
  selector:
    app: myweb
3.6 wordpress + mysql
The wordpress manifests:
[root@k8s-master worepress_daemon]# cat wordpress-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mywordpress
spec:
  replicas: 1
  selector:
    app: mywordpress
  template:
    metadata:
      labels:
        app: mywordpress
    spec:
      containers:
        - name: mywordpress
          image: 10.0.0.11:5000/wordpress:v1
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: '10.254.112.209'
            - name: WORDPRESS_DB_USER
              value: 'wordpress'
            - name: WORDPRESS_DB_PASSWORD
              value: 'wordpress'
[root@k8s-master worepress_daemon]# cat wordpress-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mywordpress
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30010
  selector:
    app: mywordpress
The mysql manifests:
[root@k8s-master worepress_daemon]# cat mysql-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: 10.0.0.11:5000/mysql:5.7
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: 'somewordpress'
            - name: MYSQL_DATABASE
              value: 'wordpress'
            - name: MYSQL_USER
              value: 'wordpress'
            - name: MYSQL_PASSWORD
              value: 'wordpress'
[root@k8s-master worepress_daemon]# cat mysql-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql
3.7 Deployment version of wordpress + mysql
[root@k8s-master wordpress_deploy]# cat wp-rc.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: wp
    spec:
      containers:
        - name: wp
          image: 10.0.0.11:5000/wordpress:v1
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: '10.254.235.122'
            - name: WORDPRESS_DB_USER
              value: 'wordpress'
            - name: WORDPRESS_DB_PASSWORD
              value: 'wordpress'
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 100m
[root@k8s-master wordpress_deploy]# cat wp-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: wp
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30011
  selector:
    app: wp
[root@k8s-master wordpress_deploy]# cat mysql-wp-rc.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql-wp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-wp
    spec:
      containers:
        - name: mysql-wp
          image: 10.0.0.11:5000/mysql:5.7
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: 'somewordpress'
            - name: MYSQL_DATABASE
              value: 'wordpress'
            - name: MYSQL_USER
              value: 'wordpress'
            - name: MYSQL_PASSWORD
              value: 'wordpress'
[root@k8s-master wordpress_deploy]# cat mysql-wp-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql-wp
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql-wp
4: k8s add-ons
4.1 The DNS service
Installing the DNS service
1: Download the dns docker image bundle
wget http://192.168.12.201/docker_image/docker_k8s_dns.tar.gz
2: Import the dns docker image bundle (on node2)
3: Edit skydns-rc.yaml
spec:
  nodeSelector:
    kubernetes.io/hostname: 10.0.0.13
  containers:
4: Create the DNS service
kubectl create -f skydns-rc.yaml
5: Check
kubectl get all --namespace=kube-system
6: Edit the kubelet config file on every node
vim /etc/kubernetes/kubelet
KUBELET_ARGS="--cluster_dns=10.254.230.254 --cluster_domain=cluster.local"
systemctl restart kubelet
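Once the kubelets restart, pods resolve services by name. One spot check (my own step, reusing the two-container test pod from 3.1 and the mysql service from 3.5):
kubectl exec -it test -c busybox -- nslookup mysql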
4.2 Namespaces
Namespaces provide resource isolation.
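A minimal usage sketch (qiangge is the namespace used later in section 5.2; the manifest name is illustrative):
kubectl create namespace qiangge
kubectl create -f nginx-rc.yaml -n qiangge
kubectl get all -n qiangge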
4.3 Health checks
4.3.1 Probe types
livenessProbe: liveness check; periodically checks whether the service is alive, and restarts the container when the check fails
readinessProbe: readiness check; periodically checks whether the service is usable, and removes the pod from the service's endpoints when it is not
4.3.2 Probe methods
- exec: run a command inside the container
- httpGet: check the status code returned by an HTTP request
- tcpSocket: test whether a port accepts connections
4.3.3 Using an exec liveness probe
vi nginx_pod_exec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: exec
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
4.3.4 Using an httpGet liveness probe
vi nginx_pod_httpGet.yaml
apiVersion: v1
kind: Pod
metadata:
  name: httpget
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /index.html
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 3
4.3.5 Using a tcpSocket liveness probe
vi nginx_pod_tcpSocket.yaml
apiVersion: v1
kind: Pod
metadata:
  name: tcpsocket
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      livenessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 3
4.3.6 Using an httpGet readiness probe
vi nginx-rc-httpGet.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: readiness
spec:
  replicas: 2
  selector:
    app: readiness
  template:
    metadata:
      labels:
        app: readiness
    spec:
      containers:
        - name: readiness
          image: 10.0.0.11:5000/nginx:1.13
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /qiangge.html
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 3
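Since /qiangge.html does not exist in the stock nginx:1.13 image, the probe fails and the pods never turn Ready; creating the file flips them. A hedged walk-through (the pod name is a placeholder):
kubectl describe pod <readiness-pod>   # events show the failing httpGet probe
kubectl exec -it <readiness-pod> -- touch /usr/share/nginx/html/qiangge.html
kubectl get pod                        # the pod now reports READY 1/1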
4.4 The dashboard service
1: Upload and import the images, then tag them
2: Create the dashboard deployment and service
3: Visit http://10.0.0.11:8080/ui/
4.5 Accessing a service through the apiserver reverse proxy
Option 1: NodePort type
type: NodePort
ports:
  - port: 80
    targetPort: 80
    nodePort: 30008
Option 2: ClusterIP type
type: ClusterIP
ports:
  - port: 80
    targetPort: 80
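A ClusterIP service can then be reached through the apiserver's reverse proxy, the same mechanism behind the /ui/ dashboard URL above (a sketch for this k8s 1.5 setup; myweb in the default namespace is the example service, and the exact proxy path is an assumption):
curl http://10.0.0.11:8080/api/v1/proxy/namespaces/default/services/myweb/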
5: k8s autoscaling
k8s autoscaling needs the heapster monitoring add-on.
5.1 Install heapster monitoring
1: Upload and import the images, then tag them
ls *.tar.gz
for n in `ls *.tar.gz`;do docker load -i $n ;done
docker tag docker.io/kubernetes/heapster_grafana:v2.6.0 10.0.0.11:5000/heapster_grafana:v2.6.0
docker tag docker.io/kubernetes/heapster_influxdb:v0.5 10.0.0.11:5000/heapster_influxdb:v0.5
docker tag docker.io/kubernetes/heapster:canary 10.0.0.11:5000/heapster:canary
2: Upload the manifests, then run kubectl create -f .
3: Open the dashboard to verify
5.2 Autoscaling
1: Edit the rc manifest
containers:
  - name: myweb
    image: 10.0.0.11:5000/nginx:1.13
    ports:
      - containerPort: 80
    resources:
      limits:
        cpu: 100m
      requests:
        cpu: 100m
2: Create the autoscaling rule
kubectl autoscale -n qiangge replicationcontroller myweb --max=8 --min=1 --cpu-percent=8
3: Test
ab -n 1000000 -c 40 http://172.16.28.6/index.html
Scale-up: (screenshot)
Scale-down: (screenshot)
6: Persistent storage
pv: persistent volume
pvc: persistent volume claim
6.1: Install the NFS server (10.0.0.11)
yum install nfs-utils.x86_64 -y
mkdir /data
vim /etc/exports
/data 10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
systemctl start rpcbind
systemctl start nfs
6.2: Install the NFS client on the node machines
yum install nfs-utils.x86_64 -y
showmount -e 10.0.0.11
6.3: Create the pv and pvc
Upload the YAML manifests and create the pv and pvc (a sketch of both follows).
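A minimal sketch of the pair, assuming the NFS export configured above (the 10Gi size is my placeholder; the name matches the claimName used in 6.4):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tomcat-mysql
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.11
    path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tomcat-mysql
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi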
6.4: Create the mysql rc, using the volume in the pod template
volumeMounts:
  - name: mysql
    mountPath: /var/lib/mysql
volumes:
  - name: mysql
    persistentVolumeClaim:
      claimName: tomcat-mysql
6.5: Verify persistence
Method 1: delete the mysql pod; the database must survive
kubectl delete pod mysql-gt054
Method 2: check whether mysql's data files are on the NFS server
6.6: Distributed storage with glusterfs
a: What is glusterfs
Glusterfs is an open-source distributed file system with strong scale-out capability: it can support multiple petabytes of storage and thousands of clients, interconnected over the network into a single parallel network file system. Its hallmarks are scalability, high performance, and high availability.
b: Install glusterfs
All nodes:
yum install centos-release-gluster -y
yum install glusterfs-server -y
systemctl start glusterd.service
systemctl enable glusterd.service
mkdir -p /gfs/test1
mkdir -p /gfs/test2
c: Add peers to the storage pool
Master node:
gluster pool list
gluster peer probe k8s-node1
gluster peer probe k8s-node2
gluster pool list
d: glusterfs volume management
Create a distributed-replicated volume:
gluster volume create qiangge replica 2 k8s-master:/gfs/test1 k8s-master:/gfs/test2 k8s-node1:/gfs/test1 k8s-node1:/gfs/test2 force
Start the volume:
gluster volume start qiangge
Inspect the volume:
gluster volume info qiangge
Mount the volume:
mount -t glusterfs 10.0.0.11:/qiangge /mnt
e: How the distributed-replicated volume works
With replica 2, consecutive bricks in the create command form replica pairs; files are mirrored within each pair and distributed (hashed) across the pairs.
f: Expanding the distributed-replicated volume
Check capacity before expanding:
df -h
Expand:
gluster volume add-brick qiangge k8s-node2:/gfs/test1 k8s-node2:/gfs/test2 force
Check capacity after expanding:
df -h
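add-brick does not move existing data onto the new bricks by itself; a rebalance is the usual follow-up (standard gluster practice, not in the original notes):
gluster volume rebalance qiangge start
gluster volume rebalance qiangge status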
6.7 Connecting k8s to glusterfs storage
a: Create the endpoints
vi glusterfs-ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: default
subsets:
  - addresses:
      - ip: 10.0.0.11
      - ip: 10.0.0.12
      - ip: 10.0.0.13
    ports:
      - port: 49152
        protocol: TCP
b: Create the service
vi glusterfs-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs
  namespace: default
spec:
  ports:
    - port: 49152
      protocol: TCP
      targetPort: 49152
  sessionAffinity: None
  type: ClusterIP
c: Create a gluster-type pv
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster
  labels:
    type: glusterfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs"
    path: "qiangge"
    readOnly: false
d: Create the pvc
(omitted in the original; a sketch follows)
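A minimal sketch of the claim, matching the pv above (the name gluster is what the pod below references via claimName; the 50Gi request mirrors the pv capacity):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi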
e: Use gluster in a pod
vi nginx_pod.yaml
……
volumeMounts:
  - name: nfs-vol2
    mountPath: /usr/share/nginx/html
volumes:
  - name: nfs-vol2
    persistentVolumeClaim:
      claimName: gluster
7: CI/CD through jenkins integration
IP address | Service | Memory |
---|---|---|
10.0.0.11 | kube-apiserver 8080 | 1G |
10.0.0.12 | jenkins (tomcat + jdk) 8080 | 1G |
10.0.0.13 | gitlab 8080, 80 | 2G |
7.1: Install gitlab and push the code
# a: install
wget https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7/gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm
yum localinstall gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm -y
# b: configure
vim /etc/gitlab/gitlab.rb
external_url 'http://10.0.0.13'
prometheus_monitoring['enable'] = false
# c: apply the config and start the service
gitlab-ctl reconfigure
# browse to http://10.0.0.13, change the root user's password, and create a project
# push the code to the git repository
cd /srv/
rz -E
unzip xiaoniaofeifei.zip
rm -fr xiaoniaofeifei.zip
git config --global user.name "Administrator"
git config --global user.email "admin@example.com"
git init
git remote add origin http://10.0.0.13/root/xiaoniao.git
git add .
git commit -m "Initial commit"
git push -u origin master
7.2 Install jenkins and build docker images automatically
7.2.1: Install jenkins
cd /opt/
rz -E
rpm -ivh jdk-8u102-linux-x64.rpm
mkdir /app
tar xf apache-tomcat-8.0.27.tar.gz -C /app
rm -fr /app/apache-tomcat-8.0.27/webapps/*
mv jenkins.war /app/apache-tomcat-8.0.27/webapps/ROOT.war
tar xf jenkin-data.tar.gz -C /root
/app/apache-tomcat-8.0.27/bin/startup.sh
netstat -lntup
7.2.2: Access jenkins
Visit http://10.0.0.12:8080/; the default credentials are admin:123456
7.2.3: Configure the jenkins credential for pulling code from gitlab
a: Generate a key pair on the jenkins host
ssh-keygen -t rsa
b: Copy the public key and paste it into gitlab
c: Create a global credential (the private key) in jenkins
7.2.4: Test pulling the code
7.2.5: Write the dockerfile and test it
#vim dockerfile
FROM 10.0.0.11:5000/nginx:1.13
ADD . /usr/share/nginx/html
Files that docker build should not ADD go into .dockerignore:
vim .dockerignore
dockerfile
docker build -t xiaoniao:v1 .
docker run -d -p 88:80 xiaoniao:v1
Open a browser and test access to the xiaoniaofeifei project.
7.2.6: Push the dockerfile and .dockerignore to the private repository
git add dockerfile .dockerignore
git commit -m "first commit"
git push -u origin master
7.2.7: Click "Build Now" in jenkins to build the docker image and push it to the private registry automatically
Edit the jenkins job configuration (Execute shell):
docker build -t 10.0.0.11:5000/test:v$BUILD_ID .
docker push 10.0.0.11:5000/test:v$BUILD_ID
7.3 Deploy the application to k8s automatically from jenkins
kubectl -s 10.0.0.11:8080 get nodes
if [ -f /tmp/xiaoniao.lock ];then
    docker build -t 10.0.0.11:5000/xiaoniao:v$BUILD_ID .
    docker push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
    kubectl -s 10.0.0.11:8080 set image -n xiaoniao deploy xiaoniao xiaoniao=10.0.0.11:5000/xiaoniao:v$BUILD_ID
    echo "update succeeded"
else
    docker build -t 10.0.0.11:5000/xiaoniao:v$BUILD_ID .
    docker push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
    kubectl -s 10.0.0.11:8080 create namespace xiaoniao
    kubectl -s 10.0.0.11:8080 run xiaoniao -n xiaoniao --image=10.0.0.11:5000/xiaoniao:v$BUILD_ID --replicas=3 --record
    kubectl -s 10.0.0.11:8080 expose -n xiaoniao deployment xiaoniao --port=80 --type=NodePort
    port=`kubectl -s 10.0.0.11:8080 get svc -n xiaoniao|grep -oP '(?<=80:)\d+'`
    echo "your project is available at http://10.0.0.13:$port"
    touch /tmp/xiaoniao.lock
fi
One-click rollback from jenkins:
kubectl -s 10.0.0.11:8080 rollout undo -n xiaoniao deployment xiaoniao