Table of Contents
- [1: Installing a k8s cluster]
- [2: What is k8s and what can it do?]
- [3: Commonly used k8s resources]
- [4: k8s add-on components]
- [5: k8s elastic scaling]
- [6: Persistent storage]
- [7: CI/CD integration with jenkins]
Link: https://pan.baidu.com/s/1gYt-Au_a1t-FBZ9GUUTeLg
Extraction code: udzj
1: Installing a k8s cluster
1.1 k8s cluster architecture
master node: etcd, api-server, scheduler, controller-manager
node: kubelet, kube-proxy
etcd: the cluster database (stores all cluster state)
api-server: the core service that every other component talks to
controller-manager: runs the controllers (for example the rc controller)
scheduler: picks a suitable node for each newly created pod
kubelet: calls docker to create containers on its node
kube-proxy: exposes services to external users and provides an internal load balancer
1.6: Configure the flannel network on all nodes
Provides container-to-container communication across nodes
a: install etcd
b: install, configure and start flannel
c: restart docker for the change to take effect
1.7: Configure the master as a private docker image registry
a: faster pulls
b: keeps images private
2: What is k8s and what can it do?
2.1 Core features of k8s
Self-healing:
Elastic scaling (autoscaling):
Service discovery and load balancing
Rolling upgrades and one-command rollback
Secret and configuration management
2.2 History of k8s
July 2015: version 1.0 released
2.3 Ways to install k8s
yum
build from source
binary install (used in production)
kubeadm (used in production)
minikube
2.4 Where k8s is used
Microservices:
higher concurrency, higher availability, faster code releases
Drawback: management complexity goes up
docker --> k8s --> autoscaling
3: Commonly used k8s resources
3.1 Creating a pod resource
The pod is the smallest resource unit in k8s
A pod contains at least two containers: the pod infrastructure container plus the business container
3.2 The ReplicationController resource
Keeps the specified number of pods running
Pods and an rc are associated through labels
rc supports rolling upgrades and one-command rollback
1: Installing a k8s cluster
- docker --> management platform --> managed by k8s, sitting at the PaaS layer
Code updates break easily when the production and test environments differ; running the same docker image in both solves that problem.
1.1 k8s architecture
Besides the core components, there are some recommended add-ons:
Component | Description |
---|---|
kube-dns | Provides DNS for the whole cluster |
Ingress Controller | Provides an external entry point for services |
Heapster | Provides resource monitoring |
Dashboard | Provides a GUI |
Federation | Provides clusters across availability zones |
Fluentd-elasticsearch | Provides cluster log collection, storage and querying |
1.2: Set the IP addresses, hostnames and hosts entries
10.0.0.11 master
10.0.0.12 node-1
10.0.0.13 node-2
All nodes need these hosts entries
1.3: Install etcd on the master node
### Configure the yum repo
[root@master ~]# rm -rf /etc/yum.repos.d/local.repo
[root@master ~]# echo "192.168.37.200 mirrors.aliyun.com" >>/etc/hosts
[root@master ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@master ~]# yum install etcd -y
[root@master ~]# vim /etc/hosts
10.0.0.11 master
10.0.0.12 node-1
10.0.0.13 node-2
[root@master ~]# systemctl restart network
[root@master ~]# vim /etc/etcd/etcd.conf
.......
line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
.......
[root@master ~]# systemctl start etcd.service
[root@master ~]# systemctl enable etcd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@master ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 7390/etcd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 6726/sshd
tcp6 0 0 :::2379 :::* LISTEN 7390/etcd
tcp6 0 0 :::22 :::* LISTEN 6726/sshd
udp 0 0 127.0.0.1:323 0.0.0.0:* 5065/chronyd
udp6 0 0 ::1:323 :::* 5065/chronyd
### Test that etcd is working
[root@master ~]# etcdctl set testdir/testkey0 0
0
[root@master ~]# etcdctl get testdir/testkey0
0
### Check cluster health
[root@master ~]# etcdctl -C http://10.0.0.11:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://10.0.0.11:2379
cluster is healthy
etcd natively supports clustering
Assignment 1: install and deploy a three-node etcd cluster
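For the assignment, a minimal sketch of the cluster configuration, assuming the three hosts above and the same etcd RPM; the member names and the peer port 2380 are illustrative defaults, and each node gets the same file with its own name and IP:
# /etc/etcd/etcd.conf on the master (10.0.0.11)
ETCD_NAME="etcd-master"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
ETCD_INITIAL_CLUSTER="etcd-master=http://10.0.0.11:2380,etcd-node1=http://10.0.0.12:2380,etcd-node2=http://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
After etcd has been started on all three nodes, etcdctl cluster-health should report three healthy members.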
1.4: Install kubernetes on the master node
[root@master ~]# yum install kubernetes-master.x86_64 -y
[root@master ~]# vim /etc/kubernetes/apiserver
......
line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
line 11: KUBE_API_PORT="--port=8080"
line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"
line 23 (the following is all one line):
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
......
[root@master ~]# vim /etc/kubernetes/config
......
line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
......
[root@master ~]# systemctl enable kube-apiserver.service
[root@master ~]# systemctl restart kube-apiserver.service
[root@master ~]# systemctl enable kube-controller-manager.service
[root@master ~]# systemctl restart kube-controller-manager.service
[root@master ~]# systemctl enable kube-scheduler.service
[root@master ~]# systemctl restart kube-scheduler.service
Check that the services came up properly
[root@k8s-master ~]# kubectl get componentstatus
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
1.5: Install kubernetes on the node machines
[root@node-1 ~]# rm -rf /etc/yum.repos.d/local.repo
[root@node-1 ~]# echo "192.168.37.200 mirrors.aliyun.com" >>/etc/hosts
[root@node-1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@node-1 ~]# yum install kubernetes-node.x86_64 -y
[root@node-1 ~]# vim /etc/kubernetes/config
......
line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
......
[root@node-1 ~]# vim /etc/kubernetes/kubelet
......
line 5:  KUBELET_ADDRESS="--address=0.0.0.0"
line 8:  KUBELET_PORT="--port=10250"
line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.12"
line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
......
[root@node-1 ~]# systemctl enable kubelet.service
[root@node-1 ~]# systemctl start kubelet.service
[root@node-1 ~]# systemctl enable kube-proxy.service
[root@node-1 ~]# systemctl start kube-proxy.service
Check on the master node
[root@k8s-master ~]# kubectl get nodes
NAME STATUS AGE
10.0.0.12 Ready 6m
10.0.0.13 Ready 3s
1.6: Configure the flannel network on all nodes
### Install on all nodes
[root@master ~]# yum install flannel -y
[root@master ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld
[root@node-1 ~]# yum install flannel -y
[root@node-1 ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld
[root@node-2 ~]# yum install flannel -y
[root@node-2 ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld
## On the master node:
[root@master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
[root@master ~]# yum install docker -y
[root@master ~]# systemctl enable flanneld.service
[root@master ~]# systemctl restart flanneld.service
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker
[root@master ~]# systemctl restart kube-apiserver.service
[root@master ~]# systemctl restart kube-controller-manager.service
[root@master ~]# systemctl restart kube-scheduler.service
### Upload the image archive to all nodes
[root@master ~]# rz docker_busybox.tar.gz
[root@master ~]# docker load -i docker_busybox.tar.gz
adab5d09ba79: Loading layer [==================================================>] 1.416 MB/1.416 MB
Loaded image: docker.io/busybox:latest
### Run the docker container on every machine
[root@master ~]# docker run -it docker.io/busybox:latest
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:10:43:02
inet addr:172.16.67.2 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:acff:fe10:4302/64 Scope:Link
/ # ping 172.16.67.2
64 bytes from 172.16.67.2: seq=0 ttl=64 time=0.127 ms
64 bytes from 172.16.67.2: seq=1 ttl=64 time=0.062 ms
## On the node machines: node-1 and node-2
[root@node-1 ~]# systemctl enable flanneld.service
[root@node-1 ~]# systemctl restart flanneld.service
[root@node-1 ~]# service docker restart
[root@node-1 ~]# systemctl restart kubelet.service
[root@node-1 ~]# systemctl restart kube-proxy.service
### On all nodes (node-1 and node-2): re-enable forwarding after docker starts
[root@node-1 ~]# vim /usr/lib/systemd/system/docker.service
# Add one line under the [Service] section
......
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
......
systemctl daemon-reload
systemctl restart docker
1.7: Configure the master as the image registry
# On all nodes
[root@master ~]# vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"insecure-registries": ["10.0.0.11:5000"]
}
[root@master ~]# systemctl restart docker
### Upload the package on the master node
[root@master ~]# rz
[root@master ~]# ls
anaconda-ks.cfg docker_busybox.tar.gz registry.tar.gz
[root@master ~]# docker load -i registry.tar.gz
ef763da74d91: Loading layer [==================================================>] 5.058 MB/5.058 MB
7683d4fcdf4e: Loading layer [==================================================>] 7.894 MB/7.894 MB
656c7684d0bd: Loading layer [==================================================>] 22.79 MB/22.79 MB
a2717186d7dd: Loading layer [==================================================>] 3.584 kB/3.584 kB
3c133a51bc00: Loading layer [==================================================>] 2.048 kB/2.048 kB
Loaded image: registry:latest
# Run on the master node
[root@master ~]# docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
68ea32e2ecec3a0fb8a9223e1cc5e22b10b1c64080be020852c557dcc317526b
### Test
[root@node-2 ~]# docker tag docker.io/busybox:latest 10.0.0.11:5000/busybox:latest
[root@node-2 ~]# docker push 10.0.0.11:5000/busybox:latest
The push refers to a repository [10.0.0.11:5000/busybox]
adab5d09ba79: Pushed
latest: digest: sha256:4415a904b1aca178c2450fd54928ab362825e863c0ad5452fd020e92f7a6a47e size: 527
### Seeing the repository on the master node as below means the private registry works
[root@master ~]# ls /opt/myregistry/docker/registry/v2/repositories/
busybox
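As an extra check (a hedged example: the path below is the standard Docker Registry v2 catalog API, and 10.0.0.11:5000 is the registry started above), list the repositories over HTTP; busybox should appear after the push:
curl http://10.0.0.11:5000/v2/_catalog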
systemctl daemon-reload
systemctl restart docker
2: What is k8s and what can it do?
k8s is a tool for managing a docker cluster
2.1 Core features of k8s
Self-healing:
Restarts failed containers, replaces and reschedules containers when their node becomes unavailable, kills containers that fail a user-defined health check, and does not advertise a container to clients until it is ready to serve.
Elastic scaling (autoscaling):
By monitoring container CPU load: if the average goes above 80%, more containers are added; if it drops below 10%, containers are removed.
Service discovery and load balancing:
You do not have to modify your application to use an unfamiliar service-discovery mechanism. Kubernetes gives each container its own IP address and gives a set of containers a single DNS name, and can load-balance across them.
Rolling upgrades and one-command rollback:
Kubernetes rolls out changes to an application or its configuration gradually while monitoring application health, making sure it never kills all instances at once. If something goes wrong, Kubernetes rolls the change back for you, taking advantage of a growing ecosystem of deployment solutions.
Secret and configuration management
2.2 History of k8s
2014: the docker container-orchestration project is started
July 2015: kubernetes 1.0 is released and joins the CNCF
2016: kubernetes (around version 1.2) beats its two rivals, docker swarm and mesos
2017
2018: k8s graduates from the CNCF
2019: 1.13, 1.14, 1.15
CNCF: Cloud Native Computing Foundation
kubernetes (k8s): Greek for helmsman or pilot; the leading project in container orchestration
Google has roughly 16 years of experience running containers on its Borg management platform; kubernetes is a rewrite of Borg in golang.
2.3 Installing k8s
yum install: version 1.5; the easiest to get working and the best fit for learning
build from source: the hardest; can install the latest version
binary install: many tedious steps; can install the latest version; automate with shell, ansible or saltstack
kubeadm: the easiest install, needs internet access; can install the latest version
minikube: for developers who want to try k8s, needs internet access
2.4 Where k8s is used
k8s is best suited to running microservice projects!
Microservices:
Web sites:
MVC architecture: one main domain and a shared database; once the user base grows, the database is the first thing to buckle
1 stack: 2 load balancers, 3-4 web servers, 1 cache server
dev environment, test environment, staging environment, production environment
Reserve resources: about 20 servers
Microservices:
SOA architecture
Microservice architecture
Many small services, each with its own database, its own domain name and its own web service
Offers higher concurrency, higher availability and shorter release cycles.
Hundreds of stacks and thousands of servers: ansible, automated code deployment, monitoring (one set of shared infrastructure)
docker deployment --> managed by k8s --> autoscaling; k8s fits microservices well
docker solved the deployment problem for microservices; k8s solved the clustering problem for docker
Reserve resources: 20 servers
From small scale to large scale
3: Commonly used k8s resources
3.1 Creating a pod resource (the smallest unit)
The main parts of a k8s yaml file:
apiVersion: v1    # API version
kind: Pod         # resource type
metadata:         # attributes
spec:             # detailed specification
k8s_pod.yaml
[root@master ~]# mkdir k8s_yaml
[root@master ~]# cd k8s_yaml/
[root@master k8s_yaml]# mkdir pod
[root@master k8s_yaml]# cd pod/
[root@master pod]# vi k8s_pod.yaml
[root@master pod]# cat k8s_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: 10.0.0.11:5000/nginx:1.13
    ports:
    - containerPort: 80
# Run on node-2 (10.0.0.13)
[root@node-2 ~]# rz
[root@node-2 ~]# ls
anaconda-ks.cfg docker_busybox.tar.gz docker_nginx1.13.tar.gz
[root@node-2 ~]# docker load -i docker_nginx1.13.tar.gz
d626a8ad97a1: Loading layer 58.46 MB/58.46 MB
82b81d779f83: Loading layer 54.21 MB/54.21 MB
7ab428981537: Loading layer 3.584 kB/3.584 kB
Loaded image: docker.io/nginx:1.13
[root@node-2 ~]# docker tag docker.io/nginx:1.13 10.0.0.11:5000/nginx:1.13
[root@node-2 ~]# docker push 10.0.0.11:5000/nginx:1.13
The push refers to a repository [10.0.0.11:5000/nginx]
7ab428981537: Pushed
82b81d779f83: Pushed
d626a8ad97a1: Pushed
1.13: digest: sha256:e4f0474a75c510f40b37b6b7dc2516241ffa8bde5a442bde3d372c9519c84d90 size: 948
### Verify on the master
[root@master pod]# kubectl create -f k8s_pod.yaml
pod "nginx" created
[root@master pod]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 0/1 ContainerCreating 0 48s
[root@master pod]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 0/1 ContainerCreating 0 1m <none> 10.0.0.13
### Upload the pod-infrastructure image on node-2 (10.0.0.13)
[root@node-2 ~]# rz
[root@node-2 ~]# ls
docker_busybox.tar.gz docker_nginx1.13.tar.gz pod-infrastructure-latest.tar.gz
[root@node-2 ~]# docker load -i pod-infrastructure-latest.tar.gz
df9d2808b9a9: Loading layer 202.3 MB/202.3 MB
0a081b45cb84: Loading layer 10.24 kB/10.24 kB
ba3d4cbbb261: Loading layer 12.51 MB/12.51 MB
Loaded image: docker.io/tianyebj/pod-infrastructure:latest
[root@node-2 ~]# docker tag docker.io/tianyebj/pod-infrastructure:latest 10.0.0.11:5000/pod-infrastructure:latest
[root@node-2 ~]# docker push 10.0.0.11:5000/pod-infrastructure:latest
The push refers to a repository [10.0.0.11:5000/pod-infrastructure]
ba3d4cbbb261: Preparing
0a081b45cb84: Preparing
df9d2808b9a9: Pushed
latest: digest: sha256:a378b2d7a92231ffb07fdd9dbd2a52c3c439f19c8d675a0d8d9ab74950b15a1b size: 948
### Add this on both node-1 and node-2
[root@node-1 ~]# vim /etc/kubernetes/kubelet ### point the pod-infra image at the private registry so kubelet does not fail trying to pull it from the Red Hat registry
......
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.11:5000/pod-infrastructure:latest"
......
[root@node-2 ~]# vim /etc/kubernetes/kubelet
[root@node-1 ~]# systemctl restart kubelet.service ## restart kubelet on both node-1 and node-2
[root@node-2 ~]# systemctl restart kubelet.service
### Verify
[root@master pod]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 20m 172.16.83.2 10.0.0.13
[root@master pod]# curl -I 172.16.83.2
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 09 Dec 2019 14:26:33 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
### Verification succeeded
----------------------------------
[root@master pod]# cp k8s_pod.yaml k8s_pod2.yaml
[root@master pod]# vim k8s_pod2.yaml
[root@master pod]# kubectl create -f k8s_pod2.yaml
pod "nginx2" created
[root@master pod]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 27m
nginx2 0/1 ContainerCreating 0 17s
[root@master pod]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 29m 172.16.83.2 10.0.0.13
nginx2 1/1 Running 0 2m 172.16.77.2 10.0.0.12
[root@master pod]# curl -I 172.16.77.2
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 09 Dec 2019 14:35:18 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
[root@master pod]#
Why does creating one pod resource make k8s start two containers?
A pod consists of at least two containers:
the pod infrastructure container, which provides the pod-level plumbing,
plus the business container, nginx in this case.
Pod config file 2:
[root@master pod]# vim k8s_pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: 10.0.0.11:5000/nginx:1.13
    ports:
    - containerPort: 80
  - name: busybox
    image: 10.0.0.11:5000/busybox:latest
    command: ["sleep","10000"]
[root@master pod]# kubectl create -f k8s_pod3.yaml
pod "test" created
[root@master pod]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 33m 172.16.83.2 10.0.0.13
nginx2 1/1 Running 0 6m 172.16.77.2 10.0.0.12
test 2/2 Running 0 16s 172.16.83.3 10.0.0.13
[root@node-2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2434a399924a 10.0.0.11:5000/busybox:latest "sleep 10000" About a minute ago Up 59 seconds k8s_busybox.7e7ae56a_test_default_997268fb-1a91-11ea-b30c-000c290c9463_c101f9b4
a71099237d7b 10.0.0.11:5000/nginx:1.13 "nginx -g 'daemon ..." About a minute ago Up About a minute k8s_nginx.91390390_test_default_997268fb-1a91-11ea-b30c-000c290c9463_f35f0c75
deca3d444e40 10.0.0.11:5000/pod-infrastructure:latest "/pod" About a minute ago Up About a minute k8s_POD.177f01b0_test_default_997268fb-1a91-11ea-b30c-000c290c9463_e201879a
eaf35aa1a6ca 10.0.0.11:5000/nginx:1.13 "nginx -g 'daemon ..." 15 minutes ago Up 15 minutes k8s_nginx.91390390_nginx_default_ffb61ebf-1a8c-11ea-b30c-000c290c9463_96956214
2f0a3f968c06 10.0.0.11:5000/pod-infrastructure:latest "/pod" 15 minutes ago Up 15 minutes k8s_POD.177f01b0_nginx_default_ffb61ebf-1a8c-11ea-b30c-000c290c9463_ea168898
[root@node-2 ~]#
The pod is the smallest resource unit in k8s
Eviction
kubelet watches the docker containers on its own node and starts replacement containers
At the cluster level, when the pod count drops, the controller-manager starts new pods
rc and its pods are associated through the label selector
3.2 The ReplicationController (rc) resource
rc: keeps the specified number of pods alive at all times; an rc finds its pods through a label selector
Common operations on k8s resources (CRUD):
kubectl create -f xxx.yaml                                 # create
kubectl get pod|rc                                         # get/list resources
kubectl describe pod nginx                                 # show a detailed description of a resource
kubectl delete pod nginx  (or kubectl delete -f xxx.yaml)  # delete a resource
kubectl edit pod nginx                                     # edit a resource's configuration
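A few related inspection commands that pair well with the CRUD set above (standard kubectl subcommands, listed here as a small aside):
kubectl get pod -o wide        # also show each pod's IP and the node it runs on
kubectl get pod nginx -o yaml  # dump the full stored configuration of a resource
kubectl logs nginx             # view a container's stdout/stderr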
Create an rc (rc is short for ReplicationController)
[root@master k8s_yaml]# mkdir rc
[root@master k8s_yaml]# cd rc/
[root@master rc]# vim k8s_rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
[root@master rc]# kubectl create -f k8s_rc.yml
replicationcontroller "nginx" created
[root@master rc]# kubectl get rc
NAME DESIRED CURRENT READY AGE
nginx 5 5 5 19s
[root@master rc]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 39m
nginx-3243v 1/1 Running 0 31s
nginx-9fzgc 1/1 Running 0 31s
nginx-ppgdv 1/1 Running 0 31s
nginx-sxtp0 1/1 Running 0 31s
nginx-x5mkk 1/1 Running 0 31s
nginx2 1/1 Running 0 12m
test 2/2 Running 0 6m
[root@master rc]#
Rolling upgrade with an rc
Create a new nginx-rc2.yml
Upgrade:
kubectl rolling-update nginx -f nginx-rc2.yml --update-period=10s
Roll back:
kubectl rolling-update nginx2 -f nginx2-rc.yml --update-period=1s
[root@master rc]# cp k8s_rc.yml k8s_rc2.yml
[root@master rc]# vim k8s_rc2
[root@master rc]# vim k8s_rc2.yml
[root@master rc]# cat k8s_rc2.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx2
spec:
  replicas: 5
  selector:
    app: myweb2
  template:
    metadata:
      labels:
        app: myweb2
    spec:
      containers:
      - name: myweb
        image: 10.0.0.11:5000/nginx:1.15
        ports:
        - containerPort: 80
### Upload the nginx 1.15 image on node-2
[root@node-2 ~]# rz
[root@node-2 ~]# docker load -i docker_nginx1.15.tar.gz
Loaded image: docker.io/nginx:latest
[root@node-2 ~]# docker tag docker.io/nginx:latest 10.0.0.11:5000/nginx:1.15
[root@node-2 ~]# docker push 10.0.0.11:5000/nginx:1.15
The push refers to a repository [10.0.0.11:5000/nginx]
92b86b4e7957: Pushed
94ad191a291b: Pushed
8b15606a9e3e: Pushed
1.15: digest: sha256:204a9a8e65061b10b92ad361dd6f406248404fe60efd5d6a8f2595f18bb37aad size: 948
[root@master rc]# kubectl rolling-update nginx -f k8s_rc2.yml --update-period=5s
[root@master rc]# kubectl rolling-update nginx2 -f k8s_rc.yml --update-period=1s
3.3 The service resource
A service exposes a pod's port to the outside
Create a service
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort      # or ClusterIP
  ports:
  - port: 80          # clusterIP port
    nodePort: 30000   # node port
    targetPort: 80    # pod port
  selector:
    app: myweb2
##### Exposing a port automatically
Generate a svc from the command line: kubectl expose deployment nginx --target-port=80 --type=NodePort --port=80
Change the number of replicas: kubectl scale rc nginx1 --replicas=2
Enter a container: kubectl exec -it nginx1-1frnf /bin/bash
Change the allowed nodePort range (restart kube-apiserver afterwards for it to take effect)
vim /etc/kubernetes/apiserver
KUBE_API_ARGS="--service-node-port-range=3000-50000"
By default a service load-balances with iptables; from k8s 1.8 onward the newer releases recommend LVS/IPVS (layer-4 load balancing).
Automatic service discovery
Three kinds of IPs: the node IP and the service VIP range are configured on the api-server; pod IPs come from the flannel range stored in etcd under /atomic.io
3.4 The deployment resource
Because an rc rolling upgrade interrupts access to the service (the upgraded pods carry new labels that the old service no longer matches), k8s introduced the deployment resource
Create a deployment
[root@master k8s_yaml]# mkdir deploy
[root@master k8s_yaml]# cd deploy/
[root@master deploy]# vim k8s_deploy.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 100m
Deployment upgrade and rollback
Create a deployment from the command line
kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record ## the image is pulled from whichever registry the image name points at
Upgrade the version from the command line
kubectl set image deploy nginx nginx=10.0.0.11:5000/nginx:1.15
View all revisions of a deployment
kubectl rollout history deployment nginx
Roll a deployment back to the previous revision
kubectl rollout undo deployment nginx
Roll a deployment back to a specific revision
kubectl rollout undo deployment nginx --to-revision=2
ReplicaSet (RS) additionally supports set-based (wildcard-like) label selectors
3.5 Exercise: tomcat + mysql
Inside k8s, containers reach each other through the service VIP address!
Start all the services
systemctl restart flanneld.service
systemctl restart docker
systemctl restart kubelet.service
systemctl restart kube-proxy.service
The rc and svc for mysql
[root@k8s-master tomcat_daemon]# cat mysql-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: 10.0.0.11:5000/mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: '123456'
The svc:
[root@k8s-master tomcat_daemon]# cat mysql-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql
The rc for tomcat
[root@k8s-master tomcat_daemon]# cat tomcat-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 10.0.0.11:5000/tomcat-app:v2
        ports:
        - containerPort: 8080
        env:
        - name: MYSQL_SERVICE_HOST
          value: '10.254.36.202'
        - name: MYSQL_SERVICE_PORT
          value: '3306'
The svc for tomcat
[root@k8s-master tomcat_daemon]# cat tomcat-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30008
  selector:
    app: myweb
3.6 wordpress + mysql
The wordpress manifests
[root@k8s-master worepress_daemon]# cat wordpress-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mywordpress
spec:
  replicas: 1
  selector:
    app: mywordpress
  template:
    metadata:
      labels:
        app: mywordpress
    spec:
      containers:
      - name: mywordpress
        image: 10.0.0.11:5000/wordpress:v1
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_HOST
          value: '10.254.112.209'
        - name: WORDPRESS_DB_USER
          value: 'wordpress'
        - name: WORDPRESS_DB_PASSWORD
          value: 'wordpress'
[root@k8s-master worepress_daemon]# cat wordpress-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mywordpress
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30010
  selector:
    app: mywordpress
The mysql manifests
[root@k8s-master worepress_daemon]# cat mysql-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: 10.0.0.11:5000/mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 'somewordpress'
        - name: MYSQL_DATABASE
          value: 'wordpress'
        - name: MYSQL_USER
          value: 'wordpress'
        - name: MYSQL_PASSWORD
          value: 'wordpress'
[root@k8s-master worepress_daemon]# cat mysql-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql
3.7 Deployment version of wordpress + mysql
[root@k8s-master wordpress_deploy]# cat wp-rc.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: wp
    spec:
      containers:
      - name: wp
        image: 10.0.0.11:5000/wordpress:v1
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_HOST
          value: '10.254.235.122'
        - name: WORDPRESS_DB_USER
          value: 'wordpress'
        - name: WORDPRESS_DB_PASSWORD
          value: 'wordpress'
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 100m
[root@k8s-master wordpress_deploy]# cat wp-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: wp
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30011
  selector:
    app: wp
[root@k8s-master wordpress_deploy]# cat mysql-wp-rc.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql-wp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-wp
    spec:
      containers:
      - name: mysql-wp
        image: 10.0.0.11:5000/mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 'somewordpress'
        - name: MYSQL_DATABASE
          value: 'wordpress'
        - name: MYSQL_USER
          value: 'wordpress'
        - name: MYSQL_PASSWORD
          value: 'wordpress'
[root@k8s-master wordpress_deploy]# cat mysql-wp-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql-wp
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql-wp
4: k8s add-on components
4.1 The DNS service
Install the DNS service
1: Download the dns_docker image bundle
wget http://192.168.37.200/docker_image/docker_k8s_dns.tar.gz
2: Load the dns_docker image bundle (on node-2)
3: Edit skydns-rc.yaml
spec:
  nodeSelector:
    kubernetes.io/hostname: 10.0.0.13
  containers:
4: Create the DNS service
kubectl create -f skydns-rc.yaml
5: Check
kubectl get all --namespace=kube-system
6: Edit the kubelet config on every node
vim /etc/kubernetes/kubelet
KUBELET_ARGS="--cluster_dns=10.254.230.254 --cluster_domain=cluster.local"
systemctl restart kubelet
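A quick way to confirm DNS works (a hedged sketch: it assumes the test pod with its busybox container from section 3.1 is still running and that the cluster domain is cluster.local as configured above): resolve the built-in kubernetes service from inside a pod.
kubectl exec test -c busybox -- nslookup kubernetes.default.svc.cluster.local
If the kube-dns pod is healthy, the name resolves to the clusterIP of the kubernetes service.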
4.2 The namespace
Namespaces provide resource isolation
[root@master ~]# kubectl create namespace oldqiang ### create a namespace
In production there is usually one namespace per application. Taking tomcat as an example, add a namespace:
[root@master ~]# cd k8s_yaml/tomcat_demo/ ### go to the tomcat manifests
[root@master tomcat_demo]# ls ### list the yml files
mysql-rc.yml mysql-svc.yml tomcat-rc.yml tomcat-svc.yml
#### Insert a namespace line after the third line of every file
[root@master tomcat_demo]# sed -i '3a \ \ namespace: tomcat' *.yml
[root@master tomcat_demo]# kubectl create namespace tomcat ### create the tomcat namespace
[root@master tomcat_demo]# kubectl create -f . # create all the resources
[root@master tomcat_demo]# kubectl get all -n tomcat
#### Creating zabbix or any other application in its own namespace works the same way
4.3 Health checks
4.3.1 Types of probes
livenessProbe: liveness check; periodically checks whether the service is alive and restarts the container when the check fails
readinessProbe: readiness check; periodically checks whether the service is usable and removes the pod from the service's endpoints when it is not
4.3.2 Probe methods
- exec: run a command and check its exit code; 0 means healthy, non-zero means unhealthy
- httpGet: check the status code of an HTTP request; 2xx and 3xx are healthy, 4xx and 5xx are errors
- tcpSocket: test whether a port accepts connections
4.3.3 Using a liveness probe with exec
[root@master health]# vi nginx_pod_exec.yml
apiVersion: v1
kind: Pod
metadata:
  name: exec
spec:
  containers:          ### containers
  - name: nginx
    image: 10.0.0.11:5000/nginx:1.13
    ports:
    - containerPort: 80
    args:              ### command the container runs
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
      timeoutSeconds: 5
      successThreshold: 1
      failureThreshold: 1
### (in vim, press i to enter insert mode)
[root@master health]# kubectl describe pod exec
4.3.4 Using a liveness probe with httpGet
[root@master health]# vi nginx_pod_httpGet.yml
apiVersion: v1
kind: Pod
metadata:
  name: httpget
spec:
  containers:
  - name: nginx
    image: 10.0.0.11:5000/nginx:1.13
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /index.html
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3
4.3.5 Using a liveness probe with tcpSocket
[root@master health]# vim nginx_pod_tcpSocket.yaml
apiVersion: v1
kind: Pod
metadata:
  name: tcpsocket
spec:
  containers:
  - name: nginx
    image: 10.0.0.11:5000/nginx:1.13
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3
4.3.6 Using a readiness probe with httpGet
[root@master health]# vim nginx-rc-httpGet.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: readiness
spec:
  replicas: 2
  selector:
    app: readiness
  template:
    metadata:
      labels:
        app: readiness
    spec:
      containers:
      - name: readiness
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /qiangge.html
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 3
[root@master health]# kubectl create -f nginx-rc-httpGet.yaml
[root@master health]# kubectl expose rc readiness --port=80 --target-port=80 --type=NodePort
[root@master health]# kubectl describe svc readiness ### no endpoints yet: the readiness check fails because /qiangge.html does not exist
[root@master health]# kubectl get all
......
po/readiness-1mj49 0/1 Running 0 18m
po/readiness-s0m9s 0/1 Running 0 18m
......
[root@master health]# kubectl exec -it readiness-1mj49 /bin/bash
root@readiness-1mj49:/# echo 'ewrf' >/usr/share/nginx/html/qiangge.html
[root@master health]# kubectl describe svc readiness
......
Endpoints: 172.16.83.9:80
......
4.4 The dashboard service
1: Upload and load the image, then tag it
2: Create the dashboard deployment and service
3: Visit http://10.0.0.11:8080/ui/
### On node-2
[root@node-2 opt]# ls
kubernetes-dashboard-amd64_v1.4.1.tar.gz
[root@node-2 opt]# docker load -i kubernetes-dashboard-amd64_v1.4.1.tar.gz
### On the master node
[root@master health]# mkdir dashboard
[root@master health]# cd dashboard/
[root@master dashboard]# cat dashboard-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
[root@master dashboard]# cat dashboard.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Keep the name in sync with image version and
  # gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      nodeName: 10.0.0.13   ### nodeName pins this pod to the 10.0.0.13 node
      containers:
      - name: kubernetes-dashboard
        image: index.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.4.1
        #### the image name is written directly in the yml file
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://10.0.0.11:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
[root@master dashboard]# kubectl create -f .
## Open in a browser: http://10.0.0.11:8080/ui/
Dockerfile notes:
CMD
ENTRYPOINT
Other resources:
daemon sets: "cattle" applications, stateless apps with no data of their own that can be killed at will
pet sets: "pet" applications that keep their own data (the predecessor of StatefulSet)
jobs: one-off containers (see the sketch below)
cron jobs: scheduled tasks
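A minimal sketch of a one-off Job, assuming the busybox image from the private registry above; the job name and command are illustrative:
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-once
spec:
  template:
    metadata:
      name: hello-once
    spec:
      containers:
      - name: hello
        image: 10.0.0.11:5000/busybox:latest
        command: ["echo", "hello from a one-off job"]
      restartPolicy: Never   # required for a Job: the pod must not be restarted forever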
html # an anchor is a marker used for positioning within a page
4.5 Accessing a service through the apiserver reverse proxy
Option 1: the NodePort type   ## a cluster VIP is allocated as well
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30008
Option 2: the ClusterIP type
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
Option 3: the apiserver proxy: http://10.0.0.11:8080/api/v1/proxy/namespaces/<namespace>/services/<service name>
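For example (a hedged illustration reusing the myweb service in the tomcat namespace from section 4.2; substitute whatever namespace and service you actually deployed):
curl http://10.0.0.11:8080/api/v1/proxy/namespaces/tomcat/services/myweb/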
5: k8s elastic scaling (autoscaling)
Autoscaling in k8s needs the heapster monitoring add-on
5.1 Install heapster monitoring
1: Upload and load the images, then tag them
Data flow: the kubelet's cadvisor (e.g. 10.0.0.12:4194) exposes the metrics, heapster collects them through the api-server and stores them in influxdb, grafana draws the graphs, and the dashboard calls grafana.
ls *.tar.gz
for n in `ls *.tar.gz`;do docker load -i $n ;done
docker tag docker.io/kubernetes/heapster_grafana:v2.6.0 10.0.0.11:5000/heapster_grafana:v2.6.0
docker tag docker.io/kubernetes/heapster_influxdb:v0.5 10.0.0.11:5000/heapster_influxdb:v0.5
docker tag docker.io/kubernetes/heapster:canary 10.0.0.11:5000/heapster:canary
2: Upload the config files and run kubectl create -f .
3: Open the dashboard to verify
The svc routes to the pods that the deployment controls; the HPA (the autoscaling rule) is created on the deployment, and the HPA needs the monitoring data to work.
5.2 Autoscaling
[root@master monitor]# kubectl delete pod --all ## delete the old pods
[root@master monitor]# cd ../deploy/
[root@master deploy]# ls
k8s_deploy.yml
[root@master deploy]# cat k8s_deploy.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 100m
[root@master deploy]# kubectl create -f k8s_deploy.yml
[root@master deploy]# kubectl expose deployment nginx-deployment --port=80 --target-port=80 --type=NodePort
[root@master deploy]# kubectl get all
svc/nginx-deployment 10.254.174.57 <nodes> 80:20591/TCP 1d
[root@master deploy]# curl -I http://10.0.0.12:20591
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Wed, 11 Dec 2019 14:23:50 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
[root@master deploy]# yum install httpd -y
[root@master deploy]# ab -n 1000000 -c 40 http://10.0.0.12:20591/index.html
1: Edit the rc config file
  containers:
  - name: myweb
    image: 10.0.0.11:5000/nginx:1.13
    ports:
    - containerPort: 80
    resources:
      limits:
        cpu: 100m
      requests:
        cpu: 100m
2: Create the autoscaling rule (a manifest equivalent is sketched after the test step below)
kubectl autoscale -n qiangge replicationcontroller myweb --max=8 --min=1 --cpu-percent=8
3: Test
ab -n 1000000 -c 40 http://172.16.28.6/index.html
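The same rule can also be written as a manifest (a hedged equivalent of the kubectl autoscale command above, for the myweb rc in the qiangge namespace):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myweb
  namespace: qiangge
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: myweb
  minReplicas: 1
  maxReplicas: 8
  targetCPUUtilizationPercentage: 8   # matches --cpu-percent=8 above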
(screenshot of the scale-up omitted)
(screenshot of the scale-down omitted)
6: Persistent storage
pv: persistent volume
pvc: persistent volume claim
Why persistent storage: some data, such as user uploads, has to be kept
Types of data persistence:
6.1 emptyDir:
spec:
  nodeName: 10.0.0.13
  volumes:
  - name: mysql
    emptyDir: {}
  containers:
  - name: wp-mysql
    image: 10.0.0.11:5000/mysql:5.7
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 3306
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: mysql
6.2 HostPath:
spec:
  nodeName: 10.0.0.13
  volumes:
  - name: mysql
    hostPath:
      path: /data/wp_mysql
  containers:
  - name: wp-mysql
    image: 10.0.0.11:5000/mysql:5.7
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 3306
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: mysql
6.3 nfs:
  volumes:
  - name: mysql
    nfs:
      path: /data/wp_mysql
      server: 10.0.0.11
6.4 pvc:
pv: persistent volume, a global (cluster-wide) resource
pvc: persistent volume claim, a local resource that belongs to one namespace
[root@master ~]# kubectl explain pod.spec.volumes ## look up the field syntax
6.4.1: Install the NFS server (master, 10.0.0.11)
yum install nfs-utils.x86_64 -y
mkdir /data
vim /etc/exports
/data 10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
systemctl start rpcbind
systemctl start nfs
6.4.2: Install the NFS client on the node machines (10.0.0.12 and 10.0.0.13)
yum install nfs-utils.x86_64 -y
showmount -e 10.0.0.11
6.4.3: Create the pv and pvc
Upload the yaml config files and create the pv and pvc (the files are not reproduced here; a sketch follows)
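A minimal sketch of what the two manifests could look like, assuming the NFS export on 10.0.0.11 from 6.4.1 and the claim name tomcat-mysql used in the rc below; the sub-path, size and the tomcat namespace are illustrative assumptions:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tomcat-mysql
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /data/tomcat_mysql   # an exported sub-directory; adjust to your layout
    server: 10.0.0.11
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tomcat-mysql
  namespace: tomcat
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi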
6.4.4: Create mysql-rc and use the volume in the pod template
    volumeMounts:
    - name: mysql
      mountPath: /var/lib/mysql
  volumes:
  - name: mysql
    persistentVolumeClaim:
      claimName: tomcat-mysql
6.4.5: Verify persistence
Method 1: delete the mysql pod; the database data should survive
kubectl delete pod mysql-gt054
Method 2: check on the NFS server that mysql's data files are there
6.6: Distributed storage with glusterfs
a: What is glusterfs
Glusterfs is an open-source distributed file system with strong horizontal scalability. It can handle several petabytes of storage and thousands of clients, joining machines over the network into a single parallel network file system. It is scalable, high-performance and highly available.
b: Install glusterfs
On all nodes:
yum install centos-release-gluster -y
yum install glusterfs-server -y
systemctl start glusterd.service
systemctl enable glusterd.service
# Create the storage units (bricks) for the gluster cluster
mkdir -p /gfs/test1
mkdir -p /gfs/test2
mkdir -p /gfs/test3
### Add three 10 GB disks to every node
echo '- - -' >/sys/class/scsi_host/host0/scan
echo '- - -' >/sys/class/scsi_host/host1/scan
echo '- - -' >/sys/class/scsi_host/host2/scan
fdisk -l
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
mkfs.xfs /dev/sdd
mount /dev/sdb /gfs/test1
mount /dev/sdc /gfs/test2
mount /dev/sdd /gfs/test3
c: Add nodes to the storage pool
On the master node:
gluster pool list
gluster peer probe k8s-node1
gluster peer probe k8s-node2
gluster pool list
d: Managing glusterfs volumes
Create a distributed replicated volume
gluster volume create qiangge replica 2 k8s-master:/gfs/test1 k8s-master:/gfs/test2 k8s-node1:/gfs/test1 k8s-node1:/gfs/test2 force
Start the volume
gluster volume start qiangge
View the volume
gluster volume info qiangge
Mount the volume
mount -t glusterfs 10.0.0.11:/qiangge /mnt
e: How distributed replicated volumes work
f: Expanding a distributed replicated volume
Check the capacity before expanding:
df -h
Expand:
gluster volume add-brick qiangge k8s-node2:/gfs/test1 k8s-node2:/gfs/test2 force
Check the capacity after expanding:
df -h
### Walk-through
[root@master ~]# gluster volume create oldxu master:/gfs/test1 node-1:/gfs/test1 node-2:/gfs/test1 force
volume create: oldxu: success: please start the volume to access data
[root@master ~]# gluster volume info oldxu
Volume Name: oldxu
Type: Distribute
Volume ID: 3359e285-95ae-41a6-8791-70e4b6e0e52c
Status: Created
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: master:/gfs/test1
Brick2: node-1:/gfs/test1
Brick3: node-2:/gfs/test1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@master ~]# gluster volume start oldxu
volume start: oldxu: success
[root@master ~]# mount -t glusterfs 127.0.0.1:/oldxu /mnt
[root@master ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 48G 4.0G 45G 9% /
devtmpfs 476M 0 476M 0% /dev
tmpfs 487M 0 487M 0% /dev/shm
tmpfs 487M 51M 437M 11% /run
tmpfs 487M 0 487M 0% /sys/fs/cgroup
tmpfs 98M 0 98M 0% /run/user/0
/dev/sdb 10G 33M 10G 1% /gfs/test1
/dev/sdc 10G 33M 10G 1% /gfs/test2
/dev/sdd 10G 33M 10G 1% /gfs/test3
overlay 48G 4.0G 45G 9% /var/lib/docker/overlay2/b912d985d96128c79652e7b93b67db57dab3131c586362bf91967424949db051/merged
shm 64M 0 64M 0% /var/lib/docker/containers/ca26dee7a7055b1fcb8201cb6c0f737130221c9607a6096b5615404b0d4d9a2b/shm
127.0.0.1:/oldxu 30G 404M 30G 2% /mnt
[root@master ~]# cp /data/wordpress/web/*.php /mnt
cp: cannot stat ‘/data/wordpress/web/*.php’: No such file or directory
[root@master ~]# gluster volume add-brick oldxu replica 2 master:/gfs/test2 node-1:/gfs/test2 node-2:/gfs/test2 force
volume add-brick: success
[root@master ~]# gluster volume add-brick oldxu master:/gfs/test3 node-1:/gfs/test3 force
volume add-brick: success
[root@master ~]# tree /gfs/test1
/gfs/test1
0 directories, 0 files
[root@master ~]# gluster volume info oldxu
Volume Name: oldxu
Type: Distributed-Replicate
Volume ID: 3359e285-95ae-41a6-8791-70e4b6e0e52c
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: master:/gfs/test1
Brick2: master:/gfs/test2
Brick3: node-1:/gfs/test1
Brick4: node-1:/gfs/test2
Brick5: node-2:/gfs/test1
Brick6: node-2:/gfs/test2
Brick7: master:/gfs/test3
Brick8: node-1:/gfs/test3
Options Reconfigured:
performance.client-io-threads: off
transport.address-family: inet
nfs.disable: on
[root@master ~]# gluster volume rebalance oldxu start
volume rebalance: oldxu: success: Rebalance on oldxu has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: a287b4c7-755b-46a7-b22e-8b1a3bff3d39
[root@master ~]# tree /gfs
/gfs
├── test1
├── test2
└── test3
3 directories, 0 files
6.7 Using glusterfs storage from k8s
a: Create the endpoints
vi glusterfs-ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: tomcat
subsets:
- addresses:
  - ip: 10.0.0.11
  - ip: 10.0.0.12
  - ip: 10.0.0.13
  ports:
  - port: 49152
    protocol: TCP
b: Create the service
vi glusterfs-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs
  namespace: tomcat
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
  type: ClusterIP
c: Create a glusterfs-type pv
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster
  labels:
    type: glusterfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs"
    path: "qiangge"
    readOnly: false
d: Create the pvc
(omitted in the original; a sketch follows)
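A minimal sketch of the claim, assuming it lives in the tomcat namespace like the endpoints and service above and binds to the 50Gi gluster pv; the name gluster matches the claimName used in the pod below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster
  namespace: tomcat
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi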
e: Use gluster in a pod
vi nginx_pod.yaml
……
    volumeMounts:
    - name: nfs-vol2
      mountPath: /usr/share/nginx/html
  volumes:
  - name: nfs-vol2
    persistentVolumeClaim:
      claimName: gluster
7: CI/CD integration with jenkins
IP address | Service | Memory |
---|---|---|
10.0.0.11 | kube-apiserver 8080 | 1G |
10.0.0.12 | jenkins (tomcat + jdk) 8080 | 1G |
10.0.0.13 | gitlab 8080, 80 | 2G |
7.1: Install gitlab and push the code
#a: Install
wget https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7/gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm
yum localinstall gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm -y
#b: Configure
vim /etc/gitlab/gitlab.rb
external_url 'http://10.0.0.13'
prometheus_monitoring['enable'] = false
#c: Apply the configuration and start the service
gitlab-ctl reconfigure
# Open http://10.0.0.13 in a browser, change the root user's password, and create a project
# Push the code to the git repository
cd /srv/
rz -E
unzip xiaoniaofeifei.zip
rm -fr xiaoniaofeifei.zip
git config --global user.name "Administrator"
git config --global user.email "admin@example.com"
git init
git remote add origin http://10.0.0.13/root/xiaoniao.git
git add .
git commit -m "Initial commit"
git push -u origin master
7.2 Install jenkins and automatically build docker images
7.2.1: Install jenkins
cd /opt/
rz -E
rpm -ivh jdk-8u102-linux-x64.rpm
mkdir /app
tar xf apache-tomcat-8.0.27.tar.gz -C /app
rm -fr /app/apache-tomcat-8.0.27/webapps/*
mv jenkins.war /app/apache-tomcat-8.0.27/webapps/ROOT.war
tar xf jenkin-data.tar.gz -C /root
/app/apache-tomcat-8.0.27/bin/startup.sh
netstat -lntup
7.2.2: Access jenkins
Open http://10.0.0.12:8080/; the default account/password is admin:123456
7.2.3: Configure jenkins credentials for pulling code from gitlab
a: Generate an SSH key pair on the jenkins host
ssh-keygen -t rsa
b: Paste the public key into gitlab
c: Create a global credential in jenkins
7.2.4: Test pulling the code
7.2.5: Write the dockerfile and test it
#vim dockerfile
FROM 10.0.0.11:5000/nginx:1.13
ADD . /usr/share/nginx/html
Add a .dockerignore listing the files that docker build should not ADD
vim .dockerignore
dockerfile
docker build -t xiaoniao:v1 .
docker run -d -p 88:80 xiaoniao:v1
Open a browser and test the xiaoniaofeifei project
7.2.6: Push the dockerfile and .dockerignore to the repository
git add dockerfile .dockerignore
git commit -m "first commit"
git push -u origin master
7.2.7: Click "Build Now" in jenkins to automatically build the docker image and push it to the private registry
Edit the jenkins job configuration and add the build step:
docker build -t 10.0.0.11:5000/test:v$BUILD_ID .
docker push 10.0.0.11:5000/test:v$BUILD_ID
7.3 Deploy the application to k8s automatically from jenkins
kubectl -s 10.0.0.11:8080 get nodes
if [ -f /tmp/xiaoniao.lock ];then
docker build -t 10.0.0.11:5000/xiaoniao:v$BUILD_ID .
docker push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
kubectl -s 10.0.0.11:8080 set image -n xiaoniao deploy xiaoniao xiaoniao=10.0.0.11:5000/xiaoniao:v$BUILD_ID
echo "更新成功"
else
docker build -t 10.0.0.11:5000/xiaoniao:v$BUILD_ID .
docker push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
kubectl -s 10.0.0.11:8080 create namespace xiaoniao
kubectl -s 10.0.0.11:8080 run xiaoniao -n xiaoniao --image=10.0.0.11:5000/xiaoniao:v$BUILD_ID --replicas=3 --record
kubectl -s 10.0.0.11:8080 expose -n xiaoniao deployment xiaoniao --port=80 --type=NodePort
port=`kubectl -s 10.0.0.11:8080 get svc -n xiaoniao|grep -oP '(?<=80:)\d+'`
echo "你的項目地址訪問是http://10.0.0.13:$port"
touch /tmp/xiaoniao.lock
fi
One-command rollback from jenkins
kubectl -s 10.0.0.11:8080 rollout undo -n xiaoniao deployment xiaoniao