Container Cloud Technology
Installing the Docker Engine
Prepare two virtual machines, one as the Docker master node and one as the Docker client, and install CentOS 7.5 (1804) on both
Basic environment configuration
NIC configuration (master node)
Set the hostname of the Docker master node
# hostnamectl set-hostname master
Configure the NIC
# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=099334fe-751c-4dc4-b062-d421640ceb2e
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.7.10
NETMASK=255.255.255.0
GATEWAY=192.168.7.2
DNS1=114.114.114.114
NIC configuration (slave node)
Set the hostname of the Docker client
# hostnamectl set-hostname slave
Configure the NIC
# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=53bbedb7-248e-4110-bd80-82ca6371f016
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.7.20
NETMASK=255.255.255.0
GATEWAY=192.168.7.2
DNS1=114.114.114.114
Configure the YUM repository (both nodes)
Upload the provided archive Docker.tar.gz to /root and extract it
# tar -zxvf Docker.tar.gz
Configure a local YUM repository
# mv /etc/yum.repos.d/CentOS-* /media/
# vi /etc/yum.repos.d/local.repo
[kubernetes]
name=kubernetes
baseurl=file:///root/Docker
gpgcheck=0
enabled=1
Upgrade the system kernel (both nodes)
Docker CE supports 64-bit CentOS 7 and requires kernel version 3.10 or later
# yum -y upgrade
Configure the firewall (both nodes)
Configure the firewall and SELinux
# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
# iptables -t filter -F
# iptables -t filter -X
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# reboot
Enable IP forwarding (both nodes)
# vi /etc/sysctl.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
# modprobe br_netfilter
# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Installing the Docker engine
Install dependencies (both nodes)
yum-utils provides the yum-config-manager utility; device-mapper-persistent-data and lvm2 are required by the devicemapper storage driver
# yum install -y yum-utils device-mapper-persistent-data lvm2
Install docker-ce (both nodes)
Docker CE is the new name for the free Docker product. It contains the complete Docker platform and is well suited for developers and operations teams building containerized applications
Install a specific version of Docker CE
# yum install -y docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io
Start Docker (both nodes)
Start Docker and enable it at boot
# systemctl daemon-reload
# systemctl restart docker
# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
View Docker system information
# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 18.09.6
Storage Driver: devicemapper
Using a Docker Registry
Building a private registry
The official Docker Hub is a repository for managing public images: users can find the images they need there and can also push private images to it. Docker Hub provides an official Registry image, which can be run directly as a container to host a private registry service
Run the Registry (master node)
Run the Registry image to create a container
# ./image.sh
# docker run -d -v /opt/registry:/var/lib/registry -p 5000:5000 --restart=always --name registry registry:latest
dff76b9fb042ff1ea15741a50dc81e84d7afad4cc057c79bc16e370d2ce13c2a
By default the Registry service stores uploaded images under /var/lib/registry inside the container; mounting the host directory /opt/registry onto that path stores the images in /opt/registry on the host
Check the container (master node)
Check that the container is running
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dff76b9fb042 registry:latest "/entrypoint.sh /etc…" About a minute ago Up About a minute 0.0.0.0:5000->5000/tcp registry
Check the status
After the Registry container starts, open a browser and visit http://192.168.7.10:5000/v2/
Push an image (master node)
Configure the private registry address
# vi /etc/docker/daemon.json
{
"insecure-registries":["192.168.7.10:5000"]
}
# systemctl restart docker
Tag the local centos image and push it to the private registry
# docker tag centos:latest 192.168.7.10:5000/centos:latest
# docker push 192.168.7.10:5000/centos:latest
The push refers to repository [192.168.7.10:5000/centos]
9e607bb861a7: Pushed
latest: digest: sha256:6ab380c5a5acf71c1b6660d645d2cd79cc8ce91b38e0352cbf9561e050427baf size: 529
Query the registry catalog to confirm the upload
# curl http://192.168.7.10:5000/v2/_catalog
{"repositories":["centos"]}
Pull the image (slave node)
Configure the private registry address
# vi /etc/docker/daemon.json
{
"insecure-registries":["192.168.7.10:5000"]
}
# systemctl restart docker
Pull the image and check the result
# docker pull 192.168.7.10:5000/centos:latest
latest: Pulling from centos
729ec3a6ada3: Pull complete
Digest: sha256:6ab380c5a5acf71c1b6660d645d2cd79cc8ce91b38e0352cbf9561e050427baf
Status: Downloaded newer image for 192.168.7.10:5000/centos:latest
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.7.10:5000/centos latest 0f3e07c0138f 7 months ago 220MB
Using Docker Images and Containers
Operate on the Docker master node (host master)
Basic image management and usage
An image can be produced in several ways:
(1) create an image from scratch
(2) download and use a ready-made image created by someone else
(3) create a new image on top of an existing one
The contents and build steps of an image can be described in a text file called a Dockerfile; running docker build <docker-file> builds the image
List images
List the images on the local host
# ./image.sh
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
httpd latest d3017f59d5e2 7 months ago 165MB
busybox latest 020584afccce 7 months ago 1.22MB
nginx latest 540a289bab6c 7 months ago 126MB
redis alpine 6f63d037b592 7 months ago 29.3MB
python 3.7-alpine b11d2a09763f 7 months ago 98.8MB
<none> <none> 4cda95efb0e4 7 months ago 80.6MB
192.168.7.10:5000/centos latest 0f3e07c0138f 7 months ago 220MB
centos latest 0f3e07c0138f 7 months ago 220MB
registry latest f32a97de94e1 14 months ago 25.8MB
swarm latest ff454b4a0e84 24 months ago 12.7MB
httpd 2.2.32 c51e86ea30d1 2 years ago 171MB
httpd 2.2.31 c8a7fb36e3ab 3 years ago 170MB
REPOSITORY: the repository the image belongs to
TAG: the image tag
IMAGE ID: the image ID
CREATED: when the image was created
SIZE: the image size
Run a container
A repository can hold multiple TAGs, each representing a different version of that repository
# docker run -i -t -d httpd:2.2.31 /bin/bash
be31c7adf30f88fc5d6c649311a5640601714483e8b9ba6f8db853c73fc11638
-i: interactive operation
-t: allocate a terminal
-d: run in the background
httpd:2.2.31: the image name; the container is started from the httpd:2.2.31 image
/bin/bash: an interactive shell inside the container
If no tag is specified, the image tagged latest is used by default
Pull images
When an image that does not exist locally is used, Docker downloads it automatically. To download an image in advance, use the docker pull command
Format
# docker pull [OPTIONS] NAME[:TAG|@DIGEST]
OPTIONS:
-a: pull all tagged images
--disable-content-trust: skip image verification (verification is enabled by default)
Search for images
There are two common ways to search for images: browse the Docker Hub website (https://hub.docker.com/) or use the docker search command
Format
OPTIONS:
--automated: list only automated-build images
--no-trunc: show the full image description
--filter=stars: list only images with at least the given number of stars
Search for a suitable java image with at least 10 stars
# docker search --filter=stars=10 java
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
node Node.js is a JavaScript-based platform for s… 8863 [OK]
tomcat Apache Tomcat is an open source implementati… 2739 [OK]
openjdk OpenJDK is an open-source implementation of … 2265 [OK]
java Java is a concurrent, class-based, and objec… 1976 [OK]
ghost Ghost is a free and open source blogging pla… 1188 [OK]
couchdb CouchDB is a database that uses JSON for doc… 347 [OK]
jetty Jetty provides a Web server and javax.servle… 336 [OK]
groovy Apache Groovy is a multi-faceted language fo… 92 [OK]
lwieske/java-8 Oracle Java 8 Container - Full + Slim - Base… 46 [OK]
nimmis/java-centos This is docker images of CentOS 7 with diffe… 42 [OK]
fabric8/java-jboss-openjdk8-jdk Fabric8 Java Base Image (JBoss, OpenJDK 8) 28 [OK]
cloudbees/java-build-tools Docker image with commonly used tools to bui… 15 [OK]
frekele/java docker run --rm --name java frekele/java 12 [OK]
NAME: name of the image repository
DESCRIPTION: image description
OFFICIAL: whether the image is an official Docker release
STARS: like stars on GitHub, a measure of popularity
AUTOMATED: automated build
Delete images
Format
# docker rmi [OPTIONS] IMAGE [IMAGE...]
OPTIONS:
-f: force removal
--no-prune: keep the image's untagged parent layers (they are removed by default)
Force-delete the local image busybox
# docker rmi -f busybox:latest
Untagged: busybox:latest
Deleted: sha256:020584afccce44678ec82676db80f68d50ea5c766b6e9d9601f7b5fc86dfb96d
Deleted: sha256:1da8e4c8d30765bea127dc2f11a17bc723b59480f4ab5292edb00eb8eb1d96b1
Basic container management and usage
A container is a lightweight, portable, self-contained software packaging technology that lets an application run the same way almost anywhere. A container consists of the application itself and its dependencies
Run containers
Run a first container
# docker run -it --rm -d -p 80:80 nginx:latest
d7ab2c1aa4511f5ffc76a9bba3ab736fb9817793c6bec5af53e1cfddfb0904cd
-i: interactive operation
-t: allocate a terminal
--rm: remove the container automatically when it exits, to avoid wasting space
-p: port mapping
-d: run the container in the background
Process
In short:
(1) download the Nginx image
(2) start the container and map container port 80 to host port 80
What docker run does when creating a container:
(1) check whether the specified image exists locally; if not, download it from the public registry
(2) create and start a container from the image
(3) allocate a filesystem and mount a read-write layer on top of the read-only image layers
(4) bridge a virtual interface into the container from the bridge interface configured on the host
(5) assign the container an IP address from the address pool
(6) execute the application specified by the user
Verify the container is working
Open http://192.168.7.10 in a browser
Start containers
Format
# docker start [CONTAINER ID]
Start all Docker containers
# docker start $(docker ps -aq)
d7ab2c1aa451
be31c7adf30f
dff76b9fb042
Operate on containers
List running containers
# docker ps 或者 # docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d7ab2c1aa451 nginx:latest "nginx -g 'daemon of…" 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp distracted_hoover
be31c7adf30f httpd:2.2.31 "/bin/bash" 26 minutes ago Up 26 minutes 80/tcp priceless_hellman
dff76b9fb042 registry:latest "/entrypoint.sh /etc…" 21 hours ago Up 44 minutes 0.0.0.0:5000->5000/tcp registry
List all containers
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d7ab2c1aa451 nginx:latest "nginx -g 'daemon of…" 9 minutes ago Up 9 minutes 0.0.0.0:80->80/tcp distracted_hoover
be31c7adf30f httpd:2.2.31 "/bin/bash" 27 minutes ago Up 27 minutes 80/tcp priceless_hellman
dff76b9fb042 registry:latest "/entrypoint.sh /etc…" 21 hours ago Up About an hour 0.0.0.0:5000->5000/tcp registry
View detailed information about a container
# docker inspect [container ID or NAMES]
View a container's resource usage
# docker stats [container ID or NAMES]
View container logs
# docker logs [OPTIONS] [container ID or NAMES]
OPTIONS:
--details: show extra details
-f, --follow: follow the live log output
--since string: show logs since the given timestamp or relative time
--tail string: number of lines to show from the end of the logs (default all)
-t, --timestamps: show timestamps
--until string: show logs before the given timestamp or relative time
Enter a container
Format
# docker exec -it [CONTAINER ID] bash
Once inside, type exit or press Ctrl+D to leave the container
Stop and remove containers
Remove a stopped container
# docker rm [CONTAINER ID]
Remove all stopped containers
# docker container prune
Remove unused volumes
# docker volume prune
Remove a running container
# docker rm -f [CONTAINER ID]
Stop all containers
# docker stop $(docker ps -aq)
Remove all containers
# docker rm $(docker ps -aq)
Stop a container's process so the container enters the stopped state
# docker container stop [CONTAINER ID]
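The batch commands above rely on shell command substitution: `$(docker ps -aq)` expands to the list of all container IDs, which `docker stop` and `docker rm` then receive as arguments. A minimal sketch with simulated IDs (the IDs are hypothetical, and the commands are only echoed rather than executed):

```shell
# Simulated output of `docker ps -aq` (hypothetical container IDs)
ids="d7ab2c1aa451
be31c7adf30f
dff76b9fb042"

# `docker stop $(docker ps -aq)` passes every ID at once; here we just
# echo the command that would run for each ID
for id in $ids; do
  echo "docker stop $id"
done
```

The same expansion explains why `docker rm $(docker ps -aq)` fails for containers that are still running unless `-f` is added.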
Export/import containers
Export a container snapshot to a local file
Format
# docker export [CONTAINER ID] > [tar file]
# docker export d7ab2c1aa451 > nginx.tar
# ll
total 1080192
-rw-------. 1 root root 1569 May 28 02:19 anaconda-ks.cfg
drwxr-xr-x. 4 root root 34 Oct 31 2019 Docker
-rw-r--r--. 1 root root 977776539 Nov 4 2019 Docker.tar.gz
drwxr-xr-x. 2 root root 4096 Oct 31 2019 images
-rwxr-xr-x. 1 root root 498 Oct 31 2019 image.sh
drwxr-xr-x. 2 root root 40 Nov 4 2019 jdk
-rw-r--r-- 1 root root 128325632 May 28 17:30 nginx.tar
Import a container snapshot file back as an image
Format
# cat [tar file] | docker import - [name:tag]
# cat nginx.tar | docker import - nginx:test
sha256:743846df0ce06109d801cb4118e9e4d3082243d6323dbaa6efdcda74f4c000bf
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx test 743846df0ce0 17 seconds ago 125MB
httpd latest d3017f59d5e2 7 months ago 165MB
nginx latest 540a289bab6c 7 months ago 126MB
redis alpine 6f63d037b592 7 months ago 29.3MB
python 3.7-alpine b11d2a09763f 7 months ago 98.8MB
<none> <none> 4cda95efb0e4 7 months ago 80.6MB
centos latest 0f3e07c0138f 7 months ago 220MB
192.168.7.10:5000/centos latest 0f3e07c0138f 7 months ago 220MB
registry latest f32a97de94e1 14 months ago 25.8MB
swarm latest ff454b4a0e84 24 months ago 12.7MB
httpd 2.2.32 c51e86ea30d1 2 years ago 171MB
httpd 2.2.31 c8a7fb36e3ab 3 years ago 170MB
When docker import loads a container snapshot into the local image library, all history and metadata are discarded; only the container's state at the time of the snapshot is kept
Building custom images
There are two main ways to build a custom image: docker commit and a Dockerfile
docker commit works like committing changes in a version-control system: you modify a running container and then commit the changes as a new image
Create a new image from a container
Format
# docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
OPTIONS:
-a: author of the committed image
-c: apply Dockerfile instructions while creating the image
-m: commit message
-p: pause the container during the commit
Commit the running Nginx container as a new image nginx:v1
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d7ab2c1aa451 nginx:latest "nginx -g 'daemon of…" 32 minutes ago Up 32 minutes 0.0.0.0:80->80/tcp distracted_hoover
be31c7adf30f httpd:2.2.31 "/bin/bash" About an hour ago Up About an hour 80/tcp priceless_hellman
dff76b9fb042 registry:latest "/entrypoint.sh /etc…" 22 hours ago Up About an hour 0.0.0.0:5000->5000/tcp registry
# docker commit d7ab2c1aa451 nginx:v1
sha256:0a18e29db3b007302cd0e0011b4e34a756ef44ce4939d51e599f986204ce1f34
# docker images nginx:v1
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx v1 0a18e29db3b0 38 seconds ago 126MB
Dockerfile
A Dockerfile is a text document containing the commands used to assemble an image; any command that can be run on the command line can be used in it. Docker builds an image automatically by reading the instructions in the Dockerfile
Key points
Format
# docker build -f /path/to/a/Dockerfile .
A Dockerfile generally has four parts: base-image information, maintainer information, image-build instructions, and the instruction executed when the container starts; "#" marks a comment in a Dockerfile
Main Dockerfile instructions:
FROM: specifies the base image; must be the first instruction
MAINTAINER: maintainer information
RUN: a command executed while building the image
ADD: copies local files into the container; tar archives are extracted automatically (remote compressed resources are not); can fetch network resources, similar to wget
COPY: like ADD, but does not auto-extract archives and cannot fetch network resources
CMD: invoked after the container is built, i.e. when the container starts
ENTRYPOINT: configures the container's executable; combined with CMD, the application itself can be omitted and only arguments passed
LABEL: adds metadata to the image
ENV: sets environment variables
EXPOSE: declares the port used to interact with the outside world
VOLUME: specifies a persistent directory
WORKDIR: the working directory, similar to the cd command
USER: specifies the user name or UID used to run the container; subsequent RUN instructions also use this user. USER accepts a user name, UID, or GID, or a combination of them. When a service does not need administrator privileges, use this instruction to run as a non-root user
ARG: defines a variable passed in at build time
ONBUILD: sets an image trigger
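As a quick illustration of how these instructions fit together, here is a minimal hypothetical Dockerfile for a static web server; the image tag, file name, and port are assumptions for illustration, not part of this lab:

```dockerfile
# Base-image information (FROM must come first)
FROM nginx:latest
# Maintainer information
MAINTAINER example
# Image-build instructions: copy content and set an environment variable
COPY index.html /usr/share/nginx/html/
ENV NGINX_PORT 80
# Declare the port exposed to the outside world
EXPOSE 80
# Instruction executed when the container starts
CMD ["nginx", "-g", "daemon off;"]
```

Building it with `docker build -t my-web .` from the directory containing the Dockerfile and index.html would produce a runnable image.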
Build preparation
Using centos:latest as the base image, install JDK 1.8 and build a new image centos-jdk
Create a directory to hold the JDK package and the Dockerfile
# mkdir centos-jdk
# mv jdk-8u141-linux-x64.tar.gz ./centos-jdk/
# cd centos-jdk/
Write the Dockerfile
# vi Dockerfile
Contents
# CentOS with JDK 8
# Author kei
# Specify the base image
FROM centos
# Specify the author
MAINTAINER kei
# Create a directory to hold the JDK files
RUN mkdir /usr/local/java
# Copy the JDK archive into the image (ADD extracts tar files automatically)
ADD jdk-8u141-linux-x64.tar.gz /usr/local/java
# Create a symbolic link
RUN ln -s /usr/local/java/jdk1.8.0_141 /usr/local/java/jdk
# Set environment variables
ENV JAVA_HOME /usr/local/java/jdk
ENV JRE_HOME ${JAVA_HOME}/jre
ENV CLASSPATH .:${JAVA_HOME}/lib:${JRE_HOME}/lib
ENV PATH ${JAVA_HOME}/bin:$PATH
Build the new image
# docker build -t="centos-jdk" .
Sending build context to Docker daemon 185.5MB
Step 1/9 : FROM centos
---> 0f3e07c0138f
Step 2/9 : MAINTAINER dockerzlnewbie
---> Running in 1a6a5c210531
Removing intermediate container 1a6a5c210531
---> 286d78e0b9bf
Step 3/9 : RUN mkdir /usr/local/java
---> Running in 2dbbac61b2cf
Removing intermediate container 2dbbac61b2cf
---> 369567834d80
Step 4/9 : ADD jdk-8u141-linux-x64.tar.gz /usr/local/java/
---> 8fb102032ae2
Step 5/9 : RUN ln -s /usr/local/java/jdk1.8.0_141 /usr/local/java/jdk
---> Running in d8301e932f7c
Removing intermediate container d8301e932f7c
---> 7c82ee6703c5
Step 6/9 : ENV JAVA_HOME /usr/local/java/jdk
---> Running in d8159a32efae
Removing intermediate container d8159a32efae
---> d270abf08fa2
Step 7/9 : ENV JRE_HOME ${JAVA_HOME}/jre
---> Running in 5206ba2ec963
Removing intermediate container 5206ba2ec963
---> a52dc52bae76
Step 8/9 : ENV CLASSPATH .:${JAVA_HOME}/lib:${JRE_HOME}/lib
---> Running in 41fbd969bd90
Removing intermediate container 41fbd969bd90
---> ff44f5f90877
Step 9/9 : ENV PATH ${JAVA_HOME}/bin:$PATH
---> Running in 7affe7505c82
Removing intermediate container 7affe7505c82
---> bdf402785277
Successfully built bdf402785277
Successfully tagged centos-jdk:latest
View the newly built image
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
centos-jdk latest bdf402785277 11 minutes ago 596MB
Run a container from the new image to verify that the JDK installed correctly
# docker run -it centos-jdk /bin/bash
java -version
java version "1.8.0_141"
Java(TM) SE Runtime Environment (build 1.8.0_141-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)
Docker Container Orchestration
Prepare two virtual machines, one as the Swarm cluster master node (master) and one as a Swarm worker node (node); all nodes already have hostnames and NICs configured and docker-ce installed
Container orchestration tools provide a useful and powerful solution for coordinating the creation, management, and updating of multiple containers across multiple hosts
Swarm is Docker's own orchestration tool; it is now fully integrated with the Docker Engine and uses standard APIs and networking
Deploy a Swarm cluster
Configure host mappings (both nodes)
Edit /etc/hosts to add the host mappings
# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.7.10 master
192.168.7.20 node
Configure time synchronization
Install the service (both nodes)
Install the chrony service
# yum install -y chrony
master node
Edit /etc/chrony.conf: comment out the default NTP servers, configure master as a local time source, and allow other nodes to synchronize time
# sed -i 's/^server/#&/' /etc/chrony.conf
# vi /etc/chrony.conf
local stratum 10
server master iburst
allow all
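The sed one-liner above uses `&` in the replacement to reinsert the matched text, so every line beginning with `server` is commented out in place. A quick sketch of its effect on a typical default server line (the pool hostname is illustrative):

```shell
# '&' in the replacement stands for the matched text, so '^server' lines
# are commented out in place
echo "server 0.centos.pool.ntp.org iburst" | sed 's/^server/#&/'
```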
Restart chronyd, enable it at boot, and turn on network time synchronization
# systemctl enable chronyd && systemctl restart chronyd
node node
Edit /etc/chrony.conf to use the internal master node as the upstream NTP server
# sed -i 's/^server/#&/' /etc/chrony.conf
# echo server 192.168.7.10 iburst >> /etc/chrony.conf
Restart the service and enable it at boot
# systemctl enable chronyd && systemctl restart chronyd
Check synchronization (both nodes)
# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
==================================================================
^* master 10 6 77 7 +13ns[-2644ns] +/- 13us
Configure the Docker API (both nodes)
Enable the Docker remote API
# vi /lib/systemd/system/docker.service
Change
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
to
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
# systemctl daemon-reload
# systemctl restart docker
# ./image.sh
Initialize the cluster (master node)
Create the Swarm cluster
# docker swarm init --advertise-addr 192.168.7.10
Swarm initialized: current node (jit2j1itocmsynhecj905vfwp) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-2oyrpgkp41z40zg0z6l0yppv6420vz18rr171kqv0mfsbiufii-c3ficc1qh782wo567uav16n3n 192.168.7.10:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
The --advertise-addr option in the init command tells the manager node which IP to advertise; other nodes must be able to reach the manager through this IP
The output covers three steps:
(1) the Swarm was created successfully and swarm-manager became a manager node
(2) the command to run to add a worker node
(3) the command to run to add a manager node
Join the cluster (node node)
Copy the docker swarm join command printed above and run it on the node to join the Swarm cluster
# docker swarm join --token SWMTKN-1-2oyrpgkp41z40zg0z6l0yppv6420vz18rr171kqv0mfsbiufii-c3ficc1qh782wo567uav16n3n 192.168.7.10:2377
This node joined a swarm as a worker.
Verify the cluster (master node)
Check the status of each node
# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
jit2j1itocmsynhecj905vfwp * master Ready Active Leader 18.09.6
8mww97xnbfxfrbzqndplxv3vi node Ready Active 18.09.6
Install Portainer (master node)
Portainer is a graphical management tool for Docker. It provides status dashboards, quick deployment from application templates, basic operations on containers, images, networks, and data volumes (including uploading and downloading images and creating containers), event logs, a container console, centralized management of Swarm clusters and services, and user management and access control
# docker volume create portainer_data
portainer_data
# docker service create --name portainer --publish 9000:9000 --replicas=1 --constraint 'node.role == manager' --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock --mount type=volume,src=portainer_data,dst=/data portainer/portainer -H unix:///var/run/docker.sock
nfgx3xci88rdcdka9j9cowv8g
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged
Log in to Portainer
Open a browser and visit http://master_IP:9000 to access the Portainer home page
Run a service (master node)
Deploy a Service that runs the httpd image
# docker service create --name web_server httpd
List the Services currently in the Swarm
# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
2g18082sfqa9 web_server replicated 1/1 httpd:latest
REPLICAS shows the replica state: 1/1 means the web_server Service wants 1 container replica and 1 is currently running, so the Service is fully deployed
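The convergence rule just described (current replica count equal to desired count) can be sketched in plain shell; the REPLICAS string below is a hypothetical value in the `docker service ls` format:

```shell
# Hypothetical REPLICAS column value as printed by `docker service ls`
replicas="1/1"
current=${replicas%/*}   # running replicas (text before the slash)
desired=${replicas#*/}   # desired replicas (text after the slash)
if [ "$current" -eq "$desired" ]; then
  echo "service converged"
else
  echo "service still converging ($current/$desired)"
fi
```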
Check the state of each replica of the Service
# docker service ps web_server
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
4vtrynwddd7m web_server.1 httpd:latest node Running Running 27 minutes ago
The Service's only replica was scheduled to node, and its current state is Running
Scale the service (master node)
Increase the replica count to 5
# docker service scale web_server=5
web_server scaled to 5
overall progress: 5 out of 5 tasks
1/5: running
2/5: running
3/5: running
4/5: running
5/5: running
verify: Service converged
View detailed replica information
# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
2g18082sfqa9 web_server replicated 5/5 httpd:latest
# docker service ps web_server
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
4vtrynwddd7m web_server.1 httpd:latest node Running Running 36 minutes ago
n3iscmvv9fh5 web_server.2 httpd:latest master Running Running about a minute ago
mur6cc8k6x7e web_server.3 httpd:latest node Running Running 3 minutes ago
rx52najc1txw web_server.4 httpd:latest master Running Running about a minute ago
jl0xjv427goz web_server.5 httpd:latest node Running Running 3 minutes ago
Reduce the replica count
# docker service scale web_server=2
web_server scaled to 2
overall progress: 2 out of 2 tasks
1/2: running
2/2: running
verify: Service converged
# docker service ps web_server
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
4vtrynwddd7m web_server.1 httpd:latest node Running Running 40 minutes ago
n3iscmvv9fh5 web_server.2 httpd:latest master Running Running 5 minutes ago
Access the service (master node)
Check the container's network configuration
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cde0d3489429 httpd:latest "httpd-foreground" 9 minutes ago Up 9 minutes 80/tcp web_server.2.n3iscmvv9fh590fx452ezu9hu
Expose the Service externally
# docker service update --publish-add 8080:80 web_server
web_server
overall progress: 2 out of 2 tasks
1/2: running
2/2: running
verify: Service converged
Persist service data (master node)
In the NFS shared-storage volume model, data is synchronized from the manager host to the worker hosts, and from the worker hosts into the containers
Install the NFS server, edit the NFS main configuration file, add permissions, and start the service
# yum install -y nfs-utils
Export the directory to the relevant network range and grant read-write access
# vi /etc/exports
/root/share 192.168.7.10/24(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
Create the shared directory and set its permissions
# mkdir -p /root/share
# chmod 777 /root/share
# exportfs -rv
exporting 192.168.7.10/24:/root/share
Start the RPC service and enable it at boot
# systemctl start rpcbind
# systemctl enable rpcbind
Start the NFS service and enable it at boot
# systemctl start nfs
# systemctl enable nfs
Check that the NFS export succeeded
# cat /var/lib/nfs/etab
/root/share 192.168.7.10/24(rw,async,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=1000,anongid=1000,sec=sys,rw,insecure,no_root_squash,no_all_squash)
Install the NFS client and start the services (node node)
# yum install nfs-utils -y
# systemctl start rpcbind
# systemctl enable rpcbind
# systemctl start nfs
# systemctl enable nfs
Create a Docker volume (both nodes)
# docker volume create --driver local --opt type=nfs --opt o=addr=192.168.7.10,rw --opt device=:/root/share foo33
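The --opt flags map onto an NFS mount: roughly, the local driver performs the equivalent of the mount command assembled below. This sketch only builds and prints the command string; it does not execute any mount:

```shell
# Assemble the mount command the local NFS volume driver roughly corresponds to
addr="192.168.7.10"
device=":/root/share"
mountpoint="/var/lib/docker/volumes/foo33/_data"
echo "mount -t nfs -o addr=${addr},rw ${device} ${mountpoint}"
```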
List the volumes
# docker volume ls
DRIVER VOLUME NAME
local foo33
local nfs-test
local portainer_data
View detailed volume information
# docker volume inspect foo33
[
{
"CreatedAt": "2020-5-31T07:36:47Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/foo33/_data",
"Name": "foo33",
"Options": {
"device": ":/root/share",
"o": "addr=192.168.7.10,rw",
"type": "nfs"
},
"Scope": "local"
}
]
Create and publish a service (master node)
# docker service create --name test-nginx-nfs --publish 80:80 --mount type=volume,source=foo33,destination=/app/share --replicas 3 nginx
otp60kfc3br7fz5tw4fymhtcy
overall progress: 3 out of 3 tasks
1/3: running
2/3: running
3/3: running
verify: Service converged
Check which nodes the service replicas were scheduled to
# docker service ps test-nginx-nfs
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
z661rc7h8rrn test-nginx-nfs.1 nginx:latest node Running Running about a minute ago
j2b9clk37kuc test-nginx-nfs.2 nginx:latest node Running Running about a minute ago
nqduca4andz0 test-nginx-nfs.3 nginx:latest master Running Running about a minute ago
Create an index.html file
# cd /root/share/
# touch index.html
# ll
total 0
-rw-r--r-- 1 root root 0 Oct 31 07:44 index.html
Check the mount directory on the host
# docker volume inspect foo33
[
{
"CreatedAt": "2020-5-31T07:44:49Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/foo33/_data",
"Name": "foo33",
"Options": {
"device": ":/root/share",
"o": "addr=192.168.7.10,rw",
"type": "nfs"
},
"Scope": "local"
}
]
# ls /var/lib/docker/volumes/foo33/_data
index.html
Check the directory inside the container
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a1bce967830e nginx:latest "nginx -g 'daemon of…" 6 minutes ago Up 6 minutes 80/tcp test-nginx-nfs.3.nqduca4andz0nsxus11nwd8qt
# docker exec -it a1bce967830e bash
root@a1bce967830e:/# ls app/share/
index.html
Drain a node (master node)
By default the master is also a worker node, so replicas run on the master as well. To stop running Service replicas on the master:
# docker node update --availability drain master
master
Check the current state of each node
# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
jit2j1itocmsynhecj905vfwp * master Ready Drain Leader 18.09.6
8mww97xnbfxfrbzqndplxv3vi node Ready Active 18.09.6
# docker service ps test-nginx-nfs
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
z661rc7h8rrn test-nginx-nfs.1 nginx:latest node Running Running 10 minutes ago
j2b9clk37kuc test-nginx-nfs.2 nginx:latest node Running Running 10 minutes ago
rawt8mtsstwd test-nginx-nfs.3 nginx:latest node Running Running 30 seconds ago
nqduca4andz0 \_ test-nginx-nfs.3 nginx:latest master Shutdown Shutdown 32 seconds ago
The replica test-nginx-nfs.3 on master has been shut down; to keep the target of 3 replicas, a new replica test-nginx-nfs.3 was started on node
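Counting running replicas per node, as read off the `docker service ps` output above, can be sketched with awk over a sample of that output (the rows are abbreviated to six columns and the task IDs are taken from the example above):

```shell
# Columns: ID NAME IMAGE NODE DESIRED-STATE CURRENT-STATE
cat <<'EOF' | awk '$5 == "Running" { c[$4]++ } END { printf "node=%d master=%d\n", c["node"], c["master"] }'
z661rc7h8rrn test-nginx-nfs.1 nginx:latest node Running Running
j2b9clk37kuc test-nginx-nfs.2 nginx:latest node Running Running
rawt8mtsstwd test-nginx-nfs.3 nginx:latest node Running Running
nqduca4andz0 test-nginx-nfs.3 nginx:latest master Shutdown Shutdown
EOF
```

With the master drained, all three running replicas land on node, matching the output above.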
Deploying a Native Kubernetes Cloud Platform
Deployment architecture
Kubernetes (K8S) is an open-source container cluster management system that automates the deployment, scaling, and maintenance of container clusters. It is both a container orchestration tool and a leading distributed-architecture solution built on container technology. On top of Docker, it provides deployment and operation, resource scheduling, service discovery, and dynamic scaling for containerized applications, greatly simplifying the management of large container clusters
Node plan
Prepare two virtual machines, one master node and one node (worker); install CentOS 7.2.1511 on all nodes and configure NICs and hostnames
Basic environment configuration
Configure the YUM repository (both nodes)
Upload the provided archive K8S.tar.gz to /root and extract it
# tar -zxvf K8S.tar.gz
Configure a local YUM repository
# cat /etc/yum.repos.d/local.repo
[kubernetes]
name=kubernetes
baseurl=file:///root/Kubernetes
gpgcheck=0
enabled=1
Upgrade the system kernel (both nodes)
# yum upgrade -y
Configure host mappings (both nodes)
Edit /etc/hosts
# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.7.10 master
192.168.7.20 node
Configure the firewall (both nodes)
Configure the firewall and SELinux
# systemctl stop firewalld && systemctl disable firewalld
# iptables -F
# iptables -X
# iptables -Z
# /usr/sbin/iptables-save
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# reboot
Disable swap (both nodes)
# swapoff -a
# sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
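The sed expression above escapes every `/` in the path; using a different delimiter such as `|` does the same job more readably. A sketch against a sample fstab line (the line itself is illustrative):

```shell
# Comment out the swap entry; '|' as the sed delimiter avoids escaping '/'
echo "/dev/mapper/centos-swap swap swap defaults 0 0" | sed 's|^/dev/mapper/centos-swap|#&|'
```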
Configure time synchronization
Install the service (both nodes)
Install the chrony service
# yum install -y chrony
master node
Edit /etc/chrony.conf: comment out the default NTP servers, configure master as a local time source, and allow other nodes to synchronize time
# sed -i 's/^server/#&/' /etc/chrony.conf
# vi /etc/chrony.conf
local stratum 10
server master iburst
allow all
Restart chronyd, enable it at boot, and turn on network time synchronization
# systemctl enable chronyd && systemctl restart chronyd
# timedatectl set-ntp true
node node
Edit /etc/chrony.conf to use the internal master node as the upstream NTP server, then restart the service and enable it at boot
# sed -i 's/^server/#&/' /etc/chrony.conf
# echo server 192.168.7.10 iburst >> /etc/chrony.conf
# systemctl enable chronyd && systemctl restart chronyd
Run the check (both nodes)
Run chronyc sources; if the output contains a line beginning with "^*", synchronization has succeeded
# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
==================================================================
^* master 10 6 77 7 +13ns[-2644ns] +/- 13us
Configure IP forwarding (both nodes)
# vi /etc/sysctl.d/K8S.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/K8S.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Configure IPVS (both nodes)
# vi /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
# chmod 755 /etc/sysconfig/modules/ipvs.modules
# bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Check that the required kernel modules loaded correctly
# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4 15053 0
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145497 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 139224 2 ip_vs,nf_conntrack_ipv4
libcrc32c 12644 3 xfs,ip_vs,nf_conntrack
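The module script above can equally be written as a loop over the module names; this sketch only echoes the modprobe commands instead of loading modules, so it is safe to run anywhere:

```shell
# Loop form of /etc/sysconfig/modules/ipvs.modules (echo instead of modprobe)
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  echo "modprobe -- $mod"
done
```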
Install the ipset packages
# yum install -y ipset ipvsadm
Install Docker (both nodes)
Install Docker, start the Docker engine, and enable it at boot
# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum install docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io -y
# mkdir -p /etc/docker
# vi /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
# systemctl daemon-reload
# systemctl restart docker
# systemctl enable docker
# docker info |grep Cgroup
Cgroup Driver: systemd
Install a Kubernetes cluster
Install the tools (both nodes)
Kubelet communicates with the rest of the cluster and manages the lifecycle of the Pods and containers on its node. Kubeadm is Kubernetes' automated deployment tool; it lowers the difficulty of deployment and improves efficiency. Kubectl is the command-line management tool for Kubernetes clusters
Install the Kubernetes tools and start kubelet
# yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1
# systemctl enable kubelet && systemctl start kubelet
## It is normal for kubelet to fail to start at this point; it will start successfully after initialization
Initialize the cluster (master node)
# ./kubernetes_base.sh
# kubeadm init --apiserver-advertise-address 192.168.7.10 --kubernetes-version="v1.14.1" --pod-network-cidr=10.16.0.0/16 --image-repository=registry.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.7.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.7.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.7.10 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 25.502670 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: i9k9ou.ujf3blolfnet221b
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.7.10:6443 --token i9k9ou.ujf3blolfnet221b \
--discovery-token-ca-cert-hash sha256:a0402e0899cf798b72adfe9d29ae2e9c20d5c62e06a6cc6e46c93371436919dc
The initialization runs through the following 15 steps; each phase of the output is prefixed with the [step name]:
①[init]: initialize with the specified version.
②[preflight]: run pre-initialization checks and download the required Docker images.
③[kubelet-start]: generate the kubelet configuration file /var/lib/kubelet/config.yaml; without this file the kubelet cannot start, so before initialization the kubelet actually fails to start.
④[certificates]: generate the certificates used by Kubernetes, stored under /etc/kubernetes/pki.
⑤[kubeconfig]: generate the KubeConfig files, stored under /etc/kubernetes; components use them to communicate with each other.
⑥[control-plane]: install the master components from the YAML files under /etc/kubernetes/manifests.
⑦[etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.
⑧[wait-control-plane]: wait for the master components deployed by control-plane to start.
⑨[apiclient]: check the status of the master component services.
⑩[uploadconfig]: upload the configuration.
⑪[kubelet]: configure the kubelet using a ConfigMap.
⑫[patchnode]: record CNI information on the Node in the form of annotations.
⑬[mark-control-plane]: label the current node with the master role and taint it NoSchedule, so the master node is not used to run Pods by default.
⑭[bootstrap-token]: record the generated token; it is needed later when adding nodes to the cluster with kubeadm join.
⑮[addons]: install the add-ons CoreDNS and kube-proxy.
The last lines of the output are the command used by other nodes to join the cluster.
By default kubectl looks for a config file in the .kube directory under the home directory of the user running it; configure the kubectl tool:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
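For root, an alternative noted in kubeadm's own output is to point kubectl straight at admin.conf via an environment variable instead of copying the file:

```shell
# Point kubectl at the admin kubeconfig directly (root only; add the line to
# ~/.bash_profile to make it persist across logins).
export KUBECONFIG=/etc/kubernetes/admin.conf
```

Copying to $HOME/.kube/config remains the better choice for non-root users, since admin.conf itself is readable only by root.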
Check the cluster status
# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
Configuring the network (master node)
Deploy the flannel network
# kubectl apply -f yaml/kube-flannel.yml
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-8686dcc4fd-v88br 0/1 Running 0 4m42s
coredns-8686dcc4fd-xf28r 0/1 Running 0 4m42s
etcd-master 1/1 Running 0 3m51s
kube-apiserver-master 1/1 Running 0 3m46s
kube-controller-manager-master 1/1 Running 0 3m48s
kube-flannel-ds-amd64-6hf4w 1/1 Running 0 24s
kube-proxy-r7njz 1/1 Running 0 4m42s
kube-scheduler-master 1/1 Running 0 3m37s
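coredns stays 0/1 until the Pod network is up. Rather than re-running `kubectl get pods` by hand, a small retry helper (our own sketch, not part of the original scripts) can wait for everything in kube-system to become Ready:

```shell
# retry <attempts> <delay-seconds> <command...>: rerun a command until it
# succeeds or the attempt budget is exhausted.
retry() {
  attempts=$1; delay=$2; shift 2
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$attempts" ]; then return 1; fi
    sleep "$delay"
  done
}
# On the master, wait up to ~5 minutes for kube-system to settle:
#   retry 30 10 kubectl wait --for=condition=Ready pods --all -n kube-system
```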
Joining the cluster
Run the base environment script on the node node:
# ./kubernetes_base.sh
Then, on the node node, use the kubeadm join command to join it to the cluster
# kubeadm join 192.168.7.10:6443 --token qf4lef.d83xqvv00l1zces9 \
    --discovery-token-ca-cert-hash sha256:ec7c7db41a13958891222b2605065564999d124b43c8b02a3b32a6b2ca1a1c6c
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check the status of each node on the master node
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 4m53s v1.14.1
node Ready <none> 13s v1.14.1
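The bootstrap token used above is valid for 24 hours by default. To add another node after it expires, a fresh join command can be printed on the master with `kubeadm token create --print-join-command`; its shape matches the one from the init output (the sketch below reuses the values captured earlier, a regenerated token will differ):

```shell
# Shape of the join command printed by `kubeadm token create
# --print-join-command` (values here are the ones from the init output above).
JOIN_CMD="kubeadm join 192.168.7.10:6443 --token i9k9ou.ujf3blolfnet221b --discovery-token-ca-cert-hash sha256:a0402e0899cf798b72adfe9d29ae2e9c20d5c62e06a6cc6e46c93371436919dc"
echo "$JOIN_CMD"
```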
Installing the Dashboard (master node)
Install the Dashboard
# kubectl create -f yaml/kubernetes-dashboard.yaml
Create the administrator account
# kubectl create -f yaml/dashboard-adminuser.yaml
serviceaccount/kubernetes-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created
Check the status of all Pods
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-8686dcc4fd-8jqzh 1/1 Running 0 11m
coredns-8686dcc4fd-dkbhw 1/1 Running 0 11m
etcd-master 1/1 Running 0 11m
kube-apiserver-master 1/1 Running 0 11m
kube-controller-manager-master 1/1 Running 0 11m
kube-flannel-ds-amd64-49ssg 1/1 Running 0 7m56s
kube-flannel-ds-amd64-rt5j8 1/1 Running 0 7m56s
kube-proxy-frz2q 1/1 Running 0 11m
kube-proxy-xzq4t 1/1 Running 0 11m
kube-scheduler-master 1/1 Running 0 11m
kubernetes-dashboard-5f7b999d65-djgxj 1/1 Running 0 11m
Check the exposed port
# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 15m
kubernetes-dashboard NodePort 10.102.195.101 <none> 443:30000/TCP 4m43s
Enter the address https://192.168.7.10:30000 in a browser to access the Kubernetes Dashboard.
Click "Accept the Risk and Continue" to reach the Kubernetes Dashboard authentication page.
Obtain the authentication token for the Dashboard
# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin-token | awk '{print $1}')
Name: kubernetes-dashboard-admin-token-j5dvd
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard-admin
kubernetes.io/service-account.uid: 1671a1e1-cbb9-11e9-8009-ac1f6b169b00
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1qNWR2ZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjE2NzFhMWUxLWNiYjktMTFlOS04MDA5LWFjMWY2YjE2OWIwMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.u6ZaVO-WR632jpFimnXTk5O376IrZCCReVnu2Brd8QqsM7qgZNTHD191Zdem46ummglbnDF9Mz4wQBaCUeMgG0DqCAh1qhwQfV6gVLVFDjHZ2tu5yn0bSmm83nttgwMlOoFeMLUKUBkNJLttz7-aDhydrbJtYU94iG75XmrOwcVglaW1qpxMtl6UMj4-bzdMLeOCGRQBSpGXmms4CP3LkRKXCknHhpv-pqzynZu1dzNKCuZIo_vv-kO7bpVvi5J8nTdGkGTq3FqG6oaQIO-BPM6lMWFeLEUkwe-EOVcg464L1i6HVsooCESNfTBHjjLXZ0WxXeOOslyoZE7pFzA0qg
Enter the obtained token in the browser; after authentication, the Kubernetes console opens.
Configuring Kuboard (master node)
Kuboard is a free graphical management tool for Kubernetes that aims to help users quickly deploy microservices on Kubernetes.
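The long `describe` pipeline above can be shortened: secret data is stored base64-encoded, so the token can also be pulled with a jsonpath query and decoded. The helper below is our own sketch; the kubectl commands in the comments assume the same secret-name lookup used earlier.

```shell
# Decode a base64-encoded secret field (kubectl stores .data values
# base64-encoded).
decode_secret_field() {
  printf '%s' "$1" | base64 --decode
}
# On the master:
#   SECRET=$(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin-token | awk '{print $1}')
#   decode_secret_field "$(kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}')"
```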
# kubectl create -f yaml/kuboard.yaml
deployment.apps/kuboard created
service/kuboard created
serviceaccount/kuboard-user created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-user created
serviceaccount/kuboard-viewer created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-viewer created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-viewer-node created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-viewer-pvp created
ingress.extensions/kuboard created
Enter the address http://192.168.7.10:31000 in a browser to reach the Kuboard authentication page.
Enter the token in the Token field to access the Kuboard console.
Configuring the Kubernetes Cluster
Enabling IPVS (master node)
IPVS is layer-4 (IP + port) load-balancing software.
Starting from the TCP SYN packet, IPVS tracks the state of every packet of a TCP connection, guaranteeing that all packets of one connection are forwarded to the same backend.
IPVS has the following four working modes, depending on how request and response packets are handled:
①NAT mode
②DR (Direct Routing) mode
③TUN (IP Tunneling) mode
④FULLNAT mode
Depending on the return path of response packets, there are two further modes:
①Two-arm mode: requests, forwarding, and responses travel the same path; the client and the IPVS director, and the IPVS director and the backend real servers, are each connected by both a request path and a return path.
②Triangle mode: the request, forwarding, and return paths connect the client, the IPVS director, and the backend real servers into a triangle.
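Before switching kube-proxy to ipvs mode, the IPVS kernel modules must be loadable on every node. The module names below are the usual ones for CentOS 7's 3.10 kernel (on kernels 4.19 and later, `nf_conntrack` replaces `nf_conntrack_ipv4`); this is a sketch to run as root:

```shell
# Load the kernel modules required by the ipvs proxier; report any that fail.
IPVS_MODULES="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4"
for mod in $IPVS_MODULES; do
  modprobe "$mod" 2>/dev/null || echo "could not load $mod"
done
# Verify with: lsmod | grep '^ip_vs'
```

If these modules are missing, kube-proxy silently falls back to iptables mode, so it is worth checking before the ConfigMap edit below.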
Edit the config.conf file in the kube-system/kube-proxy ConfigMap and set mode: "ipvs"
# kubectl edit cm kube-proxy -n kube-system
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"          # change this line
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms
Restart kube-proxy (master node)
# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-bd68w" deleted
pod "kube-proxy-qq54f" deleted
pod "kube-proxy-z9rp4" deleted
Check the logs
# kubectl logs kube-proxy-9zv5x -n kube-system
I1004 07:11:17.538141 1 server_others.go:177] Using ipvs Proxier. # IPVS is in use
W1004 07:11:17.538589 1 proxier.go:381] IPVS scheduler not specified, use rr by default
I1004 07:11:17.540108 1 server.go:555] Version: v1.14.1
I1004 07:11:17.555484 1 conntrack.go:52] Setting nf_conntrack_max to 524288
I1004 07:11:17.555827 1 config.go:102] Starting endpoints config controller
I1004 07:11:17.555899 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1004 07:11:17.555927 1 config.go:202] Starting service config controller
I1004 07:11:17.555965 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1004 07:11:17.656090 1 controller_utils.go:1034] Caches are synced for service config controller
I1004 07:11:17.656091 1 controller_utils.go:1034] Caches are synced for endpoints config controller
The log line "Using ipvs Proxier" confirms that IPVS mode is enabled.
Testing IPVS (master node)
# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.17.0.1:30099 rr
TCP 172.17.0.1:30188 rr
TCP 172.17.0.1:30301 rr
TCP 172.17.0.1:31000 rr
Scheduling (master node)
Check the default value of the Taints field
# kubectl describe node master
……
CreationTimestamp: Fri, 04 Oct 2020 06:02:45 +0000
Taints: node-role.kubernetes.io/master:NoSchedule //the taint is NoSchedule
Unschedulable: false
……
To use the Kubernetes master as a Node as well, run the following command
# kubectl taint node master node-role.kubernetes.io/master-
node/master untainted
# kubectl describe node master
……
CreationTimestamp: Fri, 04 Oct 2020 06:02:45 +0000
Taints: <none> //the taint has been removed
Unschedulable: false
……
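Should the master later need to stop scheduling ordinary Pods again, the taint can be re-applied; the sketch below simply reverses the removal above (note the `=` before `:NoSchedule` when adding a taint):

```shell
# Re-apply the default NoSchedule taint on the master.
TAINT="node-role.kubernetes.io/master=:NoSchedule"
echo "kubectl taint node master $TAINT"   # run this printed command on the master
```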