I. Kubernetes components:
Component | Description |
master node | The Master node exposes the set of APIs for managing the cluster and carries out cluster operations by interacting with the Minion nodes. |
minion node | A Minion node is where the Docker containers actually run; it interacts with the Docker daemon on the node and provides the proxy function. |
etcd | A key-value store database used to hold Kubernetes cluster state. |
apiserver | The entry point through which users interact with the Kubernetes cluster. It wraps the create/read/update/delete operations on the core objects, exposes a RESTful API, and uses etcd for persistence and for maintaining object consistency. |
scheduler | Responsible for scheduling and managing cluster resources. For example, when a pod exits abnormally and must be re-placed, the scheduler uses its scheduling algorithm to find the most suitable node. |
controller-manager | Mainly ensures that the replica count defined by a replicationController matches the number of pods actually running, and keeps the service-to-pod mapping up to date. |
kubelet | Runs on the minion nodes and interacts with the Docker daemon on the node, e.g. starting and stopping containers and monitoring their running state. |
proxy | Runs on the minion nodes and provides the proxy function for pods. It periodically fetches service information from etcd and rewrites iptables rules accordingly, forwarding traffic to the node hosting the target pod (the earliest versions forwarded traffic in the proxy process itself, which was less efficient). |
flannel | Flannel re-plans IP address allocation across all nodes in the cluster, so that containers on different nodes receive non-overlapping IP addresses within the same private network and can communicate with each other directly over those private IPs. |
II. Host configuration:
Role | OS | IP | Components |
master | CentOS-7 | 192.168.10.5 | docker, etcd, api-server, scheduler, controller-manager, flannel, harbor |
node-1 | CentOS-7 | 192.168.10.8 | docker、etcd、kubelet、proxy、flannel |
node-2 | CentOS-7 | 192.168.10.9 | docker、etcd、kubelet、proxy、flannel |
III. Installing, deploying, and configuring the Kubernetes cluster
Operations on the master host:
1. Install etcd
Install via yum:
[root@master ~]# yum -y install etcd
[root@master ~]# rpm -ql etcd
/etc/etcd
/etc/etcd/etcd.conf
/usr/bin/etcd
/usr/bin/etcdctl
/usr/lib/systemd/system/etcd.service
Edit the configuration:
[root@master ~]# egrep -v "^#|^$" /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.10.5:2379"
Start the service and check the listening ports:
[root@master ~]# systemctl start etcd
[root@master ~]# netstat -tunlp|grep etcd
tcp   0  0 127.0.0.1:2380  0.0.0.0:*  LISTEN  20919/etcd
tcp6  0  0 :::2379         :::*       LISTEN  20919/etcd
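Optionally, a quick sanity check with the etcd v2 etcdctl shipped in this RPM (an extra step, not strictly required by the walkthrough):
[root@master ~]# etcdctl cluster-health    # every member should report "healthy"
[root@master ~]# etcdctl ls /              # list the top-level keys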
Setting the etcd network for flannel is covered later, in section IV. For an etcd cluster (high availability) setup, see: https://www.cnblogs.com/51wansheng/p/10234036.html
2. Configure the api-server service
On the master host, only the master package is installed:
[root@master kubernetes]# yum -y install kubernetes-master
[root@master kubernetes]# tree /etc/kubernetes/
/etc/kubernetes/
├── apiserver
├── config
├── controller-manager
└── scheduler
0 directories, 4 files
Edit the apiserver configuration:
[root@master kubernetes]# egrep -v "^#|^$" /etc/kubernetes/apiserver
# Address the API service listens on
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# Port the API service listens on
KUBE_API_PORT="--port=8080"
# Etcd service address
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.10.5:2379"
# Service cluster IP range (service/DNS cluster IPs are allocated from this range)
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Admission control for API requests; AlwaysAdmit imposes no restrictions and lets every node access the apiserver
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"
KUBE_API_ARGS=""
Start the service:
[root@master kubernetes]# systemctl start kube-apiserver
[root@master kubernetes]# netstat -tunlp|grep apiserve
tcp6  0  0 :::6443  :::*  LISTEN  21042/kube-apiserve
tcp6  0  0 :::8080  :::*  LISTEN  21042/kube-apiserve
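As a quick check that the insecure port answers HTTP (an optional extra, assuming curl is available on the host), the /version endpoint should return a JSON blob describing the build:
[root@master kubernetes]# curl http://192.168.10.5:8080/version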
3. Configure the scheduler service
[root@master kubernetes]# egrep -v "^#|^$" /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS=""
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=192.168.10.5:8080"
KUBE_LEADER_ELECT="--leader-elect"
Start the service:
[root@master kubernetes]# systemctl start kube-scheduler
[root@master kubernetes]# netstat -tunlp|grep kube-sche
tcp6  0  0 :::10251  :::*  LISTEN  21078/kube-schedule
[root@master kubernetes]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler Plugin
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-01-09 19:20:50 CST; 5s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 21078 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─21078 /usr/bin/kube-scheduler --logtostderr=true --v=4 --master=192.168.10.5:8080
4. Configure the controller-manager service
[root@master kubernetes]# egrep -v "^#|^$" /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=192.168.10.5:8080"
Start the service:
[root@master kubernetes]# systemctl start kube-controller-manager
[root@master kubernetes]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-01-09 19:26:38 CST; 8s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 21104 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─21104 /usr/bin/kube-controller-manager --logtostderr=true --v=4 --master=192.168.10.5:8080
[root@master kubernetes]# netstat -tunlp|grep kube-controll
tcp6  0  0 :::10252  :::*  LISTEN  21104/kube-controll
That completes the master-side configuration. The services listening so far:
[root@master kubernetes]# netstat -tunlp|grep etc
tcp   0  0 127.0.0.1:2380  0.0.0.0:*  LISTEN  20919/etcd
tcp6  0  0 :::2379         :::*       LISTEN  20919/etcd
[root@master kubernetes]# netstat -tunlp|grep kube
tcp6  0  0 :::10251  :::*  LISTEN  21078/kube-schedule
tcp6  0  0 :::6443   :::*  LISTEN  21042/kube-apiserve
tcp6  0  0 :::10252  :::*  LISTEN  21104/kube-controll
tcp6  0  0 :::8080   :::*  LISTEN  21042/kube-apiserve
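Each master component also exposes a /healthz endpoint on the ports shown above; a quick sweep (each request should print "ok"):
[root@master kubernetes]# curl http://127.0.0.1:8080/healthz    # kube-apiserver
[root@master kubernetes]# curl http://127.0.0.1:10251/healthz   # kube-scheduler
[root@master kubernetes]# curl http://127.0.0.1:10252/healthz   # kube-controller-manager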
Operations on the node servers
Run the following on both node-1 and node-2:
[root@node-1 kubernetes]# yum -y install kubernetes-node
5. Edit the kubelet configuration:
[root@node-1 kubernetes]# egrep -v "^#|^$" /etc/kubernetes/config
# Log to standard error
KUBE_LOGTOSTDERR="--logtostderr=true"
# Log level
KUBE_LOG_LEVEL="--v=0"
# Whether containers may request privileged mode; defaults to false
KUBE_ALLOW_PRIV="--allow-privileged=false"
# Master node address
KUBE_MASTER="--master=http://192.168.10.5:8080"
[root@node-1 kubernetes]# egrep -v "^#|^$" /etc/kubernetes/kubelet
# Address the kubelet listens on
KUBELET_ADDRESS="--address=0.0.0.0"
# Port the kubelet listens on
KUBELET_PORT="--port=10250"
# Hostname override for this kubelet (use 192.168.10.9 on node-2)
KUBELET_HOSTNAME="--hostname-override=192.168.10.8"
# apiserver address
KUBELET_API_SERVER="--api-servers=http://192.168.10.5:8080"
# Default pod-infrastructure (pause) container image
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
Start the kubelet service:
[root@node-1 kubernetes]# systemctl start kubelet
[root@node-1 kubernetes]# netstat -tunlp|grep kube
tcp   0  0 127.0.0.1:10248  0.0.0.0:*  LISTEN  23883/kubelet
tcp6  0  0 :::10250         :::*       LISTEN  23883/kubelet
tcp6  0  0 :::10255         :::*       LISTEN  23883/kubelet
tcp6  0  0 :::4194          :::*       LISTEN  23883/kubelet
[root@node-1 kubernetes]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-01-09 20:58:55 CST; 37min ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 23883 (kubelet)
   Memory: 31.6M
   CGroup: /system.slice/kubelet.service
           ├─23883 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://192.168.10.5:8080 --address=0.0.0.0 --port=10250 --hostname-override=192.168.10.8 --allow-...
           └─23927 journalctl -k -f
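The kubelet likewise exposes health and read-only HTTP endpoints on the ports visible in the netstat output above (10248 and 10255 by default); a minimal probe, assuming those defaults:
[root@node-1 kubernetes]# curl http://127.0.0.1:10248/healthz   # should print "ok"
[root@node-1 kubernetes]# curl http://127.0.0.1:10255/pods      # JSON list of pods on this node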
6. Edit the proxy configuration:
[root@node-1 kubernetes]# egrep -v "^#|^$" /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""
NODE_HOSTNAME="--hostname-override=192.168.10.8"
Start the proxy service:
[root@node-1 kubernetes]# systemctl start kube-proxy
[root@node-1 kubernetes]# netstat -tunlp|grep kube-pro
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 24544/kube-proxy
[root@node-1 kubernetes]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-01-09 21:48:41 CST; 5s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 24544 (kube-proxy)
   Memory: 12.0M
   CGroup: /system.slice/kube-proxy.service
           └─24544 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.10.5:8080
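kube-proxy implements service forwarding by programming iptables NAT rules; once services exist, a rough way to confirm it has installed its chains (chain names can vary with version and proxy mode):
[root@node-1 kubernetes]# iptables -t nat -nL | grep KUBE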
Check the cluster nodes (run on the master):
[root@master kubernetes]# kubectl get node
NAME           STATUS    AGE
192.168.10.8   Ready     59m
192.168.10.9   Ready     51m
[root@master kubernetes]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
etcd-0               Healthy   {"health": "true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok
IV. Installing and configuring the flanneld service
Run on all three machines (the docker0 interface needs to obtain its IP address range from flannel):
Install flannel:
[root@master kubernetes]# yum -y install flannel
Flannel's configuration file and binaries:
[root@master kubernetes]# rpm -ql flannel
/etc/sysconfig/flanneld
/run/flannel
/usr/bin/flanneld
/usr/bin/flanneld-start
/usr/lib/systemd/system/docker.service.d/flannel.conf
/usr/lib/systemd/system/flanneld.service
Configure the flannel network in etcd (on the etcd server):
# Create a directory /k8s/network to hold the flannel network information
[root@master kubernetes]# etcdctl mkdir /k8s/network
# Assign /k8s/network/config the string value '{"Network": "10.255.0.0/16"}'
[root@master kubernetes]# etcdctl set /k8s/network/config '{"Network": "10.255.0.0/16"}'
{"Network": "10.255.0.0/16"}
# Check the value just set
[root@master kubernetes]# etcdctl get /k8s/network/config
{"Network": "10.255.0.0/16"}
Notes on the flannel startup process:
(1) Fetch the network configuration from etcd.
(2) Carve out a subnet and register it in etcd.
(3) Record the subnet information in /run/flannel/subnet.env.
Edit the flanneld configuration file:
[root@node-1 kubernetes]# egrep -v "^#|^$" /etc/sysconfig/flanneld
# etcd service address
FLANNEL_ETCD_ENDPOINTS="http://192.168.10.5:2379"
# Note: /k8s/network must match the etcd prefix under which the Network key was set above
FLANNEL_ETCD_PREFIX="/k8s/network"
# Physical NIC to use
FLANNEL_OPTIONS="--iface=ens33"
Start the flanneld service:
[root@master kubernetes]# systemctl start flanneld
[root@master kubernetes]# netstat -tunlp|grep flan
udp   0  0 192.168.10.5:8285  0.0.0.0:*  21369/flanneld
Check flannel's subnet information in /run/flannel/subnet.env (written while flannel is running):
[root@master kubernetes]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.255.0.0/16
FLANNEL_SUBNET=10.255.75.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
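Each node's allocated subnet is also registered back into etcd under the configured prefix; assuming the /k8s/network prefix set earlier, the v2 etcdctl can list them (the key name in the second command is only an example of the form flannel uses):
[root@master kubernetes]# etcdctl ls /k8s/network/subnets
[root@master kubernetes]# etcdctl get /k8s/network/subnets/10.255.75.0-24   # example key name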
A helper script then converts subnet.env into a Docker environment-variable file, /run/flannel/docker. The docker0 address is determined by the FLANNEL_SUBNET parameter in /run/flannel/subnet.env:
[root@master kubernetes]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=10.255.75.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=10.255.75.1/24 --ip-masq=true --mtu=1472"
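Docker picks these options up through the systemd drop-in installed by the flannel RPM (seen in the rpm -ql listing above). As a sketch of how the pieces connect, the drop-in essentially just loads that environment file; the exact content may differ by package version:
[root@master kubernetes]# cat /usr/lib/systemd/system/docker.service.d/flannel.conf
[Service]
EnvironmentFile=-/run/flannel/docker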
Check the flannel interface information:
Note: before starting flannel, a network configuration record must be added to etcd (as done above); flannel uses it to assign a virtual IP segment to Docker on each minion.
Because flannel takes over the docker0 bridge, the flanneld service must start before the docker service. If docker is already running, stop docker first, start flanneld, and then start docker again.
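In practice that ordering boils down to the following sequence on each machine (a minimal sketch; skip the stop if docker was not yet running):
systemctl stop docker
systemctl start flanneld
systemctl start docker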
[root@master kubernetes]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.255.75.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:fe:16:d0:a1  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.5  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::a6a4:698e:10d6:69cf  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:53:a7:50  txqueuelen 1000  (Ethernet)
        RX packets 162601  bytes 211261546 (201.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 28856  bytes 10747031 (10.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.255.75.0  netmask 255.255.0.0  destination 10.255.75.0
        inet6 fe80::bf21:8888:cfcc:f153  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 472702  bytes 193496275 (184.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 472702  bytes 193496275 (184.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
As shown above, the docker0 interface has obtained its subnet IP address via flannel.
V. Testing container network connectivity across the cluster
Start a container on any two of the nodes.
Run a container on node-1:
[root@node-1 kubernetes]# docker pull ubuntu
# Start a container
[root@node-1 ~]# docker run -it --name test-docker ubuntu
# Check the container's IP address
[root@node-1 kubernetes]# docker inspect test-docker|grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "10.255.18.2",
                    "IPAddress": "10.255.18.2",
Run a container on the master node:
[root@master kubernetes]# docker pull ubuntu
[root@master kubernetes]# docker run -it --name test-docker ubuntu
[root@master ~]# docker inspect test-docker|grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "10.255.75.2",
                    "IPAddress": "10.255.75.2",
Install the ping command in both containers:
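The stock ubuntu image does not include ping; a minimal way to add it inside each container, assuming the containers have outbound internet access:
root@38165ea9582b:~# apt-get update && apt-get install -y iputils-ping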
Before running the connectivity test, execute the following on the hosts to open up iptables:
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
iptables -L -n
Ping from the container on the master node:
root@38165ea9582b:~# ping 192.168.10.8    # node-1 physical NIC
PING 192.168.10.8 (192.168.10.8): 56 data bytes
64 bytes from 192.168.10.8: icmp_seq=0 ttl=63 time=1.573 ms
64 bytes from 192.168.10.8: icmp_seq=1 ttl=63 time=0.553 ms
^C--- 192.168.10.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.553/1.063/1.573/0.510 ms
root@38165ea9582b:~# ping 10.255.18.2    # IP of the container running on node-1
PING 10.255.18.2 (10.255.18.2): 56 data bytes
64 bytes from 10.255.18.2: icmp_seq=0 ttl=60 time=1.120 ms
64 bytes from 10.255.18.2: icmp_seq=1 ttl=60 time=1.264 ms
^C--- 10.255.18.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.120/1.192/1.264/0.072 ms
root@38165ea9582b:~# ping 10.255.18.1    # node-1 docker0 interface
PING 10.255.18.1 (10.255.18.1): 56 data bytes
64 bytes from 10.255.18.1: icmp_seq=0 ttl=61 time=1.364 ms
64 bytes from 10.255.18.1: icmp_seq=1 ttl=61 time=0.741 ms
^C--- 10.255.18.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.741/1.052/1.364/0.312 ms
root@38165ea9582b:~# ping 10.255.18.0    # node-1 flannel interface address
PING 10.255.18.0 (10.255.18.0): 56 data bytes
64 bytes from 10.255.18.0: icmp_seq=0 ttl=61 time=1.666 ms
64 bytes from 10.255.18.0: icmp_seq=1 ttl=61 time=0.804 ms
^C--- 10.255.18.0 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.804/1.235/1.666/0.431 ms
That completes the installation of the K8S base environment. Summary:
Services and ports running on the master node:
[root@master ~]# netstat -tunlp|grep "etcd"
tcp   0  0 127.0.0.1:2380  0.0.0.0:*  LISTEN  20919/etcd
tcp6  0  0 :::2379         :::*       LISTEN  20919/etcd
[root@master ~]# netstat -tunlp|grep "kube"
tcp6  0  0 :::10251  :::*  LISTEN  21078/kube-schedule
tcp6  0  0 :::6443   :::*  LISTEN  21042/kube-apiserve
tcp6  0  0 :::10252  :::*  LISTEN  21104/kube-controll
tcp6  0  0 :::8080   :::*  LISTEN  21042/kube-apiserve
[root@master ~]# netstat -tunlp|grep "flannel"
udp   0  0 192.168.10.5:8285  0.0.0.0:*  21369/flanneld
Services and ports running on the node servers:
[root@node-1 ~]# netstat -tunlp|grep "kube"
tcp   0  0 127.0.0.1:10248  0.0.0.0:*  LISTEN  69207/kubelet
tcp   0  0 127.0.0.1:10249  0.0.0.0:*  LISTEN  24544/kube-proxy
tcp6  0  0 :::10250         :::*       LISTEN  69207/kubelet
tcp6  0  0 :::10255         :::*       LISTEN  69207/kubelet
tcp6  0  0 :::4194          :::*       LISTEN  69207/kubelet
[root@node-1 ~]# netstat -tunlp|grep "flannel"
udp   0  0 192.168.10.8:8285  0.0.0.0:*  47254/flanneld
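As a final, optional smoke test (not part of the original steps; assumes the nodes can pull the nginx image), ask the cluster to schedule real workloads and verify they land on the nodes with flannel-assigned IPs:
[root@master ~]# kubectl run test-nginx --image=nginx --replicas=2
[root@master ~]# kubectl get pods -o wide    # both pods should reach Running with 10.255.x.x IPs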