Installing and Configuring the Flannel Network for Kubernetes


1. Download the installation package from:

https://github.com/coreos/flannel/releases

2. Before deploying the Flannel network, install Docker on the nodes first (see the separate article "Yum 安裝Docker" on installing Docker with yum).
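The referenced Docker article is not reproduced here. As a rough sketch only (assuming CentOS 7 and the official docker-ce yum repository; adjust to match whatever the article actually uses), the install on each node looks roughly like this:

[root@dn02 ~]# yum install -y yum-utils
[root@dn02 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@dn02 ~]# yum install -y docker-ce
[root@dn02 ~]# systemctl enable docker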

In addition, write a network configuration into etcd; Flannel uses this range to allocate a different small subnet to each Docker node:

[root@dn01 ~]# /opt/etcd/bin/etcdctl  --ca-file=/root/k8s/etcd-cert/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem  --endpoints="https://10.10.100.30:2379,https://10.10.100.31:2379,https://10.10.100.32:2379" set /coreos.com/network/config  '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

To read back the configured subnet, change "set" to "get" in the command above; the value is stored in the etcd database.

[root@dn01 ~]# /opt/etcd/bin/etcdctl  --ca-file=/root/k8s/etcd-cert/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem  --endpoints="https://10.10.100.30:2379,https://10.10.100.31:2379,https://10.10.100.32:2379" get /coreos.com/network/config 
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
[root@dn01 ~]# 

3. Upload the Flannel package to the node machines (it only needs to be deployed on the nodes) and extract it:

[root@dn02 ~]# tar -zxf flannel-v0.10.0-linux-amd64.tar.gz
[root@dn02 ~]# ls
anaconda-ks.cfg flanneld flannel-v0.10.0-linux-amd64.tar.gz mk-docker-opts.sh README.md

4. Create a deployment directory for Flannel. As a Kubernetes component, it is placed under the k8s deployment directory:

[root@dn02 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p

Move the two files flanneld and mk-docker-opts.sh from the extracted archive into /opt/kubernetes/bin/:

[root@dn02 ~]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

[root@dn02 ~]# ls /opt/kubernetes/bin/
flanneld mk-docker-opts.sh

5. Deployment

Before deploying, the certificates Flannel needs must also be present on every node. In this example each node already holds a copy of the etcd certificates, so Flannel can simply be pointed at those etcd certificates.
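If the etcd certificates are not yet present on a node, they can be copied over from the etcd host first. A sketch only, assuming the certificates sit in /opt/etcd/ssl on dn01 and the nodes use the same path (the path referenced by the flanneld configuration below):

[root@dn01 ~]# scp /opt/etcd/ssl/{ca,server,server-key}.pem root@10.10.100.31:/opt/etcd/ssl/
[root@dn01 ~]# scp /opt/etcd/ssl/{ca,server,server-key}.pem root@10.10.100.32:/opt/etcd/ssl/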

Create the flanneld configuration file; it does not exist by default and has to be created manually:

[root@dn02 ~]# vi /opt/kubernetes/cfg/flanneld 


FLANNEL_OPTIONS="--etcd-endpoints=https://10.10.100.30:2379,https://10.10.100.31:2379,https://10.10.100.32:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

Create a systemd unit for the flanneld service. In this example the unit file is placed under /usr/lib/systemd/system:

[root@dn02 ~]# cat /usr/lib/systemd/system/flanneld.service 
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

[root@dn02 ~]#

Reload the systemd configuration and start flanneld:

[root@dn02 ~]# systemctl daemon-reload
[root@dn02 ~]# systemctl enable flanneld
[root@dn02 ~]# systemctl start flanneld
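To verify that flanneld came up correctly (a check not captured in the original session), look at the service status and the VXLAN interface it creates; a healthy flanneld also writes /run/flannel/subnet.env, which is used in the next step:

[root@dn02 ~]# systemctl status flanneld
[root@dn02 ~]# ip -d link show flannel.1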

Modify the Docker unit file so that Docker's network integrates with Flannel. The two ## annotations below mark the lines that change; systemd does not allow trailing comments, so do not keep them in the actual file:

[root@dn02 ~]# vi /usr/lib/systemd/system/docker.service 

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env  ## add this line so Docker picks up Flannel's network settings
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS ## change this line to pass in the DOCKER_NETWORK_OPTIONS value from that file
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target



Flannel writes its network settings to the EnvironmentFile /run/flannel/subnet.env:
[root@dn02 ~]# cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=172.17.85.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.85.1/24 --ip-masq=false --mtu=1450"   ### the value referenced by ExecStart in the Docker unit file
[root@dn02 ~]#

After the configuration changes, restart Docker and check the host's network interfaces with ifconfig:

[root@dn02 ~]# systemctl daemon-reload
[root@dn02 ~]# systemctl restart docker

[root@dn02 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.85.1 netmask 255.255.255.0 broadcast 172.17.85.255
ether 02:42:22:b1:77:2e txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.10.100.31 netmask 255.255.255.0 broadcast 10.10.100.255
inet6 fe80::389d:e340:ea17:3a30 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:58:c5:7c txqueuelen 1000 (Ethernet)
RX packets 87291 bytes 10835949 (10.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 86850 bytes 10793745 (10.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.85.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::6cdb:36ff:fef4:70f3 prefixlen 64 scopeid 0x20<link>
ether 6e:db:36:f4:70:f3 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 320 bytes 18686 (18.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 320 bytes 18686 (18.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

At this point the Flannel network on dn02 is fully configured. Next, configure dn03 by copying the files over:

[root@dn02 ~]# scp -r /opt/kubernetes/ root@10.10.100.32:/opt/
The authenticity of host '10.10.100.32 (10.10.100.32)' can't be established.
ECDSA key fingerprint is SHA256:pyiZjF3b1phvgSDt3+LU2LbME/tEfDsNOrZJCCZiicg.
ECDSA key fingerprint is MD5:35:c1:58:24:d0:7f:a9:6c:d9:99:68:a2:98:b8:9a:8d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.100.32' (ECDSA) to the list of known hosts.
root@10.10.100.32's password: 
flanneld                                                                            100%  232    72.8KB/s   00:00    
mk-docker-opts.sh                                                                   100% 2139   866.3KB/s   00:00    
flanneld                                                                            100%   35MB  59.9MB/s   00:00    
[root@dn02 ~]# scp -r /usr/lib/systemd/system/{flanneld,docker}.service root@10.10.100.32:/usr/lib/systemd/system
root@10.10.100.32's password: 
flanneld.service                                                                    100%  417   217.6KB/s   00:00    
docker.service                                                                      100% 1693   603.6KB/s   00:00    
[root@dn02 ~]# 


Note: two sets of files are copied: the Flannel files under the installation directory, and the systemd unit files.

Because the configuration files contain no hard-coded IP addresses or host names, Flannel can be started directly on the new node after copying:

[root@dn03 kubernetes]# systemctl daemon-reload
[root@dn03 kubernetes]# systemctl start flanneld
[root@dn03 kubernetes]# systemctl restart docker 

Note: the Flannel service is named flanneld.


Check the running process:
[root@dn03 kubernetes]# ps -ef | grep flanneld
root      20448      1  0 21:45 ?        00:00:00 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://10.10.100.30:2379,https://10.10.100.31:2379,https://10.10.100.32:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      20862  16786  0 21:47 pts/0    00:00:00 grep --color=auto flanneld
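To confirm that every node has registered its subnet, the subnet keys can be listed directly from etcd, using the same certificate options as the earlier set/get commands (output not captured here; one key per node is expected, e.g. /coreos.com/network/subnets/172.17.85.0-24):

[root@dn01 ~]# /opt/etcd/bin/etcdctl  --ca-file=/root/k8s/etcd-cert/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem  --endpoints="https://10.10.100.30:2379,https://10.10.100.31:2379,https://10.10.100.32:2379" ls /coreos.com/network/subnets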

 

Test network connectivity between the two hosts

IP addresses on dn02:

[root@dn02 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.85.1  netmask 255.255.255.0  broadcast 172.17.85.255
        ether 02:42:22:b1:77:2e  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.100.31  netmask 255.255.255.0  broadcast 10.10.100.255
        inet6 fe80::389d:e340:ea17:3a30  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:58:c5:7c  txqueuelen 1000  (Ethernet)
        RX packets 146602  bytes 18254938 (17.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 170054  bytes 85670174 (81.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.85.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::6cdb:36ff:fef4:70f3  prefixlen 64  scopeid 0x20<link>
        ether 6e:db:36:f4:70:f3  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 496  bytes 27966 (27.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 496  bytes 27966 (27.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

IP addresses on dn03:

[root@dn03 kubernetes]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.41.1  netmask 255.255.255.0  broadcast 172.17.41.255
        ether 02:42:f7:5b:56:4a  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.100.32  netmask 255.255.255.0  broadcast 10.10.100.255
        inet6 fe80::1534:7f05:3d6a:9287  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::25e8:8754:cb81:68c8  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::389d:e340:ea17:3a30  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:c8:13:a5  txqueuelen 1000  (Ethernet)
        RX packets 170723  bytes 56263010 (53.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 146659  bytes 18277647 (17.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.41.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::28e1:c9ff:febf:9948  prefixlen 64  scopeid 0x20<link>
        ether 2a:e1:c9:bf:99:48  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 715  bytes 49915 (48.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 715  bytes 49915 (48.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Ping dn03's docker0 address from dn02, and dn02's docker0 address from dn03; both directions are reachable:

[root@dn02 ~]# ping  172.17.41.1
PING 172.17.41.1 (172.17.41.1) 56(84) bytes of data.
64 bytes from 172.17.41.1: icmp_seq=1 ttl=64 time=0.376 ms
64 bytes from 172.17.41.1: icmp_seq=2 ttl=64 time=1.40 ms
64 bytes from 172.17.41.1: icmp_seq=3 ttl=64 time=1.03 ms
^C
--- 172.17.41.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.376/0.940/1.407/0.426 ms


[root@dn03 kubernetes]# ping 172.17.85.1
PING 172.17.85.1 (172.17.85.1) 56(84) bytes of data.
64 bytes from 172.17.85.1: icmp_seq=1 ttl=64 time=0.349 ms
64 bytes from 172.17.85.1: icmp_seq=2 ttl=64 time=0.928 ms
64 bytes from 172.17.85.1: icmp_seq=3 ttl=64 time=1.39 ms
^C
--- 172.17.85.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.349/0.891/1.397/0.429 ms
[root@dn03 kubernetes]# 

Test network connectivity between containers created on the two different hosts

Start a container on dn02; its IP address is 172.17.85.2/24:

[root@dn02 ~]# docker run -it busybox sh
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
7c9d20b9b6cd: Pull complete 
Digest: sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e
Status: Downloaded newer image for busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether 02:42:ac:11:55:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.85.2/24 brd 172.17.85.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

The container's IP address is 172.17.85.2. Ping the container on the other host (172.17.41.2):
/ # ping  172.17.41.2
PING 172.17.41.2 (172.17.41.2): 56 data bytes
64 bytes from 172.17.41.2: seq=0 ttl=62 time=0.494 ms
64 bytes from 172.17.41.2: seq=1 ttl=62 time=1.284 ms
64 bytes from 172.17.41.2: seq=2 ttl=62 time=1.247 ms
^C
--- 172.17.41.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.494/1.008/1.284 ms

Start a container on dn03; its IP address is 172.17.41.2/24:

[root@dn03 kubernetes]# docker run -it busybox sh
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
7c9d20b9b6cd: Pull complete 
Digest: sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e
Status: Downloaded newer image for busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether 02:42:ac:11:29:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.41.2/24 brd 172.17.41.255 scope global eth0
       valid_lft forever preferred_lft forever

This container's IP address is 172.17.41.2. Ping the container on dn02 (172.17.85.2):

/ # 
/ # 
/ # ping 172.17.85.2
PING 172.17.85.2 (172.17.85.2): 56 data bytes
64 bytes from 172.17.85.2: seq=0 ttl=62 time=1.323 ms
64 bytes from 172.17.85.2: seq=1 ttl=62 time=1.359 ms
64 bytes from 172.17.85.2: seq=2 ttl=62 time=1.237 ms
^C
--- 172.17.85.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 1.237/1.306/1.359 ms
/ # 

This confirms that the two containers on different hosts can ping each other.

Likewise, pinging a container on another host from one of the hosts also succeeds, so the network is reachable across the whole cluster.
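That last check was not captured in the session above; to repeat it from dn02, ping the remote container directly and confirm that the route to the remote subnet goes out via flannel.1 (the container IPs are the ones observed above):

[root@dn02 ~]# ping -c 3 172.17.41.2
[root@dn02 ~]# ip route | grep 172.17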

 

