Docker Container Networking

              Author: 尹正傑

Copyright notice: This is an original work. Reposting is not permitted; violations will be pursued under the law.

 

 

一. Overview of Docker's network models

As shown in the figure above, Docker has four network models:
  Closed container:
    A closed container has only a loopback interface (similar to the lo interface we see on our servers) and cannot communicate with the outside world.

  Bridge container A:
    Besides a loopback interface, a bridged container has a private interface that connects through a container virtual interface to the Docker bridge virtual interface, and from there through a logical host interface to the host's physical network interface.
    The bridge is assigned the 172.17.0.0/16 address range by default.
    If no network model is specified when creating a container, the default is this (NAT-ed) bridged network, which is why, after logging in to a container, we find its IP address sitting in the 172.17.0.0/16 range.

  Joined container A | Joined container B:
    Each container keeps part of its namespaces private (Mount, PID, User) while the rest are shared (UTS, Net, IPC). Because the network is shared, the containers can talk to each other over the loopback interface. Besides sharing the same loopback interface, there is also a private interface that connects through a joined container virtual interface to the Docker bridge virtual interface, and then through a logical host interface to the host's physical network interface.

  Open container:
    Even more open than the joined model. Joined containers share the network namespace among several containers, whereas an open container shares the host's namespace directly, so it sees exactly as many NICs as the host has. You can regard the open container as a derivative of the joined container.
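For reference, the four models map onto the "--network" option of "docker run" roughly as follows (a minimal sketch; the container names and the busybox image are only placeholders). The bridge, joined, and host forms are all demonstrated in the sections that follow.

docker run --name t1 -it --network none busybox            #closed container: loopback only, no external connectivity
docker run --name t2 -it --network bridge busybox          #bridged container (the default model)
docker run --name t3 -it --network container:t2 busybox    #joined container: shares t2's Net, UTS and IPC namespaces
docker run --name t4 -it --network host busybox            #open container: uses the host's network namespace directly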

 

二. Overview of container virtualized networking

1>. Viewing the network models Docker supports

[root@node102.yinzhengjie.org.cn ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
33d001d8b94d        bridge              bridge              local      #If we create a container without specifying a network model, the bridged network is used by default.
9f539144f682        host                host                local
e10670abb710        none                null                local
[root@node102.yinzhengjie.org.cn ~]# 
[root@node102.yinzhengjie.org.cn ~]# 

2>. Inspecting the bridge network's metadata

[root@node102.yinzhengjie.org.cn ~]# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "33d001d8b94d4080411e06c711a1b6d322115aebbe1253ecef58a9a70e05bdd7",
        "Created": "2019-10-18T17:27:49.282236251+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",      #這里就是默認的橋接式網絡的網段地址,既然式默認那自然式可以修改的。 "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",      #看這里,告訴咱們bridge默認的網卡名稱為"docker0" "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
[root@node102.yinzhengjie.org.cn ~]# 

 

三. Using the ip command's network namespaces (netns) to simulate inter-container communication

1>. Viewing the help information

[root@node101.yinzhengjie.org.cn ~]# rpm -q iproute
iproute-4.11.0-14.el7.x86_64
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ip netns help      #Note: when we use the ip command to manage network namespaces, all the other namespaces remain shared; this differs from containers, where all six namespaces are isolated.
Usage: ip netns list
       ip netns add NAME
       ip netns set NAME NETNSID
       ip [-all] netns delete [NAME]
       ip netns identify [PID]
       ip netns pids NAME
       ip [-all] netns exec [NAME] cmd ...
       ip netns monitor
       ip netns list-id
[root@node101.yinzhengjie.org.cn ~]#

2>. Adding two network namespaces

[root@node103.yinzhengjie.org.cn ~]# ip netns add r1              #Add a network namespace named r1
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns add r2
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns list                                     #List the existing network namespaces
r2
r1
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r1 ifconfig -a      #The newly created r1 namespace has no NIC, only a loopback device; no network interface is bound to it yet.
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r2 ifconfig -a                 #Same as r1.
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# 

3>. Creating a veth pair

[root@node103.yinzhengjie.org.cn ~]# ip link help          #View the command's help
Usage: ip link add [link DEV] [ name ] NAME
                   [ txqueuelen PACKETS ]
                   [ address LLADDR ]
                   [ broadcast LLADDR ]
                   [ mtu MTU ] [index IDX ]
                   [ numtxqueues QUEUE_COUNT ]
                   [ numrxqueues QUEUE_COUNT ]
                   type TYPE [ ARGS ]

       ip link delete { DEVICE | dev DEVICE | group DEVGROUP } type TYPE [ ARGS ]

       ip link set { DEVICE | dev DEVICE | group DEVGROUP }
                      [ { up | down } ]
                      [ type TYPE ARGS ]
                      [ arp { on | off } ]
                      [ dynamic { on | off } ]
                      [ multicast { on | off } ]
                      [ allmulticast { on | off } ]
                      [ promisc { on | off } ]
                      [ trailers { on | off } ]
                      [ carrier { on | off } ]
                      [ txqueuelen PACKETS ]
                      [ name NEWNAME ]
                      [ address LLADDR ]
                      [ broadcast LLADDR ]
                      [ mtu MTU ]
                      [ netns { PID | NAME } ]
                      [ link-netnsid ID ]
              [ alias NAME ]
                      [ vf NUM [ mac LLADDR ]
                   [ vlan VLANID [ qos VLAN-QOS ] [ proto VLAN-PROTO ] ]
                   [ rate TXRATE ]
                   [ max_tx_rate TXRATE ]
                   [ min_tx_rate TXRATE ]
                   [ spoofchk { on | off} ]
                   [ query_rss { on | off} ]
                   [ state { auto | enable | disable} ] ]
                   [ trust { on | off} ] ]
                   [ node_guid { eui64 } ]
                   [ port_guid { eui64 } ]
              [ xdp { off |
                  object FILE [ section NAME ] [ verbose ] |
                  pinned FILE } ]
              [ master DEVICE ][ vrf NAME ]
              [ nomaster ]
              [ addrgenmode { eui64 | none | stable_secret | random } ]
                      [ protodown { on | off } ]

       ip link show [ DEVICE | group GROUP ] [up] [master DEV] [vrf NAME] [type TYPE]

       ip link xstats type TYPE [ ARGS ]

       ip link afstats [ dev DEVICE ]

       ip link help [ TYPE ]

TYPE := { vlan | veth | vcan | dummy | ifb | macvlan | macvtap |
          bridge | bond | team | ipoib | ip6tnl | ipip | sit | vxlan |
          gre | gretap | ip6gre | ip6gretap | vti | nlmon | team_slave |
          bond_slave | ipvlan | geneve | bridge_slave | vrf | macsec }
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip link add name veth1.1 type veth peer name veth1.2      #Create a pair of virtual Ethernet (veth) interfaces whose two ends are named veth1.1 and veth1.2
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip link show      #The name veth1.1@veth1.2 shows that veth1.1 and veth1.2 form a pair; both currently sit on the host and are in the DOWN state.
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:ef:75:60 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:3a:da:a7 brd ff:ff:ff:ff:ff:ff
4: veth1.2@veth1.1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 36:1e:54:37:69:78 brd ff:ff:ff:ff:ff:ff
5: veth1.1@veth1.2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 52:00:e0:83:48:cd brd ff:ff:ff:ff:ff:ff
[root@node103.yinzhengjie.org.cn ~]#

4>. Moving a veth endpoint into a namespace and establishing connectivity

[root@node103.yinzhengjie.org.cn ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:ef:75:60 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:3a:da:a7 brd ff:ff:ff:ff:ff:ff
4: veth1.2@veth1.1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 36:1e:54:37:69:78 brd ff:ff:ff:ff:ff:ff
5: veth1.1@veth1.2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:00:e0:83:48:cd brd ff:ff:ff:ff:ff:ff
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip link set dev veth1.2 netns r2    #Move the veth1.2 interface into the r2 namespace.
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip link show          #veth1.2 has disappeared from the host, so we can say an interface belongs to only one namespace at a time; also note that veth1.1's name changed from "veth1.1@veth1.2" to "veth1.1@if4".
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:ef:75:60 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:3a:da:a7 brd ff:ff:ff:ff:ff:ff
5: veth1.1@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:00:e0:83:48:cd brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r2 ifconfig -a    #We just moved the host's veth1.2 into the r2 namespace; checking there confirms it really is present.
lo: flags=8<LOOPBACK> mtu 65536
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth1.2: flags=4098<BROADCAST,MULTICAST> mtu 1500              #Sure enough, the virtual interface is here!
ether 36:1e:54:37:69:78 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r2 ip link set dev veth1.2 name eth0    #Rename veth1.2 to eth0 inside r2 for consistency.
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r2 ifconfig -a      #veth1.2 has been renamed to eth0; the interface is not activated by default, so the -a option is needed to see it
eth0: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 36:1e:54:37:69:78 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=8<LOOPBACK> mtu 65536
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@node103.yinzhengjie.org.cn ~]#
[root@node103.yinzhengjie.org.cn ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:ef:75:60 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:3a:da:a7 brd ff:ff:ff:ff:ff:ff
5: veth1.1@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:00:e0:83:48:cd brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ifconfig veth1.1 10.1.0.1/24 up      #Assign a temporary IP address to the host-side veth1.1 interface and bring it up
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ifconfig                    #Now ifconfig shows veth1.1's configuration
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        ether 08:00:27:ef:75:60  txqueuelen 1000  (Ethernet)
        RX packets 15  bytes 3188 (3.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 80  bytes 8168 (7.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.30.1.103  netmask 255.255.255.0  broadcast 172.30.1.255
        ether 08:00:27:3a:da:a7  txqueuelen 1000  (Ethernet)
        RX packets 3905  bytes 300723 (293.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13059  bytes 1199271 (1.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth1.1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.1.0.1  netmask 255.255.255.0  broadcast 10.1.0.255
        ether 52:00:e0:83:48:cd  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r2 ifconfig -a
eth0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 36:1e:54:37:69:78  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r2 ifconfig eth0 10.1.0.102/24 up    #Assign a temporary IP address to eth0 in the r2 namespace and bring it up
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r2 ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.0.102  netmask 255.255.255.0  broadcast 10.1.0.255
        inet6 fe80::341e:54ff:fe37:6978  prefixlen 64  scopeid 0x20<link>
        ether 36:1e:54:37:69:78  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 516 (516.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ifconfig               #Interfaces currently up on the host
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
ether 08:00:27:ef:75:60 txqueuelen 1000 (Ethernet)
RX packets 15 bytes 3188 (3.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 80 bytes 8168 (7.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.30.1.103 netmask 255.255.255.0 broadcast 172.30.1.255
ether 08:00:27:3a:da:a7 txqueuelen 1000 (Ethernet)
RX packets 4280 bytes 331527 (323.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 13362 bytes 1235909 (1.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth1.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.0.1 netmask 255.255.255.0 broadcast 10.1.0.255
ether 52:00:e0:83:48:cd txqueuelen 1000 (Ethernet)
RX packets 18 bytes 1524 (1.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10 bytes 868 (868.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r2 ifconfig         #Interfaces currently up in the r2 namespace
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.0.102 netmask 255.255.255.0 broadcast 10.1.0.255
inet6 fe80::341e:54ff:fe37:6978 prefixlen 64 scopeid 0x20<link>
ether 36:1e:54:37:69:78 txqueuelen 1000 (Ethernet)
RX packets 10 bytes 868 (868.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 18 bytes 1524 (1.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]#


[root@node103.yinzhengjie.org.cn ~]# ping 10.1.0.102            #The host can now communicate with eth0 in the r2 namespace.
PING 10.1.0.102 (10.1.0.102) 56(84) bytes of data.
64 bytes from 10.1.0.102: icmp_seq=1 ttl=64 time=0.037 ms
64 bytes from 10.1.0.102: icmp_seq=2 ttl=64 time=0.018 ms
64 bytes from 10.1.0.102: icmp_seq=3 ttl=64 time=0.022 ms
64 bytes from 10.1.0.102: icmp_seq=4 ttl=64 time=0.020 ms
64 bytes from 10.1.0.102: icmp_seq=5 ttl=64 time=0.019 ms
64 bytes from 10.1.0.102: icmp_seq=6 ttl=64 time=0.040 ms
64 bytes from 10.1.0.102: icmp_seq=7 ttl=64 time=0.022 ms
64 bytes from 10.1.0.102: icmp_seq=8 ttl=64 time=0.047 ms
^C
--- 10.1.0.102 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7024ms
rtt min/avg/max/mdev = 0.018/0.028/0.047/0.010 ms
[root@node103.yinzhengjie.org.cn ~]# 

5>. Establishing connectivity between the two network namespaces

[root@node103.yinzhengjie.org.cn ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:ef:75:60 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:3a:da:a7 brd ff:ff:ff:ff:ff:ff
5: veth1.1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 52:00:e0:83:48:cd brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip link set dev veth1.1 netns r1      #Move the veth1.1 interface into the r1 namespace
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:ef:75:60 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:3a:da:a7 brd ff:ff:ff:ff:ff:ff
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r1 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth1.1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 52:00:e0:83:48:cd  txqueuelen 1000  (Ethernet)
        RX packets 18  bytes 1524 (1.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 868 (868.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r1 ifconfig veth1.1 10.1.0.101/24 up    #Assign an IP address to veth1.1 in the r1 namespace and bring it up
[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r1 ifconfig 
veth1.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.0.101  netmask 255.255.255.0  broadcast 10.1.0.255
        inet6 fe80::5000:e0ff:fe83:48cd  prefixlen 64  scopeid 0x20<link>
        ether 52:00:e0:83:48:cd  txqueuelen 1000  (Ethernet)
        RX packets 18  bytes 1524 (1.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 1384 (1.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r1 ifconfig 
veth1.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.0.101  netmask 255.255.255.0  broadcast 10.1.0.255
        inet6 fe80::5000:e0ff:fe83:48cd  prefixlen 64  scopeid 0x20<link>
        ether 52:00:e0:83:48:cd  txqueuelen 1000  (Ethernet)
        RX packets 26  bytes 2196 (2.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26  bytes 2196 (2.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r2 ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.0.102  netmask 255.255.255.0  broadcast 10.1.0.255
        inet6 fe80::341e:54ff:fe37:6978  prefixlen 64  scopeid 0x20<link>
        ether 36:1e:54:37:69:78  txqueuelen 1000  (Ethernet)
        RX packets 26  bytes 2196 (2.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26  bytes 2196 (2.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node103.yinzhengjie.org.cn ~]# 
[root@node103.yinzhengjie.org.cn ~]# ip netns exec r1 ping 10.1.0.102    #Ping the address of r2's interface from the r1 namespace; the two namespaces can indeed reach each other!
PING 10.1.0.102 (10.1.0.102) 56(84) bytes of data.
64 bytes from 10.1.0.102: icmp_seq=1 ttl=64 time=0.017 ms
64 bytes from 10.1.0.102: icmp_seq=2 ttl=64 time=0.019 ms
64 bytes from 10.1.0.102: icmp_seq=3 ttl=64 time=0.041 ms
64 bytes from 10.1.0.102: icmp_seq=4 ttl=64 time=0.019 ms
64 bytes from 10.1.0.102: icmp_seq=5 ttl=64 time=0.048 ms
64 bytes from 10.1.0.102: icmp_seq=6 ttl=64 time=0.020 ms
^C
--- 10.1.0.102 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 4999ms
rtt min/avg/max/mdev = 0.017/0.027/0.048/0.013 ms
[root@node103.yinzhengjie.org.cn ~]#
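Once the experiment is finished, the namespaces can be removed again. A minimal cleanup sketch (deleting a namespace also tears down the veth endpoint that was moved into it, which destroys the whole pair):

ip netns delete r1      #remove the r1 network namespace
ip netns delete r2      #remove the r2 network namespace
ip netns list           #verify that no namespaces are left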

 

四. Hostname and host-resolution configuration examples

1>. Specifying the network model when starting a container

[root@node101.yinzhengjie.org.cn ~]# docker run --name test1 -it --network bridge --rm busybox:latest    #Start a container named test1 on the bridge network; "-it" opens an interactive terminal and "--rm" removes the container automatically once it exits.
/ # 
/ # hostname       #At first glance the hostname looks like a random string, but it is simply the container ID (CONTAINER ID) by default; you can verify this from another terminal with "docker ps".
9441fef2264c
/ # 
/ # exit
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker run --name test1 -it --network bridge -h node102.yinzhengjie.org.cn --rm busybox:latest      #Use the "-h" option at startup to set the container's hostname; note that this does not change the "CONTAINER ID" shown by "docker ps".
/ # 
/ # hostname 
node102.yinzhengjie.org.cn
/ #

2>. Specifying DNS servers when starting a container

[root@node101.yinzhengjie.org.cn ~]# docker run --name test1 -it --network bridge -h node102.yinzhengjie.org.cn --rm busybox:latest
/ # 
/ # hostname 
node102.yinzhengjie.org.cn
/ # 
/ # cat /etc/resolv.conf 
# Generated by NetworkManager
search yinzhengjie.org.cn
nameserver 172.30.1.254
/ # 
/ # exit
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker run --name test1 -it --network bridge -h node102.yinzhengjie.org.cn --dns 114.114.114.114 --rm busybox:latest
/ # 
/ # cat /etc/resolv.conf 
search yinzhengjie.org.cn
nameserver 114.114.114.114
/ # 

3>. Customizing the DNS search domain when starting a container

[root@node101.yinzhengjie.org.cn ~]# docker run --name test1 -it --network bridge -h node102.yinzhengjie.org.cn --dns 114.114.114.114 --rm busybox:latest
/ # 
/ # cat /etc/resolv.conf 
search yinzhengjie.org.cn
nameserver 114.114.114.114
/ # 
/ # 
/ # exit
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker run --name test1 -it --network bridge -h node102.yinzhengjie.org.cn --dns 114.114.114.114 --dns-search ilinux.io --rm busybox:latest
/ # 
/ # cat /etc/resolv.conf 
search ilinux.io
nameserver 114.114.114.114
/ # 

4>. Customizing the host resolution list when starting a container

[root@node101.yinzhengjie.org.cn ~]# docker run --name test1 -it --network bridge -h node102.yinzhengjie.org.cn --dns 114.114.114.114 --dns-search ilinux.io --rm busybox:latest
/ # 
/ # cat /etc/hosts 
127.0.0.1    localhost
::1    localhost ip6-localhost ip6-loopback
fe00::0    ip6-localnet
ff00::0    ip6-mcastprefix
ff02::1    ip6-allnodes
ff02::2    ip6-allrouters
172.17.0.2    node102.yinzhengjie.org.cn node102
/ # 
/ # exit
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker run --name test1 -it --network bridge -h node102.yinzhengjie.org.cn --dns 114.114.114.114 --dns-search ilinux.io --add-host node101.yinzhengjie.org.cn:172.30.1.101 --rm busybox:latest
/ # 
/ # cat /etc/hosts
127.0.0.1    localhost
::1    localhost ip6-localhost ip6-loopback
fe00::0    ip6-localnet
ff00::0    ip6-mcastprefix
ff02::1    ip6-allnodes
ff02::2    ip6-allrouters
172.30.1.101    node101.yinzhengjie.org.cn
172.17.0.2    node102.yinzhengjie.org.cn node102
/ # 
/ # 

 

五. Opening inbound communication

1>. Usage formats of the "-p" option

-p <containerPort>
  Map the specified container port to a dynamic port on all of the host's addresses.

-p <hostPort>:<containerPort>
  Map the container port <containerPort> to the specified host port <hostPort>.

-p <ip>::<containerPort>
  Map the specified container port <containerPort> to a dynamic port on the host address <ip>.

-p <ip>:<hostPort>:<containerPort>
  Map the specified container port <containerPort> to port <hostPort> on the host address <ip>.

Note: a "dynamic port" means a random port; the actual mapping result can be checked with the "docker port" command.
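For example, once a port has been published, the actual binding of a running container can be checked like this (myweb is simply the container name used in the cases below):

docker port myweb           #list every published-port mapping of the container
docker port myweb 80/tcp    #show only the mapping for container port 80/tcp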

2>. "-p <containerPort>" example

[root@node101.yinzhengjie.org.cn ~]# docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
jason/httpd         v0.2                78fb6601880f        26 hours ago        1.22MB
jason/httpd         v0.1-1              76d5e6c143b2        42 hours ago        1.22MB
busybox             latest              19485c79a9bb        6 weeks ago         1.22MB
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker run --name myweb --rm -p 80 jason/httpd:v0.2    #Use the image we built earlier; the container starts the httpd service automatically
[root@node101.yinzhengjie.org.cn ~]# docker inspect myweb | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.2",
                    "IPAddress": "172.17.0.2",
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# curl 172.17.0.2                #From another terminal, the containerized service is reachable.
<h1>Busybox httpd server.[Jason Yin dao ci yi you !!!]</h1>
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# iptables -t nat -vnL      #Look at the "Chain DOCKER" section: what Docker calls port exposure is really just a DNAT rule that Docker programs underneath.
Chain PREROUTING (policy ACCEPT 2 packets, 473 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   25  2999 PREROUTING_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
   25  2999 PREROUTING_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
   25  2999 PREROUTING_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    3   156 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 1 packets, 60 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  115  8297 OUTPUT_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 3 packets, 164 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    1    59 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           
    2   271 RETURN     all  --  *      *       192.168.122.0/24     224.0.0.0/24        
    0     0 RETURN     all  --  *      *       192.168.122.0/24     255.255.255.255     
    0     0 MASQUERADE  tcp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  udp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  all  --  *      *       192.168.122.0/24    !192.168.122.0/24    
  115  8130 POSTROUTING_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  115  8130 POSTROUTING_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  115  8130 POSTROUTING_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.2           172.17.0.2           tcp dpt:80

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           
    2   104 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:32768 to:172.17.0.2:80      #See this line? Docker uses iptables under the hood to implement port exposure.

Chain OUTPUT_direct (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING_ZONES (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  103  6991 POST_public  all  --  *      ens33   0.0.0.0/0            0.0.0.0/0           [goto] 
   12  1139 POST_public  all  --  *      +       0.0.0.0/0            0.0.0.0/0           [goto] 

Chain POSTROUTING_ZONES_SOURCE (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING_direct (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain POST_public (2 references)
 pkts bytes target     prot opt in     out     source               destination         
  115  8130 POST_public_log  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  115  8130 POST_public_deny  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  115  8130 POST_public_allow  all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain POST_public_allow (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain POST_public_deny (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain POST_public_log (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain PREROUTING_ZONES (1 references)
 pkts bytes target     prot opt in     out     source               destination         
   24  2940 PRE_public  all  --  ens33  *       0.0.0.0/0            0.0.0.0/0           [goto] 
    1    59 PRE_public  all  --  +      *       0.0.0.0/0            0.0.0.0/0           [goto] 

Chain PREROUTING_ZONES_SOURCE (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain PREROUTING_direct (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain PRE_public (2 references)
 pkts bytes target     prot opt in     out     source               destination         
   25  2999 PRE_public_log  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
   25  2999 PRE_public_deny  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
   25  2999 PRE_public_allow  all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain PRE_public_allow (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain PRE_public_deny (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain PRE_public_log (1 references)
 pkts bytes target     prot opt in     out     source               destination         
[root@node101.yinzhengjie.org.cn ~]# 

[root@node101.yinzhengjie.org.cn ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                   NAMES
9c020aca98dc        jason/httpd:v0.2    "/bin/httpd -f -h /d…"   2 minutes ago       Up 2 minutes        0.0.0.0:32768->80/tcp   myweb
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker port myweb          #Docker created a mapping for us: the myweb container's httpd service is exposed on port 32768 of every available address on the physical machine, so we can access it directly from the host, as shown below.
80/tcp -> 0.0.0.0:32768
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hostname -i
172.30.1.101
[root@node101.yinzhengjie.org.cn ~]# 
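Since the container port is published on every host address, the page should be reachable directly from the host (a quick check; 32768 is the dynamic port reported by "docker port" above):

curl 172.30.1.101:32768      #expected to return the same "<h1>Busybox httpd server..." page served by the container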

[root@node101.yinzhengjie.org.cn ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                   NAMES
9c020aca98dc        jason/httpd:v0.2    "/bin/httpd -f -h /d…"   11 hours ago        Up 11 hours         0.0.0.0:32768->80/tcp   myweb
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker kill myweb        #Once the container stops, the corresponding records in the iptables nat table are removed as well
myweb
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination 
359 44547 PREROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0 
359 44547 PREROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0 
359 44547 PREROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0 
3 156 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination 
118 8513 OUTPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0 
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination 
1 59 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0 
2 271 RETURN all -- * * 192.168.122.0/24 224.0.0.0/24 
0 0 RETURN all -- * * 192.168.122.0/24 255.255.255.255 
0 0 MASQUERADE tcp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE udp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE all -- * * 192.168.122.0/24 !192.168.122.0/24 
118 8346 POSTROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0 
118 8346 POSTROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0 
118 8346 POSTROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0

Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination 
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0

Chain OUTPUT_direct (1 references)
pkts bytes target prot opt in out source destination

Chain POSTROUTING_ZONES (1 references)
pkts bytes target prot opt in out source destination 
106 7207 POST_public all -- * ens33 0.0.0.0/0 0.0.0.0/0 [goto] 
12 1139 POST_public all -- * + 0.0.0.0/0 0.0.0.0/0 [goto]

Chain POSTROUTING_ZONES_SOURCE (1 references)
pkts bytes target prot opt in out source destination

Chain POSTROUTING_direct (1 references)
pkts bytes target prot opt in out source destination

Chain POST_public (2 references)
pkts bytes target prot opt in out source destination 
118 8346 POST_public_log all -- * * 0.0.0.0/0 0.0.0.0/0 
118 8346 POST_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0 
118 8346 POST_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0

Chain POST_public_allow (1 references)
pkts bytes target prot opt in out source destination

Chain POST_public_deny (1 references)
pkts bytes target prot opt in out source destination

Chain POST_public_log (1 references)
pkts bytes target prot opt in out source destination

Chain PREROUTING_ZONES (1 references)
pkts bytes target prot opt in out source destination 
358 44488 PRE_public all -- ens33 * 0.0.0.0/0 0.0.0.0/0 [goto] 
1 59 PRE_public all -- + * 0.0.0.0/0 0.0.0.0/0 [goto]

Chain PREROUTING_ZONES_SOURCE (1 references)
pkts bytes target prot opt in out source destination

Chain PREROUTING_direct (1 references)
pkts bytes target prot opt in out source destination

Chain PRE_public (2 references)
pkts bytes target prot opt in out source destination 
359 44547 PRE_public_log all -- * * 0.0.0.0/0 0.0.0.0/0 
359 44547 PRE_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0 
359 44547 PRE_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0

Chain PRE_public_allow (1 references)
pkts bytes target prot opt in out source destination

Chain PRE_public_deny (1 references)
pkts bytes target prot opt in out source destination

Chain PRE_public_log (1 references)
pkts bytes target prot opt in out source destination 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]#

3>. "-p <hostPort>:<containerPort>" example

[root@node101.yinzhengjie.org.cn ~]# docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
jason/httpd         v0.2                78fb6601880f        26 hours ago        1.22MB
jason/httpd         v0.1-1              76d5e6c143b2        42 hours ago        1.22MB
busybox             latest              19485c79a9bb        6 weeks ago         1.22MB
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker run --name myweb --rm -p 80:80 jason/httpd:v0.2      #This blocks the current terminal; open another terminal to check on the container
[root@node101.yinzhengjie.org.cn ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                NAMES
4e508f5351e9        jason/httpd:v0.2    "/bin/httpd -f -h /d…"   About a minute ago   Up About a minute   0.0.0.0:80->80/tcp   myweb
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker port myweb        #The container's port 80 is now bound to port 80 on every available address of the physical machine.
80/tcp -> 0.0.0.0:80
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
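With the container's port 80 bound to the host's port 80, the page should now be reachable on the standard HTTP port (a quick check; same image as before, so the same page is expected):

curl 172.30.1.101      #expected to return the "<h1>Busybox httpd server..." page on port 80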

[root@node101.yinzhengjie.org.cn ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
4e508f5351e9        jason/httpd:v0.2    "/bin/httpd -f -h /d…"   3 minutes ago       Up 3 minutes        0.0.0.0:80->80/tcp   myweb
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker kill myweb        #Stop the myweb container
myweb
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@node101.yinzhengjie.org.cn ~]# 

4>. "-p <ip>::<containerPort>" example

[root@node101.yinzhengjie.org.cn ~]# docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
jason/httpd         v0.2                78fb6601880f        36 hours ago        1.22MB
jason/httpd         v0.1-1              76d5e6c143b2        2 days ago          1.22MB
busybox             latest              19485c79a9bb        6 weeks ago         1.22MB
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker run --name myweb --rm -p 172.30.1.101::80 jason/httpd:v0.2      #Bind to a specific IP when starting the container; open another terminal to check the resulting port mapping
[root@node101.yinzhengjie.org.cn ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                        NAMES
9dab293691ef        jason/httpd:v0.2    "/bin/httpd -f -h /d…"   25 seconds ago      Up 24 seconds       172.30.1.101:32768->80/tcp   myweb
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker port myweb         #Clearly, the myweb container's port 80 is now dynamically bound to port 32768 on the 172.30.1.101 address
80/tcp -> 172.30.1.101:32768
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# curl 172.30.1.101:32768      #The service is still reachable as usual
<h1>Busybox httpd server.[Jason Yin dao ci yi you !!!]</h1>
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker kill myweb          #Stop the container
myweb
[root@node101.yinzhengjie.org.cn ~]# 

5>. "-p <ip>:<hostPort>:<containerPort>" example

[root@node101.yinzhengjie.org.cn ~]# docker run --name myweb --rm -p 172.30.1.101:8080:80 jason/httpd:v0.2    #Map port 8080 on the specified host IP to the container's port 80; the command blocks while it runs.
[root@node101.yinzhengjie.org.cn ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                       NAMES
4bb5e9b96599        jason/httpd:v0.2    "/bin/httpd -f -h /d…"   About a minute ago   Up About a minute   172.30.1.101:8080->80/tcp   myweb
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker port myweb        #The mapping has taken effect; accessing it from the physical machine works, as shown below.
80/tcp -> 172.30.1.101:8080
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
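The mapping can be verified from the host with a quick request against the published address and port (a sketch; the same page as before is expected):

curl 172.30.1.101:8080      #expected to return the "<h1>Busybox httpd server..." page via the 8080->80 mapping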

 

六. Joined and host network model examples

1>. Two containers sharing the same network namespace

[root@node101.yinzhengjie.org.cn ~]# docker run --name c1 -it --rm busybox      #Start the first container and note its IP address
/ # 
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02  
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:656 (656.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # 
[root@node101.yinzhengjie.org.cn ~]# docker run --name c2 -it --network container:c1 --rm busybox    #Start a second container that shares c1's network namespace; comparing the two containers' addresses shows c2's IP is exactly the same as c1's.
/ # 
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02  
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:656 (656.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # 
/ # echo "hello world" > /tmp/index.html
/ # 
/ # ls /tmp/
index.html
/ # 
/ # httpd -h /tmp/            #Start an httpd service inside the c2 container
/ # 
/ # netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State 
tcp 0 0 :::80 :::* LISTEN 
/ #
[root@node101.yinzhengjie.org.cn ~]# docker run --name c1 -it --rm busybox
/ # 
.....
/ # wget -O - -q 127.0.0.1              #From c1 we can reach the service started in c2, because they share the same network namespace; their filesystems are still independent, which you can confirm by checking whether "/tmp" contains "index.html"
hello world
/ # 
/ # ls /tmp/                        #Note that the filesystems remain separate: from c1 we cannot see c2's filesystem.
/ #

2>. A container sharing the host's network namespace

[root@node101.yinzhengjie.org.cn ~]# ifconfig 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:5aff:fe50:fc34  prefixlen 64  scopeid 0x20<link>
        ether 02:42:5a:50:fc:34  txqueuelen 0  (Ethernet)
        RX packets 61  bytes 4728 (4.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 75  bytes 8346 (8.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.30.1.101  netmask 255.255.255.0  broadcast 172.30.1.255
        inet6 fe80::20c:29ff:febe:114d  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:be:11:4d  txqueuelen 1000  (Ethernet)
        RX packets 9344  bytes 2086245 (1.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6433  bytes 634801 (619.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 100  bytes 10960 (10.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 100  bytes 10960 (10.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:a9:de:9b  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker run --name c1 --network host -it --rm busybox        #The container and the host use the same network namespace, so the container sees exactly the same interfaces as the host.
/ # 
/ # ifconfig 
docker0   Link encap:Ethernet  HWaddr 02:42:5A:50:FC:34  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:5aff:fe50:fc34/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:61 errors:0 dropped:0 overruns:0 frame:0
          TX packets:75 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:4728 (4.6 KiB)  TX bytes:8346 (8.1 KiB)

ens33     Link encap:Ethernet  HWaddr 00:0C:29:BE:11:4D  
          inet addr:172.30.1.101  Bcast:172.30.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:febe:114d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9552 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6566 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2104397 (2.0 MiB)  TX bytes:652471 (637.1 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:100 errors:0 dropped:0 overruns:0 frame:0
          TX packets:100 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:10960 (10.7 KiB)  TX bytes:10960 (10.7 KiB)

virbr0    Link encap:Ethernet  HWaddr 52:54:00:A9:DE:9B  
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # 
/ # 
/ # echo "hello container" > /tmp/index.html
/ # 
/ # ls /tmp/
index.html
/ # 
/ # httpd -h /tmp/        #Start the httpd service
/ # 
/ # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State 
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 
tcp 0 0 0.0.0.0:6000 0.0.0.0:* LISTEN 
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN 
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 
tcp 0 0 :::111 :::* LISTEN 
tcp 0 0 :::80 :::* LISTEN 
tcp 0 0 :::6000 :::* LISTEN 
tcp 0 0 :::22 :::* LISTEN 
tcp 0 0 ::1:631 :::* LISTEN 
tcp 0 0 ::1:25 :::* LISTEN 
/ # 
/ #
[root@node101.yinzhengjie.org.cn ~]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:6000            0.0.0.0:*               LISTEN     
tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN     
tcp6       0      0 :::111                  :::*                    LISTEN     
tcp6       0      0 :::80                   :::*                    LISTEN     
tcp6       0      0 :::6000                 :::*                    LISTEN     
tcp6       0      0 :::22                   :::*                    LISTEN     
tcp6       0      0 ::1:631                 :::*                    LISTEN     
tcp6       0      0 ::1:25                  :::*                    LISTEN     
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# curl 127.0.0.1:80        #After starting httpd inside the container, the service is reachable from the host as well, because they share the same network namespace.
hello container
[root@node101.yinzhengjie.org.cn ~]# 

 

七. Customizing the docker0 bridge's network properties

1>. Editing the configuration file

[root@node101.yinzhengjie.org.cn ~]# docker ps 
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# systemctl stop docker
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# vi /etc/docker/daemon.json 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# cat /etc/docker/daemon.json       #Note: the key option below is "bip" ("bridge ip"), which sets the docker0 bridge's own IP address; the other bridge parameters are derived from it, although the DNS servers still have to be specified separately.
{
  "registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com"],
  "bip":"192.168.100.254/24",
  "dns":["219.141.139.10","219.141.140.10"]
}
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# systemctl start docker
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ifconfig docker0            #After restarting the docker service, the new configuration is in effect!
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.100.254  netmask 255.255.255.0  broadcast 192.168.100.255
        inet6 fe80::42:5aff:fe50:fc34  prefixlen 64  scopeid 0x20<link>
        ether 02:42:5a:50:fc:34  txqueuelen 0  (Ethernet)
        RX packets 61  bytes 4728 (4.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 75  bytes 8346 (8.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
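To confirm that the derived values (subnet and gateway computed from "bip") were picked up as well, the bridge network can be re-inspected; a quick check, assuming the Go-template form of "docker network inspect":

docker network inspect -f '{{json .IPAM.Config}}' bridge      #should now report the 192.168.100.0/24 subnet with gateway 192.168.100.254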

2>. Creating a container to check that the configuration from the previous step took effect

[root@node101.yinzhengjie.org.cn ~]# docker run --name c1  -it --rm busybox        #Just as expected, the configuration has taken effect!
/ # 
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:64:01  
          inet addr:192.168.100.1  Bcast:192.168.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:516 (516.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # 
/ # cat /etc/resolv.conf 
search yinzhengjie.org.cn
nameserver 219.141.139.10
nameserver 219.141.140.10
/ # 
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.100.254 0.0.0.0         UG    0      0        0 eth0
192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
/ # 
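The new addressing can also be confirmed from the docker side with an inspect format string (a small sketch; the -f Go template below simply prints the IPAM subnet and gateway of the default bridge network):

[root@node101.yinzhengjie.org.cn ~]# docker network inspect bridge -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'    #Expected to print 192.168.100.0/24 192.168.100.254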

 

8. Changing the way docker listens by default

1>. Check the Unix socket address that docker listens on by default

[root@node101.yinzhengjie.org.cn ~]# ll /var/run/docker.sock       #A Unix socket file only supports local communication; cross-host access has to be implemented another way. By default docker listens on this Unix socket.
srw-rw----. 1 root docker 0 Oct 19 19:13 /var/run/docker.sock
[root@node101.yinzhengjie.org.cn ~]# 
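To make the difference between the two listening modes concrete, the daemon's REST API can be queried over either transport (a minimal sketch; the TCP endpoint assumes the port 8888 configured in the next step, and --unix-socket needs curl 7.40 or newer):

[root@node101.yinzhengjie.org.cn ~]# curl --unix-socket /var/run/docker.sock http://localhost/version    #Local access through the Unix socket
[root@node101.yinzhengjie.org.cn ~]# curl http://node101.yinzhengjie.org.cn:8888/version                 #Remote-capable access over TCP (only works after the "hosts" change below)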

2>. Modify the configuration file

[root@node101.yinzhengjie.org.cn ~]# docker container ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
e249bc41f2fd        busybox             "sh"                10 minutes ago      Up 10 minutes                           c1
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker kill c1
c1
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker container ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# systemctl stop docker
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# vi /etc/docker/daemon.json 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com"],
  "bip":"192.168.100.254/24",
  "dns":["219.141.139.10","219.141.140.10"],
  "hosts":["tcp://0.0.0.0:8888","unix:///var/run/docker.sock"]        #此處我們綁定了docker啟動基於tcp啟動便於其它主機訪問,基於unix套接字啟動便於本地訪問
}
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# systemctl start docker
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# netstat -ntl | grep 8888
tcp6       0      0 :::8888                 :::*                    LISTEN     
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# systemctl restart docker          #A service startup error encountered here, with the detailed troubleshooting steps below.
Warning: docker.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl
-xe" for details.

[root@node101.yinzhengjie.org.cn ~]#
[root@node101.yinzhengjie.org.cn ~]# tail -100f /var/log/messages          #When starting the docker service it is best to watch the logs at the same time; the following error showed up at startup
......
Oct 19 19:52:03 node101 dockerd: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are 
specified both as a flag and in the configuration file: hosts: (from flag: [fd://], from file: [tcp://0.0.0.0:2375 unix:///var/run/docker.sock])

....
My first impression was that the configuration file was wrong, but I could not find any mistake in it. The official documentation turned out to describe exactly this error; see: https://docs.docker.com/config/daemon/

  The message means the "hosts" option was supplied twice: the systemd unit already starts dockerd with a host flag (-H fd://), so also specifying a "hosts" entry in daemon.json creates a configuration conflict and Docker fails to start.

The fix is as follows:
[root@node101.yinzhengjie.org.cn ~]# mkdir -pv /etc/systemd/system/docker.service.d/
mkdir: created directory ‘/etc/systemd/system/docker.service.d/’
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# vi /etc/systemd/system/docker.service.d/docker.conf
[root@node101.yinzhengjie.org.cn ~]#

[root@node101.yinzhengjie.org.cn ~]# cat /etc/systemd/system/docker.service.d/docker.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# systemctl daemon-reload        #This step is mandatory
[root@node101.yinzhengjie.org.cn ~]#
[root@node101.yinzhengjie.org.cn ~]# systemctl restart docker        #After the steps above, the problem is resolved
[root@node101.yinzhengjie.org.cn ~]#

[root@node101.yinzhengjie.org.cn ~]# netstat -ntl | grep 8888
tcp6       0      0 :::8888                 :::*                    LISTEN     
[root@node101.yinzhengjie.org.cn ~]#
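An equivalent alternative (not what was done above, just a hedged sketch) is to drop the "hosts" key from daemon.json entirely and put the listen addresses directly into the systemd drop-in:

[root@node101.yinzhengjie.org.cn ~]# cat /etc/systemd/system/docker.service.d/docker.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:8888 -H unix:///var/run/docker.sock
[root@node101.yinzhengjie.org.cn ~]# systemctl daemon-reload && systemctl restart docker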

3>. Verify that the configuration works (docker client connecting to another host's docker daemon)

[root@node101.yinzhengjie.org.cn ~]# docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
redis               latest              de25a81a5a0b        33 hours ago        98.2MB
busybox             latest              19485c79a9bb        6 weeks ago         1.22MB
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker run --name c1  -it --rm busybox        #Run a container on node101.yinzhengjie.org.cn; meanwhile, the docker service on this node can be accessed from another node
/ # 
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:64:01  
          inet addr:192.168.100.1  Bcast:192.168.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # 
/ # cat /etc/resolv.conf 
search www.tendawifi.com yinzhengjie.org.cn
nameserver 219.141.139.10
nameserver 219.141.140.10
/ # 
[root@node102.yinzhengjie.org.cn ~]# systemctl start docker
[root@node102.yinzhengjie.org.cn ~]# 
[root@node102.yinzhengjie.org.cn ~]# docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
[root@node102.yinzhengjie.org.cn ~]# 
[root@node102.yinzhengjie.org.cn ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@node102.yinzhengjie.org.cn ~]# 
[root@node102.yinzhengjie.org.cn ~]# systemctl stop docker          #Stop the docker service on node102.yinzhengjie.org.cn
[root@node102.yinzhengjie.org.cn ~]# 
[root@node102.yinzhengjie.org.cn ~]# docker -H node101.yinzhengjie.org.cn:8888 image ls      #Query node101.yinzhengjie.org.cn's docker service: its images are listed successfully.
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
redis               latest              de25a81a5a0b        33 hours ago        98.2MB
busybox             latest              19485c79a9bb        6 weeks ago         1.22MB
[root@node102.yinzhengjie.org.cn ~]# 
[root@node102.yinzhengjie.org.cn ~]# docker -H node101.yinzhengjie.org.cn:8888 container ls    #Likewise, container information can be listed. Note: the port must be specified; it can only be omitted if the daemon listens on the default port 2375.
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
68db1d8192ad        busybox             "sh"                3 minutes ago       Up 3 minutes                            c1
[root@node102.yinzhengjie.org.cn ~]# 
[root@node102.yinzhengjie.org.cn ~]# 
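Instead of passing -H on every command, the client can export the DOCKER_HOST environment variable once (a small convenience sketch):

[root@node102.yinzhengjie.org.cn ~]# export DOCKER_HOST=tcp://node101.yinzhengjie.org.cn:8888
[root@node102.yinzhengjie.org.cn ~]# docker image ls        #Now talks to node101's daemon without -H
[root@node102.yinzhengjie.org.cn ~]# unset DOCKER_HOST      #Switch back to the local daemon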

 

 

9. Creating a custom network model

1>. View the existing network models

[root@node101.yinzhengjie.org.cn ~]# docker network ls     #The default network models; if no network is specified when creating a container, "bridge" is used
NETWORK ID          NAME                DRIVER              SCOPE
7ad23e4ff4b3        bridge              bridge              local
8aeb2bc6b3fe        host                host                local
7e83f7595aac        none                null                local
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker info | grep Network
Network: bridge host ipvlan macvlan null overlay          #Besides the default bridge, host and null drivers above, docker also supports ipvlan, macvlan and overlay.
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]#
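Since macvlan shows up in the driver list, this is roughly what creating one would look like (purely an illustrative sketch, not used in the rest of this article; the parent interface enp0s8, the subnet and the gateway are assumptions that must match your own physical network):

[root@node101.yinzhengjie.org.cn ~]# docker network create -d macvlan --subnet 172.30.1.0/24 --gateway 172.30.1.254 -o parent=enp0s8 macvlan0
[root@node101.yinzhengjie.org.cn ~]# docker run --name m1 -it --rm --network macvlan0 busybox    #The container gets an address directly on the physical 172.30.1.0/24 segment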

2>. Create a custom network model and start a container on it

[root@node101.yinzhengjie.org.cn ~]# docker network create --help    #View the help for creating a network

Usage:    docker network create [OPTIONS] NETWORK

Create a network

Options:
      --attachable           Enable manual container attachment
      --aux-address map      Auxiliary IPv4 or IPv6 addresses used by Network driver (default map[])
      --config-from string   The network from which copying the configuration
      --config-only          Create a configuration only network
  -d, --driver string        Driver to manage the Network (default "bridge")
      --gateway strings      IPv4 or IPv6 Gateway for the master subnet
      --ingress              Create swarm routing-mesh network
      --internal             Restrict external access to the network
      --ip-range strings     Allocate container ip from a sub-range
      --ipam-driver string   IP Address Management Driver (default "default")
      --ipam-opt map         Set IPAM driver specific options (default map[])
      --ipv6                 Enable IPv6 networking
      --label list           Set metadata on a network
  -o, --opt map              Set driver specific options (default map[])
      --scope string         Control the network's scope
      --subnet strings       Subnet in CIDR format that represents a network segment
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
7ad23e4ff4b3        bridge              bridge              local
8aeb2bc6b3fe        host                host                local
7e83f7595aac        none                null                local
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker network create -d bridge --subnet "192.168.200.0/24" --gateway "192.168.200.254" mybr0      #Create a network named mybr0 based on the bridge driver
3d42817e3691bc9f4275b6a222ef6d792b1e0817817e97af77d35dfdfbfe7e24
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# docker network ls        #Note the new "mybr0" entry
NETWORK ID          NAME                DRIVER              SCOPE
7ad23e4ff4b3        bridge              bridge              local
8aeb2bc6b3fe        host                host                local
3d42817e3691        mybr0               bridge              local
7e83f7595aac        none                null                local
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ifconfig             #Note the new bridge interface, whose address is the "192.168.200.254" configured above
br-3d42817e3691: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.200.254  netmask 255.255.255.0  broadcast 192.168.200.255
        ether 02:42:db:6f:71:5d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.100.254  netmask 255.255.255.0  broadcast 192.168.100.255
        ether 02:42:16:4f:26:da  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        ether 08:00:27:e0:bb:66  txqueuelen 1000  (Ethernet)
        RX packets 104555  bytes 139228111 (132.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13542  bytes 860664 (840.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.30.1.101  netmask 255.255.255.0  broadcast 172.30.1.255
        ether 08:00:27:c1:c7:46  txqueuelen 1000  (Ethernet)
        RX packets 2478  bytes 200047 (195.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1999  bytes 391091 (381.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node101.yinzhengjie.org.cn ~]# 
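Incidentally, the br-3d42817e3691 interface name above is just "br-" plus the first 12 characters of the network ID, which is easy to confirm (a small sketch):

[root@node101.yinzhengjie.org.cn ~]# docker network ls --filter name=mybr0 -q    #Prints the truncated network ID (3d42817e3691 here), matching the br-3d42817e3691 interface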
[root@node101.yinzhengjie.org.cn ~]# ifconfig 
br-3d42817e3691: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.200.254  netmask 255.255.255.0  broadcast 192.168.200.255
        ether 02:42:db:6f:71:5d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.100.254  netmask 255.255.255.0  broadcast 192.168.100.255
        ether 02:42:16:4f:26:da  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        ether 08:00:27:e0:bb:66  txqueuelen 1000  (Ethernet)
        RX packets 104555  bytes 139228111 (132.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13542  bytes 860664 (840.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.30.1.101  netmask 255.255.255.0  broadcast 172.30.1.255
        ether 08:00:27:c1:c7:46  txqueuelen 1000  (Ethernet)
        RX packets 2623  bytes 211871 (206.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2102  bytes 408675 (399.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ifconfig br-3d42817e3691 down              #Bring the interface down before renaming it
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ip link set dev br-3d42817e3691 name docker1      #Rename our custom bridge interface to docker1
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ifconfig -a
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.100.254  netmask 255.255.255.0  broadcast 192.168.100.255
        ether 02:42:16:4f:26:da  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.200.254  netmask 255.255.255.0  broadcast 192.168.200.255
        ether 02:42:db:6f:71:5d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        ether 08:00:27:e0:bb:66  txqueuelen 1000  (Ethernet)
        RX packets 104555  bytes 139228111 (132.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13542  bytes 860664 (840.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.30.1.101  netmask 255.255.255.0  broadcast 172.30.1.255
        ether 08:00:27:c1:c7:46  txqueuelen 1000  (Ethernet)
        RX packets 2716  bytes 219575 (214.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2165  bytes 419525 (409.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node101.yinzhengjie.org.cn ~]# 
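Renaming the kernel interface by hand like this does not survive a reboot or a network re-creation, and note the renamed docker1 interface is still administratively down. If a friendly bridge name is wanted, it is cleaner to set it when the network is created, via the bridge driver's com.docker.network.bridge.name option (a hedged sketch; mybr1/docker2 and the subnet are made-up example names):

[root@node101.yinzhengjie.org.cn ~]# docker network create -d bridge --subnet "192.168.210.0/24" --gateway "192.168.210.254" -o com.docker.network.bridge.name=docker2 mybr1    #The host-side bridge is created directly as "docker2" instead of "br-<id>"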
[root@node101.yinzhengjie.org.cn ~]# docker run --name c1 -it --net mybr0 --rm busybox:latest    #Create a container named c1 attached to our custom network mybr0
/ # 
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:C8:01  
          inet addr:192.168.200.1  Bcast:192.168.200.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # 
[root@node101.yinzhengjie.org.cn ~]# docker run --name c2 -it --net bridge --rm busybox        #Create a container named c2 on the default bridge network
/ # 
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:64:01  
          inet addr:192.168.100.1  Bcast:192.168.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # 
/ # ping 192.168.200.1          #c2 cannot reach container c1 on our custom "mybr0" network. If IP forwarding ("/proc/sys/net/ipv4/ip_forward") is enabled, the problem is most likely the iptables rules and an accept rule has to be added by hand; see the troubleshooting sketch after the ping output below.
PING 192.168.200.1 (192.168.200.1): 56 data bytes
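A hedged troubleshooting sketch for the failed ping above. The chain names assume a reasonably recent Docker (DOCKER-USER and DOCKER-ISOLATION-STAGE-1 exist since roughly 17.06/18.03), and br-3d42817e3691 stands for the current kernel name of the mybr0 bridge (docker1 after the rename above):

[root@node101.yinzhengjie.org.cn ~]# cat /proc/sys/net/ipv4/ip_forward                 #Must be 1, otherwise the host will not forward between the two bridges at all
[root@node101.yinzhengjie.org.cn ~]# iptables -nvL DOCKER-ISOLATION-STAGE-1            #Docker installs DROP rules here to isolate different bridge networks from each other
[root@node101.yinzhengjie.org.cn ~]# iptables -I DOCKER-USER -i docker0 -o br-3d42817e3691 -j ACCEPT    #If cross-bridge traffic is really wanted, accept it in DOCKER-USER (plus the reverse direction below)
[root@node101.yinzhengjie.org.cn ~]# iptables -I DOCKER-USER -i br-3d42817e3691 -o docker0 -j ACCEPT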

 

