Installing CentOS 7 and Docker 1.12.3, with a Simple Docker Swarm Cluster Example


1. Environment Preparation

The examples in this article use four machines; their hostnames and IP addresses are:

c1 -> 10.0.0.31
c2 -> 10.0.0.32
c3 -> 10.0.0.33
c4 -> 10.0.0.34

The /etc/hosts file is the same on all four machines; c1 is shown as an example:

[root@c1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.31 c1
10.0.0.32 c2
10.0.0.33 c3
10.0.0.34 c4

1.1. Installing 64-bit CentOS 7

Taking c1 as an example: use the English version during installation, then click Continue.

Click DATE & TIME under LOCALIZATION, select the Asia/Shanghai time zone, and click Done.

Click INSTALLATION DESTINATION under SYSTEM, select your disk, then in the radio buttons below choose "I will configure partitioning" and click Done, so that we can customize the disk and partitions ourselves.

Click "Click here to create them automatically" and the installer creates the recommended partition layout for us.

Delete the /home mount point and add its space to /, with xfs as the file system type, using all of the disk space, then click Update Settings; this makes sure there is enough room for the software installed later. Finally, click the Done button in the top-left corner.

xfs has been available since CentOS 7.0. The older ext4 is stable, but it supports at most roughly four billion files, and a single file can be at most 16 TB (with a 4 KB block size). XFS uses 64-bit space management, so a file system can scale to the EB range.

On a server used in production, always put the data disk on a separate partition, so that data stays intact if the system breaks. For example, you can create an additional partition such as /data dedicated to storing data.
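
After installation you can confirm which file system a mount point actually uses; `df -T` prints a Type column:

```shell
# Show the file system type of the root mount
# (xfs on a default CentOS 7 install as partitioned above)
df -T /
```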

Click Accept Changes in the pop-up window.

Set the machine's Host Name (under NETWORK & HOST NAME); for this machine we use c1.

Finally, click Begin Installation in the bottom-right corner. While the installation runs you can set the root password, and you can also create additional users.

1.2. Network Configuration

c1 is shown as an example below.

[root@c1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static     #use a static IP address
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth0
UUID=e57c6a58-1951-4cfa-b3d1-cf25c4cdebdd
DEVICE=eth0
ONBOOT=yes  #bring the interface up at boot
IPADDR0=10.0.0.31    #IP address (matches the c1 entry in /etc/hosts)
PREFIX0=24 #prefix length (subnet mask)
GATEWAY0=10.0.0.1    #default gateway (adjust to your environment)
DNS1=10.0.0.1    #DNS servers (adjust to your environment)
DNS2=8.8.8.8

Restart the network:

[root@c1 ~]# service network restart

Change the yum source to the Aliyun mirror:

[root@c1 ~]# yum install -y wget
[root@c1 ~]# cd /etc/yum.repos.d/
[root@c1 yum.repos.d]# mv CentOS-Base.repo CentOS-Base.repo.bak
[root@c1 yum.repos.d]# wget http://mirrors.aliyun.com/repo/Centos-7.repo
[root@c1 yum.repos.d]# wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
[root@c1 yum.repos.d]# yum clean all
[root@c1 yum.repos.d]# yum makecache

Install the networking and basic tool packages:

[root@c1 ~]# yum install net-tools checkpolicy gcc dkms foomatic openssh-server bash-completion -y

1.3. Changing the hostname

Set the hostname on each of the four machines in turn; c1 is shown as an example.

[root@localhost ~]# hostnamectl --static set-hostname c1
[root@localhost ~]# hostnamectl status
   Static hostname: c1
         Icon name: computer-vm
           Chassis: vm
        Machine ID: e4ac9d1a9e9b4af1bb67264b83da59e4
           Boot ID: a128517ed6cb41d083da61de5951a109
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-327.36.3.el7.x86_64
      Architecture: x86-64
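
The hostnamectl step has to be repeated on every node. Once passwordless SSH (section 1.4) is configured, it can be scripted from a single machine; this sketch only echoes the ssh commands (drop the `echo` to actually run them):

```shell
# Dry run: print the hostnamectl command for each of the four nodes.
# Remove 'echo' to execute them over ssh once passwordless login works.
for N in $(seq 1 4); do
  echo ssh c$N hostnamectl --static set-hostname c$N
done
```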

1.4. Configuring passwordless SSH login

Run the following on each of the four machines in turn; c1 is shown as an example.

[root@c1 ~]# ssh-keygen
#press Enter at every prompt

Modify the sshd configuration on the machines that will accept the passwordless logins:

[root@c1 ~]# vi /etc/ssh/sshd_config
#find the following lines and remove the leading # comment marker
  RSAAuthentication yes
  PubkeyAuthentication yes
  AuthorizedKeysFile  .ssh/authorized_keys
[root@c1 ~]# systemctl restart sshd

Copy the key generated by ssh-keygen to each of the machines (including c1 itself); the commands below are run from c1:

[root@c1 ~]# ssh-copy-id c1
The authenticity of host 'c1 (10.0.0.31)' can't be established.
ECDSA key fingerprint is 22:84:fe:22:c2:e1:81:a6:77:d2:dc:be:7b:b7:bf:b8.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c1'"
and check to make sure that only the key(s) you wanted were added.

[root@c1 ~]# ssh-copy-id c2
The authenticity of host 'c2 (10.0.0.32)' can't be established.
ECDSA key fingerprint is 22:84:fe:22:c2:e1:81:a6:77:d2:dc:be:7b:b7:bf:b8.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c2'"
and check to make sure that only the key(s) you wanted were added.

[root@c1 ~]# ssh-copy-id c3
The authenticity of host 'c3 (10.0.0.33)' can't be established.
ECDSA key fingerprint is 22:84:fe:22:c2:e1:81:a6:77:d2:dc:be:7b:b7:bf:b8.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c3's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c3'"
and check to make sure that only the key(s) you wanted were added.

[root@c1 ~]# ssh-copy-id c4
The authenticity of host 'c4 (10.0.0.34)' can't be established.
ECDSA key fingerprint is 22:84:fe:22:c2:e1:81:a6:77:d2:dc:be:7b:b7:bf:b8.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c4's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c4'"
and check to make sure that only the key(s) you wanted were added.

Test whether the keys are configured correctly:

[root@c1 ~]# for N in $(seq 1 4); do ssh c$N hostname; done;
c1
c2
c3
c4
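
With passwordless login working, the same loop pattern can push files from c1 to the other nodes, for example to keep /etc/hosts in sync (dry run shown; drop the `echo` to execute):

```shell
# Dry run: print the scp command that would copy /etc/hosts to c2..c4.
for N in $(seq 2 4); do
  echo scp /etc/hosts c$N:/etc/hosts
done
```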

Install the ntp time synchronization tool and git:

[root@c1 ~]# for N in $(seq 1 4); do ssh c$N yum install ntp git -y; done;

2. Installing Docker 1.12.3 and Initial Configuration

Packages for every Docker release can be obtained on GitHub: https://github.com/docker/docker/releases

All of the core Docker packages are available at this link: http://yum.dockerproject.org/repo/main/centos/7/Packages/

2.1. Installing Docker 1.12.3

Installing directly from Docker's official yum repository is not recommended, because it chooses the Docker version based on the system version and does not let you pin a specific version. Run the following on each of the four machines; the whole command can be copied and pasted into the shell:

mkdir -p ~/_src \
&& cd ~/_src \
&& wget http://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-selinux-1.12.3-1.el7.centos.noarch.rpm \
&& wget http://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-1.12.3-1.el7.centos.x86_64.rpm \
&& wget http://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-debuginfo-1.12.3-1.el7.centos.x86_64.rpm \
&& yum localinstall -y docker-engine-selinux-1.12.3-1.el7.centos.noarch.rpm docker-engine-1.12.3-1.el7.centos.x86_64.rpm docker-engine-debuginfo-1.12.3-1.el7.centos.x86_64.rpm

2.2. Verifying that Docker installed correctly

On CentOS 7, Docker 1.12 uses docker as the client program and dockerd as the server program by default. Since the daemon is not running yet, docker version only shows the client information:

[root@c1 _src]# docker version
Client:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:
 OS/Arch:      linux/amd64
Cannot connect to the Docker daemon. Is the docker daemon running on this host?

2.3. Starting the Docker daemon

In Docker 1.12 the default daemon program is dockerd; you can run dockerd directly, or manage the service with the system's systemd. Note that this runs with the default parameters everywhere: for example, the private network defaults to 172.17.0.0/16 and the bridge to docker0.

[root@c1 _src]# dockerd
INFO[0000] libcontainerd: new containerd process, pid: 6469
WARN[0000] containerd: low RLIMIT_NOFILE changing to max  current=1024 max=4096
WARN[0001] devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section.
WARN[0001] devmapper: Base device already exists and has filesystem xfs on it. User specified filesystem  will be ignored.
INFO[0001] [graphdriver] using prior storage driver "devicemapper"
INFO[0001] Graph migration to content-addressability took 0.00 seconds
WARN[0001] mountpoint for pids not found
INFO[0001] Loading containers: start.
INFO[0001] Firewalld running: true
INFO[0001] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address

INFO[0001] Loading containers: done.
INFO[0001] Daemon has completed initialization
INFO[0001] Docker daemon                                 commit=6b644ec graphdriver=devicemapper version=1.12.3
INFO[0001] API listen on /var/run/docker.sock

Stop the foreground dockerd from the previous step with Ctrl-C, then enable and start the docker service through the system's systemctl:

[root@c1 _src]# systemctl enable docker && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Use dockerd --help to view the startup options:

[root@c1 _src]# dockerd --help
Usage: dockerd [OPTIONS]

A self-sufficient runtime for containers.

Options:

  --add-runtime=[]                         Register an additional OCI compatible runtime
  --api-cors-header                        Set CORS headers in the remote API
  --authorization-plugin=[]                Authorization plugins to load
  -b, --bridge                             #network bridge for containers (default docker0); another interface may be specified
  --bip                                    #bridge IP, i.e. the private network used by containers
  --cgroup-parent                          #parent cgroup for all containers
  --cluster-advertise                      #address or name advertised to the cluster
  --cluster-store                          #URL of the distributed storage backend
  --cluster-store-opt=map[]                #options for the cluster store
  --config-file=/etc/docker/daemon.json    #daemon configuration file
  -D                                       #enable debug mode
  --default-gateway                        #default IPv4 gateway for containers (--default-gateway-v6 for IPv6)
  --dns=[]                                 #DNS servers to use
  --dns-opt=[]                             #DNS options to use
  --dns-search=[]                          #DNS search domains to use
  --exec-opt=[]                            #runtime execution options
  --exec-root=/var/run/docker              #root directory for execution state files
  --fixed-cidr                             #subnet for fixed IPv4 addresses
  -G, --group=docker                       #group that owns the docker socket
  -g, --graph=/var/lib/docker              #root of the docker runtime directory
  -H, --host=[]                            #sockets the daemon listens on
  --icc=true                               #allow inter-container communication; set to false where containers must be isolated
  --insecure-registry=[]                   #addresses of insecure (private) registries
  --ip=0.0.0.0                             #default IP when binding container ports (useful on multi-homed hosts)
  --ip-forward=true                        #enable net.ipv4.ip_forward in the kernel
  --ip-masq=true                           #enable IP masquerading (containers reach external hosts without exposing their own IP)
  --iptables=true                          #allow docker to add iptables rules
  -l, --log-level=info                     #logging level
  --live-restore                           #keep containers running while the daemon restarts (new in 1.12)
  --log-driver=json-file                   #default log driver for containers
  --max-concurrent-downloads=3             #maximum concurrent downloads per pull
  --max-concurrent-uploads=5               #maximum concurrent uploads per push
  --mtu                                    #MTU of the container network
  --oom-score-adjust=-500                  #OOM score adjustment for the daemon (range -1000 to 1000)
  -p, --pidfile=/var/run/docker.pid        #path of the pid file
  -s, --storage-driver                     #storage driver to use
  --selinux-enabled                        #enable SELinux support
  --storage-opt=[]                         #storage driver options
  --swarm-default-advertise-addr           #default address advertised for swarm
  --tls                                    #use TLS
  --tlscacert=~/.docker/ca.pem             #trust only certificates signed by this CA
  --tlscert=~/.docker/cert.pem             #path of the TLS certificate file
  --tlskey=~/.docker/key.pem               #path of the TLS key file
  --userland-proxy=true                    #use a userland proxy for loopback traffic
  --userns-remap                           #user/group for user namespace remapping
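
Instead of command-line flags, the same settings can live in the file named by --config-file. Below is a minimal sketch equivalent to the ExecStart flags used in section 2.4 (the mirror URL and registry are the values from that section; the file is written to /tmp here for illustration, while the real path is /etc/docker/daemon.json):

```shell
# daemon.json equivalent of:
#   dockerd -s=overlay --registry-mirror=https://7rgqloza.mirror.aliyuncs.com --insecure-registry=localhost:5000
# Note: an option set here must NOT also be passed on the dockerd command line,
# or the daemon refuses to start.
cat > /tmp/daemon.json <<'EOF'
{
  "storage-driver": "overlay",
  "registry-mirrors": ["https://7rgqloza.mirror.aliyuncs.com"],
  "insecure-registries": ["localhost:5000"]
}
EOF
cat /tmp/daemon.json
```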

2.4. Modifying the Docker configuration file

Taking c1 as an example, append our custom parameters after ExecStart; the other three machines need the same change.

[root@c1 ~]# vi /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# Overlayfs is similar to AUFS, but performs better and uses memory more efficiently.
# Also add the Aliyun docker registry mirror.
ExecStart=/usr/bin/dockerd -s=overlay --registry-mirror=https://7rgqloza.mirror.aliyuncs.com --insecure-registry=localhost:5000 -H unix:///var/run/docker.sock --pidfile=/var/run/docker.pid
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

Restart the docker service so that the new configuration takes effect:

[root@c1 ~]# systemctl daemon-reload && systemctl restart docker.service

3. Creating a Swarm Cluster

10.0.0.31 (hostname: c1) as swarm manager1

10.0.0.32 (hostname: c2) as swarm manager2

10.0.0.33 (hostname: c3) as swarm agent1

10.0.0.34 (hostname: c4) as swarm agent2

3.1. Opening firewall ports

Before configuring the cluster, open the firewall ports; copy and paste the following commands into the shell on all 4 machines. (Port 2377/tcp is used for cluster management, 7946/tcp and 7946/udp for communication among nodes, and 4789 for overlay network traffic.)

firewall-cmd --zone=public --add-port=2377/tcp --permanent && \
firewall-cmd --zone=public --add-port=7946/tcp --permanent && \
firewall-cmd --zone=public --add-port=7946/udp --permanent && \
firewall-cmd --zone=public --add-port=4789/tcp --permanent && \
firewall-cmd --zone=public --add-port=4789/udp --permanent && \
firewall-cmd --reload 

Using c1 as an example, check which ports are open:

[root@c1 ~]# firewall-cmd --list-ports
4789/tcp 4789/udp 7946/tcp 2377/tcp 7946/udp

3.2. Setting up the swarm cluster and adding the other 3 machines

Initialize the swarm cluster on c1, using --listen-addr to specify the IP and port to listen on:

[root@c1 ~]# docker swarm init --listen-addr 0.0.0.0
Swarm initialized: current node (73ju72f6nlyl9kiib7z5r0bsk) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-47dxwelbdopq8915rjfr0hxe6t9cebsm0q30miro4u4qcwbh1c-4f1xl8ici0o32qfyru9y6wepv \
    10.0.0.31:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

docker swarm join-token manager shows the token for joining the swarm as a manager.

Check the result; you can see that we currently have only one node:

[root@c1 ~]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
73ju72f6nlyl9kiib7z5r0bsk *  c1        Ready   Active        Leader

Add the other 3 machines to the cluster with the following commands; copy and paste them into the shell on c1:

for N in $(seq 2 4); \
do ssh c$N \
docker swarm join \
--token SWMTKN-1-47dxwelbdopq8915rjfr0hxe6t9cebsm0q30miro4u4qcwbh1c-4f1xl8ici0o32qfyru9y6wepv \
10.0.0.31:2377 \
;done

Check the cluster nodes again; the other machines have joined the cluster, and c1 has the Leader status:

[root@c1 ~]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
2qn7aw9ihbjphtnm1toaoevq8    c4        Ready   Active
4cxm0w5j3x4mqrj8f1kdrgln5 *  c1        Ready   Active        Leader
4wqpz2v3b71q0ohzdifi94ma9    c2        Ready   Active
9t9ceme3w14o4gfnljtfrkpgp    c3        Ready   Active

Let's also make c2 a manager node of the cluster. First, on c1, view the token for joining as a manager:

[root@c1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-47dxwelbdopq8915rjfr0hxe6t9cebsm0q30miro4u4qcwbh1c-b7k3agnzez1bjj3nfz2h93xh0 \
    10.0.0.31:2377

With the token from c1, first have c2 leave the swarm, then join it again as a manager:

[root@c2 ~]# docker swarm leave
Node left the swarm.
[root@c2 ~]# docker swarm join \
>     --token SWMTKN-1-47dxwelbdopq8915rjfr0hxe6t9cebsm0q30miro4u4qcwbh1c-b7k3agnzez1bjj3nfz2h93xh0 \
>     10.0.0.31:2377
This node joined a swarm as a manager.

Now entering docker node ls on either c1 or c2 shows the latest cluster state; c2's MANAGER STATUS has changed to Reachable:

[root@c1 ~]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
2qn7aw9ihbjphtnm1toaoevq8    c4        Ready   Active
4cxm0w5j3x4mqrj8f1kdrgln5 *  c1        Ready   Active        Leader
4wqpz2v3b71q0ohzdifi94ma9    c2        Down    Active
9t9ceme3w14o4gfnljtfrkpgp    c3        Ready   Active
ai6peof1e9wyovp8uxn5b2ufe    c2        Ready   Active        Reachable

Because we used docker swarm leave earlier, the old c2 entry is in the Down state; it can be deleted with the docker node rm <ID> command.
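
A small sketch of cleaning up such stale entries: filter the Down rows out of docker node ls and feed the IDs to docker node rm. Since this needs a live manager to run for real, it is shown here against a captured sample; the awk column positions assume a node row without the `*` marker:

```shell
# Extract the IDs of nodes whose STATUS column is "Down" from sample
# 'docker node ls' output; on a real manager, pipe the IDs to 'docker node rm'.
sample='ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
4wqpz2v3b71q0ohzdifi94ma9    c2        Down    Active'
echo "$sample" | awk 'NR > 1 && $3 == "Down" {print $1}'
# live version: docker node ls | awk 'NR>1 && $3=="Down" {print $1}' | xargs -r docker node rm
```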

3.3. Creating an overlay network

On a single server, all of the application's containers run on one host, so the containers can reach each other over the network. Now that our cluster has 4 hosts, how do we make sure containers on different hosts can communicate?

The swarm cluster already solves this problem for us, by using overlay networks.

Before Docker 1.12, a swarm cluster needed an external key-value store (consul, etcd, etc.) to synchronize the network configuration and keep all containers in the same network. Docker 1.12 has this store built in, along with integrated support for overlay networks.

View the existing networks:

[root@c1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
23ee2bb5a2a1        bridge              bridge              local
fd17ed8db4d8        docker_gwbridge     bridge              local
6878c36aa311        host                host                local
08tt2s4pqf96        ingress             overlay             swarm
7c18e57e24f2        none                null                local

You can see that swarm already has a default overlay network named ingress, used by swarm internally; in this article we will create a new one.

Create an overlay network named idoall-org:

[root@c1 ~]# docker network create --subnet=10.0.9.0/24 --driver overlay idoall-org
e63ca0d7zcbxqpp4svlv5x04v
[root@c1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5e47ba02a985        bridge              bridge              local
fd17ed8db4d8        docker_gwbridge     bridge              local
6878c36aa311        host                host                local
e63ca0d7zcbx        idoall-org          overlay             swarm
08tt2s4pqf96        ingress             overlay             swarm
7c18e57e24f2        none                null                local

The new network (idoall-org) has been created.

--subnet specifies the subnet for the overlay network; the parameter can also be omitted.

Use docker network inspect idoall-org to view the information of the network we just added:

[root@c1 ~]# docker network inspect idoall-org
[
    {
        "Name": "idoall-org",
        "Id": "e63ca0d7zcbxqpp4svlv5x04v",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.9.0/24",
                    "Gateway": "10.0.9.1"
                }
            ]
        },
        "Internal": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "257"
        },
        "Labels": null
    }
]

3.4. Running containers on the network

alpine鏡像在idoall-org網絡上啟動3個實例

[root@c1 ~]# docker service create --name idoall-org-test-ping --replicas 3 --network=idoall-org alpine ping baidu.com
avcrdsntx8b8ei091lq5cl76y
[root@c1 ~]# docker service ps idoall-org-test-ping
ID                         NAME                    IMAGE   NODE  DESIRED STATE  CURRENT STATE           ERROR
42vigh5lxkvgge9zo27hfah88  idoall-org-test-ping.1  alpine  c4    Running        Starting 1 seconds ago
aovr8r7r7lykzmxqs30e8s4ee  idoall-org-test-ping.2  alpine  c3    Running        Starting 1 seconds ago
c7pv2o597qycsqzqzgjwwtw8b  idoall-org-test-ping.3  alpine  c1    Running        Running 3 seconds ago

You can see the 3 instances are deployed across the three machines c1, c3 and c4.

You can also use --mode global to run the service on every swarm node; this is introduced later.

3.5. Scaling the application

Suppose that while the program is running we find resources are insufficient; we can expand with scale. There are currently 3 instances; let's change that to 4:

[root@c1 ~]# docker service scale idoall-org-test-ping=4
idoall-org-test-ping scaled to 4
[root@c1 ~]# docker service ps idoall-org-test-ping
ID                         NAME                    IMAGE   NODE  DESIRED STATE  CURRENT STATE          ERROR
42vigh5lxkvgge9zo27hfah88  idoall-org-test-ping.1  alpine  c4    Running        Running 4 minutes ago
aovr8r7r7lykzmxqs30e8s4ee  idoall-org-test-ping.2  alpine  c3    Running        Running 4 minutes ago
c7pv2o597qycsqzqzgjwwtw8b  idoall-org-test-ping.3  alpine  c1    Running        Running 4 minutes ago
72of5dfm67duccxsdyt1e25qd  idoall-org-test-ping.4  alpine  c2    Running        Running 1 seconds ago

3.6. Pinning a service to specific nodes

In the examples above, no matter how many instances there are, swarm automatically decides which node each one runs on. By passing the --constraint parameter when creating a service, we can restrict where it runs; for example, to run a service only on c4:

[root@c1 ~]# docker service create \
--network idoall-org \
--name idoall-org \
--constraint 'node.hostname==c4' \
-p 9000:9000 \
idoall/golang-revel

Once the service is up, browsing to http://10.0.0.31:9000/, or any of the IPs from .31 to .34, shows the same result; Docker Swarm performs load balancing automatically, which is covered in more detail shortly.

Depending on your network, downloading the image may be slow; the service named idoall-org can be monitored with the following command:

[root@c1 ~]# watch docker service ps idoall-org

Besides hostname, other node attributes can be used to build constraint expressions; see the table below:

Node attribute   Matches                    Example
node.id          node ID                    node.id == 2ivku8v2gvtg4
node.hostname    node hostname              node.hostname != c2
node.role        node role: manager         node.role == manager
node.labels      user-defined node labels   node.labels.security == high
engine.labels    Docker Engine labels       engine.labels.operatingsystem == ubuntu 14.04

We can also add a label to a machine with the docker node update command, for example:

[root@c1 ~]# docker node update --label-add site=idoall-org c1
[root@c2 ~]# docker node inspect c1
[
    {
        "ID": "4cxm0w5j3x4mqrj8f1kdrgln5",
        "Version": {
            "Index": 108
        },
        "CreatedAt": "2016-12-11T11:13:32.495274292Z",
        "UpdatedAt": "2016-12-11T12:00:05.956367412Z",
        "Spec": {
            "Labels": {
                "site": "idoall-org"
...
]

For an existing service, a constraint can be added with docker service update, for example:

[root@c1 ~]# docker service update registry --constraint-add 'node.labels.site==idoall-org'

3.7. Testing connectivity across the swarm network

Run on c1:

[root@c1 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
c683692b0153        alpine:latest       "ping baidu.com"    25 minutes ago      Up 25 minutes                                   idoall-org-test-ping.4.c7pv2o597qycsqzqzgjwwtw8b
[root@c1 ~]# docker exec -it c683 sh
/ # ping idoall-org.1.9ne6hxjhvneuhsrhllykrg7zm
PING idoall-org.1.9ne6hxjhvneuhsrhllykrg7zm (10.0.9.8): 56 data bytes
64 bytes from 10.0.9.8: seq=0 ttl=64 time=1.080 ms
64 bytes from 10.0.9.8: seq=1 ttl=64 time=1.349 ms
64 bytes from 10.0.9.8: seq=2 ttl=64 time=1.026 ms

idoall-org.1.9ne6hxjhvneuhsrhllykrg7zm is the name of the idoall-org container running on c4.

When entering a container with exec, you only need the first 4 characters of the container id.

Run on c4:

[root@c4 ~]# docker ps -a
CONTAINER ID        IMAGE                                        COMMAND                  CREATED              STATUS              PORTS               NAMES
1ead9bb757a0        idoall/docker-golang1.7.4-revel0.13:latest   "/usr/bin/supervisord"   About a minute ago   Up 58 seconds                           idoall-org.1.9ne6hxjhvneuhsrhllykrg7zm
033531b30b79        alpine:latest                                "ping baidu.com"         About a minute ago   Up About a minute                       idoall-org-test-ping.1.6st5xvehh7c3bwaxsen3r4gpn
[root@c4 ~]# docker exec -it 1ead sh
bash-4.3# ping idoall-org-test-ping.4.cirnop0kxbuxiyjh87ii6hh4x
PING idoall-org-test-ping.4.cirnop0kxbuxiyjh87ii6hh4x (10.0.9.6): 56 data bytes
64 bytes from 10.0.9.6: seq=0 ttl=64 time=0.531 ms
64 bytes from 10.0.9.6: seq=1 ttl=64 time=0.700 ms
64 bytes from 10.0.9.6: seq=2 ttl=64 time=0.756 ms

3.8. Testing docker swarm's built-in load balancing

Use the --mode global parameter to create a web service on every node:

[root@c1 ~]# docker service create --name whoami --mode global -p 8000:8000 jwilder/whoami
1u87lrzlktgskt4g6ae30xzb8
[root@c1 ~]# docker service ps whoami
ID                         NAME        IMAGE           NODE  DESIRED STATE  CURRENT STATE           ERROR
cjf5w0pv5bbrph2gcvj508rvj  whoami      jwilder/whoami  c2    Running        Running 16 minutes ago
dokh8j4z0iuslye0qa662axqv   \_ whoami  jwilder/whoami  c3    Running        Running 16 minutes ago
dumjwz4oqc5xobvjv9rosom0w   \_ whoami  jwilder/whoami  c1    Running        Running 16 minutes ago
bbzgdau14p5b4puvojf06gn5s   \_ whoami  jwilder/whoami  c4    Running        Running 16 minutes ago

Run the following on any of the machines; each request returns a different value, and after 4 requests it cycles back around to the first machine:

[root@c1 ~]# curl $(hostname --all-ip-addresses | awk '{print $1}'):8000
I'm 8c2eeb5d420f
[root@c1 ~]# curl $(hostname --all-ip-addresses | awk '{print $1}'):8000
I'm 0b56c2a5b2a4
[root@c1 ~]# curl $(hostname --all-ip-addresses | awk '{print $1}'):8000
I'm 000982389fa0
[root@c1 ~]# curl $(hostname --all-ip-addresses | awk '{print $1}'):8000
I'm db8d3e839de5
[root@c1 ~]# curl $(hostname --all-ip-addresses | awk '{print $1}'):8000
I'm 8c2eeb5d420f
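
The rotation seen above is plain round-robin over the service's tasks: request i is served by task (i mod 4). A small simulation using the four container IDs from the output above:

```shell
# Simulate the swarm ingress round-robin: request i is served by
# backend (i mod 4); after four requests it wraps back to the first task.
backends=(8c2eeb5d420f 0b56c2a5b2a4 000982389fa0 db8d3e839de5)
for i in $(seq 0 5); do
  echo "request $i -> ${backends[$((i % 4))]}"
done
```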

Further reading

http://www.lxy520.net/2016/07/02/shi-yong-docker-1-12-da-jian-duo-zhu-ji-docker-swarmji-qun/




Author: 迦壹. Original post: [Installing CentOS 7 and Docker 1.12.3, with a simple Docker Swarm cluster example](http://idoall.org/blog/post/lion/Centos7%E5%AE%89%E8%A3%85Docker1.12.3%EF%BC%8C%E4%BB%A5%E5%8F%8ADocker-Swarm%E9%9B%86%E7%BE%A4). Reproduction is permitted, but the original source, author information, and this copyright notice must be indicated with a hyperlink. Thanks for your cooperation!


