Source: zhuhaiqing.info
Single-node etcd
Set an environment variable for the host IP:
export HostIP="192.168.12.50"
Run the following command, which opens etcd's client ports 4001 and 2379 and its peer port 2380.
If this is the first time the command is run, Docker will pull the latest official etcd image.
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd \
 -name etcd0 \
 -advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
 -initial-advertise-peer-urls http://${HostIP}:2380 \
 -listen-peer-urls http://0.0.0.0:2380 \
 -initial-cluster-token etcd-cluster-1 \
 -initial-cluster etcd0=http://${HostIP}:2380 \
 -initial-cluster-state new
Use either of the two client ports above to check the member list:
curl -L http://127.0.0.1:2379/v2/members
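As an extra smoke test (not part of the original steps; the key name message and its value are arbitrary), you can write and read a key through the v2 keys API:
curl -L http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello world"   # write a test key
curl -L http://127.0.0.1:2379/v2/keys/message   # read it back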
Multi-node etcd cluster
Configuring a multi-node etcd cluster is similar to the single-node case. The main difference is the -initial-cluster parameter, which lists the peer URL of every member:
Run the following command on node 01:
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --restart=always \
 --name etcd quay.io/coreos/etcd \
 -name etcd01 \
 -advertise-client-urls http://192.168.73.140:2379,http://192.168.73.140:4001 \
 -listen-client-urls http://0.0.0.0:2379 \
 -initial-advertise-peer-urls http://192.168.73.140:2380 \
 -listen-peer-urls http://0.0.0.0:2380 \
 -initial-cluster-token etcd-cluster \
 -initial-cluster "etcd01=http://192.168.73.140:2380,etcd02=http://192.168.73.137:2380" \
 -initial-cluster-state new
Run the following command on node 02:
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --restart=always \
 --name etcd quay.io/coreos/etcd \
 -name etcd02 \
 -advertise-client-urls http://192.168.73.137:2379,http://192.168.73.137:4001 \
 -listen-client-urls http://0.0.0.0:2379 \
 -initial-advertise-peer-urls http://192.168.73.137:2380 \
 -listen-peer-urls http://0.0.0.0:2380 \
 -initial-cluster-token etcd-cluster \
 -initial-cluster "etcd01=http://192.168.73.140:2380,etcd02=http://192.168.73.137:2380" \
 -initial-cluster-state new
To check cluster connectivity, run the following command on each node:
curl -L http://127.0.0.1:2379/v2/members
If everything is working, you will see information for both members, and the result should be identical on every node:
{"members":[{"id":"2bd5fcc327f74dd5","name":"etcd01","peerURLs":["http://192.168.73.140:2380"],"clientURLs":["http://192.168.73.140:2379","http://192.168.73.140:4001"]},{"id":"c8a9cac165026b12","name":"etcd02","peerURLs":["http://192.168.73.137:2380"],"clientURLs":["http://192.168.73.137:2379","http://192.168.73.137:4001"]}]}
Expanding the etcd cluster
On any existing etcd node, run the following command to register the new member with the cluster:
curl http://127.0.0.1:2379/v2/members -XPOST -H "Content-Type: application/json" -d '{"peerURLs": ["http://192.168.73.150:2380"]}'
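If etcdctl is installed on the node, the same registration can be done with its member command instead of curl (an equivalent alternative, not part of the original steps):
etcdctl member add etcd03 http://192.168.73.150:2380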
Start the etcd container on the new node. Note that the -initial-cluster-state parameter is now existing:
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --restart=always \
 --name etcd quay.io/coreos/etcd \
 -name etcd03 \
 -advertise-client-urls http://192.168.73.150:2379,http://192.168.73.150:4001 \
 -listen-client-urls http://0.0.0.0:2379 \
 -initial-advertise-peer-urls http://192.168.73.150:2380 \
 -listen-peer-urls http://0.0.0.0:2380 \
 -initial-cluster-token etcd-cluster \
 -initial-cluster "etcd01=http://192.168.73.140:2380,etcd02=http://192.168.73.137:2380,etcd03=http://192.168.73.150:2380" \
 -initial-cluster-state existing
Run a health check on any node:
[root@docker01 ~]# etcdctl cluster-health
member 2bd5fcc327f74dd5 is healthy: got healthy result from http://192.168.73.140:2379
member c8a9cac165026b12 is healthy: got healthy result from http://192.168.73.137:2379
cluster is healthy
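For reference, the reverse operation, removing a member, goes through the same v2 members API (an additional note, not part of the original steps; replace <memberID> with the hexadecimal id reported by /v2/members or by etcdctl):
curl http://127.0.0.1:2379/v2/members/<memberID> -XDELETE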
Calico deployment
First, download calicoctl onto each physical host from the releases page:
https://github.com/projectcalico/calico-containers/releases
Copy the downloaded calicoctl binary to /usr/local/bin.
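For example (a minimal sketch only; the v0.18.0 release tag is an assumption based on the calico/node image version shown below, so check the releases page for the exact asset URL of your version):
curl -L -o /usr/local/bin/calicoctl https://github.com/projectcalico/calico-containers/releases/download/v0.18.0/calicoctl
chmod +x /usr/local/bin/calicoctl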
On the first etcd node, run the following command:
[root@docker01 ~]# calicoctl node   # the first run downloads the calico/node image before starting it
Running Docker container with the following command:
docker run -d --restart=always --net=host --privileged --name=calico-node -e HOSTNAME=docker01 -e IP= -e IP6= -e CALICO_NETWORKING=true -e AS= -e NO_DEFAULT_POOLS= -e ETCD_AUTHORITY=127.0.0.1:2379 -e ETCD_SCHEME=http -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico calico/node:v0.18.0
Calico node is running with id: 60b284221a94b418509f86d3c8d7073e11ab3c2a3ca17e4efd2568e97791ff33
Waiting for successful startup
No IP provided. Using detected IP: 192.168.73.140
Calico node started successfully
On the second etcd node, run:
[root@docker02 ~]# calicoctl node   # again, the first run downloads the calico/node image
Running Docker container with the following command:
docker run -d --restart=always --net=host --privileged --name=calico-node -e HOSTNAME=docker02 -e IP= -e IP6= -e CALICO_NETWORKING=true -e AS= -e NO_DEFAULT_POOLS= -e ETCD_AUTHORITY=127.0.0.1:2379 -e ETCD_SCHEME=http -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico calico/node:v0.18.0
Calico node is running with id: 72e7213852e529a3588249d85f904e38a92d671add3cdfe5493687aab129f5e2
Waiting for successful startup
No IP provided. Using detected IP: 192.168.73.137
Calico node started successfully
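Note that calicoctl reaches etcd through the ETCD_AUTHORITY environment variable (visible in the generated docker run line above; it defaults to 127.0.0.1:2379). If you ever run calicoctl on a host that is not itself an etcd member, point it at one of the cluster nodes first, for example (the address here is node 01 of this setup):
export ETCD_AUTHORITY=192.168.73.140:2379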
On any one Calico node, run the following commands to configure the IP address pool:
[root@docker01 ~]# calicoctl pool remove 192.168.0.0/16   # remove the default pool
[root@docker01 ~]# calicoctl pool add 10.0.238.0/24 --nat-outgoing --ipip   # add a new IP pool; --ipip enables container traffic between hosts in different subnets, --nat-outgoing lets containers reach external networks
[root@docker01 ~]# calicoctl pool show   # check the result
On any Calico node, check the Calico status:
[root@docker01 ~]# calicoctl status
calico-node container is running. Status: Up 3 hours
Running felix version 1.4.0rc1
IPv4 BGP status
IP: 192.168.73.140    AS Number: 64511 (inherited)
+----------------+-------------------+-------+----------+-------------+
|  Peer address  |     Peer type     | State |  Since   |    Info     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.73.137 | node-to-node mesh |  up   | 09:18:51 | Established |
+----------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 address configured.
Configuring Docker container networking
Start one container on each of the two nodes without any network driver; Calico will configure the networking afterwards:
[root@docker01 ~]# docker run --name test01 -itd --log-driver none --net none daocloud.io/library/centos:6.6 /bin/bash
[root@docker02 ~]# docker run --name test02 -itd --log-driver none --net none daocloud.io/library/centos:6.6 /bin/bash
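Optionally, verify that each container starts with only a loopback interface before Calico touches it (an illustrative check, assuming the iproute utilities are available inside the centos:6.6 image):
[root@docker01 ~]# docker exec test01 ip addr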
On any Calico node, create a Calico profile:
[root@docker01 ~]# calicoctl profile add starboss
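To confirm the profile exists you can list the profiles known to Calico (an optional check added here for convenience):
[root@docker01 ~]# calicoctl profile show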
Manually assign an IP to each container through Calico. Note that the IP must belong to the Calico pool configured above:
[root@docker01 ~]# calicoctl container add test01 10.0.238.10
IP 10.0.238.10 added to test01
[root@docker02 ~]# calicoctl container add test02 10.0.238.11
IP 10.0.238.11 added to test02
On each Calico node, add the containers that need to reach each other to the same profile:
[root@docker01 ~]# calicoctl container test01 profile set starboss
Profile(s) set to starboss.
[root@docker02 ~]# calicoctl container test02 profile set starboss
Profile(s) set to starboss.
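By default a profile allows traffic between its own members, which is what makes the ping test below succeed. If you want to inspect those rules (an optional step; this rule subcommand form is an assumption for the calicoctl 0.x series, so verify it against your version's help output):
[root@docker01 ~]# calicoctl profile starboss rule show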
On any node, inspect the endpoint configuration:
[root@docker01 ~]# calicoctl endpoint show --detailed
+----------+-----------------+------------------------------------------------------------------+----------------------------------+-----------------+-------------------+----------+--------+
| Hostname | Orchestrator ID | Workload ID                                                      | Endpoint ID                      | Addresses       | MAC               | Profiles | State  |
+----------+-----------------+------------------------------------------------------------------+----------------------------------+-----------------+-------------------+----------+--------+
| docker01 | docker          | 8f935b0441739f52334e9f16099a2b52e2c982e3aef3190e02dd7ce67e61a853 | 75b0e79a022211e6975c000c29308ed8 | 192.168.0.10/32 | 1e:14:2d:bf:51:f5 | starboss | active |
| docker02 | docker          | 3d0a8f39753537592f3e38d7604b0b6312039f3bf57cf13d91e953e7e058263e | 8efb263e022211e6a180000c295008af | 192.168.0.11/32 | ee:2b:c2:5e:b6:c5 | starboss | active |
+----------+-----------------+------------------------------------------------------------------+----------------------------------+-----------------+-------------------+----------+--------+
Test: from the container on one physical host, ping the container running on the other host:
[root@docker01 ~]# docker exec test01 ping 192.168.0.11
PING 192.168.0.11 (192.168.0.11) 56(84) bytes of data.
64 bytes from 192.168.0.11: icmp_seq=1 ttl=62 time=0.557 ms
64 bytes from 192.168.0.11: icmp_seq=2 ttl=62 time=0.603 ms
64 bytes from 192.168.0.11: icmp_seq=3 ttl=62 time=0.656 ms
64 bytes from 192.168.0.11: icmp_seq=4 ttl=62 time=0.386 ms
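For completeness, the same check can be run in the opposite direction, from test02 back to test01 (an optional extra step, using the address shown for test01 in the endpoint table above):
[root@docker02 ~]# docker exec test02 ping 192.168.0.10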