Docker version
docker 17.09
https://docs.docker.com/
appledeAir:~ apple$ docker version
Client: Docker Engine - Community
Version: 18.09.0
API version: 1.39
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:47:43 2018
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.0
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:55:00 2018
OS/Arch: linux/amd64
Experimental: false
vagrant
Create a Linux virtual machine
Create a directory:
mkdir centos7
vagrant init centos/7 # creates a Vagrantfile
vagrant up # start the VM
vagrant ssh # enter the VM
vagrant status
vagrant halt # shut it down
vagrant destroy # delete the machine
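A minimal sketch of the Vagrantfile that vagrant init centos/7 generates (the shell provision line is an optional assumption for pre-installing Docker, not part of the default file):
cat Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  # optional: pre-install Docker on first boot
  config.vm.provision "shell", inline: "curl -fsSL https://get.docker.com | sh"
end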
docker-machine: a tool that automatically installs Docker into virtual machines
docker-machine create demo # boots a VM named demo in VirtualBox (the default driver) with Docker installed
docker-machine ls # list the machines and whether they are running
docker-machine ssh demo # enter the machine
docker-machine create demo1 # create a second Docker-ready VM
docker-machine stop demo1
Docker playground: https://labs.play-with-docker.com/
Running Docker
docker run -dit ubuntu /bin/bash # detached, keeps running without exiting
docker exec -it 33 /bin/bash # 33 is a container-ID prefix
Add a regular user to the docker group so sudo is no longer needed:
sudo gpasswd -a alex docker
/etc/init.d/docker restart
Log out of the shell and back in (exit)
Verify with docker version
Creating your own image
FROM scratch
ADD app.py /
CMD ["/app.py"]
Build your own image:
docker build -t alex/helloworld .
List running containers:
docker container ls
docker container ls -a # include stopped ones
List the IDs of exited containers:
docker container ls -f "status=exited" -q
Remove a container:
docker container rm 89123
docker rm 89123
Or remove all containers at once:
docker rm $(docker container ls -aq)
Remove only the exited containers:
docker rm $(docker container ls -f "status=exited" -q)
Remove an unused image:
docker rmi 98766
Commit a container as a new image:
docker commit 12312312 alexhe/changed_a_lot:v1.0
docker image ls
docker history 901923123 # the image ID
Dockerfile example:
cat Dockerfile
FROM centos
ENV name Docker
CMD echo "hello $name"
Build an image from the Dockerfile:
docker build -t alexhe/firstblood:latest .
Pull from a registry:
docker pull ubuntu:18.04
Dockerfile example:
cat Dockerfile
FROM centos
RUN yum install -y vim
Dockerfile example:
FROM ubuntu
RUN apt-get update && apt-get install -y python
Dockerfile syntax review and best practices
FROM ubuntu:18.04
LABEL maintainer="alex@alexhe.net"
LABEL version="1.0"
LABEL description="This is comment"
RUN apt-get update && apt-get install -y vim \
    python-dev # every RUN adds a layer, so merge commands into one RUN
WORKDIR /root # change into the directory; it is created automatically if it does not exist
WORKDIR demo # now in /root/demo
ADD hello /
ADD test.tar.gz / # add to the root directory and unpack the archive
WORKDIR /root
ADD hello test/ # /root/test/hello
WORKDIR /root
COPY hello test/ # same result, but without ADD's extra behavior
In most cases COPY is preferable to ADD; ADD only adds extraction on top of COPY. To fetch remote files/directories, use curl or wget instead.
ENV MYSQL_VERSION 5.6 # define a constant
RUN apt-get install -y mysql-server="${MYSQL_VERSION}" && rm -rf /var/lib/apt/lists/* # reference the constant
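Pulling these practices together, a minimal sketch (package names and version pins are illustrative only):
cat Dockerfile
FROM ubuntu:18.04
LABEL maintainer="alex@alexhe.net"
ENV MYSQL_VERSION 5.6
# one RUN, one layer; clean the apt cache in the same layer
RUN apt-get update && apt-get install -y \
    vim \
    mysql-server="${MYSQL_VERSION}" \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /root/demo
COPY hello test/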
RUN vs CMD vs ENTRYPOINT
RUN: execute a command and create a new image layer
CMD: set the default command and arguments run when the container starts
ENTRYPOINT: set the command run when the container starts
Shell format
RUN apt-get install -y vim
CMD echo "hello docker"
ENTRYPOINT echo "hello docker"
Exec format
RUN ["apt-get","install","-y","vim"]
CMD ["/bin/echo","hello docker"]
ENTRYPOINT ["/bin/echo","hello docker"]
Example (note the pitfall):
FROM centos
ENV name Docker
ENTRYPOINT ["/bin/bash","-c","echo hello $name"] # correct. With ["echo","hello $name"] the container would still print the literal 'hello $name', with no variable substitution: exec format runs echo itself rather than a shell, so nothing expands the variable.
Contrast with the above:
FROM centos
ENV name Docker
ENTRYPOINT echo "hello $name" # works: prints 'hello Docker', because shell format runs the command through a shell, which expands the variable
CMD:
The command run by default at container start. If docker run specifies another command, CMD is ignored. If several CMDs are defined, only the last one takes effect.
ENTRYPOINT:
Runs the container as an application or service. It is not ignored and always executes. Best practice: write a shell script as the entrypoint:
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 27017
CMD ["mongod"]
Publishing an image
docker login
docker push alexhe/hello-world:latest
docker rmi alexhe/hello-world # delete the local copy
docker pull alexhe/hello-world:latest # pull it back
Local registry (private repository)
https://docs.docker.com/v17.09/registry/
1. Start the private registry:
docker run -d -p 5000:5000 -v /opt/registry:/var/lib/registry --restart always --name registry registry:2
2. Test from another machine: telnet x.x.x.x 5000
3. Push to the private registry
3.1 Build from the Dockerfile and tag it:
docker build -t x.x.x.x:5000/hello-world .
3.2 Allow the insecure private registry:
vim /etc/docker/daemon.json
{ "insecure-registries" : ["x.x.x.x:5000"] }
vim /lib/systemd/system/docker.service
EnvironmentFile=/etc/docker/daemon.json
/etc/init.d/docker restart
3.3 Push:
docker push x.x.x.x:5000/hello-world
3.4 Verify
The registry has an API: https://docs.docker.com/v17.09/registry/spec/api/#listing-repositories
GET /v2/_catalog
docker pull x.x.x.x:5000/hello-world
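For example, the catalog and tag-list endpoints can be queried directly (standard Registry v2 API paths):
curl http://x.x.x.x:5000/v2/_catalog
curl http://x.x.x.x:5000/v2/hello-world/tags/list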
Many Dockerfile examples on GitHub: https://github.com/docker-library/docs and https://docs.docker.com/engine/reference/builder/#add
Dockerfile example: install flask, copy app.py from the build context into /app/, change into /app, expose port 5000, run app.py
cat Dockerfile
FROM python:2.7
LABEL maintainer="alex he<alex@alexhe.net>"
RUN pip install flask
COPY app.py /app/
WORKDIR /app
EXPOSE 5000
CMD ["python", "app.py"]
cat app.py
from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello():
    return "hello docker"

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000)
docker build -t alexhe/flask-hello-world . # build the image
If the build fails:
docker run -it <image ID of the failing step> /bin/bash
then look around inside to see what went wrong
Finally, docker run -d alexhe/flask-hello-world # run the container in the background
Execute a command in a running container:
docker exec -it xxxxxx /bin/bash
Show the IP address:
docker exec -it xxxx ip a
docker inspect xxxxxxid
Show the output the container has produced:
docker logs xxxxx
Dockerfile example:
the Linux stress tool
cat Dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y stress
ENTRYPOINT ["/usr/bin/stress"] # ENTRYPOINT combined with an empty CMD: the arguments given to docker run become stress's arguments
CMD []
Usage:
docker build -t alexhe/ubuntu-stress .
docker run -it alexhe/ubuntu-stress # run with no arguments: stress only prints its usage
docker run -it alexhe/ubuntu-stress -vm 1 --verbose # equivalent to running: stress -vm 1 --verbose
Container resource limits: CPU and RAM
docker run --memory=200M alexhe/ubuntu-stress -vm 1 --vm-bytes=500M --verbose # fails immediately with out-of-memory: the container gets 200M while the stress test allocates 500M
docker run --cpu-shares=10 --name=test1 alexhe/ubuntu-stress -vm 1 # started together with the one below, it gets about 66% CPU
docker run --cpu-shares=5 --name=test2 alexhe/ubuntu-stress -vm 1 # started together, it gets about 33% CPU
Container networking
Single host: bridge, host, none
Multi-host: overlay
Network namespaces in Linux
docker run -dit --name test1 busybox /bin/sh -c "while true;do sleep 3600;done"
docker exec -it xxxxx /bin/sh
ip a # show the network interfaces
exit
Back on the host, run ip a to see the host's interfaces
The container's and the host's network namespaces are isolated from each other
docker run -dit --name test2 busybox /bin/sh -c "while true;do sleep 3600;done"
docker exec xxxxx ip a # look at the second container's network
On the same host, the containers' networks can reach each other.
Here is how connectivity between Linux network namespaces is implemented under the hood (Docker does something similar):
On the host, ip netns list shows the local network namespaces
ip netns delete test1
ip netns add test1 # create a network namespace
ip netns add test2 # create a network namespace
Run ip link inside the test1 namespace:
ip netns exec test1 ip link # the lo interface is currently down
ip netns exec test1 ip link set dev lo up # state becomes unknown; it only goes up once both ends are connected
Create a veth pair; put one end into the test1 namespace and the other into test2.
Create the veth pair:
ip link add veth-test1 type veth peer name veth-test2
Move veth-test1 into the test1 namespace:
ip link set veth-test1 netns test1
Look inside the test1 namespace:
ip netns exec test1 ip link # the namespace gained a veth, state down
Check the host's ip link:
ip link # one interface fewer: it has moved into the test1 namespace
Move veth-test2 into the test2 namespace:
ip link set veth-test2 netns test2
Check the host's ip link:
ip link # one fewer again: it has moved into the test2 namespace
Look inside the test2 namespace:
ip netns exec test2 ip link # the namespace gained a veth, state down
Assign IP addresses to the two veth ends:
ip netns exec test1 ip addr add 192.168.1.1/24 dev veth-test1
ip netns exec test2 ip addr add 192.168.1.2/24 dev veth-test2
Check ip link in test1 and test2:
ip netns exec test1 ip link # the address is not active yet and the port is still down
ip netns exec test2 ip link # the address is not active yet and the port is still down
Bring both ports up:
ip netns exec test1 ip link set dev veth-test1 up
ip netns exec test2 ip link set dev veth-test2 up
Check ip link in test1 and test2 again:
ip netns exec test1 ip link # the address is there and the port is up
ip netns exec test2 ip link # the address is there and the port is up
From veth-test1 in the test1 namespace, ping veth-test2 in the test2 namespace:
ip netns exec test1 ping 192.168.1.2
ip netns exec test2 ping 192.168.1.1
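The whole sequence as a single copy-paste script (run as root; same names and addresses as above):
ip netns add test1
ip netns add test2
ip link add veth-test1 type veth peer name veth-test2
ip link set veth-test1 netns test1
ip link set veth-test2 netns test2
ip netns exec test1 ip addr add 192.168.1.1/24 dev veth-test1
ip netns exec test2 ip addr add 192.168.1.2/24 dev veth-test2
ip netns exec test1 ip link set dev veth-test1 up
ip netns exec test2 ip link set dev veth-test2 up
ip netns exec test1 ping -c 3 192.168.1.2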
Docker's bridge network (docker0):
The two containers test1 and test2 can ping each other, which means their network namespaces are connected together.
Keep only the test1 container on the system; delete test2.
List the Docker networks:
docker network ls
NETWORK ID NAME DRIVER SCOPE
9d133c1c82ff bridge bridge local
e44acf9eff90 host host local
bc660dbbb8b6 none null local
Show the bridge network's details:
docker network inspect xxxxxx # the bridge network ID listed above
The veth on the host and eth0 inside the container form a veth pair
ip link # veth6aa1698@if18
docker exec test1 ip link # 18: eth0@if19
This veth pair is attached to docker0 on the host.
yum install bridge-utils
brctl show # the host's veth6aa... is plugged into docker0
Start a new test2 container:
docker run -dit --name test2 busybox /bin/sh -c "while true;do sleep 3600;done"
Look at the Docker network again:
docker network inspect bridge
One more container is listed, each with its address
ip a on the host shows yet another veth.
brctl show now lists two interfaces on docker0
Linking Docker containers (--link)
Currently only the test1 container exists
Now create a second container, test2:
docker run -d --name test2 --link test1 busybox /bin/sh -c "while true;do sleep 3600;done"
docker exec -it test2 /bin/sh
Inside test2, pinging test1's IP address works, and pinging the name test1 works too.
Inside test1, pinging test2's IP address works, but pinging the name test2 does not.
Create your own bridge network and attach containers to it:
docker network create -d bridge my-bridge
Create a container test3:
docker run -d --name test3 --network my-bridge busybox /bin/sh -c "while true;do sleep 3600;done"
Check with brctl show.
Attach test2 to my-bridge as well:
docker network connect my-bridge test2 # once connected, test2 has two IP addresses
Note: containers attached to a user-defined network can reach each other by name; the default bridge cannot, as the test above shows.
Docker port mapping
Create an nginx container:
docker run --name web -d -p 80:80 nginx
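To check the mapping from the host (docker port is the standard subcommand for this):
curl 127.0.0.1 # returns the nginx welcome page
docker port web # prints the mapping, e.g. 80/tcp -> 0.0.0.0:80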
Docker's host and none networks
none network:
docker run -d --name test1 --network none busybox /bin/sh -c "while true;do sleep 3600;done"
docker network inspect none # shows one container attached to the none network
Enter the container:
docker exec -it test1 /bin/sh # ip a inside shows no IP network at all
host network:
docker run -d --name test1 --network host busybox /bin/sh -c "while true;do sleep 3600;done"
docker network inspect host
Enter the container:
docker exec -it test1 /bin/sh # ip a shows exactly the host's interfaces; the container has no namespace of its own
Deploying a more complex multi-container application:
Flask + Redis: the flask container talks to the redis container
cat app.py
from flask import Flask
from redis import Redis
import os
import socket

app = Flask(__name__)
redis = Redis(host=os.environ.get('REDIS_HOST', '127.0.0.1'), port=6379)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello Container World! I have been seen %s times and my hostname is %s.\n' % (redis.get('hits'), socket.gethostname())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
cat Dockerfile
FROM python:2.7
LABEL maintainer="Peng Xiao xiaoquwl@gmail.com"
COPY . /app
WORKDIR /app
RUN pip install flask redis
EXPOSE 5000
CMD [ "python", "app.py" ]
1. Create the redis container:
docker run -d --name redis redis
2. docker build -t alexhe/flask-redis .
3. Create the flask container:
docker run -d -p 5000:5000 --link redis --name flask-redis -e REDIS_HOST=redis alexhe/flask-redis # REDIS_HOST=redis matches the source code above
4. Enter the container and run env:
docker exec -it flask-redis /bin/bash
env # the environment variables
ping redis # redis is reachable by name from inside the container
5. On the host, curl 127.0.0.1:5000 works.
Multi-host communication between containers
Docker overlay and underlay networks:
Two Linux hosts: 192.168.205.10 and 192.168.205.11
VXLAN packets (look up the VXLAN concept)
cat multi-host-network.md

# Multi-host networking with etcd

## Set up the etcd cluster

On docker-node1:
```
vagrant@docker-node1:~$ wget https://github.com/coreos/etcd/releases/download/v3.0.12/etcd-v3.0.12-linux-amd64.tar.gz
vagrant@docker-node1:~$ tar zxvf etcd-v3.0.12-linux-amd64.tar.gz
vagrant@docker-node1:~$ cd etcd-v3.0.12-linux-amd64
vagrant@docker-node1:~$ nohup ./etcd --name docker-node1 --initial-advertise-peer-urls http://192.168.205.10:2380 \
--listen-peer-urls http://192.168.205.10:2380 \
--listen-client-urls http://192.168.205.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://192.168.205.10:2379 \
--initial-cluster-token etcd-cluster \
--initial-cluster docker-node1=http://192.168.205.10:2380,docker-node2=http://192.168.205.11:2380 \
--initial-cluster-state new&
```

On docker-node2:
```
vagrant@docker-node2:~$ wget https://github.com/coreos/etcd/releases/download/v3.0.12/etcd-v3.0.12-linux-amd64.tar.gz
vagrant@docker-node2:~$ tar zxvf etcd-v3.0.12-linux-amd64.tar.gz
vagrant@docker-node2:~$ cd etcd-v3.0.12-linux-amd64/
vagrant@docker-node2:~$ nohup ./etcd --name docker-node2 --initial-advertise-peer-urls http://192.168.205.11:2380 \
--listen-peer-urls http://192.168.205.11:2380 \
--listen-client-urls http://192.168.205.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://192.168.205.11:2379 \
--initial-cluster-token etcd-cluster \
--initial-cluster docker-node1=http://192.168.205.10:2380,docker-node2=http://192.168.205.11:2380 \
--initial-cluster-state new&
```

Check the cluster health:
```
vagrant@docker-node2:~/etcd-v3.0.12-linux-amd64$ ./etcdctl cluster-health
member 21eca106efe4caee is healthy: got healthy result from http://192.168.205.10:2379
member 8614974c83d1cc6d is healthy: got healthy result from http://192.168.205.11:2379
cluster is healthy
```

## Restart the docker daemons

On docker-node1:
```
$ sudo service docker stop
$ sudo /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.205.10:2379 --cluster-advertise=192.168.205.10:2375&
```

On docker-node2:
```
$ sudo service docker stop
$ sudo /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.205.11:2379 --cluster-advertise=192.168.205.11:2375&
```

## Create the overlay network

On docker-node1, create a demo overlay network:
```
vagrant@docker-node1:~$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
0e7bef3f143a        bridge              bridge              local
a5c7daf62325        host                host                local
3198cae88ab4        none                null                local
vagrant@docker-node1:~$ sudo docker network create -d overlay demo
3d430f3338a2c3496e9edeccc880f0a7affa06522b4249497ef6c4cd6571eaa9
vagrant@docker-node1:~$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
0e7bef3f143a        bridge              bridge              local
3d430f3338a2        demo                overlay             global
a5c7daf62325        host                host                local
3198cae88ab4        none                null                local
vagrant@docker-node1:~$ sudo docker network inspect demo
[
    {
        "Name": "demo",
        "Id": "3d430f3338a2c3496e9edeccc880f0a7affa06522b4249497ef6c4cd6571eaa9",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1/24"
                }
            ]
        },
        "Internal": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
```

On node2 we can see that the demo overlay network has been created in sync:
```
vagrant@docker-node2:~$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c9947d4c3669        bridge              bridge              local
3d430f3338a2        demo                overlay             global
fa5168034de1        host                host                local
c2ca34abec2a        none                null                local
```

The etcd key-value store shows that the demo network was synced from node1 to node2 through etcd:
```
vagrant@docker-node2:~/etcd-v3.0.12-linux-amd64$ ./etcdctl ls /docker
/docker/network
/docker/nodes
vagrant@docker-node2:~/etcd-v3.0.12-linux-amd64$ ./etcdctl ls /docker/nodes
/docker/nodes/192.168.205.11:2375
/docker/nodes/192.168.205.10:2375
vagrant@docker-node2:~/etcd-v3.0.12-linux-amd64$ ./etcdctl ls /docker/network/v1.0/network
/docker/network/v1.0/network/3d430f3338a2c3496e9edeccc880f0a7affa06522b4249497ef6c4cd6571eaa9
vagrant@docker-node2:~/etcd-v3.0.12-linux-amd64$ ./etcdctl get /docker/network/v1.0/network/3d430f3338a2c3496e9edeccc880f0a7affa06522b4249497ef6c4cd6571eaa9 | jq .
{
  "addrSpace": "GlobalDefault",
  "enableIPv6": false,
  "generic": {
    "com.docker.network.enable_ipv6": false,
    "com.docker.network.generic": {}
  },
  "id": "3d430f3338a2c3496e9edeccc880f0a7affa06522b4249497ef6c4cd6571eaa9",
  "inDelete": false,
  "ingress": false,
  "internal": false,
  "ipamOptions": {},
  "ipamType": "default",
  "ipamV4Config": "[{\"PreferredPool\":\"\",\"SubPool\":\"\",\"Gateway\":\"\",\"AuxAddresses\":null}]",
  "ipamV4Info": "[{\"IPAMData\":\"{\\\"AddressSpace\\\":\\\"GlobalDefault\\\",\\\"Gateway\\\":\\\"10.0.0.1/24\\\",\\\"Pool\\\":\\\"10.0.0.0/24\\\"}\",\"PoolID\":\"GlobalDefault/10.0.0.0/24\"}]",
  "labels": {},
  "name": "demo",
  "networkType": "overlay",
  "persist": true,
  "postIPv6": false,
  "scope": "global"
}
```

## Create containers attached to the demo network

On docker-node1:
```
vagrant@docker-node1:~$ sudo docker run -d --name test1 --net demo busybox sh -c "while true; do sleep 3600; done"
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
56bec22e3559: Pull complete
Digest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912
Status: Downloaded newer image for busybox:latest
a95a9466331dd9305f9f3c30e7330b5a41aae64afda78f038fc9e04900fcac54
vagrant@docker-node1:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
a95a9466331d        busybox             "sh -c 'while true; d"   4 seconds ago       Up 3 seconds                            test1
vagrant@docker-node1:~$ sudo docker exec test1 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:02
          inet addr:10.0.0.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:aff:fe00:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1206 (1.1 KiB)  TX bytes:648 (648.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:12:00:02
          inet addr:172.18.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe12:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```

On docker-node2 the name test1 is rejected because endpoint names are cluster-wide; use test2:
```
vagrant@docker-node2:~$ sudo docker run -d --name test1 --net demo busybox sh -c "while true; do sleep 3600; done"
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
56bec22e3559: Pull complete
Digest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912
Status: Downloaded newer image for busybox:latest
fad6dc6538a85d3dcc958e8ed7b1ec3810feee3e454c1d3f4e53ba25429b290b
docker: Error response from daemon: service endpoint with name test1 already exists. # the name is already taken and cannot be reused
vagrant@docker-node2:~$ sudo docker run -d --name test2 --net demo busybox sh -c "while true; do sleep 3600; done"
9d494a2f66a69e6b861961d0c6af2446265bec9b1d273d7e70d0e46eb2e98d20
```

Verify connectivity:
```
vagrant@docker-node2:~$ sudo docker exec -it test2 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:03
          inet addr:10.0.0.3  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:aff:fe00:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:208 errors:0 dropped:0 overruns:0 frame:0
          TX packets:201 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:20008 (19.5 KiB)  TX bytes:19450 (18.9 KiB)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:12:00:02
          inet addr:172.18.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe12:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vagrant@docker-node1:~$ sudo docker exec test1 sh -c "ping 10.0.0.3"
PING 10.0.0.3 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=0.579 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=0.411 ms
64 bytes from 10.0.0.3: seq=2 ttl=64 time=0.483 ms
^C
```
Docker persistent storage and data sharing:
Two ways to persist data: 1. Data Volume 2. Bind Mounting
Method 1, Data Volume:
for data a container produces that you want to keep, such as logs or databases
For example https://hub.docker.com/_/mysql
docker run -d -e MYSQL_ALLOW_EMPTY_PASSWORD=yes --name mysql1 mysql
List volumes:
docker volume ls
Remove a volume:
docker volume rm xxxxxxxxxx
Inspect details:
docker volume inspect xxxxxxxxx
Create a second mysql container:
docker run -d -e MYSQL_ALLOW_EMPTY_PASSWORD=yes --name mysql2 mysql
Inspect details:
docker volume inspect xxxxxxxxx
Removing the containers does not remove the volumes:
docker stop mysql1 mysql2
docker rm mysql1 mysql2
docker volume ls # the data is still there
Recreate mysql1:
docker run -d -v mysq:/var/lib/mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=yes --name mysql1 mysql # here 'mysq' is the volume's name
docker volume ls # now lists mysq
Enter the mysql1 container
Create a new database:
create database docker;
Exit the container and delete mysql1:
docker rm -f mysql1 # force-stop and remove the mysql1 container
Check the volumes:
docker volume ls # still there
Create a new mysql2 container that reuses the mysq volume:
docker run -d -v mysq:/var/lib/mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=yes --name mysql2 mysql
Enter the mysql2 container and check whether the database survived:
show databases; # the docker database is still there
Method 2, bind mounting
How does it differ from the first? A data volume has to be defined in the image's Dockerfile; bind mounting needs nothing there, only a runtime mapping between a local directory and a container directory.
The two are then kept in sync: change a file locally and the file in the container's directory changes as well.
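A quick side-by-side sketch of the two forms (paths are illustrative):
# data volume: declared in the Dockerfile, as the mysql image does
VOLUME /var/lib/mysql
# bind mount: declared only at run time
docker run -v /host/dir:/container/dir ...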
cat Dockerfile
# this shows how we can extend/change an existing official image from Docker Hub
FROM nginx:latest
# highly recommend you always pin versions for anything beyond dev/learn
WORKDIR /usr/share/nginx/html
# change working directory to root of nginx webhost
# using WORKDIR is preferred to using 'RUN cd /some/path'
COPY index.html index.html
# I don't have to specify EXPOSE or CMD because they're in my FROM
Create any index.html you like
docker build -t alexhe/my-nginx .
docker run -d -p 80:80 -v $(pwd):/usr/share/nginx/html --name web alexhe/my-nginx
Another bind mount example:
cat Dockerfile
FROM python:2.7
LABEL maintainer="alexhe<alex@alexhe.net>"
COPY . /skeleton
WORKDIR /skeleton
RUN pip install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["scripts/dev.sh"]
Build the image:
docker build -t alexhe/flask-skeleton .
docker run -d -p 80:5000 -v $(pwd):/skeleton --name flask alexhe/flask-skeleton
The rest of the source is at
/Users/apple/temp/docker-k8s-devops-master/chapter5/labs/flask-skeleton
Deploying WordPress:
docker run -d -v mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=wordpress --name mysql mysql
docker run -d -e WORDPRESS_DB_HOST=mysql:3306 -e WORDPRESS_DB_PASSWORD=root --link mysql -p 8080:80 wordpress
docker compose:
Official overview: https://docs.docker.com/compose/overview/
One yml file defines a multi-container Docker application
A single command then creates or manages all of those containers from the yml definitions
The three core compose concepts: Services, Networks, Volumes
v2 files run on a single host; v3 can also run across multiple hosts
services: one service represents one container, created either from a Docker Hub image or from an image built from a local Dockerfile
Starting a service works like docker run: we can assign it networks and volumes, so a service can reference named networks and volumes.
Example:
services:
  db: # the container is named db
    image: postgres:9.4 # pulled from Docker Hub
    volumes:
      - "db-data:/var/lib/postgresql/data"
    networks:
      - back-tier
Equivalent to:
docker run -d --network back-tier -v db-data:/var/lib/postgresql/data postgres:9.4
Example:
services:
  worker: # the container's name
    build: ./worker # built locally instead of pulled from Docker Hub
    links:
      - db
      - redis
    networks:
      - back-tier
Example:
cat docker-compose.yml
version: '3'
services:
  wordpress:
    image: wordpress
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_PASSWORD: root
    networks:
      - my-bridge
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: wordpress
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - my-bridge
volumes:
  mysql-data:
networks:
  my-bridge:
    driver: bridge
Installing and basic use of docker-compose:
Install: https://docs.docker.com/compose/install/
curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose --version
docker-compose version 1.23.2, build 1110ad01
Usage:
With the docker-compose.yml above:
docker-compose up # uses the docker-compose.yml in the current directory: 1. creates the bridge network wordpress_my-bridge; 2. creates the two services wordpress_wordpress_1 and wordpress_mysql_1 and starts their containers
docker-compose -f xxxx/docker-compose.yml up # use the docker-compose.yml in the given directory
docker-compose ps # list the current services
docker-compose stop
docker-compose down # stop and remove the containers, but keep the images
docker-compose start
docker-compose up -d # run in the background without streaming logs
docker-compose images # show the containers defined in the yml and the images they use
docker-compose exec mysql bash # mysql is a service defined in the yml; open bash in its container
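Also handy for debugging a service (standard compose subcommand):
docker-compose logs -f mysql # follow the log output of the mysql service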
Example: docker-compose building from a Dockerfile
The source is at /Users/apple/temp/docker-k8s-devops-master/chapter6/labs/flask-redis
cat docker-compose.yml
version: "3"
services:
  redis:
    image: redis
  web:
    build:
      context: . # location of the Dockerfile (the build context)
      dockerfile: Dockerfile # use the Dockerfile in that directory
    ports:
      - 8080:5000
    environment:
      REDIS_HOST: redis
cat Dockerfile
FROM python:2.7
LABEL maintainer="alexhe alex@alexhe.net"
COPY . /app
WORKDIR /app
RUN pip install flask redis
EXPOSE 5000
CMD [ "python", "app.py" ]
Usage: docker-compose up -d # without -d it stays in the foreground, streaming the web service's output
Scaling in docker-compose
docker-compose up # with the yml above
docker-compose up --scale web=3 -d # fails: port 8080 is already taken; first remove the ports: - 8080:5000 mapping above
After removing it, rerun: three containers start, each listening on its own port 5000; check with docker-compose ps
But that is not enough, since we cannot reach the containers' port 5000 from outside; we need to add haproxy to the yml:
cat docker-compose.yml
version: "3"
services:
  redis:
    image: redis
  web:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      REDIS_HOST: redis
  lb:
    image: dockercloud/haproxy
    links:
      - web
    ports:
      - 8080:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
cat Dockerfile
FROM python:2.7
LABEL maintainer="alex alex@alexhe.net"
COPY . /app
WORKDIR /app
RUN pip install flask redis
EXPOSE 80
CMD [ "python", "app.py" ]
cat app.py
from flask import Flask
from redis import Redis
import os
import socket

app = Flask(__name__)
redis = Redis(host=os.environ.get('REDIS_HOST', '127.0.0.1'), port=6379)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello Container World! I have been seen %s times and my hostname is %s.\n' % (redis.get('hits'), socket.gethostname())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80, debug=True)
docker-compose up -d
curl 127.0.0.1:8080
docker-compose up --scale web=3 -d
Example: deploying a more complex application, a voting system
Source:
/Users/apple/temp/docker-k8s-devops-master/chapter6/labs/example-voting-app
Python frontend + Redis + Java worker + PostgreSQL database + results app
cat docker-compose.yml
version: "3"
services:
  voting-app:
    build: ./voting-app/.
    volumes:
      - ./voting-app:/app
    ports:
      - "5000:80"
    links:
      - redis
    networks:
      - front-tier
      - back-tier
  result-app:
    build: ./result-app/.
    volumes:
      - ./result-app:/app
    ports:
      - "5001:80"
    links:
      - db
    networks:
      - front-tier
      - back-tier
  worker:
    build: ./worker
    links:
      - db
      - redis
    networks:
      - back-tier
  redis:
    image: redis
    ports: ["6379"]
    networks:
      - back-tier
  db:
    image: postgres:9.4
    volumes:
      - "db-data:/var/lib/postgresql/data"
    networks:
      - back-tier
volumes:
  db-data:
networks:
  front-tier: # no driver specified, defaults to bridge
  back-tier:
docker-compose up
Vote in the browser on port 5000; view the results on port 5001
docker-compose build # builds the images ahead of time; up builds first and then starts
docker swarm
Create a 3-node swarm cluster
manager 192.168.205.10
worker1 192.168.205.11
worker2 192.168.205.12
On the manager:
docker swarm init --advertise-addr=192.168.205.10
On worker1 and worker2:
docker swarm join xxxxx
On the manager:
docker node ls # list all nodes
docker service create --name demo busybox sh -c "while true;do sleep 3600;done"
docker service ls
docker service ps demo # see which node the service runs on
docker service scale demo=5 # scale out to 5 replicas
If a container is force-removed on worker2 with docker rm -f xxxxxxxxx,
docker service ls briefly shows REPLICAS 4/5, then 5/5 again once the scheduler replaces it; docker service ps demo then lists a task in state Shutdown
docker service rm demo # remove the whole service
docker service ps demo
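For any existing service, the full task history can be inspected (standard flag):
docker service ps <service> --no-trunc # shows Shutdown tasks alongside their freshly scheduled replacements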
Deploying WordPress with swarm services
docker network create -d overlay demo # create an overlay network; check with docker network ls
docker service create --name mysql --env MYSQL_ROOT_PASSWORD=root --env MYSQL_DATABASE=wordpress --network demo --mount type=volume,source=mysql-data,destination=/var/lib/mysql mysql # for services, -v becomes this --mount form: a volume named mysql-data mounted at /var/lib/mysql
docker service ls
docker service ps mysql
docker service create --name wordpress -p 80:80 --env WORDPRESS_DB_PASSWORD=root --env WORDPRESS_DB_HOST=mysql --network demo wordpress
docker service ps wordpress
WordPress is reachable over HTTP on the manager's or any worker's address
Inter-service communication in the cluster: Routing Mesh
Swarm has built-in service discovery. Access via the service name goes over the overlay network and uses a VIP.
The demo overlay network must exist first.
docker service create --name whoami -p 8000:8000 --network demo -d jwilder/whoami
docker service ls
docker service ps whoami # runs on the manager node
curl 127.0.0.1:8000
Create another busybox service:
docker service create --name client -d --network demo busybox sh -c "while true;do sleep 3600;done"
docker service ls
docker service ps client # runs on worker1
First go to the swarm's worker1 node
docker exec -it xxxx sh # enter the busybox container
ping whoami # pinging the service name resolves to 10.0.0.7, which is really a VIP implemented via LVS
docker service scale whoami=2 # scale out to 2
docker service ps whoami # one task on worker1, one on worker2
On worker1:
docker exec -it xxx sh # enter the busybox container
ping whoami # still the same address
nslookup whoami # 10.0.0.7, the virtual IP
nslookup tasks.whoami # two addresses: the real addresses of the individual containers
iptables -t mangle -nL DOCKER-INGRESS # the forwarding is set up here
Two faces of the Routing Mesh:
Internal: containers talk to each other over the overlay network.
Ingress: if a service publishes a port, it is reachable on that port via any swarm node; the service port is exposed on every node.
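The VIP behind the name can also be read off the service itself (standard docker service inspect format string):
docker service inspect --format '{{json .Endpoint.VirtualIPs}}' whoami # prints network ID / VIP pairs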
Deploying WordPress with docker stack
Compose file reference: https://docs.docker.com/compose/compose-file/
Official example:
version: "3.3"
services:
  wordpress:
    image: wordpress
    ports:
      - "8080:80"
    networks:
      - overlay
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: vip # vip: services reach each other through a virtual IP, load-balanced to the backends by LVS underneath; vip is the default
  mysql:
    image: mysql
    volumes:
      - db-data:/var/lib/mysql/data
    networks:
      - overlay
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: dnsrr # dnsrr: DNS round robin, the service name resolves directly to container IPs; with three or four replicas the addresses are returned in rotation
volumes:
  db-data:
networks:
  overlay:
There is also labels: attach metadata tags
mode: global or replicated. global schedules one task per node and cannot be scaled with docker service scale; replicated is the default and scales horizontally.
placement: scheduling constraints for the service. For example:
version: '3.3'
services:
  db:
    image: postgres
    deploy:
      placement:
        constraints: # the db service is only scheduled on a manager node whose engine runs ubuntu 14.04
          - node.role == manager
          - engine.labels.operatingsystem == ubuntu 14.04
        preferences:
          - spread: node.labels.zone
replicas: settable when mode is replicated
resources: resource limits and reservations.
restart_policy: restart condition, delay, and maximum attempts
update_config: parameters for rolling updates, e.g. update two containers in parallel and wait 10 seconds before the next pair.
cat docker-compose.yml
version: '3'
services:
  web: # this service is named web
    image: wordpress
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_PASSWORD: root
    networks:
      - my-network
    depends_on:
      - mysql
    deploy:
      mode: replicated
      replicas: 3
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      update_config:
        parallelism: 1
        delay: 10s
  mysql: # this service is named mysql
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: wordpress
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - my-network
    deploy:
      mode: global # only a single instance here; replicas are not allowed in global mode
      placement:
        constraints:
          - node.role == manager
volumes:
  mysql-data:
networks:
  my-network:
    driver: overlay # the default is bridge, but on a multi-host cluster use overlay
Deploy:
docker stack deploy wordpress --compose-file=docker-compose.yml # the stack is named wordpress
Inspect:
docker stack ls
docker stack ps wordpress
docker stack services wordpress # shows each service's replica counts
Access: any node's IP on port 8080
Note: docker swarm cannot use the build: directive from the voting-system compose file above, so you must build the images yourself beforehand
Deploying the voting system with docker swarm:
cat docker-compose.yml
version: "3"
services:
  redis:
    image: redis:alpine
    ports:
      - "6379"
    networks:
      - frontend
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  db:
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
    deploy:
      placement:
        constraints: [node.role == manager]
  vote:
    image: dockersamples/examplevotingapp_vote:before
    ports:
      - 5000:80
    networks:
      - frontend
    depends_on:
      - redis
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
      restart_policy:
        condition: on-failure
  result:
    image: dockersamples/examplevotingapp_result:before
    ports:
      - 5001:80
    networks:
      - backend
    depends_on:
      - db
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  worker:
    image: dockersamples/examplevotingapp_worker
    networks:
      - frontend
      - backend
    deploy:
      mode: replicated
      replicas: 1
      labels: [APP=VOTING]
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
        window: 120s
      placement:
        constraints: [node.role == manager]
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    stop_grace_period: 1m30s
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  frontend: # defaults to overlay in swarm mode
  backend:
volumes:
  db-data:
Start it:
docker stack deploy voteapp --compose-file=docker-compose.yml
Managing docker secrets
The internal distributed store lives on every swarm manager node (in the managers' Raft database), so two or more manager nodes are recommended.
A secret can be assigned to a service, and that service can then read it
Inside the container a secret looks like a file, but it actually lives in memory.
Create a secret from a file:
vim alexpasswd
admin123
docker secret create my-pw alexpasswd # name this secret my-pw
List:
docker secret ls
Create from standard input:
echo "adminadmin" | docker secret create my-pw2 -
Delete:
docker secret rm my-pw2
Expose a secret to a service:
docker service create --name client --secret my-pw busybox sh -c "while true;do sleep 3600;done"
Find which node the container is on:
docker service ps client
Enter the container:
docker exec -it ccee sh
cat /run/secrets/my-pw # the secret shows up here as a file
For example with the mysql image:
docker service create --name db --secret my-pw -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/my-pw mysql
Using secrets in a stack:
Given a password file:
cat password
adminadmin
The docker-compose.yml:
version: '3'
services:
  web:
    image: wordpress
    ports:
      - 8080:80
    secrets:
      - my-pw
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_PASSWORD_FILE: /run/secrets/my-pw
    networks:
      - my-network
    depends_on:
      - mysql
    deploy:
      mode: replicated
      replicas: 3
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      update_config:
        parallelism: 1
        delay: 10s
  mysql:
    image: mysql
    secrets:
      - my-pw
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/my-pw
      MYSQL_DATABASE: wordpress
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - my-network
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager
volumes:
  mysql-data:
networks:
  my-network:
    driver: overlay
# secrets:
#   my-pw:
#     file: ./password
Usage:
docker stack deploy wordpress -c=docker-compose.yml
Updating a service:
First create the initial service:
docker service create --name web --publish 8080:5000 --network demo alexhe/python-flask-demo:1.0
Scale out to at least 2:
docker service scale web=2
Check the service: curl 127.0.0.1:8080
while true;do curl 127.0.0.1:8080 && sleep 1;done
Update the service:
secrets, published ports, the image and more can all be updated
docker service update --image alexhe/python-flask-demo:2.0 web
Update the published port:
docker service update --publish-rm 8080:5000 --publish-add 8088:5000 web
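If an update misbehaves, the previous service spec can be restored (standard flag):
docker service update --rollback web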
k8s version