1 Building a Custom Redis Image with a Dockerfile
For a summary of Docker fundamentals, see the previous post: https://www.cnblogs.com/darope/p/13861840.html.
1.1 Preparing the Environment
- The Redis source tarball
- A Redis configuration file
- A Dockerfile
~/opt/docker-redis/ ls
Dockerfile redis-4.0.14.tar.gz redis-7000.conf
~/opt/docker-redis/
The configuration file redis-7000.conf contains:
port 7000
bind 0.0.0.0
1.2 Writing the Dockerfile
We need to set up the required environment on top of a clean CentOS image. Since Redis is written in C, we have to install the C build toolchain gcc and gcc-c++; net-tools provides basic networking utilities, and make is required to compile and install the source. The Dockerfile looks like this:
FROM centos
RUN ["yum" , "install" , "-y" ,"gcc","gcc-c++","net-tools","make"]
WORKDIR /usr/local
ADD redis-4.0.14.tar.gz .
WORKDIR /usr/local/redis-4.0.14/src
RUN make && make install
WORKDIR /usr/local/redis-4.0.14
ADD redis-7000.conf .
EXPOSE 7000
CMD ["redis-server","redis-7000.conf"]
Explanation:
- FROM centos sets the base image to a CentOS environment.
- RUN ["yum" , "install" , "-y" ,"gcc","gcc-c++","net-tools","make"] installs the tools a bare CentOS system needs to build Redis from the tarball.
- WORKDIR /usr/local switches the working directory to /usr/local, similar to cd on Linux.
- ADD redis-4.0.14.tar.gz . copies the Redis tarball that sits next to the Dockerfile into the image's current working directory and extracts it automatically.
- WORKDIR /usr/local/redis-4.0.14/src changes the working directory to the src folder of the extracted Redis sources.
- RUN make && make install compiles and installs Redis inside the image's Linux environment.
- WORKDIR /usr/local/redis-4.0.14 switches back to the extracted Redis directory.
- ADD redis-7000.conf . copies the Redis configuration file next to the Dockerfile into the image's current working directory.
- EXPOSE 7000 exposes port 7000 of the image.
- CMD ["redis-server","redis-7000.conf"] starts the Redis server when a container is run.
1.3 Building the Image from the Dockerfile
~/ cd opt
~/opt/ cd docker-redis
~/opt/docker-redis/ ls
Dockerfile redis-4.0.14.tar.gz redis-7000.conf
~/opt/docker-redis/ docker build --tag myredis:1.0 .
Sending build context to Docker daemon 1.745MB
Step 1/10 : FROM centos
---> 0d120b6ccaa8
Step 2/10 : RUN ["yum" , "install" , "-y" ,"gcc","gcc-c++","net-tools","make"]
---> Running in c0234e893495
CentOS-8 - AppStream 2.2 MB/s | 5.8 MB 00:02
CentOS-8 - Base 1.5 MB/s | 2.2 MB 00:01
CentOS-8 - Extras 14 kB/s | 8.1 kB 00:00
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
gcc x86_64 8.3.1-5.el8.0.2 AppStream 23 M
... (output omitted)
Installed:
cpp-8.3.1-5.el8.0.2.x86_64
gcc-8.3.1-5.el8.0.2.x86_64
gcc-c++-8.3.1-5.el8.0.2.x86_64
glibc-devel-2.28-101.el8.x86_64
glibc-headers-2.28-101.el8.x86_64
isl-0.16.1-6.el8.x86_64
kernel-headers-4.18.0-193.19.1.el8_2.x86_64
libgomp-8.3.1-5.el8.0.2.x86_64
libmpc-1.0.2-9.el8.x86_64
libpkgconf-1.4.2-1.el8.x86_64
libstdc++-devel-8.3.1-5.el8.0.2.x86_64
libxcrypt-devel-4.1.1-4.el8.x86_64
make-1:4.2.1-10.el8.x86_64
net-tools-2.0-0.51.20160912git.el8.x86_64
pkgconf-1.4.2-1.el8.x86_64
pkgconf-m4-1.4.2-1.el8.noarch
pkgconf-pkg-config-1.4.2-1.el8.x86_64
Complete!
Removing intermediate container c0234e893495
---> 826ee526b28e
Step 3/10 : WORKDIR /usr/local
---> Running in 0f3cfeb79b31
Removing intermediate container 0f3cfeb79b31
---> c987d8ce6f8c
Step 4/10 : ADD redis-4.0.14.tar.gz .
---> f5dad2363617
Step 5/10 : WORKDIR /usr/local/redis-4.0.14/src
---> Running in 0bc0c20cbfa3
Removing intermediate container 0bc0c20cbfa3
---> c520be237ee0
Step 6/10 : RUN make && make install
---> Running in 302720e3f711
CC Makefile.dep
... (output omitted)
INSTALL install
INSTALL install
INSTALL install
INSTALL install
INSTALL install
Removing intermediate container 302720e3f711
---> 90d3292283bd
Step 7/10 : WORKDIR /usr/local/redis-4.0.14
---> Running in 1d48fe8e8a0f
Removing intermediate container 1d48fe8e8a0f
---> 4061f5b591b1
Step 8/10 : ADD redis-7000.conf .
---> 74aa59023e05
Step 9/10 : EXPOSE 7000
---> Running in cba837f6acc3
Removing intermediate container cba837f6acc3
---> 7a5ebf5ea52c
Step 10/10 : CMD ["redis-server","redis-7000.conf"]
---> Running in 5f5dedeb0382
Removing intermediate container 5f5dedeb0382
---> d5e04541d181
Successfully built d5e04541d181
Successfully tagged myredis:1.0
~/opt/docker-redis/ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
myredis 1.0 d5e04541d181 27 seconds ago 497MB
docker_run latest ad84f2ed6200 4 days ago 215MB
myweb 1.0 bf912fc6c119 4 days ago 647MB
tomcat latest 891fcd9c5b3a 12 days ago 647MB
centos latest 0d120b6ccaa8 2 months ago 215MB
~/opt/docker-redis/
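A quick way to double-check what the build produced is to look at the image's layers and its recorded configuration; a minimal sketch using standard Docker commands:

docker history myredis:1.0
docker image inspect myredis:1.0 --format '{{.Config.Cmd}} {{.Config.WorkingDir}}'

docker history shows one entry per Dockerfile instruction, and the inspect format string prints the CMD and working directory baked into the image.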
1.4 Running a Container from the Image
~/opt/docker-redis/ docker run -p 7000:7000 myredis:1.0
1:C 26 Oct 16:56:13.709 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 26 Oct 16:56:13.709 # Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 26 Oct 16:56:13.709 # Configuration loaded
1:M 26 Oct 16:56:13.711 * Running mode=standalone, port=7000.
1:M 26 Oct 16:56:13.711 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 26 Oct 16:56:13.711 # Server initialized
1:M 26 Oct 16:56:13.711 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 26 Oct 16:56:13.711 * Ready to accept connections
The line "Ready to accept connections" shows that our Redis server has started and is waiting for clients. Connecting with a Redis client confirms that it works.
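A minimal sketch of that client-side check, using the redis-cli that make install placed inside the container (substitute the real container ID):

docker exec -it <redis-container-id> redis-cli -p 7000 ping    # expect: PONG
docker exec -it <redis-container-id> redis-cli -p 7000 set foo bar
docker exec -it <redis-container-id> redis-cli -p 7000 get foo # expect: "bar"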
Open a new terminal, enter the container, and check the working directory and its files:
~/ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f8190d4d1de myredis:1.0 "redis-server redis-…" 8 seconds ago Up 7 seconds 0.0.0.0:7000->7000/tcp busy_leavitt
26c97c083e42 bf912fc6c119 "catalina.sh run" 4 days ago Up 4 days 0.0.0.0:8000->8080/tcp loving_pasteur
~/ docker exec -it 1f8190d4d1de sh
sh-4.4# pwd
/usr/local/redis-4.0.14
sh-4.4# ls
00-RELEASENOTES BUGS CONTRIBUTING COPYING INSTALL MANIFESTO Makefile README.md deps redis-7000.conf redis.conf runtest runtest-cluster runtest-sentinel sentinel.conf src tests utils
sh-4.4#
1.5 Using the Official Image Instead of Building Our Own
Docker Hub hosts official images for most common software, so we usually do not need to build images by hand. For officially supported software we can simply run docker pull, for example docker pull redis; the result is essentially the same as building the image ourselves.
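For example, a sketch of running the official image with our redis-7000.conf; the config path inside the container follows the convention suggested by the official image's documentation, so treat it as an assumption:

docker pull redis
docker run -d -p 7000:7000 \
  -v ~/opt/docker-redis/redis-7000.conf:/usr/local/etc/redis/redis.conf \
  redis redis-server /usr/local/etc/redis/redis.conf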
2 One-Way Communication Between Containers (Link)
2.1 The Virtual IP Concept
In a Docker environment, every container is assigned a virtual IP when it is created. That IP cannot be reached directly from outside, but within the Docker environment the virtual IPs can all reach one another. Suppose, for example, that a Tomcat container is assigned the virtual IP 107.1.31.22 and a MySQL container 107.1.31.24.
Although containers can reach each other through these virtual IPs, doing so is fragile: a container can crash or be restarted at any time, and after a restart it may be assigned a new IP, so the address the other side was talking to simply disappears. For this reason, IP-based communication between containers is not recommended.
Instead, we can name containers and communicate by name; no matter how many times a container restarts or how its IP changes, communication by name keeps working.
- Run docker run -d --name myweb tomcat to start a container named myweb from the tomcat image, and docker run -d --name mydatabases mysql to start a container named mydatabases from the mysql image.
- Run docker inspect myweb to view the detailed configuration of the myweb container. The IPAddress field under NetworkSettings is the virtual IP address Docker assigned to that container.
~/ docker run -d --name myweb tomcat
03fc2187d4ef719325f31578ca07438e1cba4257a0abd9d233755bfb8d9812d2
~/ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
03fc2187d4ef tomcat "catalina.sh run" 6 seconds ago Up 5 seconds 8080/tcp myweb
1f8190d4d1de myredis:1.0 "redis-server redis-…" 21 hours ago Up 21 hours 0.0.0.0:7000->7000/tcp busy_leavitt
26c97c083e42 bf912fc6c119 "catalina.sh run" 5 days ago Up 5 days 0.0.0.0:8000->8080/tcp loving_pasteur
~/ docker inspect myweb
[
{
"Id": "03fc2187d4ef719325f31578ca07438e1cba4257a0abd9d233755bfb8d9812d2",
"Created": "2020-10-27T13:48:17.8800933Z",
"Path": "catalina.sh",
"Args": [
"run"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 6558,
"ExitCode": 0,
"Error": "",
"StartedAt": "2020-10-27T13:48:18.5295188Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:891fcd9c5b3a174d9ef63832ededae9dc5c986bb1bb66fe35391a4b3a6734804",
"ResolvConfPath": "/var/lib/docker/containers/03fc2187d4ef719325f31578ca07438e1cba4257a0abd9d233755bfb8d9812d2/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/03fc2187d4ef719325f31578ca07438e1cba4257a0abd9d233755bfb8d9812d2/hostname",
"HostsPath": "/var/lib/docker/containers/03fc2187d4ef719325f31578ca07438e1cba4257a0abd9d233755bfb8d9812d2/hosts",
"LogPath": "/var/lib/docker/containers/03fc2187d4ef719325f31578ca07438e1cba4257a0abd9d233755bfb8d9812d2/03fc2187d4ef719325f31578ca07438e1cba4257a0abd9d233755bfb8d9812d2-json.log",
"Name": "/myweb",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/fdc6d8ca2570d6bade53746f4db2b36619ef168b83c2f8b31f650943afe477bf-init/diff:/var/lib/docker/overlay2/85b7f801b61243c4b267ca2ad66e60e8739ed444bf78fb06027612a932d1c947/diff:/var/lib/docker/overlay2/74cdb3bef6ede3f9abb989bdc02e374064beebd8aeb7bdf7387f0e3e45d8a088/diff:/var/lib/docker/overlay2/c70ad2ded87f5ab75cc976060f2cce78ee56533b70ce7b06f4be7d8cce6726f5/diff:/var/lib/docker/overlay2/c7bed81095213cd41f384f7fb6dd29f03040fd0dfb22e8e50e9d91fc3166189c/diff:/var/lib/docker/overlay2/93f9bc9528c36bfd401c60a343dec9658ffe4443673474b292cf5f8acbb21954/diff:/var/lib/docker/overlay2/a836c96366b7b5f8c917a53034294a0459f7e6c2ee937ffad66b4f760c690dd1/diff:/var/lib/docker/overlay2/47d65c3061fd960bf4d7f117be0f97c70c270b91d2a6e6e39610b48b90e7f0b0/diff:/var/lib/docker/overlay2/a284464641d4cd773b7c132b05ac0944ecc96bb4348a45b7338d84c73f0209cd/diff:/var/lib/docker/overlay2/f808fbd4a46131f4e7c018f05834422bb6a59035e0c3489daf4908e7f0ef3080/diff:/var/lib/docker/overlay2/7b05a0f542a939883c73f74ee5b71db648dc7ff39108b53566fa160f7c176d4a/diff",
"MergedDir": "/var/lib/docker/overlay2/fdc6d8ca2570d6bade53746f4db2b36619ef168b83c2f8b31f650943afe477bf/merged",
"UpperDir": "/var/lib/docker/overlay2/fdc6d8ca2570d6bade53746f4db2b36619ef168b83c2f8b31f650943afe477bf/diff",
"WorkDir": "/var/lib/docker/overlay2/fdc6d8ca2570d6bade53746f4db2b36619ef168b83c2f8b31f650943afe477bf/work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "03fc2187d4ef",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"8080/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/tomcat/bin:/usr/local/openjdk-11/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"JAVA_HOME=/usr/local/openjdk-11",
"JAVA_VERSION=11.0.8",
"CATALINA_HOME=/usr/local/tomcat",
"TOMCAT_NATIVE_LIBDIR=/usr/local/tomcat/native-jni-lib",
"LD_LIBRARY_PATH=/usr/local/tomcat/native-jni-lib",
"GPG_KEYS=05AB33110949707C93A279E3D3EFE6B686867BA6 07E48665A34DCAFAE522E5E6266191C37C037D42 47309207D818FFD8DCD3F83F1931D684307A10A5 541FBE7D8F78B25E055DDEE13C370389288584E7 61B832AC2F1C5A90F0F9B00A1C506407564C17A3 79F7026C690BAA50B92CD8B66A3AD3F4F22C4FED 9BA44C2621385CB966EBA586F72C284D731FABEE A27677289986DB50844682F8ACB77FC2E86E29AC A9C5DF4D22E99998D9875A5110C01C5A2F6059E7 DCFD35E0BF8CA7344752DE8B6FB21E8933C60243 F3A04C595DB5B6A5F1ECA43E3B7BBB100D811BBE F7DA48BB64BCB84ECBA7EE6935CD23C10D498E23",
"TOMCAT_MAJOR=9",
"TOMCAT_VERSION=9.0.39",
"TOMCAT_SHA512=307ca646bac267e529fb0862278f7133fe80813f0af64a44aed949f4c7a9a98aeb9bd7f08b087645b40c6fefdd3a7fe519e4858a3dbf0a19c38c53704f92b575"
],
"Cmd": [
"catalina.sh",
"run"
],
"Image": "tomcat",
"Volumes": null,
"WorkingDir": "/usr/local/tomcat",
"Entrypoint": null,
"OnBuild": null,
"Labels": {}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "1ba8bc8211a894e4895e85552eaba33124c85b2efd280c2a8662d1f2683bf6b4",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"8080/tcp": null
},
"SandboxKey": "/var/run/docker/netns/1ba8bc8211a8",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "1d5195726db2eee73b97ea994d43b1cef17e0918e1c25925de2e9f0b03ba3616",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.4",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:04",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "a31f18ae4b97ed8e84469a6f48aa03de90c59d054f36705c41ef645a6220a68a",
"EndpointID": "1d5195726db2eee73b97ea994d43b1cef17e0918e1c25925de2e9f0b03ba3616",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.4",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:04",
"DriverOpts": null
}
}
}
}
]
~/
We note the IP address of the mydatabases container, enter the myweb container, and ping that IP. The ping succeeds, which shows that containers can reach each other directly through their virtual IPs.
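A sketch of that check using docker inspect's --format flag instead of reading the full JSON by eye (it assumes both containers are running):

DB_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' mydatabases)
docker exec -it myweb ping -c 3 "$DB_IP"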
2.2 Configuring Containers to Communicate by Name
- Remove the earlier myweb container with docker rm myweb, keeping the mydatabases container, and start a new one with docker run -d --name myweb --link mydatabases tomcat. The --link flag tells Docker that this container needs to talk to the container named mydatabases. The full sequence is sketched below.
- Enter the myweb container with docker exec -it myweb sh and run ping mydatabases; the two containers can now talk by name. When configuring the MySQL connection we simply use mydatabases instead of an IP: Docker maintains the mapping between the name mydatabases and that container's IP, so even if the container's IP changes, access is not affected.
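The full sequence from the list above, as a sketch; the MYSQL_ROOT_PASSWORD value is an assumption (the official mysql image refuses to start without a root-password setting), and -f additionally stops the container if it is still running:

docker rm -f myweb
docker run -d --name mydatabases -e MYSQL_ROOT_PASSWORD=root mysql
docker run -d --name myweb --link mydatabases tomcat
docker exec -it myweb ping -c 3 mydatabases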
3 Two-Way Communication Between Containers (Bridge)
As shown above, linking containers to each other in both directions also gives two-way communication, but it is cumbersome. Docker provides a bridge network that makes two-way communication between containers much easier.
3.1 The Virtual Bridge in Docker
(Figure: overview of the Docker bridge components.)
The bridge in Docker is also virtual. Its main purpose is to connect the Docker environment to the outside world. If we ping an external IP (say, Baidu) from inside a container, it succeeds: the packets sent by the container travel over the virtual bridge to the host's physical network card, so it is really the physical NIC that talks to the outside world; in the other direction, the physical NIC hands the response packets back through the virtual bridge to the right container.
3.2 Container-to-Container Communication via a Bridge
We can also use a bridge for two-way communication between containers. Another role of a Docker bridge is to group containers at the network level: bind a set of containers to the same bridge and they can naturally reach one another.
- Run docker run -d --name myweb tomcat to start the myweb container, and docker run -d -it --name mydatabases centos sh to start a mydatabases container in interactive mode (a plain Linux container stands in for a database server here). Since no bridge has been configured yet, the two containers cannot reach each other by name.
- docker network ls lists the Docker networks; the bridge entry is Docker's default bridge. Next we create a new bridge named my-bridge with docker network create -d bridge my-bridge.
- Bind the containers to the new my-bridge network: docker network connect my-bridge myweb and, likewise, docker network connect my-bridge mydatabases. Whichever containers need to talk to one another get attached to this bridge, marking them as members of the my-bridge group. From this point on, the containers can reach each other by name.
~/ docker network ls
NETWORK ID NAME DRIVER SCOPE
a31f18ae4b97 bridge bridge local
9e311308c3ce host host local
2c5f89509739 none null local
~/ docker network create -d bridge my-bridge
5e678ed577b120f0d95e87ce43d44bab8e15e47f4002428168cad61120c54cc7
~/ docker network ls
NETWORK ID NAME DRIVER SCOPE
a31f18ae4b97 bridge bridge local
9e311308c3ce host host local
5e678ed577b1 my-bridge bridge local
2c5f89509739 none null local
~/ docker network connect my-bridge myweb
~/ docker network connect my-bridge mydatabases
~/ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
92888ef080a9 centos "sh" 2 minutes ago Up 2 minutes mydatabases
03fc2187d4ef tomcat "catalina.sh run" 43 minutes ago Up 43 minutes 8080/tcp myweb
~/ docker exec -it 03fc2187d4ef sh
# ping mydatabases
PING mydatabases (172.18.0.3) 56(84) bytes of data.
64 bytes from mydatabases.my-bridge (172.18.0.3): icmp_seq=1 ttl=64 time=0.278 ms
64 bytes from mydatabases.my-bridge (172.18.0.3): icmp_seq=2 ttl=64 time=0.196 ms
64 bytes from mydatabases.my-bridge (172.18.0.3): icmp_seq=3 ttl=64 time=0.417 ms
^C
--- mydatabases ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 55ms
rtt min/avg/max/mdev = 0.196/0.297/0.417/0.091 ms
# exit
~/
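As an alternative to running docker network connect after the fact, a container can be attached to the bridge when it is created; a sketch:

docker run -d --name myweb --network my-bridge tomcat
docker run -d -it --name mydatabases --network my-bridge centos sh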
3.3 How Bridge Communication Works
Why does a bridge make containers reachable from one another? Whenever we create a bridge, Docker installs a virtual network interface on the host, and that interface also acts as a gateway. Containers bound to the bridge (the virtual gateway) effectively sit on the same LAN, which is why they can communicate. The interface is still virtual, though, so if a container wants to talk to the outside world it must go through the host's physical NIC: packets from the virtual interface are address-translated into packets sent out through the physical NIC, and traffic coming back from outside is translated again before being delivered to the right container.
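On a Linux host this can be observed directly; a sketch, noting that interface names and firewall rules vary with the setup and that Docker Desktop on Mac/Windows hides them inside a VM:

ip addr show docker0                 # the virtual NIC/gateway of the default bridge on the host
docker network inspect my-bridge     # subnet, gateway, and attached containers of our bridge
sudo iptables -t nat -L POSTROUTING  # the MASQUERADE (NAT) rule that rewrites outbound container traffic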
4 Sharing Data Between Containers (Volume)
4.1 Data Volumes
Scenario: in many deployments a web service runs as several identical instances behind a load balancer, and every instance keeps the same copy of the files under webapps. If we want to change a page, every instance has to update its own copy under webapps to keep all instances identical. With many services, possibly thousands, updating each one is far too tedious. Data volumes (Volume) are Docker's data-sharing solution to this problem.
The idea behind data sharing is to set aside an area on the host that several containers share at the same time. We keep a single copy of the page files in that host directory and the containers simply mount it; from then on, updating the pages of every web service only requires changing that one copy on the host.
4.2 Sharing Data Between Containers via Mounts
Method one: docker run --name <container name> -v <host path>:<mount path in container> <image name>, for example docker run --name myweb01 -v /usr/webapps:/usr/local/tomcat/webapps tomcat.
Method two: define the mount point on a dedicated container and let other containers reuse its mount configuration. For example, we create a container named webpage that only defines the mount and never needs to run: docker create --name webpage -v /webapps:/tomcat/webapps tomcat /bin/true. Other containers are then created referencing its mounts: docker run --volumes-from webpage --name myweb02 -d tomcat. The myweb02 container ends up with the same mount configuration as webpage, and if the mount point ever needs to change, only webpage has to be modified.
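A closely related option is a Docker-managed named volume instead of a fixed host path; a sketch, where the volume name webpages is made up for illustration:

docker volume create webpages
docker run -d --name myweb01 -v webpages:/usr/local/tomcat/webapps tomcat
docker run -d --name myweb02 -v webpages:/usr/local/tomcat/webapps tomcat
docker volume inspect webpages    # shows where Docker stores the shared data on the host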
5 Container Orchestration (docker-compose)
5.1 Introduction to docker-compose
Deploying an application stack such as nginx + tomcat + mysql means one container per service, each with its own configuration; deploying every container by hand quickly becomes painful when there are many of them. docker-compose solves this.
With orchestration we can automatically install the mysql container first, then the tomcat container, then the nginx container: containers that others depend on are installed first, fully automatically. A deployment script that defines the order in which containers are brought up like this is called container orchestration.
Docker Compose is a single-host, multi-container deployment tool: a YAML file defines how the containers are deployed. Docker Desktop on Windows and macOS ships with Docker Compose; on Linux it has to be installed separately. docker-compose can only orchestrate containers on a single host; cluster deployment is handled by other tools such as Kubernetes (k8s).
5.2 Installing docker-compose
Docker Desktop on macOS and Windows already includes docker-compose. For Linux, follow the official documentation at https://docs.docker.com/compose/.
1. Install:
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
2. Make it executable:
sudo chmod +x /usr/local/bin/docker-compose # grant execute permission
3. Verify docker-compose:
~/ docker-compose -version
docker-compose version 1.27.4, build 40524192
~/
5.3 Quickly Installing an Open-Source Blog with docker-compose
1. Create a file named docker-compose.yml.
2. Edit it, filling in the content from the official example:
version: '3.3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
3. Execute the docker-compose file by running docker-compose up -d, where up parses and applies the script and -d runs it in the background:
~/opt/docker_compose/ ls
docker-compose.yml
~/opt/docker_compose/ docker-compose up -d
Creating network "docker_compose_default" with the default driver
Creating volume "docker_compose_db_data" with default driver
Pulling db (mysql:5.7)...
5.7: Pulling from library/mysql
bb79b6b2107f: Pull complete
49e22f6fb9f7: Pull complete
842b1255668c: Pull complete
9f48d1f43000: Pull complete
c693f0615bce: Pull complete
8a621b9dbed2: Pull complete
0807d32aef13: Pull complete
f15d42f48bd9: Pull complete
098ceecc0c8d: Pull complete
b6fead9737bc: Pull complete
351d223d3d76: Pull complete
Digest: sha256:4d2b34e99c14edb99cdd95ddad4d9aa7ea3f2c4405ff0c3509a29dc40bcb10ef
Status: Downloaded newer image for mysql:5.7
Pulling wordpress (wordpress:latest)...
latest: Pulling from library/wordpress
bb79b6b2107f: Already exists
80f7a64e4b25: Pull complete
da391f3e81f0: Pull complete
8199ae3052e1: Pull complete
284fd0f314b2: Pull complete
f38db365cd8a: Pull complete
1416a501db13: Pull complete
1a45b5b978cd: Pull complete
c662caa8d2ec: Pull complete
2db216a7247d: Pull complete
c3a7647076e8: Pull complete
e40fcea67f94: Pull complete
7f3f9920f7b8: Pull complete
815cf81de52a: Pull complete
680504ca4ff0: Pull complete
9ffcc5a051ce: Pull complete
b9db15beb1db: Pull complete
d5b4974eafaa: Pull complete
0265b92c6601: Pull complete
3342ef871b20: Pull complete
Digest: sha256:6bfe0d4bdb581493c2350da80c48fca089d39315d8fa309bdff7984442e13ba9
Status: Downloaded newer image for wordpress:latest
Creating docker_compose_db_1 ... done
Creating docker_compose_wordpress_1 ... done
~/opt/docker_compose/ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0afa29639779 wordpress:latest "docker-entrypoint.s…" 55 seconds ago Up 54 seconds 0.0.0.0:8000->80/tcp docker_compose_wordpress_1
32333851cdf3 mysql:5.7 "docker-entrypoint.s…" 56 seconds ago Up 54 seconds 3306/tcp, 33060/tcp docker_compose_db_1
~/opt/docker_compose/
We can see that two containers were installed automatically: the underlying database, and the WordPress blog application on top of it, with the port mapping on the application container set up automatically. Browsing to 127.0.0.1:8000 now shows the WordPress setup wizard; after filling in a username, password, and the other registration details and logging in, the blog's admin dashboard appears.
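When the experiment is finished, the whole stack can be torn down from the same directory; a sketch, noting that --volumes also deletes the db_data volume, so add it only if the data can be discarded:

docker-compose down            # stop and remove the containers and the default network
docker-compose down --volumes  # additionally remove the db_data named volume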
6 docker-compose in Practice
Project structure:
~/opt/ ls
bsbdj docker docker-redis docker_compose docker_run
~/opt/ cd bsbdj
~/opt/bsbdj/ ls
bsbdj-app bsbdj-db docker-compose.yml
~/opt/bsbdj/ cd bsbdj-db
~/opt/bsbdj/bsbdj-db/ ls
Dockerfile init-db.sql
~/opt/bsbdj/bsbdj-db/ ls ../bsbdj-app
Dockerfile application-dev.yml application.yml bsbdj.jar
~/opt/bsbdj/bsbdj-db/
Here /opt/bsbdj/bsbdj-db/init-db.sql is the SQL script that initializes the project's database, and /opt/bsbdj/bsbdj-app/bsbdj.jar is a web application built with Spring Boot.
6.1 Creating a Dockerfile to Build the DB Container
The Dockerfile lives in ~/opt/bsbdj/bsbdj-db/ and contains:
FROM mysql:5.7
# This is the directory where the official mysql:5.7 image looks for database initialization scripts; see the official documentation
WORKDIR /docker-entrypoint-initdb.d
ADD init-db.sql .
The official MySQL image lets us pass environment variables when running a MySQL container, for example to set the root user's password: docker run --name mysql01 -e MYSQL_ROOT_PASSWORD=abc123 -d mysql:5.7. It supports a number of such variables: MYSQL_ROOT_PASSWORD sets the root password, MYSQL_DATABASE creates a database, and MYSQL_USER together with MYSQL_PASSWORD creates an additional user. The documentation also states that .sql files placed under /docker-entrypoint-initdb.d are executed when the container first starts. See the official MySQL image documentation for details (https://hub.docker.com/_/mysql).
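The same mechanism works without a custom image; a sketch of a one-off run, where the database name bsbdj and the password are assumptions for illustration:

docker run -d --name mysql01 \
  -e MYSQL_ROOT_PASSWORD=abc123 \
  -e MYSQL_DATABASE=bsbdj \
  -v ~/opt/bsbdj/bsbdj-db/init-db.sql:/docker-entrypoint-initdb.d/init-db.sql \
  mysql:5.7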
6.2 Creating a Dockerfile to Build the Web Service Container
The Dockerfile lives in ~/opt/bsbdj/bsbdj-app/ and contains:
FROM openjdk:8u222-jre
WORKDIR /usr/local/bsbdj
ADD bsbdj.jar .
ADD application.yml .
ADD application-dev.yml .
# The application's configuration listens on port 80, so the image must expose port 80 as well
EXPOSE 80
CMD ["java","-jar","bsbdj.jar"]
6.3 Orchestrating the Containers with docker-compose
- Create docker-compose.yml under ~/opt/bsbdj/ with the following content:
version: '3.3'
services:
  db:
    build: ./bsbdj-db/
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
  app:
    build: ./bsbdj-app/
    depends_on:
      - db
    ports:
      - "80:80"
    restart: always
Where:
1. version: '3.3' is the most common docker-compose file version; configuration options differ slightly between versions.
2. services is a parent node that holds all the services in this composition.
3. Under services are the individual container services; here we have db and app. The service names also serve as the containers' names (their network hostnames), so containers can reach each other by service name instead of by IP.
4. build, a child of a service, says which directory's Dockerfile the service's container is built from.
5. restart is the restart policy; with always, Docker restarts the container if it exits unexpectedly.
6. environment sets the environment variables of the service's container.
7. depends_on declares the service's dependencies; our app container depends on the db service, and the dependency is listed by service name.
8. ports maps a service port to a host port, in the form host port:exposed container port.
- Run the orchestration by executing docker-compose up -d from ~/opt/bsbdj/:
~/opt/bsbdj/ ls
bsbdj-app bsbdj-db docker-compose.yml
~/opt/bsbdj/ docker-compose up -d
Creating network "bsbdj_default" with the default driver
Building db
Step 1/3 : FROM mysql:5.7
---> 1b12f2e9257b
Step 2/3 : WORKDIR /docker-entrypoint-initdb.d
---> Running in 75a8bc4e79e2
Removing intermediate container 75a8bc4e79e2
---> c66ad7c93eff
Step 3/3 : ADD init-db.sql .
---> d5f353210a89
Successfully built d5f353210a89
Successfully tagged bsbdj_db:latest
WARNING: Image for service db was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Building app
Step 1/7 : FROM openjdk:8u222-jre
8u222-jre: Pulling from library/openjdk
9a0b0ce99936: Pull complete
db3b6004c61a: Pull complete
f8f075920295: Pull complete
4901756f2337: Pull complete
9cfcf0e1f584: Pull complete
d6307286bdcd: Pull complete
Digest: sha256:3d3df6a0e485f9c38236eaa795fc4d2e8b8d0f9305051c1e4f7fbca71129b06a
Status: Downloaded newer image for openjdk:8u222-jre
---> 25073ded58d2
Step 2/7 : WORKDIR /usr/local/bsbdj
---> Running in 3f8dfd0f5274
Removing intermediate container 3f8dfd0f5274
---> c4be1b7c9c34
Step 3/7 : ADD bsbdj.jar .
---> e63c66e4097a
Step 4/7 : ADD application.yml .
---> e670ee9a6aef
Step 5/7 : ADD application-dev.yml .
---> 164471f9fdc9
Step 6/7 : EXPOSE 80
---> Running in efdae300b488
Removing intermediate container efdae300b488
---> 91f9d21b9090
Step 7/7 : CMD ["java","-jar","bsbdj.jar"]
---> Running in b2cee3ea8900
Removing intermediate container b2cee3ea8900
---> 5e4ed28267e8
Successfully built 5e4ed28267e8
Successfully tagged bsbdj_app:latest
WARNING: Image for service app was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating bsbdj_db_1 ... done
Creating bsbdj_app_1 ... done
~/opt/bsbdj/ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eb5a44f51446 bsbdj_app "java -jar bsbdj.jar" 4 seconds ago Up 4 seconds 0.0.0.0:80->80/tcp bsbdj_app_1
bfc741434b91 bsbdj_db "docker-entrypoint.s…" 5 seconds ago Up 4 seconds 3306/tcp, 33060/tcp bsbdj_db_1
~/opt/bsbdj/
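As the WARNING lines in the output note, docker-compose reuses images it has already built; after changing either Dockerfile, the jar, or the SQL script, rebuild explicitly (a sketch):

docker-compose build             # rebuild the db and app images
docker-compose up --build -d     # or rebuild and restart in one step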
- docker-compose logs, run from ~/opt/bsbdj/, shows the services' logs; docker-compose logs app shows only the logs of the app service.
- docker-compose down, run from ~/opt/bsbdj/, stops the services and then removes the service containers:
~/opt/bsbdj/ docker-compose down
Stopping bsbdj_app_1 ... done
Stopping bsbdj_db_1 ... done
Removing bsbdj_app_1 ... done
Removing bsbdj_db_1 ... done
Removing network bsbdj_default
~/opt/bsbdj/
- Edit the app's configuration and replace localhost in the MySQL connection URL with the service name db; this fixes the database-connection failure (see the sketch after this list). The URL becomes:
jdbc:mysql://db:3306/bsbdj?useUnicode=true
- Browse to 127.0.0.1 to see the application running.
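A minimal sketch of what that change could look like in application-dev.yml; the exact property keys and credentials are assumptions about this project, not taken from its source:

spring:
  datasource:
    url: jdbc:mysql://db:3306/bsbdj?useUnicode=true
    username: root
    password: root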
The full set of docker-compose commands is covered in the official documentation. Since docker-compose only supports single-host deployment it is rarely used for large projects, but the basic commands above cover most needs. For deploying to large clusters, the next step is to learn Kubernetes (k8s).