Docker Networking

⚠ Please credit the source when reposting. Author: ZobinHuang. Last updated: July 12, 2021


    This work by ZobinHuang is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please review the terms before using or sharing it. Any infringement will be pursued through legal means to protect the author's legitimate rights; thank you for your cooperation.


Table of Contents

If you are looking for something specific, jump straight to the relevant section.

    Section 1. How Container Networking Works: a brief look at the mechanism behind Docker's container networking

    Section 2. Reaching a Container by Name: how the --link flag lets one container ping another by its container name

    Section 3. Custom Networks: how to create your own container networks, and the purpose and motivation of the different network types

    Section 4. Connecting a Container to Multiple Networks: how to attach a single container to more than one network

1. How Container Networking Works

    When Docker is up and running on a machine, we can find a network interface named docker0 on the host. docker0 is in fact a Linux virtual bridge:

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:17ff:fedd:eba5 prefixlen 64 scopeid 0x20<link>
ether 02:42:17:dd:eb:a5 txqueuelen 0 (Ethernet)
RX packets 396128 bytes 55307406 (52.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 416988 bytes 76745267 (73.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
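
    To confirm that docker0 really is a Linux bridge, we can ask the kernel for the link details (a sketch; the exact attributes printed vary with the iproute2 version, and brctl show works as well if bridge-utils is installed):

$ ip -d link show docker0
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:17:dd:eb:a5 brd ff:ff:ff:ff:ff:ff promiscuity 0
    bridge forward_delay 1500 hello_time 200 max_age 2000 ...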

    When we run a container (here we use a Tomcat container as an example), let's look at the network interfaces inside it:

$ docker run -P -d --name tomcat01 tomcat
$ docker exec -it tomcat01 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
152: eth0@if153: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever

    We can see that the container's eth0@if153 interface is on the same subnet as the host's docker0 bridge.

    Moreover, by combining the ip addr and ifconfig commands, we find that the host has gained the following network interface:

veth00111ba: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet6 fe80::6067:bbff:fe96:2e0f prefixlen 64 scopeid 0x20<link>
ether 62:67:bb:96:2e:0f txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 43 bytes 4476 (4.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    This is a virtual network interface created by Linux's veth-pair mechanism. Containers rely on the kernel's namespace mechanism to isolate their networking, so in principle the networks of different namespaces should be isolated from each other, both between host and container and between containers. However, every time Docker creates a container it also creates a veth pair between the host and that container, which effectively connects the two namespaces. The following example illustrates this structure further:

    Let's start a second Tomcat container and inspect its network interfaces:

$ docker run -P -d --name tomcat02 tomcat
$ docker exec -it tomcat02 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
154: eth0@if155: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever

    Likewise, another interface has appeared on the host:

vetha1cf07a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet6 fe80::7ce1:a5ff:fee1:e prefixlen 64 scopeid 0x20<link>
ether 7e:e1:a5:e1:00:0e txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 36 bytes 3882 (3.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    With this in place, the container network on this host can be pictured as follows: each container's eth0 is one end of a veth pair whose other end sits on the host and is attached to the docker0 bridge, so the host and all containers on the default bridge can reach one another through docker0.
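
    One way to verify this wiring (a sketch; interface names and index numbers will differ on your machine) is to match the ifindex shown after the @ inside the container with the host side of the veth pair:

# list the host-side peers attached to docker0
$ ip link show master docker0
153: veth00111ba@if152: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ... master docker0 ...
155: vetha1cf07a@if154: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ... master docker0 ...
# tomcat01's eth0@if153 pairs with host interface 153 (veth00111ba),
# tomcat02's eth0@if155 pairs with host interface 155 (vetha1cf07a)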

    If we try to ping tomcat02 from inside tomcat01, it works as expected:

[root@ks tomcat_example]$ docker exec -it tomcat01 /bin/bash
root@84f1e03fd2a3:/usr/local/tomcat$ ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.176 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.094 ms
64 bytes from 172.17.0.3: icmp_seq=3 ttl=64 time=0.097 ms
64 bytes from 172.17.0.3: icmp_seq=4 ttl=64 time=0.094 ms
64 bytes from 172.17.0.3: icmp_seq=5 ttl=64 time=0.091 ms
^C
--- 172.17.0.3 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 117ms

2. Reaching a Container by Name

    When creating a container, we can pass the --link flag to connect it to an existing container so that the new container can reach the other one directly by its container name. Below we create a Tomcat container tomcat03 and link it to the tomcat02 container; note what happens:

$ docker run -P -d --name tomcat03 --link tomcat02 tomcat
c1aeef9efd292993f0a2b290addbf3bda5119f959b14e5e8e9c61d7bc217c929

# tomcat03 can ping tomcat02 by name
$ docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.173 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=0.111 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=3 ttl=64 time=0.110 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=4 ttl=64 time=0.107 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=5 ttl=64 time=0.112 ms
^C
--- tomcat02 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 119ms
rtt min/avg/max/mdev = 0.107/0.122/0.173/0.028 ms

# but tomcat02 cannot ping tomcat03 by name
$ docker exec -it tomcat02 ping tomcat03
ping: tomcat03: No address associated with hostname
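
    Under the hood, --link simply writes an extra entry into the linking container's /etc/hosts, which is why the relationship is one-way. We can confirm this with a quick look (a sketch; the exact contents of the hosts file depend on your setup):

$ docker exec -it tomcat03 cat /etc/hosts
127.0.0.1       localhost
...
172.17.0.3      tomcat02 8cf0dfb319af
172.17.0.4      c1aeef9efd29
# tomcat02's /etc/hosts gets no matching entry for tomcat03, so the reverse lookup fails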

3. Custom Networks

    We can list the container networks currently present on the host with "docker network ls":

$ docker network ls
NETWORK ID NAME DRIVER SCOPE
50dc2c37308c bridge bridge local
0f2a23e2682b host host local
1e238be7f053 none null local

    Note the three networks named "bridge", "host" and "none". They correspond to the three network modes Docker commonly uses: Bridge, Host and Null, which is also what we see in the Driver column. Each mode is explained below.

Null

    As the name suggests, the null (none) network provides no networking at all. A container attached to it has no interface other than lo. This is useful for containers with strict security requirements that do not need any network access.

# start a container attached to the none network
$ docker run -d -P --name tomcat-no-network --net none tomcat
508b46cbd6cfed5b2b7636319cde77914abf89a88bc92e48cd4e695aeced4253
# inspect the container's network interfaces
$ docker exec -it 508b46cbd6cf ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
Host

    In host mode the container does not get a network namespace of its own; it shares the host's network stack directly, so the interfaces seen inside the container are exactly the host's interfaces, as the following comparison shows:
# host network interfaces:
$ ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:17ff:fedd:eba5 prefixlen 64 scopeid 0x20<link>
ether 02:42:17:dd:eb:a5 txqueuelen 0 (Ethernet)
RX packets 396140 bytes 55307966 (52.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 417006 bytes 76746751 (73.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.10.135 netmask 255.255.255.0 broadcast 192.168.10.255
inet6 fe80::20c:29ff:fe86:dfdd prefixlen 64 scopeid 0x20<link>
inet6 fdc9:80f7:ca26:0:20c:29ff:fe86:dfdd prefixlen 64 scopeid 0x0<global>
ether 00:0c:29:86:df:dd txqueuelen 1000 (Ethernet)
RX packets 23642705 bytes 14164303980 (13.1 GiB)
RX errors 0 dropped 1566130 overruns 0 frame 0
TX packets 4022915 bytes 843457147 (804.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 10398299 bytes 878205221 (837.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10398299 bytes 878205221 (837.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
... (output truncated)

# start a container attached to the host network
$ docker run -d -P --name tomcat-host-network --net host tomcat

# inspect the interfaces inside the container
$ docker exec -it 66ee7d534016 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:86:df:dd brd ff:ff:ff:ff:ff:ff
inet 192.168.10.135/24 brd 192.168.10.255 scope global dynamic noprefixroute ens192
valid_lft 37736sec preferred_lft 37736sec
inet6 fdc9:80f7:ca26:0:20c:29ff:fe86:dfdd/64 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe86:dfdd/64 scope link noprefixroute
valid_lft forever preferred_lft forever
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:17:dd:eb:a5 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:17ff:fedd:eba5/64 scope link noprefixroute
valid_lft forever preferred_lft forever
... (output truncated)
Bridge

    As mentioned above, installing Docker creates a Linux bridge named docker0. Containers created without an explicit --net option are attached to docker0 by default, and between each such container and the host there is a veth pair, as we saw earlier. This opens up communication between container and host, and also between containers attached to the same bridge-mode network. To keep this section short, we will not repeat the concrete examples here.

    We can use "docker network inspect [NETWORK ID]" to view the details of a network. Let's inspect the bridge network:

$ docker network inspect 50dc2c37308c
[
{
"Name": "bridge",
"Id": "50dc2c37308c181a588f03a88b6c8c0b5a5ca4a72f9da46c8ba48fb3d6517f56",
"Created": "2021-07-06T03:59:41.194202153-04:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"1d9545d59019da68bb5554201d89ef82891d8ec55e6ed46bf10dec626f3ae471": {
"Name": "portainer",
"EndpointID": "9923e3e83da0d97a74f7407c9e504c339571e93238bd74605fe9029467d34c4e",
"MacAddress": "02:42:ac:11:00:05",
"IPv4Address": "172.17.0.5/16",
"IPv6Address": ""
},
"84f1e03fd2a36f4584862b515c4be1f94e2a0cab37e1430b1a2b65de2a2a5b15": {
"Name": "tomcat01",
"EndpointID": "e22c52b882bc09e5af6485195ae99f2a19decfaf3f51ed13ff053e2e47d112e0",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"8cf0dfb319af0aa76cfc878a2295f6eff599bbf11269b595ede1e99e4862d272": {
"Name": "tomcat02",
"EndpointID": "5d8184a719986b02022961f9cceb451c1e37a3e2b8208352bd794836da645beb",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"b78e05ec4fa2a55da843b4f4f801314991b9db5b900a4f90d8a088613de06da2": {
"Name": "mysql_daxecture",
"EndpointID": "2747625c1b95218017d30639f9d9103031f66cc25534403ce2c1d08eca844842",
"MacAddress": "02:42:ac:11:00:06",
"IPv4Address": "172.17.0.6/16",
"IPv6Address": ""
},
"c1aeef9efd292993f0a2b290addbf3bda5119f959b14e5e8e9c61d7bc217c929": {
"Name": "tomcat03",
"EndpointID": "c58695bdaa7f8e26bb10287b096bff5e10ad71257960ab1d2d64d03fd49545dd",
"MacAddress": "02:42:ac:11:00:04",
"IPv4Address": "172.17.0.4/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]

    In the "Containers" field we can see that all the containers we created earlier are attached to this network named bridge. In other words, when we started those containers, we were in effect running:

# the two commands below are equivalent
docker run -d -P --name tomcat01 tomcat
docker run -d -P --name tomcat01 --net bridge tomcat

    Therefore, by changing the network name passed after --net, we can attach our container to a custom network of our own.

    We can create such a custom network with the "docker network create" command:

# create a custom network
$ docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
3eeec21de11986a49f5ad8ff6c2b58057fbe3531bae74364c0955a1c17167199

# list the container networks again
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
50dc2c37308c bridge bridge local
0f2a23e2682b host host local
3eeec21de119 mynet bridge local
1e238be7f053 none null local

    Next we create a container attached to this custom network and check the IP address it receives:

# create a container attached to the custom network
$ docker run -d -P --name tomcat-mynet --net mynet tomcat
07e1ead8852189951cefbd13f5b94a25c1397fb44de6591257e8e2eac641b522

# find the container ID
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
07e1ead88521 tomcat "catalina.sh run" 15 seconds ago Up 14 seconds 0.0.0.0:49158->8080/tcp, :::49158->8080/tcp tomcat-mynet

# check the container's IP
$ docker exec -it 07e1ead88521 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
159: eth0@if160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.2/16 brd 192.168.255.255 scope global eth0
valid_lft forever preferred_lft forever

    We can see that the container sits on the 192.168.0.0/16 subnet.
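
    We can also check the container's routing table; traffic should leave through the gateway we configured for mynet (a sketch, assuming the usual iproute2 output format):

$ docker exec -it 07e1ead88521 ip route
default via 192.168.0.1 dev eth0
192.168.0.0/16 dev eth0 proto kernel scope link src 192.168.0.2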

    In addition, on a user-defined bridge network containers can ping each other directly by container name, something that is not possible on the default docker0 bridge.

# create another container, tomcat-mynet_copy, attached to the mynet network
$ docker run -d -P --name tomcat-mynet_copy --net mynet tomcat
08530df47884a2f5bce779d5851ad04ef539c117d590adbf10d24f71e5e1ff9b

# find the container IDs
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
08530df47884 tomcat "catalina.sh run" 13 seconds ago Up 12 seconds 0.0.0.0:49159->8080/tcp, :::49159->8080/tcp tomcat-mynet_copy
07e1ead88521 tomcat "catalina.sh run" 10 minutes ago Up 10 minutes 0.0.0.0:49158->8080/tcp, :::49158->8080/tcp tomcat-mynet

# ping tomcat-mynet from tomcat-mynet_copy
$ docker exec -it 08530df47884 ping tomcat-mynet
PING tomcat-mynet (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-mynet.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.163 ms
64 bytes from tomcat-mynet.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.104 ms
^C
--- tomcat-mynet ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 10ms
rtt min/avg/max/mdev = 0.104/0.133/0.163/0.031 ms

# ping tomcat-mynet_copy from tomcat-mynet
$ docker exec -it 07e1ead88521 ping tomcat-mynet_copy
PING tomcat-mynet_copy (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-mynet_copy.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.147 ms
64 bytes from tomcat-mynet_copy.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.121 ms
^C
--- tomcat-mynet_copy ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 15ms
rtt min/avg/max/mdev = 0.121/0.134/0.147/0.013 ms
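
    Name resolution on user-defined networks is provided by Docker's embedded DNS server, which containers reach at 127.0.0.11; the default bridge does not offer this service, which is why --link was needed in Section 2. A quick way to see it (a sketch; the exact resolv.conf contents can vary):

$ docker exec -it 08530df47884 cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0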

4. Connecting a Container to Multiple Networks

    In this section we learn how to connect a container to an additional container network, even though it already belongs to another one.

    Starting from the network layout of the previous section, we first create a container on the default docker0 bridge:

# create a container on the default bridge network
$ docker run -d -P --name tomcat01 tomcat
230a7ebd5e09500d011aa8c553891c28f25c86ab14a84ce29be70b633b4cb8d5

# list all running containers
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
230a7ebd5e09 tomcat "catalina.sh run" 6 seconds ago Up 5 seconds 0.0.0.0:49160->8080/tcp, :::49160->8080/tcp tomcat01
08530df47884 tomcat "catalina.sh run" 17 minutes ago Up 17 minutes 0.0.0.0:49159->8080/tcp, :::49159->8080/tcp tomcat-mynet_copy
07e1ead88521 tomcat "catalina.sh run" 27 minutes ago Up 27 minutes 0.0.0.0:49158->8080/tcp, :::49158->8080/tcp tomcat-mynet

    Now, using "docker network connect [NETWORK] [CONTAINER]", we connect tomcat01, which was created on the default bridge network, to our custom network mynet:

# find the network ID
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
50dc2c37308c bridge bridge local
0f2a23e2682b host host local
3eeec21de119 mynet bridge local
1e238be7f053 none null local

# connect tomcat01 to the mynet network
$ docker network connect mynet 230a7ebd5e09

# inspect the mynet network
$ docker network inspect mynet
[
{
"Name": "mynet",
"Id": "3eeec21de11986a49f5ad8ff6c2b58057fbe3531bae74364c0955a1c17167199",
"Created": "2021-07-12T11:20:32.10598884-04:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"07e1ead8852189951cefbd13f5b94a25c1397fb44de6591257e8e2eac641b522": {
"Name": "tomcat-mynet",
"EndpointID": "f08f93af4789a25ccd3c7404de59067e085471f7ca76bf78e998952250a5eecc",
"MacAddress": "02:42:c0:a8:00:02",
"IPv4Address": "192.168.0.2/16",
"IPv6Address": ""
},
"08530df47884a2f5bce779d5851ad04ef539c117d590adbf10d24f71e5e1ff9b": {
"Name": "tomcat-mynet_copy",
"EndpointID": "37e09c52d2e3d7a8136b1ed8457293468d0640d3f577a4c8801257f76a90d163",
"MacAddress": "02:42:c0:a8:00:03",
"IPv4Address": "192.168.0.3/16",
"IPv6Address": ""
},
# note the newly attached tomcat01
"230a7ebd5e09500d011aa8c553891c28f25c86ab14a84ce29be70b633b4cb8d5": {
"Name": "tomcat01",
"EndpointID": "fa2a36f790ec7d8c35dfd45e51b71fa800af449a191735acba1ddbf138bee3dd",
"MacAddress": "02:42:c0:a8:00:04",
"IPv4Address": "192.168.0.4/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]

    As a result, the tomcat01 container is now part of two container networks, i.e. it has two network interfaces and can ping every other container in both networks!

# find the container ID
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
230a7ebd5e09 tomcat "catalina.sh run" 7 minutes ago Up 7 minutes 0.0.0.0:49160->8080/tcp, :::49160->8080/tcp tomcat01

# list tomcat01's network interfaces
$ docker exec -it 230a7ebd5e09 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
163: eth0@if164: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
167: eth1@if168: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.4/16 brd 192.168.255.255 scope global eth1
valid_lft forever preferred_lft forever
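
    As a quick check (a sketch; names and addresses follow the examples above), tomcat01 should now reach both networks, and the extra attachment can be undone with docker network disconnect:

# reachable by name through mynet's embedded DNS
$ docker exec -it tomcat01 ping -c 2 tomcat-mynet
# the default bridge side is still there: its gateway (docker0) and any containers on it are reachable by IP
$ docker exec -it tomcat01 ping -c 2 172.17.0.1
# detach tomcat01 from mynet again if desired
$ docker network disconnect mynet tomcat01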