After Docker is installed there are already five network drivers available by default; you can check them with docker info.

They are bridge, host, macvlan, null and overlay. Except for overlay they are all single-host networks and cannot span hosts.

docker network ls lists the existing networks; local in the last column (SCOPE) means the network can only be used on this node.

NETWORK ID NAME DRIVER SCOPE
68c0e2d81161 bridge bridge local
26c7818acfd9 host host local
8f267eb07e0e none null local
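
If you only want the driver list mentioned above, docker info can print it directly; this is a quick sketch assuming an Engine recent enough to support Go-template output via --format:

docker info --format '{{.Plugins.Network}}'
# prints something like: [bridge host macvlan null overlay]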

bridge

This is the bridged mode: traffic is forwarded through the docker0 virtual bridge on the host.

ip link show docker0
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:de:0f:ce:ca brd ff:ff:ff:ff:ff:ff
brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242de0fceca no

The inspect subcommand shows the details:

docker inspect $(docker network ls |grep bridge|awk '{print $1}')

[
{
"Name": "bridge",
"Id": "1c006820eb42b4f57f95c91dcb694df703afebd4fcb666cc58ead63877b57deb",
"Created": "2018-01-10T06:05:53.925311686-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
When you don't specify a network, the bridge network is used by default. For example:
docker run -itd vsxen/k8s
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:2/64 scope link
valid_lft forever preferred_lft forever
The interface starting with veth (shown below) is the host-side device created for the container; note that it is not the interface inside the container.
Because a physical NIC on Linux can only live in one network namespace, virtual devices have to be used to give each container its own interface.
At the same time an extra virtual NIC appears on the host:
5: veth063a9eb@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 82:94:02:81:ea:a9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::8094:2ff:fe81:eaa9/64 scope link
valid_lft forever preferred_lft forever
ip r
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 src 172.17.0.2
ping d.cn
PING d.cn (183.131.214.42): 56 data bytes
64 bytes from 183.131.214.42: seq=0 ttl=52 time=13.337 ms
       docker0              eth0    -> host
  -----------------         ----
    |           |
  vethx       vethy
  -----       -----
    |           |        ----> veth pairs
+---+----+  +---+----+
|  eth0  |  |  eth0  |
+--------+  +--------+
container1  container2
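
The @ifN suffixes show which host veth pairs with which container eth0: the container's eth0@if5 peers with host interface index 5 (veth063a9eb), and the host's veth063a9eb@if4 peers with index 4 (the container's eth0). A quick way to confirm the mapping (the container name/ID is an assumption here):

docker exec <container> cat /sys/class/net/eth0/iflink   # prints the peer ifindex, e.g. 5
ip -o link | grep '^5:'                                   # the matching host-side veth
brctl show docker0                                        # that veth is attached to docker0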

host

The container uses the host's network stack directly, which should give the best performance. A container run this way binds to its default ports on the host,
so multiple applications have to sort out port conflicts among themselves.

The inspect subcommand shows the details:

docker inspect $(docker network ls |grep host|awk '{print $1}')

[
{
"Name": "host",
"Id": "94902992950ea665822834f82c173e19af65c2b628e447fda3fc88a39b219e1b",
"Created": "2017-11-19T13:35:24.770968568-05:00",
"Scope": "local",
"Driver": "host",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Attachable": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
docker run -itd --net host vsxen/k8s
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:5a:5e:34 brd ff:ff:ff:ff:ff:ff
inet 192.168.12.103/24 brd 192.168.12.255 scope global enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe5a:5e34/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:dd:cb:1c:c5 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
ip r
default via 192.168.12.1 dev enp0s3
172.17.0.0/16 dev docker0 scope link src 172.17.0.1
192.168.12.0/24 dev enp0s3 scope link src 192.168.12.103
ping d.cn
PING d.cn (183.131.214.42): 56 data bytes
64 bytes from 183.131.214.42: seq=0 ttl=53 time=11.149 ms
The interfaces and routes are exactly the same as the host's.

Host networking is often contrasted with the --publish (-p) option. The difference is that with --publish
the container keeps its own network namespace and iptables does the forwarding:

iptables -t nat -L -n
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80
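
A minimal sketch that would produce a rule like the one above (the nginx image is an assumption; any image listening on port 80 would do):

docker run -d -p 80:80 nginx
iptables -t nat -L DOCKER -n    # the DNAT rule lands in the DOCKER chain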

none

This is simply no networking: the container only gets a loopback interface (see the quick check after the inspect output below).

[
{
"Name": "none",
"Id": "0e61fd6480d7a3c68d01d57577a1b282464148fc748850d1493b782111f616c2",
"Created": "2017-11-19T13:35:24.755228508-05:00",
"Scope": "local",
"Driver": "null",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Attachable": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
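
A quick check (the container name nonet is an assumption; the image is the one used earlier):

docker run -itd --name nonet --net none vsxen/k8s
docker exec nonet ip a    # only the lo loopback interface is present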

overlay

To support cross-host container communication, Docker provides the overlay driver, which lets users create VxLAN-based overlay networks.
VxLAN encapsulates layer-2 frames in UDP for transport; it offers the same Ethernet layer-2 service as VLAN, but with far better scalability and flexibility.
A Docker overlay network needs a key-value store to hold network state such as Networks, Endpoints and IPs.
Consul, etcd and ZooKeeper are all key-value stores supported by Docker; here we use Consul.
Docker actually ships a similar built-in KV store as well (just initialize swarm mode), but it can only be used by swarm-orchestrated applications.

Add the following options to the dockerd command line on every node and restart Docker:

--cluster-store=consul://192.168.56.1:8500 --cluster-advertise=enp0s3:2376
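
For the KV store itself, a minimal sketch is to run Consul on the host those options point at (192.168.56.1); using the consul image from Docker Hub in its default dev mode is an assumption here:

docker run -d --name consul -p 8500:8500 consul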

systemctl daemon-reload
systemctl restart docker
# create an overlay network; with --cluster-store configured its scope is global
docker network create -d overlay ov_net1
docker network inspect ov_net1
# start one container on each host attached to the overlay network
docker run -itd --name bbox1 --network ov_net1 vsxen/k8s
docker run -itd --name bbox2 --network ov_net1 vsxen/k8s
# each container gets a route into the overlay subnet and can reach the other by name
docker exec bbox1 ip r
docker exec bbox2 ip r
docker exec bbox1 ping bbox2

macvlan

macvlan itself is a Linux kernel module. It allows multiple MAC addresses, i.e. multiple interfaces, to be configured on a single physical NIC, each with its own IP address.
macvlan is essentially a NIC virtualization technique, so it is no surprise that Docker uses it for container networking.
Its biggest advantage is excellent performance: unlike other implementations, macvlan needs no Linux bridge; the interfaces connect to the physical network directly through the Ethernet interface.
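
Outside of Docker the same kernel feature can be exercised directly with ip link; a sketch where the interface name mv1 and the address are assumptions:

ip link add mv1 link enp0s3 type macvlan mode bridge   # a new interface with its own MAC on top of enp0s3
ip addr add 192.168.200.210/24 dev mv1
ip link set mv1 up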

Note that a static IP can only be assigned on user-defined networks (the macvlan network created below is one); on the default bridge network Docker refuses it:

docker run --name bbox2 --ip=172.16.0.4 -itd vsxen/k8s
b1b3ee6c325f8b679f60ec5e8ea3ae54ac2109b17695f82597f0f80715a56d43
docker: Error response from daemon: User specified IP address is supported on user defined networks only.
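
On any user-defined network with an explicit subnet the same --ip flag works, not just macvlan; a sketch with a custom bridge network (the network name and addresses are assumptions):

docker network create --subnet=172.18.0.0/16 my_bridge
docker run --name bbox3 --net my_bridge --ip=172.18.0.4 -itd vsxen/k8s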

To make sure packets for multiple MAC addresses can all pass through the physical NIC (enp0s3 here), we need to turn on the NIC's promiscuous mode:

# grab the physical NIC name (enp0s3 is used directly below)
NIC=$(ip link|grep enp |awk '{print $2}' | sed "s/:\+//g")
ip link set enp0s3 promisc on
# verify
ip link |grep PROMISC
docker network create -d macvlan \
--subnet=192.168.200.0/24 \
--gateway=192.168.200.1 \
-o parent=enp0s3 mac_net1
# if the create fails, check whether the corresponding kernel module is loaded
lsmod | grep 8021q
# load the module if needed
modprobe 8021q
# the gateway here should match the gateway of the real physical network
docker run --name c1 --net=mac_net1 --ip=192.168.200.201 -itd vsxen/k8s
docker run --name c2 --net=mac_net1 --ip=192.168.200.202 -itd vsxen/k8s
docker run --name c3 --net=mac_net1 --ip=192.168.200.203 -itd vsxen/k8s
docker run --name c4 --net=mac_net1 --ip=192.168.200.204 -itd vsxen/k8s
# containers on a macvlan network:
#   cannot ping the host, and the host cannot ping them either
#   other hosts on the same physical network and all the containers can reach each other
# to run macvlan on top of a VLAN, create a VLAN sub-interface first:
vconfig add eth0 5
ip addr add 192.168.1.121/24 dev eth0.5
ip link set eth0.5 up
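
With the sub-interface up it can serve as the parent of another macvlan network, keeping those containers inside VLAN 5 (a sketch; the subnet, gateway and network name are assumptions):

docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0.5 mac_net5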

flannel

flannel is CoreOS's cross-host container network solution: it stores its subnet allocation in etcd and, in the setup below, uses the vxlan backend.

IP=10.8.1.50
etcd -listen-client-urls http://${IP}:2379 -advertise-client-urls http://${IP}:2379 2>&1 &
cat > flannel-config.json <<EOF
{
  "Network": "10.2.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "vxlan"
  }
}
EOF
etcdctl --endpoints=http://${IP}:2379 set /docker-test/network/config < flannel-config.json
flanneld -etcd-endpoints=http://${IP}:2379 -iface=enp0s3 -etcd-prefix=/docker-test/network 2>&1 &
mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
vim /usr/lib/systemd/system/docker.service
# add under [Service]:
EnvironmentFile=-/run/flannel/docker
# and append to the ExecStart line:
$DOCKER_NETWORK_OPTIONS
systemctl daemon-reload
systemctl restart docker
docker run -itd --name bbox1 busybox
docker run -itd --name bbox2 busybox
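
Once both hosts are configured this way, each host's docker0 gets a /24 out of 10.2.0.0/16 and containers on different hosts can reach one another. A sketch of the checks (bbox2's address is whatever flannel assigned on the other host, shown here as a hypothetical 10.2.x.x):

cat /run/flannel/subnet.env      # the subnet flannel allocated to this host
docker exec bbox1 ip a           # the container's eth0 falls inside that subnet
docker exec bbox1 ping -c 2 10.2.x.x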

calico

calico is a pure layer-3 solution: instead of building an overlay it distributes container routes between hosts with BGP, and like the setup above it keeps its state in etcd.

IP=
etcd -listen-client-urls http://${IP}:2379 -advertise-client-urls http://${IP}:2379 2>&1 &
etcdctl --endpoints=http://${IP}:2379 cluster-health
mkdir /etc/calico/
cat > /etc/calico/calicoctl.cfg <<EOF
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  etcdEndpoints: http://${IP}:2379
EOF
sudo calicoctl node run --node-image=quay.io/calico/node:v2.6.10
Running command to load modules: modprobe -a xt_set ip6_tables
Enabling IPv4 forwarding
Enabling IPv6 forwarding
Increasing conntrack limit
Removing old calico-node container (if running).
Running the following command to start calico-node:
docker run --net=host --privileged --name=calico-node -d --restart=always -e NO_DEFAULT_POOLS= -e CALICO_LIBNETWORK_ENABLED=true -e CALICO_LIBNETWORK_CREATE_PROFILES=true -e CALICO_LIBNETWORK_LABEL_ENDPOINTS=false -e NODENAME=master -e CALICO_NETWORKING_BACKEND=bird -e IP_AUTODETECTION_METHOD=first-found -e IP6_AUTODETECTION_METHOD=first-found -e CALICO_LIBNETWORK_IFPREFIX=cali -e ETCD_ENDPOINTS=http://192.168.200.183:2379 -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /run/docker/plugins:/run/docker/plugins -v /var/run/docker.sock:/var/run/docker.sock quay.io/calico/node:v2.6.10
Image may take a short time to download if it is not available locally.
Container started, checking progress logs.
2018-08-03 09:23:11.905 [INFO][7] startup.go 173: Early log level set to info
2018-08-03 09:23:11.906 [INFO][7] client.go 202: Loading config from environment
2018-08-03 09:23:11.906 [INFO][7] startup.go 83: Skipping datastore connection test
2018-08-03 09:23:11.910 [INFO][7] startup.go 259: Building new node resource Name="master"
2018-08-03 09:23:11.912 [INFO][7] startup.go 273: Initialise BGP data
2018-08-03 09:23:11.913 [INFO][7] startup.go 467: Using autodetected IPv4 address on interface enp0s3: 192.168.200.167/24
2018-08-03 09:23:11.913 [INFO][7] startup.go 338: Node IPv4 changed, will check for conflicts
2018-08-03 09:23:11.964 [INFO][7] startup.go 530: No AS number configured on node resource, using global value
2018-08-03 09:23:11.975 [INFO][7] etcd.go 111: Ready flag is already set
2018-08-03 09:23:11.981 [INFO][7] client.go 139: Using previously configured cluster GUID
2018-08-03 09:23:12.101 [INFO][7] compat.go 796: Returning configured node to node mesh
2018-08-03 09:23:12.165 [INFO][7] startup.go 131: Using node name: master
2018-08-03 09:23:12.230 [INFO][11] client.go 202: Loading config from environment
Starting libnetwork service
Calico node started successfully
docker network create --driver calico --ipam-driver calico-ipam cal_net1
docker container run --net cal_net1 --name bbox1 -tid busybox
docker exec bbox1 ip a
docker container run --net cal_net1 --name bbox2 -tid busybox
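
A quick cross-container check (bbox2's address is whatever calico-ipam handed out; substitute it for the placeholder below):

docker exec bbox2 ip a                   # note the Calico-assigned address on bbox2
docker exec bbox1 ping -c 2 <bbox2-ip>   # placeholder for the address shown above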

References
https://segmentfault.com/a/1190000011345144
https://github.com/docker/docker-ce/blob/v17.06.2-ce/components/cli/experimental/vlan-networks.md#ipvlan-network-driver
https://docs.docker.com/network/macvlan/
https://blog.csdn.net/xie_heng/article/details/72862500
http://www.cnblogs.com/CloudMan6/tag/Docker/default.html?page=7
https://github.com/alfredhuang211/study-doc/blob/master/docker%20swarm%20mode%20%E9%85%8D%E7%BD%AE%E4%BD%BF%E7%94%A8.md
https://docs.docker.com/v17.09/engine/userguide/networking/default_network/binding/