
Docker Multi-Host Network Configuration - Macvlan

References

Note: In Macvlan you are not able to ping or communicate with the default namespace IP address. For example, if you create a container and try to ping the Docker host’s eth0 it will not work. That traffic is explicitly filtered by the kernel modules themselves to offer additional provider isolation and security.

Hosts

[root@kube-master140 ~]# ip addr show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:40:2d:15 brd ff:ff:ff:ff:ff:ff
    inet 192.168.191.140/24 brd 192.168.191.255 scope global dynamic ens33
       valid_lft 1765sec preferred_lft 1765sec
    inet6 fe80::1186:2fe5:9ee5:8790/64 scope link 
       valid_lft forever preferred_lft forever

[root@kube-worker141 ~]# ip addr show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:2e:67:4d brd ff:ff:ff:ff:ff:ff
    inet 192.168.191.141/24 brd 192.168.191.255 scope global dynamic ens33
       valid_lft 1779sec preferred_lft 1779sec
    inet6 fe80::dd23:1df6:b37:efae/64 scope link 
       valid_lft forever preferred_lft forever

Create the Network

[root@kube-worker141 ~]# docker network create \
-d macvlan \
--subnet=192.168.191.0/24 \
--gateway=192.168.191.2 \
-o parent=ens33 pub_net
4370998ed03024bc0057a894f1280d5b0fcdba526fd9e8da612a3abb0dbc884b

[root@kube-worker141 ~]# docker network list 
NETWORK ID          NAME                DRIVER              SCOPE
eee9236a36ba        bridge              bridge              local               
ddc7f59215c1        host                host                local               
d8dc7fbc40a6        none                null                local               
4370998ed030        pub_net             macvlan             local               

[root@kube-worker141 ~]# docker network inspect pub_net
...
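
Since the macvlan network hands out addresses from the physical LAN's subnet, automatically assigned container IPs can collide with addresses the LAN's DHCP server gives out. As a variant of the create command above (my own sketch, not part of the original walkthrough), Docker can be restricted to a reserved slice of the subnet with --ip-range:

# Sketch: same pub_net, but automatically assigned container IPs are confined to
# 192.168.191.192/27 so they stay clear of the LAN's DHCP pool; --aux-address can
# additionally exclude individual addresses that are already in use.
docker network create -d macvlan \
  --subnet=192.168.191.0/24 \
  --ip-range=192.168.191.192/27 \
  --gateway=192.168.191.2 \
  -o parent=ens33 pub_net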

Usage

# Clean up any existing containers first
docker rm -f $( docker ps -a | grep -v IMAGE | awk '{print $1}' ) 

[root@kube-worker141 ~]# docker run --net=pub_net --ip=192.168.191.200 --name c200 -tid busybox /bin/sh
2e0a2ede40e80a2f1739330bb3a6c45b91ea08d78d26d165ad13945bedbea40f

[root@kube-worker141 ~]# docker ps 
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
2e0a2ede40e8        busybox             "/bin/sh"           13 seconds ago      Up 11 seconds                           c200
[root@kube-worker141 ~]# docker exec c200 ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:BF:C8  
          inet addr:192.168.191.200  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:c0ff:fea8:bfc8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

[root@kube-worker141 ~]# docker exec c200 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.191.2   0.0.0.0         UG    0      0        0 eth0
192.168.191.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
[root@kube-worker141 ~]# docker exec c200 ping baidu.com 
PING baidu.com (111.13.101.208): 56 data bytes
64 bytes from 111.13.101.208: seq=0 ttl=128 time=45.029 ms
64 bytes from 111.13.101.208: seq=1 ttl=128 time=44.616 ms

#201 (second container on host 141)
[root@kube-worker141 ~]# docker run --net=pub_net --ip=192.168.191.201 -tid busybox /bin/sh 
c8cfd3443f2b7b3973a06470cb95442eadface8d89c8cb1749ad73ebbd7e9e39

## Containers on the same host can reach each other: 
#: HOST141-200 ping HOST141-201
[root@kube-worker141 ~]# docker exec c200 ping -W 10 192.168.191.201
PING 192.168.191.201 (192.168.191.201): 56 data bytes
64 bytes from 192.168.191.201: seq=0 ttl=64 time=0.523 ms

#210 (container on host 140)
[root@kube-master ~]# docker run --net=pub_net --ip=192.168.191.210 -tid busybox /bin/sh 
7929c136c3dbc646b68b3b7302e8525a25fe2f583db2246fea0da85a448b7b78

## Host B reaches a container on host A: 
#: HOST140 ping HOST141-201 
[root@kube-master140 ~]# ping 192.168.191.201 
PING 192.168.191.201 (192.168.191.201) 56(84) bytes of data.
64 bytes from 192.168.191.201: icmp_seq=1 ttl=64 time=1.44 ms

## A container on host A reaches a container on host B: 
#: HOST141-200 ping HOST140-210
[root@kube-worker141 ~]# docker exec c200 ping -W 10 192.168.191.210
PING 192.168.191.210 (192.168.191.210): 56 data bytes
64 bytes from 192.168.191.210: seq=0 ttl=64 time=2.049 ms
64 bytes from 192.168.191.210: seq=1 ttl=64 time=0.993 ms

# The host and its own local containers cannot reach each other (--!): 
#: HOST141 ping HOST141-200
[root@kube-worker141 ~]# ping 192.168.191.200
PING 192.168.191.200 (192.168.191.200) 56(84) bytes of data.
From 192.168.191.141 icmp_seq=1 Destination Host Unreachable
From 192.168.191.141 icmp_seq=2 Destination Host Unreachable
#: HOST141-200 ping HOST141
[root@kube-worker1 ~]# docker exec c200 ping 192.168.191.141

To work around the problem that the host and its local containers cannot reach each other, you can give the container an additional NIC on the default bridge network: Multiple Docker Networks

# First create the container on the default bridge network
[root@kube-worker1 ~]# docker run --name c200 -tid busybox /bin/sh                                   
47b7c1813b95cbec471b1a6de6a870e5537cfa70d54120873a5edb4e444b373b
# Then connect it to pub_net as well!
[root@kube-worker1 ~]# docker network connect --ip=192.168.191.200 pub_net c200        
[root@kube-worker1 ~]# docker exec c200 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe12:2/64 scope link 
       valid_lft forever preferred_lft forever
16: eth1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:c0:a8:bf:c8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.191.200/24 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::42:c0ff:fea8:bfc8/64 scope link 
       valid_lft forever preferred_lft forever
       

Option 1:

Traffic to and from the host goes over the 172.18.0.0/16 bridge network, while everything else goes over 192.168.191.0/24. Still, this feels like a half-measure!
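
A quick way to see which leg carries which traffic (my own check, not part of the original post):

# eth0 (172.18.0.0/16 docker bridge) is the path to the host; eth1 (macvlan) serves the LAN.
docker exec c200 ip route
docker exec c200 ping -c 2 172.18.0.1         # the host, reached via the bridge gateway
docker exec c200 ping -c 2 192.168.191.140    # another LAN host, reached via the macvlan leg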

Option 2:

Add a host route:

#route add -host $container_ip gw $lan_router_ip $if_device_nic2

[root@kube-worker1 ~]# route add -net 192.168.191.200 gw 172.18.0.1 netmask 255.255.255.255 dev docker0
[root@kube-worker1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.191.2   0.0.0.0         UG    100    0        0 ens33
172.17.3.0      192.168.191.140 255.255.255.0   UG    100    0        0 ens33
172.17.4.0      0.0.0.0         255.255.255.0   U     425    0        0 kbr0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.191.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.191.200 172.18.0.1      255.255.255.255 UGH   0      0        0 docker0
[root@kube-worker1 ~]# ping 192.168.191.200
PING 192.168.191.200 (192.168.191.200) 56(84) bytes of data.
64 bytes from 192.168.191.200: icmp_seq=1 ttl=64 time=0.239 ms
64 bytes from 192.168.191.200: icmp_seq=2 ttl=64 time=0.106 ms
^C
--- 192.168.191.200 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.106/0.172/0.239/0.067 ms
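
For completeness, another workaround that is often paired with macvlan (not covered in the original post, so treat it as an untested sketch): give the host its own macvlan sub-interface on ens33 and route the container addresses through it, so host-to-container traffic stays on the macvlan segment instead of docker0:

# Assumes 192.168.191.250 is an unused address on the LAN.
ip link add mac0 link ens33 type macvlan mode bridge
ip addr add 192.168.191.250/32 dev mac0
ip link set mac0 up
# Send traffic for the macvlan container(s) out through the shim interface.
ip route add 192.168.191.200/32 dev mac0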

Comparing this approach with pipework, each has its pros and cons:

  • pipework creates a NIC and then all the IPs can reach each other, but it ties up the physical NIC and you still have to move the host's IP onto br0 (a rough sketch follows this list).
  • With the docker network approach, extra configuration is needed before the host and its local containers can reach each other.
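
For reference, the pipework variant mentioned above looks roughly like this (a hypothetical invocation based on pipework's documented syntax, not taken from this post):

# Attach c200 to bridge br0 with a LAN address and default gateway; ens33 then has to be
# added to br0 and the host's 192.168.191.141 moved onto br0 by hand.
pipework br0 c200 192.168.191.200/24@192.168.191.2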

–END
