
I have 2 VLANs on port ETH1 of a Synology NAS running DSM 7.2. I use the MACVLAN driver, so my containers appear as "separate computers" on the network. I can reach the container from the network and the network from the container. I can even reach the container from the host, but I cannot reach the host from inside the container.
#!/bin/bash
docker network create -d macvlan --subnet=10.1.40.0/24 --gateway=10.1.40.1 --ip-range=10.1.40.160/29 --aux-address 'host=10.1.40.166' -o parent=eth1.10 macvlan10
ip link add macvlan10brdg link eth1.10 type macvlan mode bridge
ip addr add 10.1.40.166/32 dev macvlan10brdg
ip link set dev macvlan10brdg up
ip route add 10.1.40.160/29 dev macvlan10brdg
docker run --net=macvlan10 -it --name macvlaneth10 --ip 10.1.40.165 --privileged --cap-add=ALL --rm alpine /bin/sh
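As a sanity check of the address plan in the script above (a pure arithmetic sketch, no live network needed): `--ip-range=10.1.40.160/29` hands Docker an 8-address pool, `--aux-address` reserves .166 of that pool for the host-side macvlan shim, and the container is pinned to .165.

```shell
#!/bin/sh
# Verify the /29 pool boundaries used above (arithmetic only):
prefix=29
size=$(( 1 << (32 - prefix) ))   # a /29 spans 8 addresses
first=160
last=$(( first + size - 1 ))     # .167
echo "pool: 10.1.40.$first-10.1.40.$last ($size addresses)"
echo "reserved for host shim (aux-address): 10.1.40.166"
echo "container (--ip): 10.1.40.165"
```

Both .165 and .166 fall inside the pool, so the container and the shim should be able to reach each other via `macvlan10brdg` without going through the physical parent interface.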
ip a:
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:11:32:cd:8b:34 brd ff:ff:ff:ff:ff:ff
7: eth1.10@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:11:32:cd:8b:34 brd ff:ff:ff:ff:ff:ff
inet 10.1.40.16/24 brd 10.1.40.255 scope global eth1.10
valid_lft forever preferred_lft forever
8: eth1.5@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:11:32:cd:8b:34 brd ff:ff:ff:ff:ff:ff
inet 10.2.40.16/24 brd 10.2.40.255 scope global eth1.5
valid_lft forever preferred_lft forever
14: macvlan10brdg@eth1.10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
link/ether 8e:3d:a4:01:5d:42 brd ff:ff:ff:ff:ff:ff
inet 10.1.40.166/32 scope global macvlan10brdg
valid_lft forever preferred_lft forever
inet6 fe80::8c3d:a4ff:fe01:5d42/64 scope link
valid_lft forever preferred_lft forever
ip rule:
0: from all lookup local
2: from all lookup static-table
10: from 10.2.40.16 lookup eth1.5-table
12: from 10.1.40.16 lookup eth1.10-table
32766: from all lookup main
32767: from all lookup default
ip route:
default via 10.1.40.1 dev eth1.10 src 10.1.40.16
10.1.40.0/24 dev eth1.10 proto kernel scope link src 10.1.40.16
10.1.40.160/29 dev macvlan10brdg scope link
10.2.40.0/24 dev eth1.5 proto kernel scope link src 10.2.40.16
iptables -L:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DEFAULT_FORWARD all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DEFAULT_FORWARD (1 references)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
Chain DOCKER (0 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere <-- added by "iptables -I DOCKER-USER -j ACCEPT"
RETURN all -- anywhere anywhere
cat /proc/sys/net/ipv4/ip_forward:
1
I even tried the BRIDGE driver, with a similar result: I can reach the container from the network and from the host, and I can reach the network from the container. When I try to ping the host from the container, this is what I see on the router:
invalid forward: in:vlan10 out:vlan10, connection-state:invalid src-mac 00:11:32:cd:8b:34, proto ICMP (type 0, code 0), 10.1.40.16->10.1.52.2, len 84
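The router log is telling: the ICMP echo reply is sourced from the host's VLAN address (10.1.40.16) toward the container's bridge address (10.1.52.2), and since 10.1.52.0/24 is not a connected subnet on the host's VLAN interface, the reply leaves via the default route and the router refuses to hairpin it (`in:vlan10 out:vlan10 ... connection-state:invalid`). A small arithmetic sketch of that subnet mismatch (the addresses are the ones from the logs above; no live network is touched):

```shell
#!/bin/sh
# Compare the /24 network of the host's VLAN address with the /24 of
# the bridged container's address (arithmetic only):
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}
host_net=$(( $(ip_to_int 10.1.40.16) >> 8 ))   # network of eth1.10
cont_net=$(( $(ip_to_int 10.1.52.2) >> 8 ))    # network of the bridge container
if [ "$host_net" = "$cont_net" ]; then
  echo "same /24: reply stays local"
else
  echo "different /24: reply must be routed"
fi
```

Because the subnets differ, the reply has to be routed; on the host, `ip route get 10.1.52.2` would show whether it goes back through Docker's bridge (correct) or out eth1.10 toward the router (the asymmetric path the log shows).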
This is how I run the bridged container:
docker network create -d bridge --subnet 10.1.52.0/24 --gateway 10.1.52.1 -o parent=eth1.10 testbrgeth110
docker run --net=testbrgeth110 -it --name bridgeeth110 --privileged --cap-add=ALL --rm alpine /bin/sh
I spent 2 full days reading on the Internet, debugging, etc. I really have no idea what is wrong. Bridged networking on DSM 6.2, without VLANs on the network, worked perfectly.