
I have set up a Linux VM using virsh/libvirt/KVM and I am having a problem with bridged networking. libvirt's default NAT network worked fine, but I need a bridge onto the public interface. I removed the default network using virsh and changed the VM's network configuration in virsh to the following:
<interface type='bridge'>
  <mac address='02:00:00:c1:8f:95'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
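Roughly the following virsh steps, as a sketch; the domain name myvm is a placeholder, not the actual VM name:

virsh net-destroy default               # stop libvirt's default NAT network
virsh net-autostart default --disable   # keep it from starting again at boot
virsh net-undefine default              # remove its definition entirely
virsh edit myvm                         # replace the <interface> block with the XML above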
I configured the bridge on the host system:
# brctl show br0
bridge name     bridge id               STP enabled     interfaces
br0             8000.a0423f494574       yes             enp97s0f0
                                                        vnet0
# brctl showstp br0
br0
 bridge id              8000.a0423f494574
 designated root        8000.a0423f494574
 root port                 0                    path cost                  0
 max age                  20.00                 bridge max age            20.00
 hello time                2.00                 bridge hello time          2.00
 forward delay            15.00                 bridge forward delay      15.00
 ageing time             300.00
 hello timer               0.86                 tcn timer                  0.00
 topology change timer     0.00                 gc timer                 240.41
 flags

enp97s0f0 (1)
 port id                8001                    state                forwarding
 designated root        8000.a0423f494574       path cost                100
 designated bridge      8000.a0423f494574       message age timer          0.00
 designated port        8001                    forward delay timer        0.00
 designated cost           0                    hold timer                 0.00
 flags

vnet0 (2)
 port id                8002                    state                forwarding
 designated root        8000.a0423f494574       path cost                100
 designated bridge      8000.a0423f494574       message age timer          0.00
 designated port        8002                    forward delay timer        0.00
 designated cost           0                    hold timer                 0.00
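For reference, an equivalent bridge can be built with iproute2 alone; this is a sketch assuming the interfaces are not managed by NetworkManager, and it omits assigning the host IP to br0:

ip link add name br0 type bridge stp_state 1   # create br0 with STP enabled, matching the output above
ip link set enp97s0f0 master br0               # enslave the physical NIC
ip link set enp97s0f0 up
ip link set br0 up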
Here enp97s0f0 is the public NIC and vnet0 is the virtual NIC used by libvirt.
I configured an IP address and gateway on the VM, but pings get no replies. I ran tcpdump on the host system, on both vnet0 and br0, while pinging the Cloudflare DNS server 1.1.1.1 from the VM. Here are the results:
# tcpdump -i vnet0 -n host VM.VM.VM.VM
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vnet0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:13:50.863273 IP VM.VM.VM.VM > 1.1.1.1: ICMP echo request, id 2016, seq 80, length 64
15:13:51.887267 IP VM.VM.VM.VM > 1.1.1.1: ICMP echo request, id 2016, seq 81, length 64
15:13:52.911271 IP VM.VM.VM.VM > 1.1.1.1: ICMP echo request, id 2016, seq 82, length 64
15:13:53.935270 IP VM.VM.VM.VM > 1.1.1.1: ICMP echo request, id 2016, seq 83, length 64
# tcpdump -i br0 -n host VM.VM.VM.VM
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:12:33.039228 IP VM.VM.VM.VM > 1.1.1.1: ICMP echo request, id 2016, seq 4, length 64
15:12:33.045522 IP 1.1.1.1 > VM.VM.VM.VM: ICMP echo reply, id 2016, seq 4, length 64
15:12:33.045532 IP 1.1.1.1 > VM.VM.VM.VM: ICMP echo reply, id 2016, seq 4, length 64
15:12:34.063111 IP VM.VM.VM.VM > 1.1.1.1: ICMP echo request, id 2016, seq 5, length 64
15:12:34.069501 IP 1.1.1.1 > VM.VM.VM.VM: ICMP echo reply, id 2016, seq 5, length 64
15:12:34.069510 IP 1.1.1.1 > VM.VM.VM.VM: ICMP echo reply, id 2016, seq 5, length 64
15:12:35.087281 IP VM.VM.VM.VM > 1.1.1.1: ICMP echo request, id 2016, seq 6, length 64
15:12:35.093640 IP 1.1.1.1 > VM.VM.VM.VM: ICMP echo reply, id 2016, seq 6, length 64
15:12:35.093649 IP 1.1.1.1 > VM.VM.VM.VM: ICMP echo reply, id 2016, seq 6, length 64
So we can see that the VM is sending the ping requests through vnet0, the bridge br0 is forwarding those requests to 1.1.1.1, and 1.1.1.1 is sending replies back to the bridge. Then the packets disappear and never reach the VM's interface.
That is why I think the problem is probably connected to some kind of packet filtering on the host, but I have checked several settings and everything looks fine:
/proc/sys/net/ipv4/ip_forward : 1
/proc/sys/net/ipv4/conf/br0/forwarding : 1
/proc/sys/net/ipv4/conf/vnet0/forwarding : 1
/proc/sys/net/ipv4/conf/enp97s0f0/forwarding : 1
/proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
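(The "No such file or directory" above simply means the br_netfilter module is not loaded, so bridged frames bypass iptables entirely. If it were loaded, the setting could be inspected and turned off like this; a sketch, nothing that needs to be run here:)

modprobe br_netfilter                            # load the bridge netfilter module
sysctl net.bridge.bridge-nf-call-iptables        # 1 = bridged frames traverse iptables
sysctl -w net.bridge.bridge-nf-call-iptables=0   # stop iptables from seeing bridged frames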
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
# ebtables -L
Bridge table: filter
Bridge chain: INPUT, entries: 0, policy: ACCEPT
Bridge chain: FORWARD, entries: 0, policy: ACCEPT
Bridge chain: OUTPUT, entries: 0, policy: ACCEPT
# nft list ruleset
table bridge filter {
    chain INPUT {
        type filter hook input priority filter; policy accept;
    }

    chain FORWARD {
        type filter hook forward priority filter; policy accept;
    }

    chain OUTPUT {
        type filter hook output priority filter; policy accept;
    }
}
table ip filter {
    chain INPUT {
        type filter hook input priority filter; policy accept;
    }

    chain FORWARD {
        type filter hook forward priority filter; policy accept;
    }

    chain OUTPUT {
        type filter hook output priority filter; policy accept;
    }
}
[root@HOST ~] # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp97s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether a0:42:3f:49:45:74 brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a0:42:3f:49:45:74 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.xxx.xxx/24 scope global dynamic noprefixroute br0
       valid_lft 71705sec preferred_lft 71705sec
    inet6 fe80::6559:22b2:dccd:3b24/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UNKNOWN group default qlen 1000
    link/ether fe:00:00:c1:8f:95 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:ff:fec1:8f95/64 scope link
       valid_lft forever preferred_lft forever
[root@VM ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:00:00:c1:8f:95 brd ff:ff:ff:ff:ff:ff
    inet VM.VM.VM.VM/24 scope global enp1s0
       valid_lft forever preferred_lft forever
[root@HOST ~]# bridge link
2: enp97s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 100
8: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 100
[root@HOST ~]# bridge fdb show
fc:bd:67:ff:ec:6a dev enp97s0f0 master br0
fc:bd:67:ff:ec:31 dev enp97s0f0 master br0
2c:dd:e9:0d:9d:31 dev enp97s0f0 master br0
fe:ed:de:ad:be:ef dev enp97s0f0 master br0
a0:42:3f:49:45:74 dev enp97s0f0 vlan 1 master br0 permanent
a0:42:3f:49:45:74 dev enp97s0f0 master br0 permanent
01:00:5e:00:00:01 dev enp97s0f0 self permanent
33:33:00:00:00:01 dev enp97s0f0 self permanent
01:00:5e:00:00:01 dev br0 self permanent
33:33:00:00:00:01 dev br0 self permanent
33:33:ff:cd:3b:24 dev br0 self permanent
01:00:5e:00:00:6a dev br0 self permanent
33:33:00:00:00:6a dev br0 self permanent
02:00:00:c1:8f:95 dev vnet0 master br0
fe:00:00:c1:8f:95 dev vnet0 vlan 1 master br0 permanent
fe:00:00:c1:8f:95 dev vnet0 master br0 permanent
33:33:00:00:00:01 dev vnet0 self permanent
01:00:5e:00:00:01 dev vnet0 self permanent
33:33:ff:c1:8f:95 dev vnet0 self permanent
[root@HOST ~]# bridge vlan show
port vlan-id
enp97s0f0 1 PVID Egress Untagged
vnet0 1 PVID Egress Untagged
The host system is CentOS 8 with kernel 5.13.12-1.el8.elrepo.x86_64. The VM is a default CentOS 8 installation.
Edit:
If I configure private-network IPs on the host's br0 and the VM's enp1s0 interfaces, the VM can ping the host and the host can ping the VM. But I still cannot ping anything outside the local network.
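For reference, the same capture pattern run on the physical NIC would show whether the replies seen on br0 are in fact arriving via enp97s0f0 before they vanish between br0 and vnet0:

tcpdump -i enp97s0f0 -n host VM.VM.VM.VM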
Answer 1
Solved it by creating a route on the host to the VM's IP address:
route add -host VM.VM.VM.VM dev br0
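The iproute2 equivalent (a /32 host route matches route's -host form), plus a quick check that the kernel now resolves the VM address via br0; a sketch of the same fix:

ip route add VM.VM.VM.VM/32 dev br0   # same effect as the route command above
ip route get VM.VM.VM.VM              # confirm the chosen route now points out br0

Note that a route added this way does not persist across reboots; it has to be added to the distribution's network configuration to survive one.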