Problem configuring a network bridge with netplan on a dedicated Ubuntu 18.04 server

I need help with a server, netplan, and a network bridge.

I want to set up a KVM guest on my Ubuntu 18.04 server that can be reached from outside, so I need a network bridge.

To prepare, I simulated the setup on a server hosted in my local VirtualBox, and there I was able to get a working bridged network configuration. However, I had to enable promiscuous mode in the VirtualBox network settings.
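
For reference, I believe this is the VBoxManage equivalent of what I enabled in the GUI (the VM name is a placeholder):

VBoxManage modifyvm "ubuntu-testvm" --nicpromisc1 allow-all   # allow promiscuous mode on NIC 1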

When I transfer the configuration to my dedicated Hetzner Ubuntu server, the server loses its internet connection.

Does anyone have a tip for me?

Below is the netplan configuration with the addresses masked. It does not work, even though netplan generate && netplan apply itself completes without errors.

The default, working network configuration:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      addresses:
        - [IP4]
        - [IP6]
      routes:
        - on-link: true
          to: 0.0.0.0/0
          via: [another IP4]
      gateway6: fe80::1
      nameservers:
        addresses:
          - [another IP4]
          - [another IP4]
          - [another IP4]
          - [another IP6]
          - [another IP6]
          - [another IP6]

And here is my bridged config, which does not work:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: false
  bridges:
    br0:
      interfaces: [enp2s0]
      addresses:
        - [IP4]
        - [IP6]
      routes:
        - on-link: true
          to: 0.0.0.0/0
          via: [another IP4]
      gateway6: fe80::1
      nameservers:
        addresses:
          - [another IP4]
          - [another IP4]
          - [another IP4]
          - [another IP6]
          - [another IP6]
          - [another IP6]
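
As a sanity check after applying this, enp2s0 should show up as a port of br0; the iproute2 tools can confirm that (just the commands, output omitted):

bridge link show        # lists the ports enslaved to each bridge
ip -d link show br0     # bridge details: STP state, enslaved NIC, etc.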

Here are some outputs after applying the bridged configuration:

ifconfig:

br0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet [IP4]  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 [IP6]  prefixlen 64  scopeid 0x0<global>
        ether 06:54:dd:62:e6:af  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:2bff:febd:df03  prefixlen 64  scopeid 0x20<link>
        ether 02:42:2b:bd:df:03  txqueuelen 0  (Ethernet)
        RX packets 17  bytes 760 (760.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 1088 (1.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp2s0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 44:8a:5b:d4:4f:46  txqueuelen 1000  (Ethernet)
        RX packets 12143  bytes 1170508 (1.1 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13113  bytes 2062454 (2.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 4076  bytes 773559 (773.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4076  bytes 773559 (773.5 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth5932247: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::289c:28ff:fef6:93f0  prefixlen 64  scopeid 0x20<link>
        ether 2a:9c:28:f6:93:f0  txqueuelen 0  (Ethernet)
        RX packets 17  bytes 998 (998.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 1448 (1.4 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:da:13:11  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
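
Note that br0 and enp2s0 are both missing the RUNNING flag above. One thing I still want to try is forcing the links up by hand to see whether the carrier comes back (untested so far):

sudo ip link set enp2s0 up   # the physical NIC, enslaved to br0
sudo ip link set br0 up      # the bridge itself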

netstat -i

Kernel Interface table
Iface      MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
br0       1500        0      0      0 0             0      0      0      0 BMU
docker0   1500       17      0      0 0            16      0      0      0 BMRU
enp2s0    1500    12143      0      0 0         13113      0      0      0 BMU
lo       65536     4092      0      0 0          4092      0      0      0 LRU
veth5932  1500       17      0      0 0            20      0      0      0 BMRU
virbr0    1500        0      0      0 0             0      0      0      0 BMU

ip r - before applying the bridge:

default via [IP4] dev enp2s0 proto static onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

ip r - with the bridge applied:

default via [IP4] dev br0 proto static onlink linkdown
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
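
The default route is there but flagged linkdown, so it never takes effect. If the carrier issue gets resolved, I assume the route could also be re-added by hand for testing, mirroring the on-link: true setting from the YAML (sketch, gateway masked as in the configs above):

sudo ip route replace default via [another IP4] dev br0 onlink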

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master br0 state DOWN group default qlen 1000
    link/ether 44:8a:5b:d4:4f:46 brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:da:13:11 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:da:13:11 brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:51:4d:d0:31 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:51ff:fe4d:d031/64 scope link
       valid_lft forever preferred_lft forever
7: vethecd1ee3@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 9a:e0:6b:4c:5b:ae brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::98e0:6bff:fe4c:5bae/64 scope link
       valid_lft forever preferred_lft forever
14: br0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 06:54:dd:62:e6:af brd ff:ff:ff:ff:ff:ff
    inet [IP4]/32 scope global br0
       valid_lft forever preferred_lft forever
    inet6 [IP6]/64 scope global
       valid_lft forever preferred_lft forever
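
What stands out to me is that enp2s0 reports NO-CARRIER and state DOWN as soon as it is enslaved to br0 (master br0), which would explain the linkdown flags on the routes. To rule out a driver or cabling problem, the carrier can be checked directly (sketch; I have not pasted the output here):

cat /sys/class/net/enp2s0/carrier            # 1 = link detected, 0 = no carrier
sudo ethtool enp2s0 | grep 'Link detected'   # same information via ethtool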

netplan --debug generate

** (generate:24019): DEBUG: 17:06:17.920: Processing input file /etc/netplan/01-netcfg.yaml..
** (generate:24019): DEBUG: 17:06:17.930: starting new processing pass
** (generate:24019): DEBUG: 17:06:17.930: We have some netdefs, pass them through a final round of validation
** (generate:24019): DEBUG: 17:06:17.930: enp2s0: setting default backend to 1
** (generate:24019): DEBUG: 17:06:17.930: Configuration is valid
** (generate:24019): DEBUG: 17:06:17.930: br0: setting default backend to 1
** (generate:24019): DEBUG: 17:06:17.930: Configuration is valid
** (generate:24019): DEBUG: 17:06:17.930: Generating output files..
** (generate:24019): DEBUG: 17:06:17.930: NetworkManager: definition enp2s0 is not for us (backend 1)
** (generate:24019): DEBUG: 17:06:17.930: NetworkManager: definition br0 is not for us (backend 1)

netplan --debug apply

DEBUG:command generate: running ['/lib/netplan/generate']
DEBUG:netplan generated networkd configuration changed, restarting networkd
DEBUG:no netplan generated NM configuration exists
DEBUG:enp2s0 not found in {}
DEBUG:br0 not found in {}
DEBUG:Merged config:
network:
  bonds: {}
  bridges:
    br0:
      addresses:
      - [IP4]/32
      - [IP6]/64
      dhcp4: false
      dhcp6: false
      gateway6: fe80::1
      interfaces:
      - enp2s0
      nameservers:
        addresses:
        - [another IP4]
        - [another IP4]
        - [another IP4]
        - [another IP6]
        - [another IP6]
        - [another IP6]
      parameters:
        forward-delay: 4
        stp: true
      routes:
      - on-link: true
        to: 0.0.0.0/0
        via: [another IP4]
  ethernets:
    enp2s0:
      dhcp4: false
      dhcp6: false
  vlans: {}
  wifis: {}

DEBUG:Skipping non-physical interface: lo
DEBUG:Skipping composite member enp2s0
DEBUG:Skipping non-physical interface: virbr0
DEBUG:Skipping non-physical interface: virbr0-nic
DEBUG:Skipping non-physical interface: docker0
DEBUG:Skipping non-physical interface: vethecd1ee3
DEBUG:{}
DEBUG:netplan triggering .link rules for lo
DEBUG:netplan triggering .link rules for enp2s0
DEBUG:netplan triggering .link rules for virbr0
DEBUG:netplan triggering .link rules for virbr0-nic
DEBUG:netplan triggering .link rules for docker0
DEBUG:netplan triggering .link rules for vethecd1ee3
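
One more thing I noticed in the outputs above: br0 comes up with a random MAC address (06:54:dd:62:e6:af) instead of the physical NIC's (44:8a:5b:d4:4f:46). In case Hetzner's switch only accepts the registered MAC, netplan has a macaddress property that should pin the bridge to the NIC's address; a sketch of that addition (untested, assuming the netplan version on 18.04 accepts it on bridges):

  bridges:
    br0:
      macaddress: 44:8a:5b:d4:4f:46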

Unfortunately this post does not solve my problem: https://stackoverflow.com/a/61910941/9601604
