Netplan does not recognize the Ethernet connection on my Ubuntu 20.04 server

Problem

I recently moved my Ubuntu 20.04 server to a new home and tried to connect it to the network. At my previous house I had no trouble connecting it over an Ethernet cable, but doing the same with my new modem does not work. After confirming that the Ethernet cable works with my laptop, I started debugging and concluded that I had misconfigured something in netplan.

Attempted solutions

Running $ ip a initially produced the following output:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether f8:b1:56:dc:47:25 brd ff:ff:ff:ff:ff:ff
3: br-cd5031a2a690: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:88:f5:00:c2 brd ff:ff:ff:ff:ff:ff
    inet 172.29.0.1/16 brd 172.29.255.255 scope global br-cd5031a2a690
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:63:c5:ed:bf brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: br-6e210ada6a5d: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:cc:35:23:8d brd ff:ff:ff:ff:ff:ff
    inet 172.30.0.1/16 brd 172.30.255.255 scope global br-6e210ada6a5d
       valid_lft forever preferred_lft forever
    inet6 fe80::42:ccff:fe35:238d/64 scope link 
       valid_lft forever preferred_lft forever
6: br-75e07944c75a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:cb:38:ca:3c brd ff:ff:ff:ff:ff:ff
    inet 172.27.0.1/16 brd 172.27.255.255 scope global br-75e07944c75a
       valid_lft forever preferred_lft forever
    inet6 fe80::42:cbff:fe38:ca3c/64 scope link 
       valid_lft forever preferred_lft forever
8: veth5501abf@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-6e210ada6a5d state UP group default 
    link/ether 3e:9d:7d:bc:07:5f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::3c9d:7dff:febc:75f/64 scope link 
       valid_lft forever preferred_lft forever
10: veth5f8842d@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-75e07944c75a state UP group default 
    link/ether f2:f4:e9:50:51:ac brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::f0f4:e9ff:fe50:51ac/64 scope link 
       valid_lft forever preferred_lft forever
12: veth6164405@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-6e210ada6a5d state UP group default 
    link/ether 5e:f2:e1:fd:1f:4d brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::5cf2:e1ff:fefd:1f4d/64 scope link 
       valid_lft forever preferred_lft forever
14: veth8f9aa5b@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-6e210ada6a5d state UP group default 
    link/ether 46:c8:0b:f7:c5:ac brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::44c8:bff:fef7:c5ac/64 scope link 
       valid_lft forever preferred_lft forever
16: vethc256b04@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-75e07944c75a state UP group default 
    link/ether b6:7c:3e:1e:ec:7e brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::b47c:3eff:fe1e:ec7e/64 scope link 
       valid_lft forever preferred_lft forever

This suggested to me that eno1 is the interface I need to fix; the bridge and veth interfaces are presumably all from the Docker containers I run.
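
As a rough sanity check (not something the guides suggested, just a generic way to tell physical NICs apart from Docker's virtual ones), the sysfs entries make the distinction visible:

$ ls -l /sys/class/net/
# eno1 links into /sys/devices/pci0000:00/... (a real PCI device),
# while docker0, the br-* bridges and the veth* pairs all link into
# /sys/devices/virtual/net/, i.e. they are software-only interfaces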

I then set my 01-netcfg.yaml file to the following suggested configuration, to bring the interface up with a dynamically assigned DHCP IP:

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: true

and ran the following commands:

$ sudo netplan --debug generate

$ sudo netplan apply

$ sudo reboot

The debug output from the generate command showed the following:

** (generate:4699): DEBUG: 04:02:19.020: Processing input file /etc/netplan/01-netcfg.yaml..
** (generate:4699): DEBUG: 04:02:19.021: starting new processing pass
** (generate:4699): DEBUG: 04:02:19.021: We have some netdefs, pass them through a final round of validation
** (generate:4699): DEBUG: 04:02:19.021: eno1: setting default backend to 1
** (generate:4699): DEBUG: 04:02:19.021: Configuration is valid
** (generate:4699): DEBUG: 04:02:19.021: Generating output files..
** (generate:4699): DEBUG: 04:02:19.021: NetworkManager: definition eno1 is not for us (backend 1)
(generate:4699): GLib-DEBUG: 04:02:19.021: posix_spawn avoided (fd close requested) 

NetworkManager: definition eno1 is not for us looked like a problem, and after rebooting I still could not ping anything:

$ ping 8.8.8.8
ping: connect: Network is unreachable
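
For what it's worth, the "Network is unreachable" error matches an empty routing table; a quick check along these lines (assuming the same eno1 setup) shows there is no default route because eno1 never obtained a DHCP lease:

$ ip route
# only the Docker 172.x routes appear; with no default route via eno1,
# a ping to an outside address fails with "Network is unreachable"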

I repeated the steps above with this suggested 01-netcfg.yaml configuration, confirming that I was using spaces rather than tabs and that my indentation was correct:

network:
  version: 2
  renderer: networkd

Running the same setup commands with the debug flag produced the following output:

** (generate:5041): DEBUG: 04:09:33.721: Processing input file /etc/netplan/01-netcfg.yaml..
** (generate:5041): DEBUG: 04:09:33.721: starting new processing pass
** (generate:5041): DEBUG: 04:09:33.721: We have some netdefs, pass them through a final round of validation
** (generate:5041): DEBUG: 04:09:33.721: Generating output files..
(generate:5041): GLib-DEBUG: 04:09:33.721: posix_spawn avoided (fd close requested) 

It no longer printed the NetworkManager: definition eno1 is not for us message (since eno1 is not specified), but after applying these generated changes and rebooting I still had no connection.
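
As an aside, one rough way to see the difference between the two configs (assuming the default netplan/networkd layout) is to look at what netplan generate actually wrote:

$ ls /run/systemd/network/
# with the first config this contains a file named something like
# 10-netplan-eno1.network; with the second config (no ethernets: block)
# nothing is generated, so networkd is given no interface to configure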

I have followed a number of posts and guides that seem to recommend variations of these two configurations, but I am fairly sure this post describes the same problem I am running into.

The poster there notes that part of their problem came from a bug in kernel 5.4.0-42-generic, which was resolved by installing the r8168-dkms driver. I am also running kernel 5.4.0-42-generic, so I manually installed that driver and updated the initramfs, but after rebooting and retrying both netplan configurations above I still had no luck.
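
To double-check which driver the NIC is actually using (a generic check, independent of the linked post, assuming ethtool is installed):

$ sudo ethtool -i eno1
# driver: e1000e        <- matches the lshw output below
$ lspci -k | grep -A 3 Ethernet
# also reports "Kernel driver in use: e1000e" for the Intel 82579LM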

Additionally, in case it helps, here is the output of running $ sudo lshw -class network:

  *-network
       description: Ethernet interface
       product: 82579LM Gigabit Network Connection (Lewisville)
       vendor: Intel Corporation
       physical id: 19
       bus info: pci@0000:00:19.0
       logical name: eno1
       version: 04
       serial: f8:b1:56:dc:47:25
       capacity: 1Gbit/s
       width: 32 bits
       clock: 33MHz
       capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=3.2.6-k firmware=0.13-4 latency=0 link=no multicast=yes port=twisted pair
       resources: irq:25 memory:f7c00000-f7c1ffff memory:f7c39000-f7c39fff ioport:f080(size=32)
  *-network:0
       description: Ethernet interface
       physical id: 1
       logical name: br-75e07944c75a
       serial: 02:42:cb:38:ca:3c
       capabilities: ethernet physical
       configuration: broadcast=yes driver=bridge driverversion=2.3 firmware=N/A ip=172.27.0.1 link=yes multicast=yes
  *-network:1
       description: Ethernet interface
       physical id: 2
       logical name: veth5f8842d
       serial: f2:f4:e9:50:51:ac
       size: 10Gbit/s
       capabilities: ethernet physical
       configuration: autonegotiation=off broadcast=yes driver=veth driverversion=1.0 duplex=full link=yes multicast=yes port=twisted pair speed=10Gbit/s
  *-network:2
       description: Ethernet interface
       physical id: 3
       logical name: vethc256b04
       serial: b6:7c:3e:1e:ec:7e
       size: 10Gbit/s
       capabilities: ethernet physical
       configuration: autonegotiation=off broadcast=yes driver=veth driverversion=1.0 duplex=full link=yes multicast=yes port=twisted pair speed=10Gbit/s
  *-network:3
       description: Ethernet interface
       physical id: 4
       logical name: br-6e210ada6a5d
       serial: 02:42:cc:35:23:8d
       capabilities: ethernet physical
       configuration: broadcast=yes driver=bridge driverversion=2.3 firmware=N/A ip=172.30.0.1 link=yes multicast=yes
  *-network:4
       description: Ethernet interface
       physical id: 5
       logical name: veth5501abf
       serial: 3e:9d:7d:bc:07:5f
       size: 10Gbit/s
       capabilities: ethernet physical
       configuration: autonegotiation=off broadcast=yes driver=veth driverversion=1.0 duplex=full link=yes multicast=yes port=twisted pair speed=10Gbit/s
  *-network:5
       description: Ethernet interface
       physical id: 6
       logical name: br-cd5031a2a690
       serial: 02:42:88:f5:00:c2
       capabilities: ethernet physical
       configuration: broadcast=yes driver=bridge driverversion=2.3 firmware=N/A ip=172.29.0.1 link=no multicast=yes
  *-network:6
       description: Ethernet interface
       physical id: 7
       logical name: docker0
       serial: 02:42:63:c5:ed:bf
       capabilities: ethernet physical
       configuration: broadcast=yes driver=bridge driverversion=2.3 firmware=N/A ip=172.17.0.1 link=no multicast=yes
  *-network:7
       description: Ethernet interface
       physical id: 8
       logical name: veth6164405
       serial: 5e:f2:e1:fd:1f:4d
       size: 10Gbit/s
       capabilities: ethernet physical
       configuration: autonegotiation=off broadcast=yes driver=veth driverversion=1.0 duplex=full link=yes multicast=yes port=twisted pair speed=10Gbit/s
  *-network:8
       description: Ethernet interface
       physical id: 9
       logical name: veth8f9aa5b
       serial: 46:c8:0b:f7:c5:ac
       size: 10Gbit/s
       capabilities: ethernet physical
       configuration: autonegotiation=off broadcast=yes driver=veth driverversion=1.0 duplex=full link=yes multicast=yes port=twisted pair speed=10Gbit/s

Question

Can anyone help me work out this netplan issue and get Ethernet working on my server again? I would really appreciate the help, and please let me know if you need any additional information from me :)

Answer 1

The NetworkManager: definition eno1 is not for us output is correct; it is simply telling you that the interface is being handled by the networkd backend rather than by NetworkManager. When you removed the reference to eno1 from the YAML, you told netplan not to configure any interface at all, which is not what you want.
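
In other words, keep the original config that lists eno1 with dhcp4: true. If you want to test changes without the apply-and-reboot cycle, netplan also offers a safer test mode (a minimal example; the 120 s timeout is the default):

$ sudo netplan try
# applies /etc/netplan/*.yaml, then waits for you to press Enter;
# if you do not confirm within the timeout it reverts automatically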

Your ip a output lists the eno1 interface as NO-CARRIER. This usually means your hardware is capable of link detection and has, in this case, detected no link. You may therefore want to try a different Ethernet cable, check that the cable is seated firmly at both ends, and try plugging into a device other than the modem to rule out a problem with the modem itself.
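
For example, a minimal way to confirm what the kernel sees on that port (assuming ethtool is installed):

$ sudo ethtool eno1 | grep 'Link detected'
# "Link detected: no" corresponds to the NO-CARRIER flag above
$ cat /sys/class/net/eno1/carrier
# 0 = no carrier (no cable/link), 1 = link detected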
