XCP-ng dedicated 10Gbit storage network is slow (it performs like 1Gbit)

I have been searching for a while and cannot find an answer, or even a direction to move in.

So: the XCP-ng pool consists of three HP DL360p G8 servers, an MSA 2060 iSCSI storage array with 12 SAS 10K drives, a QNAP TS-1273U-RP, and a MikroTik CRS317 switch. The storage network sits in a dedicated bridge on the MikroTik. All devices are connected with 3 m copper DAC cables, and all of them report the link as 10G. I have also configured an MTU of 9000 on every device. Each server has a dual-port network card; one port (eth1 on all three servers) is used exclusively for the storage network. The storage and management networks are on different subnets. The Xen network backend is openvswitch.

Jumbo frames are working:

ping -M do -s 8972 -c 2 10.100.200.10 -- QNAP
PING 10.100.200.10 (10.100.200.10) 8972(9000) bytes of data.
8980 bytes from 10.100.200.10: icmp_seq=1 ttl=64 time=1.01 ms
8980 bytes from 10.100.200.10: icmp_seq=2 ttl=64 time=0.349 ms

--- 10.100.200.10 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.349/0.682/1.015/0.333 ms

ping -M do -s 8972 -c 2 10.100.200.8 -- MSA 2060
PING 10.100.200.8 (10.100.200.8) 8972(9000) bytes of data.
8980 bytes from 10.100.200.8: icmp_seq=1 ttl=64 time=9.83 ms
8980 bytes from 10.100.200.8: icmp_seq=2 ttl=64 time=0.215 ms

--- 10.100.200.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.215/5.023/9.832/4.809 ms
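The pings above exercise the path MTU end to end; a quick per-host check that the interface itself is configured as expected can be done from sysfs (a sketch; interface name eth1 as above, not XCP-ng-specific):

```shell
# Print the configured MTU and link state of the storage interface
# (eth1 assumed from the post; works on any Linux host, including dom0).
IF=${IF:-eth1}
echo "mtu:   $(cat /sys/class/net/"$IF"/mtu)"
echo "state: $(cat /sys/class/net/"$IF"/operstate)"
```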

The problem: when I copy a VM from one shared storage (QNAP) to the other (MSA), the write speed is about 45 MB/s. When I copy a large file (for example an installation ISO) from the QNAP to a server's local storage, the speed is about 100 MB/s, and htop on that server shows 100% kernel load.
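While one of these slow copies is running, a rough live throughput probe on the storage interface can confirm how fast the traffic is actually flowing (a sketch using sysfs byte counters; IF=eth1 is the storage interface named above):

```shell
# Sample the RX byte counter of the storage NIC twice, 5 seconds apart,
# and print the average receive rate over that interval in MB/s.
IF=${IF:-eth1}
rx1=$(cat /sys/class/net/"$IF"/statistics/rx_bytes)
sleep 5
rx2=$(cat /sys/class/net/"$IF"/statistics/rx_bytes)
awk -v d="$((rx2 - rx1))" 'BEGIN { printf "%.1f MB/s\n", d / 5 / 1000000 }'
```

Running the same probe on tx_bytes on the sending host shows whether the traffic really leaves via eth1 or is taking another path.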

It clearly looks as if the network is operating like a 1G network.

Some information about the hardware:

ethtool -i eth1
driver: ixgbe
version: 5.5.2
firmware-version: 0x18b30001
expansion-rom-version:
bus-info: 0000:07:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

ethtool eth1
Settings for eth1:
        Supported ports: [ FIBRE ]
        Supported link modes:   10000baseT/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: No
        Supported FEC modes: Not reported
        Advertised link modes:  10000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: 10000Mb/s
        Duplex: Full
        Port: Direct Attach Copper
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

lspci | grep net
07:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
07:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

Then I ran an iperf3 server on this host (iperf3 -s -4). Results on the server host:

[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.04  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.04  sec  5.48 GBytes  4.69 Gbits/sec                  receiver
[  7]   0.00-10.04  sec  0.00 Bytes  0.00 bits/sec                  sender
[  7]   0.00-10.04  sec  5.44 GBytes  4.66 Gbits/sec                  receiver
[SUM]   0.00-10.04  sec  0.00 Bytes  0.00 bits/sec                  sender
[SUM]   0.00-10.04  sec  10.9 GBytes  9.35 Gbits/sec                  receiver

Client on another host (iperf3 -c 10.100.200.20 -P 2 -t 10 -4). Results on the client host:

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  5.49 GBytes  4.72 Gbits/sec  112             sender
[  4]   0.00-10.00  sec  5.48 GBytes  4.71 Gbits/sec                  receiver
[  6]   0.00-10.00  sec  5.45 GBytes  4.68 Gbits/sec  178             sender
[  6]   0.00-10.00  sec  5.44 GBytes  4.67 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec  10.9 GBytes  9.40 Gbits/sec  290             sender
[SUM]   0.00-10.00  sec  10.9 GBytes  9.38 Gbits/sec                  receiver

What should I test next, and how do I find the bottleneck?

iperf3 shows the link working at 10Gbit, or am I misinterpreting the results?
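For reference, converting the measured rates to a common unit (decimal units, ignoring protocol overhead) shows the iperf3 result and the observed copy speed are about an order of magnitude apart:

```shell
# Convert the two measurements to comparable units
# (1 Gbit/s = 125 MB/s decimal).
awk 'BEGIN {
    printf "iperf3 result : 9.38 Gbit/s = %.1f MB/s\n", 9.38 * 1000 / 8
    printf "observed copy : 100 MB/s    = %.2f Gbit/s\n", 100 * 8 / 1000.0
}'
```

So the NICs and switch can move roughly 1.17 GB/s between hosts, while the file copy uses well under 1 Gbit/s of that.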

Software versions:

xe host-list params=software-version
software-version (MRO)    : product_version: 8.2.0; product_version_text: 8.2; product_version_text_short: 8.2; platform_name: XCP; platform_version: 3.2.0; product_brand: XCP-ng; build_number: release/stockholm/master/7; hostname: localhost; date: 2021-05-20; dbv: 0.0.1; xapi: 1.20; xen: 4.13.1-9.11.1; linux: 4.19.0+1; xencenter_min: 2.16; xencenter_max: 2.16; network_backend: openvswitch; db_schema: 5.602


(the output is identical on the other two hosts)

The other two servers have HP 530FLR-SFP+ cards:

lspci | grep net
03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)
03:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)

ethtool -i eth1
driver: bnx2x
version: 1.714.24 storm 7.13.11.0
firmware-version: bc 7.10.10
expansion-rom-version:
bus-info: 0000:03:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

ethtool eth1
Settings for eth1:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: No
        Supported FEC modes: Not reported
        Advertised link modes:  10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: 10000Mb/s
        Duplex: Full
        Port: Direct Attach Copper
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: g
        Wake-on: g
        Current message level: 0x00000000 (0)

        Link detected: yes

Edit 1 (local storage test):

dmesg | grep sda
[   13.093002] sd 0:1:0:0: [sda] 860051248 512-byte logical blocks: (440 GB/410 GiB)
[   13.093077] sd 0:1:0:0: [sda] Write Protect is off
[   13.093080] sd 0:1:0:0: [sda] Mode Sense: 73 00 00 08
[   13.093232] sd 0:1:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[   13.112781]  sda: sda1 sda2 sda3 sda4 sda5 sda6
[   13.114348] sd 0:1:0:0: [sda] Attached SCSI disk
[   15.267456] EXT4-fs (sda1): mounting ext3 file system using the ext4 subsystem
[   15.268750] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
[   17.597243] EXT4-fs (sda1): re-mounted. Opts: (null)
[   18.991998] Adding 1048572k swap on /dev/sda6.  Priority:-2 extents:1 across:1048572k
[   19.279706] EXT4-fs (sda5): mounting ext3 file system using the ext4 subsystem
[   19.281346] EXT4-fs (sda5): mounted filesystem with ordered data mode. Opts: (null)

dd if=/dev/sda of=/dev/null bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 11.1072 s, 92.2 MB/s

This is strange, because the server has a Smart Array P420i controller with 2 GB cache and a hardware RAID10 of six 146 GB 15k SAS drives, and iLO reports the storage as healthy. On another server the result is similar: 1024000000 bytes (1.0 GB) copied, 11.8031 s, 86.8 MB/s
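One thing worth ruling out before blaming the array: dd with bs=1024 issues one synchronous 1 KiB read per request, so the figure is dominated by per-request overhead rather than disk bandwidth. A minimal demonstration on a scratch file (paths are examples, not from the post):

```shell
# Read the same data with a tiny and a large block size; the byte counts
# match, but the request counts differ by a factor of 1024.
dd if=/dev/zero of=/tmp/bs-test.img bs=1M count=64 2>/dev/null

dd if=/tmp/bs-test.img of=/dev/null bs=1024   # 65536 requests of 1 KiB
dd if=/tmp/bs-test.img of=/dev/null bs=1M     # 64 requests of 1 MiB

rm -f /tmp/bs-test.img
```

Re-running the array test with something like bs=1M (and iflag=direct to bypass the dom0 page cache) should give a figure much closer to the controller's real sequential rate; the bs=512k results in Edit 3 below point the same way.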

Edit 2 (shared storage tests):
QNAP (SSD RAID10):

dd if=/run/sr-mount/23d45731-c005-8ad6-a596-bab2d12ec6b5/01ce9f2e-c5b1-4ba8-b783-d3a5c1ac54f0.vhd of=/dev/null bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 11.2902 s, 90.7 MB/s

MSA (HP MSA-DP+ RAID):

dd if=/dev/mapper/3600c0ff000647bc2259a2f6101000000 of=/dev/null bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 11.3974 s, 89.8 MB/s

Still no more than 1G-network speeds... And when I move a VM image between the shared storages, local storage is not involved at all. Could openvswitch be the bottleneck?

Edit 3 (more disk tests):
sda = RAID10 of 6 x 146 GB 15k SAS drives, sdb = a single 146 GB 15k SAS drive in RAID0

dd if=/dev/sdb of=/dev/null bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 16.5326 s, 61.9 MB/s
[14:35 xcp-ng-em ssh]# dd if=/dev/sdb of=/dev/null bs=512k count=1000
1000+0 records in
1000+0 records out
524288000 bytes (524 MB) copied, 8.48061 s, 61.8 MB/s
[14:36 xcp-ng-em ssh]# dd if=/dev/sdb of=/dev/null bs=512k count=10000
10000+0 records in
10000+0 records out
5242880000 bytes (5.2 GB) copied, 84.9631 s, 61.7 MB/s
[14:37 xcp-ng-em ssh]# dd if=/dev/sda of=/dev/null bs=512k count=10000
10000+0 records in
10000+0 records out
5242880000 bytes (5.2 GB) copied, 7.03023 s, 746 MB/s
