Intel SSD DC P3600 1.2TB performance

I just got a workstation with an Intel SSD DC P3600 1.2TB on an Asus X99-E WS motherboard. I booted Ubuntu 15.04 from a live CD and ran the Disks (gnome-disks) application to benchmark the SSD. The disk appears under /dev/nvme0n1. I ran the default benchmark (100 samples of 10 MB each, sampled randomly from the whole disk) and the results are disappointing: an average read rate of 720 MB/s and an average write rate of 805 MB/s (higher than the read rate!). Moreover, the only information Disks shows about the drive is its size; no model name or anything else.

Due to company policy I cannot connect this computer to a network before it is set up, so I cannot use any diagnostic tools (I wanted to follow the official documentation) besides the preinstalled ones. The documentation states that the NVMe driver has been included in the Linux kernel since 3.19, and Ubuntu 15.04 ships with 3.19.0-15-generic, so that should not be the problem. The

dd if=/dev/zero of=/dev/nvme0n1 bs=1M oflag=direct

command from the documentation gives me a write rate of about 620 MB/s, and

hdparm -tT --direct /dev/nvme0n1

gives 657 MB/s O_DIRECT cached reads and 664 MB/s O_DIRECT disk reads.
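
For a heavier sequential-read test than dd, fio with direct I/O and a deeper queue is a common next step. A minimal sketch, assuming the fio package is installed (it is not part of the preinstalled tools mentioned above); /dev/nvme0n1 is the device from the question:

```shell
# Sequential-read benchmark sketch with direct I/O and queue depth 32.
# --readonly guards against accidentally writing to the raw device.
dev=/dev/nvme0n1
if command -v fio >/dev/null 2>&1; then
    sudo fio --name=seqread --filename="$dev" --readonly --rw=read \
        --bs=1M --direct=1 --ioengine=libaio --iodepth=32 \
        --runtime=30 --time_based
else
    echo "fio not installed"
fi
```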

I have fixed the PCIe port the disk is connected to at PCIe v3.0 in the BIOS, and I do not use UEFI boot.
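
The BIOS setting can be cross-checked from Linux: sysfs exposes both the maximum and the currently negotiated link parameters for each PCIe device. A small sketch, using the 06:00.0 address from the lspci listing in this question (the attribute names are standard PCI sysfs files on reasonably recent kernels):

```shell
# Print negotiated vs. maximum PCIe link speed and width from sysfs.
# Device address taken from the question's lspci output.
dev=0000:06:00.0
for attr in current_link_speed current_link_width max_link_speed max_link_width; do
    path=/sys/bus/pci/devices/$dev/$attr
    if [ -r "$path" ]; then
        echo "$attr: $(cat "$path")"
    fi
done
```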

Edit 1: The PC vendor connected the SSD to the motherboard using the Hot-swap Backplane PCIe Combination Drive Cage Kit for P4000 Server Chassis FUP8X25S3NVDK (2.5in NVMe SSD).

The device is physically plugged into a PCIe 3.0 x16 slot, but lspci under both CentOS 7 and Ubuntu 15.04 lists it as running at PCIe v1.0 x4 (the LnkSta shows 2.5 GT/s, which is PCIe v1.0 speed):

[user@localhost ~]$ sudo lspci -vvv -s 6:0.0
06:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01) (prog-if 02 [NVM Express])
    Subsystem: Intel Corporation DC P3600 SSD [2.5" SFF]
    Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0
    Interrupt: pin A routed to IRQ 40
    Region 0: Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
    Expansion ROM at fb400000 [disabled] [size=64K]
    Capabilities: [40] Power Management version 3
        Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
        Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [50] MSI-X: Enable+ Count=32 Masked-
        Vector table: BAR=0 offset=00002000
        PBA: BAR=0 offset=00003000
    Capabilities: [60] Express (v2) Endpoint, MSI 00
        DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 <4us
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
        DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
            RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
            MaxPayload 256 bytes, MaxReadReq 512 bytes
        DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
        LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <4us, L1 <4us
            ClockPM- Surprise- LLActRep- BwNot-
        LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk-
            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
        LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
        DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported
        DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
        LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
             Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
             Compliance De-emphasis: -6dB
        LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1-
             EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
    Capabilities: [100 v1] Advanced Error Reporting
        UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
        UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
        UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
        CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
        CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
        AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
    Capabilities: [150 v1] Virtual Channel
        Caps:   LPEVC=0 RefClk=100ns PATEntryBits=1
        Arb:    Fixed- WRR32- WRR64- WRR128-
        Ctrl:   ArbSelect=Fixed
        Status: InProgress-
        VC0:    Caps:   PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
            Arb:    Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
            Ctrl:   Enable+ ID=0 ArbSelect=Fixed TC/VC=01
            Status: NegoPending- InProgress-
    Capabilities: [180 v1] Power Budgeting <?>
    Capabilities: [190 v1] Alternative Routing-ID Interpretation (ARI)
        ARICap: MFVC- ACS-, Next Function: 0
        ARICtl: MFVC- ACS-, Function Group: 0
    Capabilities: [270 v1] Device Serial Number 55-cd-2e-40-4b-fa-80-bc
    Capabilities: [2a0 v1] #19
    Kernel driver in use: nvme
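
The mismatch is the key detail in this output: LnkCap reports what the endpoint supports (8 GT/s, x4), while LnkSta reports what was actually negotiated (2.5 GT/s, x4). A minimal sketch of that comparison, with the two lines hardcoded from the output above (in practice they would be extracted from lspci -vvv):

```shell
# Compare advertised capability (LnkCap) with the negotiated link (LnkSta);
# the two strings are copied from the lspci output above.
lnkcap="LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1"
lnksta="LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train-"
cap_speed=$(printf '%s\n' "$lnkcap" | grep -o 'Speed [^,]*' | awk '{print $2}')
sta_speed=$(printf '%s\n' "$lnksta" | grep -o 'Speed [^,]*' | awk '{print $2}')
if [ "$cap_speed" != "$sta_speed" ]; then
    # prints: link downtrained: running at 2.5GT/s, capable of 8GT/s
    echo "link downtrained: running at $sta_speed, capable of $cap_speed"
fi
```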

Edit 2:

I have tested the drive under CentOS 7 and the performance is the same as what I got on Ubuntu. I have to mention that the official documentation states Intel tested this SSD on CentOS 6.7, which does not seem to exist; CentOS 7 came after 6.6 instead.

Another source of confusion: the benchmark results vary depending on which physical PCIe slot I attach the drive to. Slots 1-3 give the performance described above, while in slots 4-7 the SSD reads about 100 MB/s faster.

The only other PCIe device in the computer is an EVGA NVIDIA GT 210 GPU with 512 MB of RAM, which looks like it is a PCIe 2.0 x16 device; however, its LnkSta reports PCIe v1.0 (2.5 GT/s) x8:

[user@localhost ~]$ sudo lspci -vvv -s a:0.0
0a:00.0 VGA compatible controller: NVIDIA Corporation GT218 [GeForce 210] (rev a2) (prog-if 00 [VGA controller])
    Subsystem: eVga.com. Corp. Device 1313
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0
    Interrupt: pin A routed to IRQ 114
    Region 0: Memory at fa000000 (32-bit, non-prefetchable) [size=16M]
    Region 1: Memory at c0000000 (64-bit, prefetchable) [size=256M]
    Region 3: Memory at d0000000 (64-bit, prefetchable) [size=32M]
    Region 5: I/O ports at e000 [size=128]
    Expansion ROM at fb000000 [disabled] [size=512K]
    Capabilities: [60] Power Management version 3
        Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
        Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Address: 00000000fee005f8  Data: 0000
    Capabilities: [78] Express (v2) Endpoint, MSI 00
        DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 <64us
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
        DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
            RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+
            MaxPayload 128 bytes, MaxReadReq 512 bytes
        DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
        LnkCap: Port #8, Speed 2.5GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <1us, L1 <4us
            ClockPM+ Surprise- LLActRep- BwNot-
        LnkCtl: ASPM Disabled; RCB 128 bytes Disabled- CommClk-
            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
        LnkSta: Speed 2.5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
        DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR-, OBFF Not Supported
        DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
        LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
             Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
             Compliance De-emphasis: -6dB
        LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
             EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
    Capabilities: [b4] Vendor Specific Information: Len=14 <?>
    Capabilities: [100 v1] Virtual Channel
        Caps:   LPEVC=0 RefClk=100ns PATEntryBits=1
        Arb:    Fixed- WRR32- WRR64- WRR128-
        Ctrl:   ArbSelect=Fixed
        Status: InProgress-
        VC0:    Caps:   PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
            Arb:    Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
            Ctrl:   Enable+ ID=0 ArbSelect=Fixed TC/VC=01
            Status: NegoPending- InProgress-
    Capabilities: [128 v1] Power Budgeting <?>
    Capabilities: [600 v1] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
    Kernel driver in use: nouveau

Edit 3:

I have now connected the workstation to the network, installed the Intel Solid-State Drive Data Center Tool (isdct), and updated the firmware, but the benchmark results have not changed. What is interesting is its output:

[user@localhost ~]$ sudo isdct show -a -intelssd 
ls: cannot access /dev/sg*: No such file or directory
- IntelSSD CVMD5130002L1P2HGN -
AggregationThreshold: 0
Aggregation Time: 0
ArbitrationBurst: 0
AsynchronousEventConfiguration: 0
Bootloader: 8B1B012F
DevicePath: /dev/nvme0n1
DeviceStatus: Healthy
EnduranceAnalyzer: 17.22 Years
ErrorString: 
Firmware: 8DV10151
FirmwareUpdateAvailable: Firmware is up to date as of this tool release.
HighPriorityWeightArbitration: 0
Index: 0
IOCompletionQueuesRequested: 30
IOSubmissionQueuesRequested: 30
LBAFormat: 0
LowPriorityWeightArbitration: 0
ProductFamily: Intel SSD DC P3600 Series
MaximumLBA: 2344225967
MediumPriorityWeightArbitration: 0
MetadataSetting: 0
ModelNumber: INTEL SSDPE2ME012T4
NativeMaxLBA: 2344225967
NumErrorLogPageEntries: 63
NumLBAFormats: 6
NVMePowerState: 0
PCILinkGenSpeed: 1
PCILinkWidth: 4
PhysicalSize: 1200243695616
PowerGovernorMode: 0 (25W)
ProtectionInformation: 0
ProtectionInformationLocation: 0
RAIDMember: False
SectorSize: 512
SerialNumber: CVMD5130002L1P2HGN
SystemTrimEnabled: 
TempThreshold: 85 degree C
TimeLimitedErrorRecovery: 0
TrimSupported: True
WriteAtomicityDisableNormal: 0

Specifically, it lists PCILinkGenSpeed as 1, PCILinkWidth as 4, and NVMePowerState as 0.
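
Those two link values explain the numbers: per-lane throughput is roughly 250 MB/s on Gen1 (2.5 GT/s with 8b/10b encoding), 500 MB/s on Gen2, and about 985 MB/s on Gen3 (8 GT/s with 128b/130b encoding). A back-of-the-envelope sketch (the helper function is illustrative, not part of isdct or lspci):

```shell
# Rough per-direction PCIe bandwidth in MB/s for a given generation and
# lane count (illustrative helper, not part of any tool).
pcie_bw_mb() {
    gen=$1; width=$2
    case "$gen" in
        1) echo $((250 * width)) ;;  # 2.5 GT/s, 8b/10b   -> 250 MB/s per lane
        2) echo $((500 * width)) ;;  # 5.0 GT/s, 8b/10b   -> 500 MB/s per lane
        3) echo $((985 * width)) ;;  # 8.0 GT/s, 128b/130b -> ~985 MB/s per lane
    esac
}
echo "Gen1 x4: $(pcie_bw_mb 1 4) MB/s"   # ~1000 MB/s: matches the observed 720-805 MB/s
echo "Gen3 x4: $(pcie_bw_mb 3 4) MB/s"   # ~3940 MB/s: what the P3600 should reach
```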

My question:

  1. How can I get the SSD to run at PCIe v3.0 x4 speed?

Answer 1

This is a hardware issue.

The Hot-swap Backplane PCIe Combination Drive Cage Kit for P4000 Server Chassis FUP8X25S3NVDK (2.5in NVMe SSD) appears to be incompatible with the Asus X99-E WS motherboard. The solution is to connect the SSD using an Asus HyperKit. However, this solution requires a cable between the HyperKit and the SSD that is bundled with neither of them and that currently cannot be purchased on its own. The cable is bundled with Intel SSD 750 series drives (2.5in form factor), and our vendor was able to provide one as a special service.

Beware of hardware incompatibilities.
