
For some time now, every time I reboot my workstation (which is called "charm") I have received an email like the following:
Subject: SMART error (ErrorCount) detected on host: charm
This message was generated by the smartd daemon running on:
host name: charm
DNS domain: jj5.net
The following warning/error was logged by the smartd daemon:
Device: /dev/nvme3, number of Error Log entries increased from 324 to 326
Device info:
PNY CS3140 1TB SSD, S/N:PNY21242106180100094, FW:CS314312, 1.00 TB
For details see host's SYSLOG.
You can also use the smartctl utility for further investigation.
The original message about this issue was sent at Sat Feb 4 12:53:13 2023 AEDT
Another message will be sent in 24 hours if the problem persists.
I get four of these emails on every reboot, one for each of the NVMe SSDs in my workstation. As you can see from the error email, my drives are PNY CS3140 1TB SSDs; I have four of them. I don't think the problem is specific to my PNY drives, because I see the same issue described here on another machine that uses Samsung 990 PRO NVMe drives.
When I first set up my workstation I followed the advice in an article and set ashift=14 on my ZFS zpools. I thought the SMART error problem might be related to that ZFS ashift setting, so I reinstalled my operating system (Ubuntu) and recreated my ZFS zpools without the ashift setting, like this:
DISK1=/dev/disk/by-id/nvme-eui.6479a74fb0c00509
DISK2=/dev/disk/by-id/nvme-eui.6479a74fb0c00507
DISK3=/dev/disk/by-id/nvme-eui.6479a74fb0c004b7
DISK4=/dev/disk/by-id/nvme-eui.6479a74fb0c00508
zpool create -f \
-o autotrim=on \
-O acltype=posixacl -O compression=off \
-O dnodesize=auto -O normalization=formD -O atime=off -O dedup=off \
-O xattr=sa \
best ${DISK1}-part4 ${DISK2}-part4 ${DISK3}-part4 ${DISK4}-part4
zpool create -f \
-o autotrim=on \
-O acltype=posixacl -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O atime=off -O dedup=on \
-O xattr=sa \
fast mirror ${DISK1}-part5 ${DISK2}-part5 mirror ${DISK3}-part5 ${DISK4}-part5
I let it auto-detect the ashift setting, and it chose 9 for all of my disks:
$ zdb | grep ashift
ashift: 9
ashift: 9
ashift: 9
ashift: 9
ashift: 9
ashift: 9
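ZFS derives the default ashift from the logical block size the device reports, so a drive formatted with 512-byte LBAs yields ashift=9 (2^9 = 512). A minimal sketch to check this, assuming Linux sysfs paths; the `ashift_for` helper is illustrative, not part of ZFS:

```shell
# ashift is log2 of the assumed sector size: 512 -> 9, 4096 -> 12.
ashift_for() { awk -v b="$1" 'BEGIN { printf "%d\n", log(b) / log(2) + 0.5 }'; }

# Each NVMe namespace reports its current logical block size via sysfs.
for d in /sys/block/nvme*n1; do
  [ -e "$d" ] || continue
  size=$(cat "$d/queue/logical_block_size")
  printf '%s: %s bytes (ashift %s)\n' "${d##*/}" "$size" "$(ashift_for "$size")"
done
```

On drives like these, which report 512-byte logical blocks, auto-detection picking 9 is expected behaviour rather than a fault.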
You might also be interested in this:
$ cat /proc/partitions | grep -v loop
major minor #blocks name
259 0 976762584 nvme1n1
259 2 1100800 nvme1n1p1
259 3 1048576 nvme1n1p2
259 4 52428800 nvme1n1p3
259 5 104857600 nvme1n1p4
259 6 817324032 nvme1n1p5
259 1 976762584 nvme0n1
259 7 1100800 nvme0n1p1
259 8 1048576 nvme0n1p2
259 9 52428800 nvme0n1p3
259 10 104857600 nvme0n1p4
259 11 817324032 nvme0n1p5
259 12 976762584 nvme2n1
259 14 1100800 nvme2n1p1
259 15 1048576 nvme2n1p2
259 16 52428800 nvme2n1p3
259 17 104857600 nvme2n1p4
259 18 817324032 nvme2n1p5
259 13 976762584 nvme3n1
259 19 1100800 nvme3n1p1
259 20 1048576 nvme3n1p2
259 21 52428800 nvme3n1p3
259 22 104857600 nvme3n1p4
259 23 817324032 nvme3n1p5
9 1 104790016 md1
259 24 104787968 md1p1
9 0 2093056 md0
259 25 2091008 md0p1
11 0 1048575 sr0
And this:
$ cat /proc/mdstat
Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid10 nvme3n1p2[2] nvme1n1p2[2] nvme2n1p2[0] nvme0n1p2[3]
2093056 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md1 : active raid10 nvme2n1p3[3] nvme3n1p3[2] nvme1n1p3[0] nvme0n1p3[2]
104790016 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
unused devices: <none>
Here is some selected output from df:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md1p1 100G 47G 51G 48% /
/dev/md0p1 2.0G 261M 1.6G 15% /boot
/dev/nvme2n1p1 1.1G 6.1M 1.1G 1% /boot/efi
best 325G 128K 325G 1% /best
fast 1006G 128K 1006G 1% /fast
The root file system is btrfs.
As you can see, on my SSDs I use partitions 2 and 3 for mdadm RAID arrays, and partitions 4 and 5 for the ZFS zpools "best" and "fast".
The error message in the email says to check the syslog for more details, but there isn't much there:
$ cat /var/log/syslog | grep smartd
Feb 8 15:20:33 charm smartd[3202]: Device: /dev/nvme2, number of Error Log entries increased from 323 to 324
Feb 8 15:20:33 charm smartd[3202]: Sending warning via /usr/share/smartmontools/smartd-runner to root ...
Feb 8 15:20:33 charm smartd[3202]: Warning via /usr/share/smartmontools/smartd-runner to root: successful
Feb 10 13:47:49 charm smartd[3233]: smartd 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-60-generic] (local build)
Feb 10 13:47:49 charm smartd[3233]: Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
Feb 10 13:47:49 charm smartd[3233]: Opened configuration file /etc/smartd.conf
Feb 10 13:47:49 charm smartd[3233]: Drive: DEVICESCAN, implied '-a' Directive on line 21 of file /etc/smartd.conf
Feb 10 13:47:49 charm smartd[3233]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme0, opened
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme0, PNY CS3140 1TB SSD, S/N:PNY21242106180100095, FW:CS314312, 1.00 TB
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme0, is SMART capable. Adding to "monitor" list.
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme0, state read from /var/lib/smartmontools/smartd.PNY_CS3140_1TB_SSD-PNY21242106180100095.nvme.state
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme1, opened
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme1, PNY CS3140 1TB SSD, S/N:PNY21242106180100093, FW:CS314312, 1.00 TB
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme1, is SMART capable. Adding to "monitor" list.
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme1, state read from /var/lib/smartmontools/smartd.PNY_CS3140_1TB_SSD-PNY21242106180100093.nvme.state
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme2, opened
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme2, PNY CS3140 1TB SSD, S/N:PNY21242106180100092, FW:CS314312, 1.00 TB
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme2, is SMART capable. Adding to "monitor" list.
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme2, state read from /var/lib/smartmontools/smartd.PNY_CS3140_1TB_SSD-PNY21242106180100092.nvme.state
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme3, opened
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme3, PNY CS3140 1TB SSD, S/N:PNY21242106180100094, FW:CS314312, 1.00 TB
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme3, is SMART capable. Adding to "monitor" list.
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme3, state read from /var/lib/smartmontools/smartd.PNY_CS3140_1TB_SSD-PNY21242106180100094.nvme.state
Feb 10 13:47:49 charm smartd[3233]: Monitoring 0 ATA/SATA, 0 SCSI/SAS and 4 NVMe devices
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme0, number of Error Log entries increased from 318 to 320
Feb 10 13:47:49 charm smartd[3233]: Sending warning via /usr/share/smartmontools/smartd-runner to root ...
Feb 10 13:47:49 charm smartd[3233]: Warning via /usr/share/smartmontools/smartd-runner to root: successful
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme1, number of Error Log entries increased from 321 to 323
Feb 10 13:47:49 charm smartd[3233]: Sending warning via /usr/share/smartmontools/smartd-runner to root ...
Feb 10 13:47:49 charm smartd[3233]: Warning via /usr/share/smartmontools/smartd-runner to root: successful
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme2, number of Error Log entries increased from 324 to 326
Feb 10 13:47:49 charm smartd[3233]: Sending warning via /usr/share/smartmontools/smartd-runner to root ...
Feb 10 13:47:49 charm smartd[3233]: Warning via /usr/share/smartmontools/smartd-runner to root: successful
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme3, number of Error Log entries increased from 324 to 326
Feb 10 13:47:49 charm smartd[3233]: Sending warning via /usr/share/smartmontools/smartd-runner to root ...
Feb 10 13:47:49 charm smartd[3233]: Warning via /usr/share/smartmontools/smartd-runner to root: successful
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme0, state written to /var/lib/smartmontools/smartd.PNY_CS3140_1TB_SSD-PNY21242106180100095.nvme.state
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme1, state written to /var/lib/smartmontools/smartd.PNY_CS3140_1TB_SSD-PNY21242106180100093.nvme.state
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme2, state written to /var/lib/smartmontools/smartd.PNY_CS3140_1TB_SSD-PNY21242106180100092.nvme.state
Feb 10 13:47:49 charm smartd[3233]: Device: /dev/nvme3, state written to /var/lib/smartmontools/smartd.PNY_CS3140_1TB_SSD-PNY21242106180100094.nvme.state
Here is some output from smartctl -x for the nvme3 device:
# smartctl -x /dev/nvme3
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-60-generic] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Number: PNY CS3140 1TB SSD
Serial Number: PNY21242106180100094
Firmware Version: CS314312
PCI Vendor/Subsystem ID: 0x1987
IEEE OUI Identifier: 0x6479a7
Total NVM Capacity: 1,000,204,886,016 [1.00 TB]
Unallocated NVM Capacity: 0
Controller ID: 1
NVMe Version: 1.4
Number of Namespaces: 1
Namespace 1 Size/Capacity: 1,000,204,886,016 [1.00 TB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: 6479a7 4fb0c00508
Local Time is: Sat Feb 11 06:57:14 2023 AEDT
Firmware Updates (0x12): 1 Slot, no Reset required
Optional Admin Commands (0x0017): Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005d): Comp DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x08): Telmtry_Lg
Maximum Data Transfer Size: 512 Pages
Warning Comp. Temp. Threshold: 84 Celsius
Critical Comp. Temp. Threshold: 89 Celsius
Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 8.80W - - 0 0 0 0 0 0
1 + 7.10W - - 1 1 1 1 0 0
2 + 5.20W - - 2 2 2 2 0 0
3 - 0.0620W - - 3 3 3 3 2500 7500
4 - 0.0440W - - 4 4 4 4 10500 65000
Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 2
1 - 4096 0 1
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 41 Celsius
Available Spare: 100%
Available Spare Threshold: 5%
Percentage Used: 16%
Data Units Read: 21,133,741 [10.8 TB]
Data Units Written: 151,070,190 [77.3 TB]
Host Read Commands: 202,445,947
Host Write Commands: 2,302,434,105
Controller Busy Time: 5,268
Power Cycles: 58
Power On Hours: 7,801
Unsafe Shutdowns: 33
Media and Data Integrity Errors: 0
Error Information Log Entries: 326
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Error Information (NVMe Log 0x01, 16 of 63 entries)
Num ErrCount SQId CmdId Status PELoc LBA NSID VS
0 326 0 0x1018 0x4004 0x028 0 0 -
So I'm not sure what the error actually is, what is causing it, or how serious it is (everything seems to be working fine). In particular, I don't know how to fix it.
Any advice would be appreciated.
Answer 1
I have the same problem, but with Crucial P3 CT4000P3SSD8 drives. I run two identical mirrors in a ZFS mirrored pool on Ubuntu 22.04 (kernel 5.15.0-67-generic), and every system reboot adds 2 errors to the SMART error log of each drive.
I installed nvme-cli and ran nvme error-log /dev/<your_drive>. In the log I found that the error is 0x2002 (INVALID_FIELD: A reserved coded value or an unsupported value in a defined field).
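For reference, a sketch of pulling the status code out of that output. The sample text below is an illustrative excerpt modelled on nvme-cli's text format, not captured from a real drive; with real hardware you would pipe `nvme error-log /dev/<your_drive>` (run as root) into the same sed expression:

```shell
# Hypothetical excerpt of `nvme error-log` output.
sample='error_count     : 326
sqid            : 0
cmdid           : 0x1018
status_field    : 0x2002(INVALID_FIELD: A reserved coded value or an unsupported value in a defined field)'

# Extract just the numeric status code from the status_field line.
status=$(printf '%s\n' "$sample" |
  sed -n 's/^status_field[[:space:]]*: *\(0x[0-9a-fA-F]*\).*/\1/p')
printf '%s\n' "$status"   # 0x2002
```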
I found this thread on the nvme-cli GitHub page, and in particular this comment from @keithbusch, which says:
The sqid being the admin queue suggests the error probably means that the driver or some tool attempted a harmless optional command that your controller doesn't support. Device vendors are known to be overly pedantic about logging these errors. The spec allows this behaviour but doesn't require it, and it doesn't help anyone.
I think we're safe, and just have to learn to live with this overzealous logging...
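If the mail itself is the nuisance, one option is to replace smartd's DEVICESCAN line with explicit per-device entries that leave out error-log monitoring: DEVICESCAN's implied '-a' includes '-l error', which is what tracks the Error Information Log count. A sketch of what that could look like in /etc/smartd.conf (device paths and the mail target are assumptions for your system; check smartd.conf(5) before relying on it):

```
# Monitor overall health and self-test status, but not the error-log
# entry count; omitting '-l error' stops the reboot-time warnings.
/dev/nvme0 -d nvme -H -l selftest -m root
/dev/nvme1 -d nvme -H -l selftest -m root
/dev/nvme2 -d nvme -H -l selftest -m root
/dev/nvme3 -d nvme -H -l selftest -m root
```

The trade-off is that a genuine surge in logged errors would also go unreported, so keep an occasional manual `smartctl -x` check in your routine.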