Problem

I am running LVM RAID 1 on two disks. Here is what lvs tells me about my VG:

root@picard:~# lvs -a -o +devices,lv_health_status,raid_sync_action,raid_mismatch_count 
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  LV                 VG      Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                 Health          SyncAction Mismatches
  lv-data            vg-data rwi-aor-r- 2.70t                                    100.00           lv-data_rimage_0(0),lv-data_rimage_1(0) refresh needed  idle                0
  [lv-data_rimage_0] vg-data iwi-aor-r- 2.70t                                                     /dev/sda(0)                             refresh needed                       
  [lv-data_rimage_1] vg-data iwi-aor--- 2.70t                                                     /dev/sdb(1)                                                                  
  [lv-data_rmeta_0]  vg-data ewi-aor-r- 4.00m                                                     /dev/sda(708235)                        refresh needed                       
  [lv-data_rmeta_1]  vg-data ewi-aor--- 4.00m                                                     /dev/sdb(0)     
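
For reference, the "r" in the ninth position of the Attr field is what lvs reports as "refresh needed" in the Health column. As a cross-check, the kernel's own view of the array can be read through device-mapper directly; a minimal sketch, assuming LVM's usual name mangling (dashes inside VG/LV names get doubled):

  # dm-raid status line: <raid_type> <#devices> <health_chars> <sync_ratio> ...
  # health characters: A = alive and in-sync, a = alive but not in-sync, D = dead
  dmsetup status vg--data-lv--data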

It looks like something went wrong with /dev/sda. The SMART log of that disk looks fine, so I hope the problem was only temporary, and I would like to refresh/resync my RAID. This is what I did:

root@picard:~# lvchange --refresh vg-data/lv-data
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.

(…wait for a couple of minutes…)
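
While waiting, the resync progress can be polled; something along these lines (a sketch; the field names are standard lvs report fields):

  watch -n 5 'lvs -a -o lv_name,copy_percent,raid_sync_action vg-data'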

root@picard:~# lvs -a -o +devices,lv_health_status,raid_sync_action,raid_mismatch_count
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  LV                 VG      Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                 Health          SyncAction Mismatches
  lv-data            vg-data rwi-aor-r- 2.70t                                    100.00           lv-data_rimage_0(0),lv-data_rimage_1(0) refresh needed  idle                0
  [lv-data_rimage_0] vg-data iwi-aor-r- 2.70t                                                     /dev/sda(0)                             refresh needed                       
  [lv-data_rimage_1] vg-data iwi-aor--- 2.70t                                                     /dev/sdb(1)                                                                  
  [lv-data_rmeta_0]  vg-data ewi-aor-r- 4.00m                                                     /dev/sda(708235)                        refresh needed                       
  [lv-data_rmeta_1]  vg-data ewi-aor--- 4.00m                                                     /dev/sdb(0)               

So, this did not accomplish anything? My dmesg indicates that it tried to revive the RAID:

[150522.459416] device-mapper: raid: Faulty raid1 device #0 has readable super block.  Attempting to revive it.
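
(The "readable super block" part suggests the disk still answers reads. A quick, non-destructive way to double-check that, independent of SMART; the 1 MiB read size is arbitrary:)

  # read the first MiB of the disk directly, bypassing the page cache
  dd if=/dev/sda of=/dev/null bs=1M count=1 iflag=direct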

Well, maybe scrubbing helps? Let's try:

root@picard:~# lvchange --syncaction repair vg-data/lv-data
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
root@picard:~# lvs -a -o +devices,lv_health_status,raid_sync_action,raid_mismatch_count
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  LV                 VG      Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                 Health          SyncAction Mismatches
  lv-data            vg-data rwi-aor-r- 2.70t                                    100.00           lv-data_rimage_0(0),lv-data_rimage_1(0) refresh needed  idle                0
  [lv-data_rimage_0] vg-data iwi-aor-r- 2.70t                                                     /dev/sda(0)                             refresh needed                       
  [lv-data_rimage_1] vg-data iwi-aor--- 2.70t                                                     /dev/sdb(1)                                                                  
  [lv-data_rmeta_0]  vg-data ewi-aor-r- 4.00m                                                     /dev/sda(708235)                        refresh needed                       
  [lv-data_rmeta_1]  vg-data ewi-aor--- 4.00m                                                     /dev/sdb(0)            

Now, there are a couple of strange things here:

  • SyncAction is idle, i.e. the scrub apparently finished instantly?
  • If the scrub finished and the array still needs a refresh, how can the mismatch count still be 0? Shouldn't the scrub detect mismatches and either correct them (i.e. clear the "refresh needed" state) or raise the mismatch count to something non-zero? (A check-only scrub, sketched further below, should at least populate that counter.)

dmesg says:

[150695.091180] md: requested-resync of RAID array mdX
[150695.092285] md: mdX: requested-resync done.

So it seems the scrub did not actually do anything. Presumably, with leg #0 marked faulty, there is only one readable copy left, nothing to compare against, and the requested resync therefore completes immediately?
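
For comparison, a check-only scrub is supposed to count mismatches without writing anything; a sketch using the documented check mode of lvchange:

  lvchange --syncaction check vg-data/lv-data
  lvs -a -o +raid_sync_action,raid_mismatch_count vg-data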

Questions

  • How do I invoke an actual scrub?
  • Assuming the drive is not faulty, how do I refresh the array?
  • If the drive were faulty (i.e. the refresh would immediately run into errors), how would I see that? I assume dmesg should show some I/O errors? (I do not see any of them… a way to watch for them while retrying is sketched after this list.)
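
(For the last point, a simple way to watch for I/O errors while retrying; journalctl should be available on this systemd-based system:)

  # follow kernel messages live in one terminal...
  journalctl -kf
  # ...while retrying the refresh in another
  lvchange --refresh vg-data/lv-data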

System information

I am running Armbian, based on Ubuntu 16.04.4 LTS. LVM version:

root@picard:~# lvm version
  LVM version:     2.02.133(2) (2015-10-30)
  Library version: 1.02.110 (2015-10-30)
  Driver version:  4.37.0
