I have two disks with identical partition tables (cloned with sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb), but mdadm refuses to add the second one to my RAID-1 array:
$ sudo mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: /dev/sdb1 not large enough to join array
Any idea what is going on here?
Details
$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Mar 22 19:34:24 2018
Raid Level : raid1
Array Size : 976627712 (931.38 GiB 1000.07 GB)
Used Dev Size : 976627712 (931.38 GiB 1000.07 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Aug 25 11:56:19 2020
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : hostname:0 (local to host hostname)
UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
Events : 459187
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 1 1 active sync /dev/sda1
$ sudo parted /dev/sda unit s print
Model: XXX (scsi)
Disk /dev/sda: 1953525168s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 2048s 1948268543s 1948266496s primary raid
$ sudo parted /dev/sdb unit s print
Model: XXX (scsi)
Disk /dev/sdb: 1953519616s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 2048s 1948268543s 1948266496s primary raid
$ sudo mdadm -E /dev/sda1
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
Name : hostname:0 (local to host hostname)
Creation Time : Thu Mar 22 19:34:24 2018
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1948004352 (928.88 GiB 997.38 GB)
Array Size : 976627712 (931.38 GiB 1000.07 GB)
Used Dev Size : 1953255424 (931.38 GiB 1000.07 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=18446744073704300544 sectors
State : clean
Device UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Aug 25 12:39:03 2020
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : f47ecd0c - correct
Events : 459193
Device Role : Active device 1
Array State : .A ('A' == active, '.' == missing, 'R' == replacing)
$ sudo mdadm -E /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
Name : hostname:0 (local to host hostname)
Creation Time : Thu Mar 22 19:34:24 2018
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1948004352 (928.88 GiB 997.38 GB)
Array Size : 976627712 (931.38 GiB 1000.07 GB)
Used Dev Size : 1953255424 (931.38 GiB 1000.07 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=18446744073704300544 sectors
State : clean
Device UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Aug 25 10:03:24 2020
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 4e58ad84 - correct
Events : 81346
Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing, 'R' == replacing)
Answer 1
You most likely built the array originally on the whole disks, without any partitions. Later you apparently added a partition table, which confused things: mdadm is still expecting the whole disk.
Consider the metadata format as well; it may be impossible to tell whether the RAID metadata applies to the whole disk or only to the partition. You could rebuild the array with a newer metadata format.
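The whole-disk theory is consistent with the numbers in the question. A quick sanity check (plain shell arithmetic, using only the figures reported by parted and mdadm -E above) shows the partition cannot hold what the superblock demands, while the bare disk could:

```shell
# Figures copied from the output above (all in 512-byte sectors).
PART_SIZE=1948266496      # /dev/sdb1 size reported by parted
DATA_OFFSET=262144        # "Data Offset" from mdadm -E
USED_DEV_SIZE=1953255424  # "Used Dev Size" the array insists on
WHOLE_DISK=1953525168     # /dev/sda whole-disk size reported by parted

AVAIL=$(( PART_SIZE - DATA_OFFSET ))
echo "available in partition: $AVAIL"           # matches "Avail Dev Size"
echo "needed by the array:    $USED_DEV_SIZE"

# The partition falls short, but the whole disk (minus the same data
# offset) would fit -- consistent with the array having been created
# on the bare disk before the partition table existed:
[ "$AVAIL" -lt "$USED_DEV_SIZE" ] && echo "partition too small"
[ $(( WHOLE_DISK - DATA_OFFSET )) -ge "$USED_DEV_SIZE" ] && echo "whole disk would fit"
```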
You could create a new RAID array with the second drive as its only active member, spanning the whole partition minus a few MB as a safety margin in case this problem ever recurs. Then copy all the data from the old array to the new one.
Finally, wipe the original array and add the old disk to the new array.
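A hypothetical sketch of that rebuild, using the device names from the question and an arbitrary 8 MiB margin; to stay safe it only prints the destructive commands instead of running them:

```shell
# Sketch only: echoes the commands rather than executing them.
PART_SECTORS=1948266496            # size of /dev/sdb1 (from parted, 512 B sectors)
MARGIN_KIB=$(( 8 * 1024 ))         # leave ~8 MiB spare as a safety margin
SIZE_KIB=$(( PART_SECTORS / 2 - MARGIN_KIB ))   # mdadm --size is given in KiB

# New degraded RAID-1 on the second drive only ("missing" holds the slot open):
echo "mdadm --create /dev/md1 --level=1 --raid-devices=2 --size=$SIZE_KIB /dev/sdb1 missing"
echo "# ...copy all data from /dev/md0 to /dev/md1 (e.g. with rsync)..."
# Then retire the old array and hand its disk to the new one:
echo "mdadm --stop /dev/md0"
echo "mdadm --zero-superblock /dev/sda1"
echo "mdadm --manage /dev/md1 --add /dev/sda1"
```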
Answer 2
I ran into this problem too. The culprit: I had rearranged some partitions on the disk, and the kernel's (too small) idea of the partition size still reflected a since-deleted partition. I had to force a re-read of the partition table of the disk whose partition had been enlarged.
$ sudo partprobe /dev/sdb # to re-read current partition table of the device