I have set up a Ceph cluster (Quincy) that already has 2 nodes and 4 OSDs. Now I have added a third host, running Debian (Bullseye), to the cluster. The new host is detected correctly and is running a mon daemon.
The problem is that no OSDs are listed for the new host, even though it has 2 available disks. When I run the following command on one of the nodes:
$ sudo ceph orch device ls
I only see the devices of the other nodes; the new node is not listed at all.
However, lsblk shows the two available disks on the new host:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 476.9G 0 disk
sdb 8:16 1 476.9G 0 disk
├─sdb1 8:17 1 16G 0 part [SWAP]
├─sdb2 8:18 1 1G 0 part /boot
└─sdb3 8:19 1 459.9G 0 part /
sdc 8:32 1 476.9G 0 disk
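For context, ceph-volume only treats a disk as available when it is a whole device with no partitions, no filesystem signature and no LVM metadata. The following is a minimal sketch (a hypothetical helper, not part of Ceph) that applies the same idea to `lsblk --json` output, to check which disks a host should be offering:

```python
import json

# Hypothetical helper: roughly mimics ceph-volume's availability check --
# a candidate must be a whole disk with no partitions and no filesystem.
def available_disks(lsblk_json: str) -> list[str]:
    devices = json.loads(lsblk_json)["blockdevices"]
    return [
        d["name"]
        for d in devices
        if d["type"] == "disk"
        and not d.get("children")   # no partition entries
        and not d.get("fstype")     # no filesystem signature
    ]

# Sample data shaped like `lsblk --json -o NAME,TYPE,FSTYPE` on the new host.
sample = json.dumps({"blockdevices": [
    {"name": "sda", "type": "disk", "fstype": None},
    {"name": "sdb", "type": "disk", "fstype": None, "children": [
        {"name": "sdb1", "type": "part", "fstype": "swap"},
        {"name": "sdb2", "type": "part", "fstype": "ext3"},
        {"name": "sdb3", "type": "part", "fstype": "ext4"},
    ]},
    {"name": "sdc", "type": "disk", "fstype": None},
]})

print(available_disks(sample))  # sda and sdc are candidates; sdb holds the OS
```

By this rule, both sda and sdc should have shown up in the inventory, which makes their absence here surprising.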
I also tried the ceph-volume command on the new host, but it did not find any disks either:
$ sudo cephadm ceph-volume inventory
Inferring fsid e79............
Device Path Size Device nodes rotates available Model name
I have already removed the new host and reinstalled it with a fresh operating system, but I cannot figure out why Ceph does not find any disks.
Could it be that Ceph does not allow nodes mixing SSD/SATA and SSD/NVMe?
The cephadm.log output during the ceph-volume inventory call does not seem to provide any additional information:
2022-12-08 00:15:15,432 7fdca25ac740 DEBUG --------------------------------------------------------------------------------
cephadm ['ceph-volume', 'inventory']
2022-12-08 00:15:15,432 7fdca25ac740 DEBUG Using default config /etc/ceph/ceph.conf
2022-12-08 00:15:16,131 7fee4d4c8740 DEBUG --------------------------------------------------------------------------------
cephadm ['check-host']
2022-12-08 00:15:16,131 7fee4d4c8740 INFO docker (/usr/bin/docker) is present
2022-12-08 00:15:16,131 7fee4d4c8740 INFO systemctl is present
2022-12-08 00:15:16,131 7fee4d4c8740 INFO lvcreate is present
2022-12-08 00:15:16,176 7fee4d4c8740 INFO Unit ntp.service is enabled and running
2022-12-08 00:15:16,176 7fee4d4c8740 INFO Host looks OK
2022-12-08 00:15:16,444 7f370bfbf740 DEBUG --------------------------------------------------------------------------------
cephadm ['--image', 'quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45', 'ls']
2022-12-08 00:15:20,100 7fdca25ac740 INFO Inferring fsid 0f3cd66c-74e5-11ed-813b-901b0e95a162
2022-12-08 00:15:20,121 7fdca25ac740 DEBUG /usr/bin/docker: stdout quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45|cc65afd6173a|v17|2022-10-18 01:41:41 +0200 CEST
2022-12-08 00:15:22,253 7f6f2e30a740 DEBUG --------------------------------------------------------------------------------
cephadm ['gather-facts']
2022-12-08 00:15:22,482 7f82221ce740 DEBUG --------------------------------------------------------------------------------
cephadm ['--image', 'quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45', 'list-networks']
2022-12-08 00:15:24,261 7fdca25ac740 DEBUG Using container info for daemon 'mon'
2022-12-08 00:15:24,261 7fdca25ac740 INFO Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 01:41:41 +0200 CEST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
The ceph-volume.log output:
[2022-12-07 23:24:00,496][ceph_volume.main][INFO ] Running command: ceph-volume inventory
[2022-12-07 23:24:00,499][ceph_volume.util.system][INFO ] Executable lvs found on the host, will use /sbin/lvs
[2022-12-07 23:24:00,499][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-12-07 23:24:00,560][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -P -o NAME,KNAME,PKNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
[2022-12-07 23:24:00,569][ceph_volume.process][INFO ] stdout NAME="sda" KNAME="sda" PKNAME="" MAJ:MIN="8:0" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="Crucial_CT500MX2" SIZE="465.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO ] stdout NAME="sda1" KNAME="sda1" PKNAME="sda" MAJ:MIN="8:1" FSTYPE="swap" MOUNTPOINT="[SWAP]" LABEL="" UUID="51f95805-2d5f-4cba-a885-775a0c19ad53" RO="0" RM="1" MODEL="" SIZE="32G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO ] stdout NAME="sda2" KNAME="sda2" PKNAME="sda" MAJ:MIN="8:2" FSTYPE="ext3" MOUNTPOINT="/rootfs/boot" LABEL="" UUID="676438b6-3214-4c05-bc6b-94bd7a88c26f" RO="0" RM="1" MODEL="" SIZE="1G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO ] stdout NAME="sda3" KNAME="sda3" PKNAME="sda" MAJ:MIN="8:3" FSTYPE="ext4" MOUNTPOINT="/rootfs" LABEL="" UUID="a251c9b0-a91c-4768-bd42-5730e032ce58" RO="0" RM="1" MODEL="" SIZE="432.8G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO ] stdout NAME="sdb" KNAME="sdb" PKNAME="" MAJ:MIN="8:16" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="Crucial_CT500MX2" SIZE="465.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2022-12-07 23:24:00,573][ceph_volume.util.system][INFO ] Executable pvs found on the host, will use /sbin/pvs
[2022-12-07 23:24:00,573][ceph_volume.process][INFO ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o pv_name,vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
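The log shows that ceph-volume builds its inventory from `lsblk -P`, which prints one KEY="value" pair list per device. When debugging which fields (FSTYPE, ROTA, TYPE, ...) ceph-volume actually sees for a disk, it can help to parse such a line back into a dict. A minimal sketch (hypothetical helper, not part of Ceph):

```python
import shlex

# Hypothetical parser for the KEY="value" pair format emitted by the
# `lsblk -P` call seen in the ceph-volume.log above.
def parse_lsblk_pairs(line: str) -> dict[str, str]:
    # shlex strips the quotes; each remaining token looks like KEY=value
    return dict(token.split("=", 1) for token in shlex.split(line))

# One (shortened) stdout line from the log above
line = 'NAME="sda" KNAME="sda" PKNAME="" FSTYPE="" ROTA="0" TYPE="disk" MODEL="Crucial_CT500MX2"'
dev = parse_lsblk_pairs(line)
print(dev["NAME"], dev["TYPE"], dev["ROTA"])  # ROTA="0" means non-rotational (SSD)
```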
Answer 1
After searching for quite a while for why the SAS devices in this node could not be detected, I managed to add the HDDs manually as OSDs by using the following commands:
cephadm shell
ceph orch daemon add osd --method raw host1:/dev/sda