Why is Ceph not detecting the SSD devices on a new node?

I installed a Ceph cluster (Quincy) that already has 2 nodes and 4 OSDs. I have now added a third host, running Debian (bullseye), to the cluster. The new host is detected correctly and a mon daemon has been deployed on it.

The problem is that no OSDs are listed on the new host, even though it has 2 available disks. When I run the following command on one of my nodes:

$ sudo ceph orch device ls

I only see the devices of the other nodes; the new node is not listed at all.
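
To rule out a stale device cache on the manager, a minimal sketch of forcing a refresh of the orchestrator's device inventory (assuming it is run from a node with the admin keyring):

$ sudo ceph orch device ls --refresh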

But lsblk shows the two available disks on the new host:

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    1 476.9G  0 disk 
sdb      8:16   1 476.9G  0 disk 
├─sdb1   8:17   1    16G  0 part [SWAP]
├─sdb2   8:18   1     1G  0 part /boot
└─sdb3   8:19   1 459.9G  0 part /
sdc      8:32   1 476.9G  0 disk 

I also tried the ceph-volume command on the new host, but this command did not find any disks either:

$ sudo cephadm ceph-volume inventory
Inferring fsid e79............
Device Path               Size         Device nodes    rotates available Model name
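
The same inventory can also be dumped as JSON, which includes a rejected_reasons field per device and is often more telling than the empty table above (a sketch, using cephadm shell to run ceph-volume inside the container):

$ sudo cephadm shell -- ceph-volume inventory --format json-pretty
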

I have already removed the new host and reinstalled it with a fresh operating system, but I cannot figure out why Ceph does not find any disks.

Could it be that Ceph does not allow mixing nodes with SATA SSDs and NVMe SSDs?
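
One way to check whether the drive type is really the issue is to look at the rejection reasons the orchestrator reports in its wide listing (a sketch, run from any node with the admin keyring):

$ sudo ceph orch device ls --wide    # the wide view includes a REJECT REASONS column

If leftover partitions, LVM metadata or a filesystem turn out to be the reason, zapping the disks usually makes them show up as available (the hostname host3 below is a placeholder for the new node's name):

$ sudo ceph orch device zap host3 /dev/sda --force
$ sudo ceph orch device zap host3 /dev/sdc --force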


The cephadm.log output during the ceph-volume inventory call does not seem to provide any additional information:

2022-12-08 00:15:15,432 7fdca25ac740 DEBUG --------------------------------------------------------------------------------
cephadm ['ceph-volume', 'inventory']
2022-12-08 00:15:15,432 7fdca25ac740 DEBUG Using default config /etc/ceph/ceph.conf
2022-12-08 00:15:16,131 7fee4d4c8740 DEBUG --------------------------------------------------------------------------------
cephadm ['check-host']
2022-12-08 00:15:16,131 7fee4d4c8740 INFO docker (/usr/bin/docker) is present
2022-12-08 00:15:16,131 7fee4d4c8740 INFO systemctl is present
2022-12-08 00:15:16,131 7fee4d4c8740 INFO lvcreate is present
2022-12-08 00:15:16,176 7fee4d4c8740 INFO Unit ntp.service is enabled and running
2022-12-08 00:15:16,176 7fee4d4c8740 INFO Host looks OK
2022-12-08 00:15:16,444 7f370bfbf740 DEBUG --------------------------------------------------------------------------------
cephadm ['--image', 'quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45', 'ls']
2022-12-08 00:15:20,100 7fdca25ac740 INFO Inferring fsid 0f3cd66c-74e5-11ed-813b-901b0e95a162
2022-12-08 00:15:20,121 7fdca25ac740 DEBUG /usr/bin/docker: stdout quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45|cc65afd6173a|v17|2022-10-18 01:41:41 +0200 CEST
2022-12-08 00:15:22,253 7f6f2e30a740 DEBUG --------------------------------------------------------------------------------
cephadm ['gather-facts']
2022-12-08 00:15:22,482 7f82221ce740 DEBUG --------------------------------------------------------------------------------
cephadm ['--image', 'quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45', 'list-networks']
2022-12-08 00:15:24,261 7fdca25ac740 DEBUG Using container info for daemon 'mon'
2022-12-08 00:15:24,261 7fdca25ac740 INFO Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 01:41:41 +0200 CEST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45

The ceph-volume.log output:

[2022-12-07 23:24:00,496][ceph_volume.main][INFO  ] Running command: ceph-volume  inventory
[2022-12-07 23:24:00,499][ceph_volume.util.system][INFO  ] Executable lvs found on the host, will use /sbin/lvs
[2022-12-07 23:24:00,499][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S  -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-12-07 23:24:00,560][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -P -o NAME,KNAME,PKNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
[2022-12-07 23:24:00,569][ceph_volume.process][INFO  ] stdout NAME="sda" KNAME="sda" PKNAME="" MAJ:MIN="8:0" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="Crucial_CT500MX2" SIZE="465.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO  ] stdout NAME="sda1" KNAME="sda1" PKNAME="sda" MAJ:MIN="8:1" FSTYPE="swap" MOUNTPOINT="[SWAP]" LABEL="" UUID="51f95805-2d5f-4cba-a885-775a0c19ad53" RO="0" RM="1" MODEL="" SIZE="32G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO  ] stdout NAME="sda2" KNAME="sda2" PKNAME="sda" MAJ:MIN="8:2" FSTYPE="ext3" MOUNTPOINT="/rootfs/boot" LABEL="" UUID="676438b6-3214-4c05-bc6b-94bd7a88c26f" RO="0" RM="1" MODEL="" SIZE="1G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO  ] stdout NAME="sda3" KNAME="sda3" PKNAME="sda" MAJ:MIN="8:3" FSTYPE="ext4" MOUNTPOINT="/rootfs" LABEL="" UUID="a251c9b0-a91c-4768-bd42-5730e032ce58" RO="0" RM="1" MODEL="" SIZE="432.8G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO  ] stdout NAME="sdb" KNAME="sdb" PKNAME="" MAJ:MIN="8:16" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="Crucial_CT500MX2" SIZE="465.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2022-12-07 23:24:00,573][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2022-12-07 23:24:00,573][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o pv_name,vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size

1 Answer

After searching for quite a while without success and still not getting the SAS devices on my node detected, I managed to bring my HDDs in as OSDs by adding them manually with the following commands:

cephadm shell
ceph orch daemon add osd --method raw host1:/dev/sda
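
Assuming the raw add succeeds, the new OSDs should appear shortly afterwards; a quick way to verify (still inside the cephadm shell):

ceph orch ps --daemon-type osd
ceph osd tree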
