After deploying OpenStack via Juju, the ceph-osd units end up blocked:
$: juju status
ceph-osd/0 blocked idle 1 10.20.253.197 No block devices detected using current configuration
ceph-osd/1* blocked idle 2 10.20.253.199 No block devices detected using current configuration
ceph-osd/2 blocked idle 0 10.20.253.200 No block devices detected using current configuration
I used juju ssh to get into the first machine, the one running ceph-osd/0:
$: juju ssh ceph-osd/0
and ran the following commands:
$: sudo fdisk -l
Disk /dev/vda: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xaa276e23
Device Boot Start End Sectors Size Id Type
/dev/vda1 2048 1048575966 1048573919 500G 83 Linux
Disk /dev/vdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CAA6111D-5ECF-48EB-B4BF-9EC58E38AD64
Device Start End Sectors Size Type
/dev/vdb1 2048 4095 2048 1M BIOS boot
/dev/vdb2 4096 1048563711 1048559616 500G Linux filesystem
$: df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.9G 0 7.9G 0% /dev
tmpfs 1.6G 856K 1.6G 1% /run
/dev/vda1 492G 12G 455G 3% /
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
tmpfs 100K 0 100K 0% /var/lib/lxd/shmounts
tmpfs 100K 0 100K 0% /var/lib/lxd/devlxd
tmpfs 1.6G 0 1.6G 0% /run/user/1000
$: lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 500G 0 disk
└─vda1 252:1 0 500G 0 part /
vdb 252:16 0 500G 0 disk
├─vdb1 252:17 0 1M 0 part
└─vdb2 252:18 0 500G 0 part
If the environment is already deployed, I resolved this with the following two tasks:
1° Task
(The commands for this task were shown in screenshots that are no longer available.)
I repeated this task on the other machines as well, ceph-osd/1 and ceph-osd/2.
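The exact commands from the missing screenshots are unknown, but given that the charm needs an unpartitioned disk (see below) and lsblk shows /dev/vdb carrying vdb1 and vdb2, the first task amounts to wiping that disk. A minimal sketch, assuming sgdisk (from the gdisk package) is installed and /dev/vdb holds no data you need:

```shell
# Hedged sketch: clear /dev/vdb so the ceph-osd charm sees a clean disk.
# WARNING: destroys all data on the disk.
sudo wipefs --all /dev/vdb      # remove filesystem and partition-table signatures
sudo sgdisk --zap-all /dev/vdb  # destroy both GPT and MBR structures
sudo partprobe /dev/vdb         # have the kernel re-read the (now empty) partition table
```

After this, lsblk should show vdb as a bare disk with no child partitions.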
2° Task
In the Juju GUI, I changed the osd-devices string from /dev/sdb to /dev/vdb1 on the 3 ceph-osd units, then saved and committed the change.
The units' status is now "idle".
If instead the OpenStack deployment has not been run yet, then before deploying we must change the osd-devices (string) option from /dev/sdb to /dev/vdb on the 3 ceph-osd units in the Juju UI, and then commit.
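The same change can also be made from the command line instead of the GUI; the application name ceph-osd is taken from the deployment above:

```shell
# Set the osd-devices charm option on the ceph-osd application,
# then check that the units leave the blocked state.
juju config ceph-osd osd-devices='/dev/vdb'
juju status ceph-osd
```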
The default disk path of ceph-base is currently set to '/dev/sdb'. You have to set it to the path of the disk that will hold the ceph-osd data ('/dev/vdb').
The disk should have no partitions on it when you configure it. After that, the ceph-osd units should become active.
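To confirm the disk really is unpartitioned before the charm picks it up, lsblk on that device should list it with no child entries:

```shell
# vdb should appear as a bare disk, with no vdb1/vdb2 partitions beneath it
lsblk /dev/vdb
```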