I have a Ceph cluster with 2 OSDs. The initial size of the backing volumes was 16 GB. I then shut down the OSDs, ran `lvextend` on both backing volumes, and started the OSDs again.
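Roughly the steps I ran on each OSD (the OSD id, LV path, and size below are placeholders for my actual values):

```
# stop the OSD daemon before touching its backing volume
systemctl stop ceph-osd@0

# grow the logical volume backing the OSD
lvextend -L +16G /dev/ceph-vg/osd-0

# start the OSD again so it sees the larger device
systemctl start ceph-osd@0
```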
Now `ceph osd df` shows:
But `ceph -s` shows it's stuck at `active+remapped+backfill_toofull` for 50 PGs:
I tried to understand the mechanism by reading up on the CRUSH algorithm, but it seems to require a lot of effort and background knowledge. I would really appreciate it if someone could explain this behaviour (why are the PGs stuck in toofull even though the free space has increased significantly?) and help me resolve this state.
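For reference, these are the standard commands I can run to gather more detail; as far as I understand, the thresholds involved are the nearfull/backfillfull/full ratios:

```
# show the cluster-wide full / backfillfull / nearfull ratios
ceph osd dump | grep ratio

# per-OSD utilisation, to see which OSDs are over the backfillfull ratio
ceph osd df tree

# list the PGs stuck in an unclean state
ceph pg dump_stuck unclean
```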