I have a Ceph cluster with 2 OSDs. The initial size of the backing volumes was 16 GB. I then shut down the OSDs, ran lvextend on both, and turned the OSDs back on.
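For reference, the steps I followed were roughly the following (the VG/LV names and sizes are just placeholders for my actual volumes):

```
# stop the OSD daemons before touching the backing volumes
sudo systemctl stop ceph-osd@0
sudo systemctl stop ceph-osd@1

# grow each backing logical volume
sudo lvextend -L +16G /dev/vg-ceph/osd-0
sudo lvextend -L +16G /dev/vg-ceph/osd-1

# start the OSDs again
sudo systemctl start ceph-osd@0
sudo systemctl start ceph-osd@1
```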
Now `ceph osd df` shows:
But `ceph -s` shows it's stuck at active+remapped+backfill_toofull for 50 PGs:
I tried to understand the mechanism by reading about the CRUSH algorithm, but it seems that requires a lot of effort and background knowledge. I would appreciate it if anyone could explain this behaviour (why the PGs are stuck in toofull even though the free space has increased significantly) and help me resolve this state.
Your RAW USE is several times larger than your DATA. Note: I have not tried this solution myself, but this is what I've found: Re: Raw use 10 times higher than data use
Similar advice: https://stackoverflow.com/questions/68185503/ceph-df-octopus-shows-used-is-7-times-higher-than-stored-in-erasure-coded-pool/68186461#68186461
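To see where the raw usage is actually going, commands along these lines should help (column names vary a little between Ceph releases):

```
# cluster-wide view: STORED vs USED per pool
ceph df detail

# per-OSD view: RAW USE, DATA, OMAP and META columns
ceph osd df tree

# replication / erasure-code settings that multiply raw usage
ceph osd pool ls detail
```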
Another way to fix the problem: you could try following the instructions shown in your second screenshot.
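I can't quote the screenshot here, but the commands usually suggested for clearing backfill_toofull (assuming the OSDs really do have free space now) look roughly like this; the 0.95 ratio is only an example and should be lowered again once recovery finishes:

```
# check the current full / backfillfull / nearfull ratios
ceph osd dump | grep -i ratio

# temporarily raise the backfillfull ratio so backfill can proceed
ceph osd set-backfillfull-ratio 0.95

# watch recovery progress
ceph -s
ceph health detail

# once the cluster is healthy again, restore the default ratio
ceph osd set-backfillfull-ratio 0.90
```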