We have a 12 TB RAID 6 array which is supposed to be set up as a single partition with an XFS file system. After creating the new file system, df reports 78 GB in use, even though there are no files on the drive.
[root@i00a ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 11M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sdb3 154G 3.9G 150G 3% /
/dev/sdb2 1014M 153M 862M 16% /boot
/dev/sdb1 599M 6.7M 593M 2% /boot/efi
/dev/sdc1 187G 1.6G 185G 1% /var
tmpfs 6.3G 0 6.3G 0% /run/user/0
/dev/sda1 11T 78G 11T 1% /export/libvirt
Did I do something wrong? Is this by design?
It looks like the file system log only takes up about 2 GB, and I can't figure out what else could be using the space.
[root@i00a ~]# xfs_info /export/libvirt/
meta-data=/dev/sda1 isize=512 agcount=11, agsize=268435455 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=2929458688, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=521728, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Partition information:
[root@irb00a ~]# parted /dev/sda1
GNU Parted 3.2
Using /dev/sda1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Unknown (unknown)
Disk /dev/sda1: 12.0TB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 12.0TB 12.0TB xfs
This is a Dell FX2 with four FC430 compute nodes and two FD332 storage nodes, running Red Hat Enterprise Linux 8 (Ootpa).
All filesystems have an overhead for their own internal data structures. This internal information is used by the filesystem to create files and directories in the future, and to keep track of where everything is allocated. This data is collectively known as "metadata": data "about" the data on the filesystem. Metadata is considered overhead, as it takes up space but is not user data. This overhead is an unavoidable side effect of using any filesystem.
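As a quick illustration of why metadata shows up as "Used": df derives that column from the kernel's statvfs counters, which make no distinction between user data and metadata. A minimal Python sketch (run against "/" here as a stand-in for /export/libvirt):

```python
import os

# How df arrives at "Used": total blocks minus free blocks, as reported
# by statvfs. On a freshly created filesystem this difference is purely
# metadata overhead, since no user files exist yet.
st = os.statvfs("/")

block_size = st.f_frsize
size = st.f_blocks * block_size   # filesystem size in bytes
free = st.f_bfree * block_size    # free bytes (including root-reserved)
used = size - free                # what df shows in the "Used" column

print(f"size: {size / 1024**3:.1f} GiB, used: {used / 1024**3:.1f} GiB")
```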
According to this blog post, XFS has an overhead of around 0.5% of the total disk space. (Note that the post is from 2009, but there's no reason this should have changed drastically.) He got that result by testing the filesystem overhead of over a dozen different filesystems using guestfish.

0.5% of your 12 TB space is 60 GB, so it sounds like that's pretty close to the expected usage. I suspect his number should have been slightly higher than 0.5%, but that it was rounded.
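For concreteness, here is the arithmetic. Note the unit mismatch: parted reports decimal terabytes while df -h reports binary units (G is GiB, T is TiB), which is also why the 12 TB partition shows up as "11T":

```python
# Checking the ~0.5% overhead estimate against the asker's numbers.
# Illustrative arithmetic only; the 0.5% figure comes from the blog post.

TB = 1000**4    # parted uses decimal terabytes
GiB = 1024**3   # df -h uses binary units

disk_bytes = 12 * TB
estimate = 0.005 * disk_bytes                       # 0.5% of 12 TB
print(f"0.5% of 12 TB = {estimate / GiB:.0f} GiB")  # -> 56 GiB

observed = 78 * GiB                                 # df's "Used" column
print(f"observed: {observed / disk_bytes:.2%}")     # -> 0.70%
```

So the observed overhead works out to roughly 0.7% of the raw capacity, slightly above the blog's rounded 0.5%.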
For XFS, the empty filesystem "Used" figure as shown by df -h seems to depend a lot on which metadata features you enable at mkfs.xfs time.

Testing with an empty 12 TB file:

- Default settings (on my current ArchLinux system): 12G used
- Using reflink=1: 78G used
- Using crc=0, reflink=0 (for some reason, that also forces finobt=0 and sparse=0): 33M used

In short: "Used" space on a fresh 12 TB filesystem is 78G, 12G, or as low as 33M, depending on which metadata features you enable at mkfs time.
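The experiment is easy to reproduce without 12 TB of real disk, since mkfs.xfs happily formats a sparse file, and xfs_db can read the block counts straight from the superblock without mounting anything. A sketch, assuming xfsprogs is installed and the current filesystem allows 12 TB sparse files (the test.img path is illustrative):

```shell
# Create a sparse 12 TB backing file; it consumes almost no real disk.
truncate -s 12T test.img

# Format with default options, then inspect total vs free data blocks.
mkfs.xfs -q -f test.img
xfs_db -c 'sb 0' -c 'print dblocks fdblocks' test.img

# Reformat with old-style minimal metadata and compare.
mkfs.xfs -q -f -m crc=0,reflink=0 test.img
xfs_db -c 'sb 0' -c 'print dblocks fdblocks' test.img

# "Used" bytes = (dblocks - fdblocks) * block size (4096 by default).
```

Comparing dblocks minus fdblocks across the two runs shows how much of the difference comes from the newer metadata features alone.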