This is an obscure question, I know. I'm trying to do some performance testing of some disks on a Linux box, and I'm getting inconsistent results when running the same test on the same disk. I know that disks have different performance depending on which part of the disk is being accessed. In particular, reads and writes near the outer edge of the disk have much higher throughput than those near the inner edge: data density is roughly constant while rotational speed is constant, so more data passes under the head per revolution on the outer tracks.
I'd like to see if my inconsistencies can be attributed to this geometry-induced variance in throughput. Is it possible, using existing tools, to find out where on the disk a file has been placed?
If not, I suppose I can write something to directly seek, read, and write to the device file itself, bypassing (and destroying) the filesystem, but I'm hoping to avoid that. I'm currently using ext4 on a 3.0 kernel (Arch Linux, if it matters), but I'm interested in techniques for other filesystems as well.
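For what it's worth, the read side of that fallback can be done non-destructively; a rough sketch with dd, assuming a hypothetical device /dev/sdX (the offsets are arbitrary and need to stay within the size of the disk):

    # read 256 MiB from the start of the device (outer tracks)
    sudo dd if=/dev/sdX of=/dev/null bs=1M count=256 iflag=direct
    # read 256 MiB starting ~400 GiB in (towards the inner tracks)
    sudo dd if=/dev/sdX of=/dev/null bs=1M count=256 skip=409600 iflag=direct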
You can use the FIBMAP ioctl, as exemplified here, or hdparm:
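Something like this should work (a sketch; the path is a placeholder, and --fibmap needs root):

    # prints the sector (LBA) ranges that back the file
    sudo hdparm --fibmap /path/to/file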
You could use debugfs for this. Change the hard drive/partition accordingly and make sure it is unmounted; you will get a list of all the blocks the file uses.
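For example, debugfs's blocks command prints every block number an inode occupies (a sketch; the device name is a placeholder and the file path is given relative to that filesystem's root):

    # run against the unmounted filesystem
    sudo debugfs -R "blocks /path/to/file" /dev/sdXN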
This thread may give you some insight into the ext4 file placement algorithm.
debugfs has a bmap function, which seems to give the data you want. You should be able to give it consecutive logical blocks of a file and get back the physical block numbers.
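A sketch of that (again, the device and path are placeholders; the trailing number is the logical block within the file):

    # map logical block 0 of the file to its physical block number
    sudo debugfs -R "bmap /path/to/file 0" /dev/sdXN

Repeat with 1, 2, … to walk through the file; multiplying the physical block number by the filesystem block size gives the byte offset within the filesystem (i.e. the partition).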
The question is rather old, but there is another answer that could be useful for those finding this on Google: filefrag (in Debian it is in the e2fsprogs package). It has the advantage that it also works for other filesystems (I used it for UDF), which do not appear to be supported by the other tools described here.
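A minimal invocation (the path is a placeholder; you may need root on filesystems where filefrag has to fall back to the older FIBMAP ioctl):

    # -v lists each extent's logical and physical offsets
    filefrag -v /path/to/file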
The offsets presented in the output are in multiples of the block size reported in the second line of the output (4096 here). Beware that the logical offsets might not be contiguous, as a file can have holes in it (when the filesystem supports them).