I'm seeing what seem to me to be contradictory results when looking at disk performance with dd versus iostat on two hosts (EC2 instances with an EBS drive). The hosts are identical except that one uses an EXT4-formatted EBS volume and the other an XFS-formatted EBS volume.
If I look at iostat, the EXT4 host seems to outperform the XFS host. Both are doing roughly the same write throughput (about 25 MB/s) at about 100% utilization, but the EXT4 host on average has lower await (lower disk latency). It's this smaller await that makes me say EXT4 is outperforming XFS:
EXT4:
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
xvdf 0.00 11.00 0.00 6331.00 0.00 26.96 8.72 71.00 11.32 0.16 99.60
XFS:
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
xvdf 0.00 2.00 0.00 6211.00 0.00 27.38 9.03 144.95 23.24 0.16 100.40
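For reference, output in this format comes from iostat's extended per-device statistics reported in megabytes; if you want to reproduce the numbers above, something like the following should do (the 1-second interval is my assumption, the question doesn't say how the samples were taken):

# Extended (-x) device statistics in MB (-m), refreshed every second.
# Watch the xvdf line, which is the EBS volume in the output above.
iostat -xm 1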
But, if using dd to measure performance, XFS is the clear winner, as it takes much less time to complete a fully synchronous write. The command is dd bs=1M count=256 if=/dev/zero of=./test conv=fdatasync:
EXT4:
2.8 MB/s
XFS:
24.0 MB/s
What would be the reason for EXT4 looking much better with iostat but looking much worse with dd?
UPDATE 5/25/2018:
After running the hosts for a couple of days, dd and sync now show equivalent response times for both EXT4 and XFS. I suspect this has to do with a difference (if any?) in the way they handle something like sparse files. The first day the hosts were up, they were both busy laying down a bunch of new files on the filesystem (this is a graphite carbon-cache application). This has settled down to where small updates are being written to these files, but new files are no longer being created and the total amount of used disk space is no longer increasing.
So, there must be something fundamentally different in the way XFS allocates new disk blocks versus EXT4. Any insight into what this could be would be welcome.
Correct me if I'm wrong, but I believe you use EBS volumes.
Try to format the ext4 partition without lazy initialization, as it can affect testing performance.
EBS Volumes
Several factors will determine EBS volume performance. They are not entirely intuitive.
- Type of the volume: Provisioned IOPS (io1), General Purpose (gp2), Throughput Optimized HDD (st1), Cold HDD (sc1). Each has different characteristics.
- Size of the volume. Generally, the bigger the volume, the better the performance. Except for EBS Provisioned IOPS (io1), volumes use a burst model [1] and I/O can fluctuate or drop significantly if all I/O credits are used. In short, each volume gets a base minimum of 100 IOPS, and for every +1 GB added (after ~33 GB) performance increases by 3 IOPS. The volume can also burst up to 3000 IOPS if there are enough I/O credits available.
- EC2 instance type. The larger the instance, the faster the network performance. It's also important whether it's an EBS-Optimized instance or not.
- If you restored your volume from a snapshot, you'd have to pre-warm (initialize) the volume to get the maximum performance (a sketch follows this list). See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html
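As a rough illustration of that initialization step: reading every block of the device once forces blocks restored from the snapshot to be fetched before you benchmark. This is only a sketch, assuming the EBS device is /dev/xvdf as in the iostat output above:

# Touch every block once so snapshot-backed blocks are pulled down before
# benchmarking; this only reads, so data on the volume is left untouched.
sudo dd if=/dev/xvdf of=/dev/null bs=1M status=progress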
More details:
[1] https://aws.amazon.com/ebs/details/
Update 1
It's difficult to tell why you see those results, as there's just not enough information in your post for me to deduce it.
This CloudFormation template is my attempt to recreate your results.
EXT4 / XFS df and mount output: https://gist.github.com/an2io/c68b2119f18192d83a685651905623e9