I guess I'm missing something here; perhaps some configuration is needed to run my hardware at full speed under Linux. Here's my problem.
I recently got this Intel NUC 9 Extreme kit.
I added 64 GB of RAM, and since I wanted to set up a RAID1 array, I put two Samsung 970 EVO 1TB M.2 NVMe drives in it.
Since they are PCIe Gen 3 x4 devices, I was expecting sequential read speeds close to 3 GB/s from a single drive and a little less than 6 GB/s from the RAID1 array, but the actual speeds disappointed me: I couldn't get more than 2.1 GB/s, even in RAID1.
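For completeness, the RAID1 setup I'm talking about is along these lines (a minimal sketch; device names and mount point are assumptions and may differ from my actual setup):
# Assumed device names; adjust to match your system.
$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
$ sudo mkfs.ext4 /dev/md0
$ sudo mount /dev/md0 /mnt/raid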
To test storage speed I used dd, like this:
$ dd if=/dev/zero of=/path/to/test/64GB bs=1G count=64 && dd if=/dev/zero of=/media/ubuntu/evo/fill-cache bs=1G count=64 && dd if=/path/to/test/64GB of=/dev/null
64+0 records in
64+0 records out
68719476736 bytes (69 GB, 64 GiB) copied, 43.3462 s, 1.6 GB/s
64+0 records in
64+0 records out
68719476736 bytes (69 GB, 64 GiB) copied, 49.6182 s, 1.4 GB/s
134217728+0 records in
134217728+0 records out
68719476736 bytes (69 GB, 64 GiB) copied, 139.781 s, 492 MB/s
I ran these commands 6-7 times: sequential write speeds were less than 1.2 GB/s, and sequential read speeds were always less than 1.9 GB/s. Running bonnie++ I got similar results.
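For reference, a minimal bonnie++ invocation of the kind I mean (the path is a placeholder; -s should be about twice the RAM size, so 128g here, so the page cache can't mask the drive speed):
$ bonnie++ -d /path/to/test -s 128g -n 0
# -n 0 skips the small-file tests; add -u <user> if running as root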
I ran my tests booting the system from an external USB drive running Ubuntu 20.04 with kernel 5.4, and from a Fedora 33 install disk running kernel 5.8, with similar results (in Fedora, speeds were even worse). I tried both ext4 and xfs, with no appreciable difference.
Since the results are the same in RAID1 and even when I create a striped LVM volume, I guess there must be a bottleneck somewhere that prevents my hardware from running at full speed.
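One bottleneck I can think of is the PCIe link itself, so it may be worth checking whether each drive actually negotiated a Gen 3 x4 link. A sketch of how I'd check (class code 0108 selects NVMe controllers):
$ sudo lspci -vv -d ::0108 | grep -E 'LnkCap|LnkSta'
# LnkSta should report "Speed 8GT/s, Width x4" for a Gen 3 x4 link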
The hardware itself seems to work fine, since when I booted the same system from an external USB disk running Windows 10, I got the expected performance:
------------------------------------------------------------------------------
CrystalDiskMark 8.0.0 x64 (C) 2007-2020 hiyohiyo
Crystal Dew World: https://crystalmark.info/
------------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes
[Read]
SEQ 1MiB (Q= 8, T= 1): 2924.425 MB/s [ 2788.9 IOPS] < 2865.38 us>
SEQ 128KiB (Q= 32, T= 1): 3523.300 MB/s [ 26880.6 IOPS] < 1179.77 us>
RND 4KiB (Q= 32, T=16): 2077.373 MB/s [ 507171.1 IOPS] < 1005.95 us>
RND 4KiB (Q= 1, T= 1): 53.885 MB/s [ 13155.5 IOPS] < 75.87 us>
[Write]
SEQ 1MiB (Q= 8, T= 1): 2208.972 MB/s [ 2106.6 IOPS] < 3788.09 us>
SEQ 128KiB (Q= 32, T= 1): 2117.792 MB/s [ 16157.5 IOPS] < 1978.06 us>
RND 4KiB (Q= 32, T=16): 1962.843 MB/s [ 479209.7 IOPS] < 1067.32 us>
RND 4KiB (Q= 1, T= 1): 130.946 MB/s [ 31969.2 IOPS] < 31.13 us>
Profile: Default
Test: 64 GiB (x5) [G: 0% (0/932GiB)]
Mode: [Admin]
Time: Measure 5 sec / Interval 5 sec
Date: 2020/12/15 10:18:14
OS: Windows 10 Professional [10.0 Build 18363] (x64)
My drives are also running the latest firmware.
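In case it helps, the firmware revision can be read with nvme-cli (a sketch, assuming the package is installed; the device name is a placeholder):
$ sudo nvme id-ctrl /dev/nvme0 | grep '^fr '
# "fr" is the controller's firmware revision field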
Should I do something special on my Linux system to enable these speeds?
Update
Someone suggested that my benchmarking approach with dd could be wrong, and to try the benchmark feature of GNOME Disks.
I tried it and got read speeds closer to the specifications, so I was indeed doing things the wrong way. Still, write speeds are about a quarter of what I saw with CrystalDiskMark on Windows, so I'm still puzzled.
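If the remaining write gap is also a benchmarking artifact, my next step would be a direct-I/O test with fio, roughly matching CrystalDiskMark's SEQ 1MiB Q8 write profile (a sketch; the file path and size are placeholders):
# --direct=1 bypasses the page cache, so the result reflects the drive rather than RAM
$ fio --name=seqwrite --filename=/path/to/test/fio-file --rw=write --bs=1M --iodepth=8 --ioengine=libaio --direct=1 --size=16G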