I have a system set up with a four-disk btrfs RAID1 array. Two of the disks are 1TB traditional HDDs, and the other two are 128GB SSDs. Most of each disk is taken up by a LUKS container, and spanning all four LUKS containers is a single btrfs filesystem that does the RAID1.
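For concreteness, here's a rough sketch of how a stack like this gets put together (the partition paths and mapper names below are illustrative placeholders, not my exact ones):

% sudo cryptsetup luksOpen /dev/sda2 luks-hdd1   # one LUKS container per disk
% sudo cryptsetup luksOpen /dev/sdb2 luks-hdd2
% sudo cryptsetup luksOpen /dev/sdc2 luks-ssd1
% sudo cryptsetup luksOpen /dev/sdd2 luks-ssd2
% sudo mkfs.btrfs -d raid1 -m raid1 \
    /dev/mapper/luks-hdd1 /dev/mapper/luks-hdd2 \
    /dev/mapper/luks-ssd1 /dev/mapper/luks-ssd2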
This setup should perform reasonably well, but unfortunately it's very slow. Here are some benchmarks, all run around the same time:
% dd if=/dev/zero of=tmpfile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.18302 s, 908 MB/s
% dd if=/dev/zero of=tmpfile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.66737 s, 644 MB/s
% dd if=tmpfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.61369 s, 665 MB/s
% echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
% dd if=tmpfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 8.05449 s, 133 MB/s
% echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
% dd if=/dev/zero of=tmpfile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.66874 s, 402 MB/s
% echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
% dd if=tmpfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.164 s, 208 MB/s
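(A caveat for anyone reproducing these numbers: with the default flags, dd writes land in the page cache first, so the runs after drop_caches are the meaningful ones. To take the cache out of the picture entirely, the standard GNU dd options conv=fdatasync and iflag=direct would be used, roughly like so:)

% dd if=/dev/zero of=tmpfile bs=1M count=1024 conv=fdatasync   # flush to disk before reporting throughput
% dd if=tmpfile of=/dev/null bs=1M count=1024 iflag=direct     # read around the page cache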
Not really sure what to make of these numbers, but the disks certainly seem slow. And generally, when I look at iotop, I see write throughput of around 5-10 MB/s. Not so great.
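To pin down which device the bytes are actually hitting (for instance, whether reads are being served from the slow HDDs rather than the SSDs), per-device throughput can be watched with iostat from the sysstat package:

% iostat -dxm 5   # extended per-device stats in MB/s, refreshed every 5 seconds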
Here's some more info:
% cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1 834853 iterations per second
PBKDF2-sha256 548418 iterations per second
PBKDF2-sha512 366122 iterations per second
PBKDF2-ripemd160 508031 iterations per second
PBKDF2-whirlpool 175229 iterations per second
# Algorithm | Key | Encryption | Decryption
aes-cbc 128b 379.4 MiB/s 1552.0 MiB/s
serpent-cbc 128b 49.3 MiB/s 216.5 MiB/s
twofish-cbc 128b 129.3 MiB/s 258.3 MiB/s
aes-cbc 256b 325.3 MiB/s 1158.4 MiB/s
serpent-cbc 256b 65.0 MiB/s 213.1 MiB/s
twofish-cbc 256b 136.3 MiB/s 259.8 MiB/s
aes-xts 256b 1326.8 MiB/s 1333.5 MiB/s
serpent-xts 256b 224.4 MiB/s 216.7 MiB/s
twofish-xts 256b 255.5 MiB/s 257.2 MiB/s
aes-xts 512b 1034.8 MiB/s 1009.7 MiB/s
serpent-xts 512b 225.8 MiB/s 214.1 MiB/s
twofish-xts 512b 255.1 MiB/s 257.1 MiB/s
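Since that benchmark runs purely in memory, it also seemed worth confirming which cipher and key size the containers actually use. Something like the following shows it (the device path and mapper name are placeholders for my real ones):

% sudo cryptsetup luksDump /dev/sda2 | grep -i -e cipher -e 'MK bits'
% sudo cryptsetup status luks-hdd1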
% lsmod | grep aes
aesni_intel 167936 18
aes_x86_64 20480 1 aesni_intel
lrw 16384 5 serpent_sse2_x86_64,aesni_intel,serpent_avx_x86_64,twofish_avx_x86_64,twofish_x86_64_3way
glue_helper 16384 5 serpent_sse2_x86_64,aesni_intel,serpent_avx_x86_64,twofish_avx_x86_64,twofish_x86_64_3way
ablk_helper 16384 4 serpent_sse2_x86_64,aesni_intel,serpent_avx_x86_64,twofish_avx_x86_64
cryptd 20480 9 ghash_clmulni_intel,aesni_intel,ablk_helper
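(The loaded aesni_intel module suggests hardware AES is in play; the CPU flag itself can be double-checked with:)

% grep -m1 -o '\baes\b' /proc/cpuinfo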
% uname -a
Linux steevie 4.7.0-0.bpo.1-amd64 #1 SMP Debian 4.7.5-1~bpo8+1 (2016-09-30) x86_64 GNU/Linux
As you can see, I'm on Debian 8, running a kernel from jessie-backports.
Also, at the suggestion of https://askubuntu.com/questions/246102/slow-ssd-dm-crypt-with-luks-encryption-in-ubuntu-12-10, I had the initramfs load the cryptd, aes_x86_64, and aesni_intel modules during early boot. However, rebooting and benchmarking again gave almost exactly the same speeds.
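For the record, on Debian that amounts to listing the modules in /etc/initramfs-tools/modules and rebuilding the initramfs, roughly:

% printf 'cryptd\naes_x86_64\naesni_intel\n' | sudo tee -a /etc/initramfs-tools/modules
% sudo update-initramfs -u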
Any idea where this terrible performance could be coming from?