I would be thankful if someone who understands how LVM works could give me a rough estimate of how much slower using LVM (with a software RAID1) will be.
(I do not want to know how much slower LVM will be if the volume is currently in snapshot mode doing copy-on-write.) I only need a rough estimate of how much LVM will slow down reads and writes in a normal operation scenario.
Any links are also very much appreciated; I was not able to find any good performance benchmarks on this question.
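To make it concrete, the kind of stack I have in mind looks roughly like this (device names, volume group name and sizes are made up):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0                 # LVM physical volume on top of the RAID1 array
vgcreate vg0 /dev/md0
lvcreate -L 100G -n data vg0
mkfs.ext4 /dev/vg0/data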
LVM is fairly lightweight for just normal volumes (without snapshots, for example). It's really just a lookup in a fairly small table saying that block X is actually block Y on device Z. I've never done any benchmarking, but I've never noticed any performance differences between LVM and just using the raw device. It's some small extra CPU overhead on the disc I/O, so I really wouldn't expect much difference.
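If you're curious, you can look at that mapping table yourself through device-mapper; something like this (the volume group and LV names are just placeholders):

dmsetup table vg0-data      # a linear LV prints e.g. "0 209715200 linear 8:16 2048":
                            # sectors 0..N of the LV map straight to an offset on the underlying device
lvs -o +devices             # shows which physical volume(s) each LV actually sits on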
My gut reaction is that the reason there are no benchmarks is that there just isn't that much overhead in LVM.
The convenience of LVM, and being able to slice and dice and add more drives, IMHO, far outweighs what little (if any) performance difference there may be.
I am installing a 48T Dell MD-1200 and I was curious about this question. The MD1200 is connected to a hardware RAID card set up as RAID-6, so it looks to Linux like just a (big) drive. I tested an XFS filesystem on an LVM physical volume vs. an XFS filesystem on a straight disk partition. I used a Dell R630 machine with two E5-2699 CPUs in it. The system was set for Performance; whatever energy saving features I could find in the BIOS were turned off.
I installed CentOS 6.7 on it. Kernel is 2.6.32-573.el6.x86_64 (sorry for the oldie kernel but that's what I need for production). LVM is version 2.02.118.
I let CentOS create an XFS partition during the build. It is 1T in size. Then I created another 1T partition on the disk and created a logical volume:
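The exact commands aren't reproduced here, but it was roughly along these lines (the device and VG/LV names are illustrative):

pvcreate /dev/sdb2                      # the second 1T partition
vgcreate vg_data /dev/sdb2
lvcreate -L 1T -n lv_data vg_data
mkfs.xfs /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /data_lvm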
My XFS-only filesystem was called /data_xfs. The LVM-backed XFS filesystem was called /data_lvm. I tested using bonnie++ v1.03e. The commands were:
bonnie++ -u 0:0 -d /FILESYSTEM -s 400G -n 0 -m xfsspeedtest -f -b
where FILESYSTEM was either /data_xfs or /data_lvm. The results seemed comparable in my view; in the Sequential Input test, LVM actually seemed to perform a little better.
There is a short paper published in 2015 by Borislav Djordjevic and Valentina Timcenko, which used a few 7200 RPM 80 GB Western Digital drives with EXT3 on Linux kernel 2.6.27, tested with the PostMark software that 'simulates loading an internet mail server'. They found that past research looking at bonnie or dd tests alone had produced varied results. Their tests suggest the performance drop with LVM can be from 15% to 45% compared to not using it. They found an even bigger drop when two physical partitions are used within one LVM setup. They concluded that the biggest performance impacts were the use of LVM itself, as well as the complexity of its use.
https://www.researchgate.net/publication/284897601_LVM_in_the_Linux_environment_Performance_examination http://hrcak.srce.hr/index.php?show=clanak&id_clanak_jezik=216661
With a snapshot active, LVM performs ... badly.
Take a look here to see an in-depth benchmark.
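For reference, the copy-on-write penalty kicks in as soon as a snapshot like this exists (names are illustrative):

lvcreate -s -L 10G -n data_snap /dev/vg0/data   # first write to any block now also copies the old block into the snapshot
lvremove /dev/vg0/data_snap                     # the overhead goes away once the snapshot is dropped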
There is an excellent (albeit old) whitepaper, written by a SUSE guy, about LVM and its overhead here. It shows some (simple) benchmarks and explains the tech behind LVM. Good read.