I am looking for factual knowledge or actual measurements to validate or refute my idea. Please hold back data-free answers that just call it a weird idea because it is not the most common setup.

Short question: are there any concrete problems with running two RAID arrays on the same pair of disks, such as
disks: sda, sdb
sda1 + sdb1 => RAID0 (md1)
sda2 + sdb2 => RAID1 (md2)
Both disks are GPT-partitioned, same model and same layout. There are also spare disks, each with a single partition, acting as spares for md1.
Long version: I have a small server that had two HDDs in software RAID1 (mdadm) for backup data (data usage is very low, <10%) and two HDDs in RAID0 for disposable VM images.
I am now replacing the (still good) old drives with new ones of much bigger capacity. My plan was simply to grow the RAID capacities, but now I am thinking about keeping them the same and building the arrays on partitions instead of whole disks (that alone seems to offer several benefits).
My idea now is to split each new drive (double the capacity) into two partitions: one for RAID0, the other for RAID1. Neither array sees much use beyond occasional spikes, and those are almost guaranteed not to coincide, since backups run while the VM pool is offline.
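For reference, the layout described above could be set up roughly as follows. This is a sketch, not a tested recipe: the device names (`/dev/sda`, `/dev/sdb`) and the 4 TiB split point are assumptions, and these commands destroy existing data on the target disks.

```shell
# ASSUMPTIONS: /dev/sda and /dev/sdb are the new, empty drives;
# the first-partition size (+4T) is an illustrative value only.
# All of these commands are destructive -- double-check device names.

# Wipe and partition the first drive: two Linux RAID (fd00) partitions.
sgdisk --zap-all /dev/sda
sgdisk -n 1:0:+4T -t 1:fd00 -n 2:0:0 -t 2:fd00 /dev/sda

# Replicate the layout onto the second drive, then give it unique GUIDs.
sgdisk -R /dev/sdb /dev/sda
sgdisk -G /dev/sdb

# RAID0 over the first partitions, RAID1 over the second.
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
```

Note that the partitions only need to match in size across the two drives; mdadm does not care which slot on the disk each one occupies.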
Yes, you can do what you described without issues.
Partitions are true block devices, so you can arrange them in any RAID level you want. For example, on a test machine I have a 4-way RAID1 boot array alongside a RAID10 array for the main data filesystem.
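Once the arrays are assembled you can confirm that both levels coexist happily on the same disk pair. A quick check (array names `/dev/md1`/`/dev/md2` match the question's layout and are assumptions on any other system):

```shell
# Kernel-level summary of all md arrays: levels, members, sync state.
cat /proc/mdstat

# Per-array detail: level, member partitions, and any spares.
mdadm --detail /dev/md1
mdadm --detail /dev/md2
```

In the `--detail` output each member shows up as an ordinary partition (`/dev/sda1`, `/dev/sdb2`, etc.); md itself draws no distinction between whole-disk and partition members.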