Let's say I've got six identical drives and I'm going to use them all in a RAID10,f2 array constructed using mdadm. I've always put a single partition on each disk and constructed the array from /dev/sd[bcdefg]1 rather than the whole disk. But, I'm wondering if that's the best thing to do with a modern kernel and mdadm.
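Concretely, what I've been doing looks roughly like this (device names and the md number are just an example):

```
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=6 /dev/sd[bcdefg]1
```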
I don't think there is a big difference either way, but I would generally use the whole disk to keep the configuration simple.
The way you're doing it (one large partition per disk that you build the mdadm array from), there's no major difference. But since you're effectively using the whole disk anyway, I'd do as Antonius Bloch suggested and use the whole-disk device rather than creating a partition first -- it seems more correct to me to build the RAID from the full physical device rather than from a chunk of it.
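For illustration, building the array directly from the bare devices would look roughly like this (a sketch, assuming the same six drives and /dev/md0 from the question; clear any leftover partition tables or signatures first):

```
# Remove old partition tables / filesystem signatures from the member disks
wipefs -a /dev/sd[bcdefg]

# Build the RAID10,f2 array from the whole-disk devices
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=6 /dev/sd[bcdefg]
```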
If you create multiple partitions per disk and build separate mdadm arrays across them, you may actually see a performance decrease: if you split each disk in half, with one array on the first half of the disks and another on the second half, the heads have to seek back and forth whenever both arrays are busy -- head travel time will kill your performance. The solution there, of course, is simply not to do that :-)
If you have a small setup and swap is going on these drives, you may want to keep swap on separate partitions rather than inside the array, since the kernel can do its own round-robining between swap devices.
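For example, if two of the disks each carried a swap partition, giving them equal priority in /etc/fstab lets the kernel interleave between them (partition names here are purely hypothetical):

```
# /etc/fstab -- equal 'pri=' values make the kernel round-robin across both
/dev/sdb2  none  swap  sw,pri=1  0  0
/dev/sdc2  none  swap  sw,pri=1  0  0
```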
Or, you may need to have /boot separate (without LVM) but want LVM for the rest of the disk. This is a relatively common setup if you're mirroring system drives. (And while you're doing that, since disks are so gigantic these days and way too big for just the OS, you might choose to mirror only a portion of each disk and leave the rest as non-mirrored scratch space.)
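A rough sketch of that kind of layout, with two system disks and entirely hypothetical partition numbers and volume group name:

```
# sdX1 -> small RAID1 for /boot (no LVM)
# sdX2 -> RAID1 used as an LVM physical volume for the OS
# sdX3 -> left out of the mirror, used as scratch space
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

pvcreate /dev/md1
vgcreate vg_sys /dev/md1
```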