I'm setting up my database production environment and have some questions. The server has two NVMe disks, but they cannot be used with the hardware RAID controller. Do I need RAID with NVMe?
Would it be enough to use them as JBOD in production, or should I use software RAID 10 (the level MongoDB recommends)? Otherwise, I would have to remove the NVMe drives and order SATA drives instead, so they can be used with the hardware RAID controller.
You can RAID NVMe drives, just not with a traditional RAID controller. For example, if you're on an Intel CPU running compatible Intel drives, you can use Intel's Rapid Storage Technology enterprise software to create a RAID across the two disks. Some Dell servers also have a special PCIe controller that does the RAIDing instead of the CPU.
The other option is to use your OS's native RAID functionality. On Windows this would be Storage Spaces; on Linux it might be mdadm or ZFS.
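If you go the Linux/mdadm route, a two-disk mirror is only a few commands. A minimal sketch, assuming the drives show up as /dev/nvme0n1 and /dev/nvme1n1 (adjust the device names and distro-specific paths to your setup):

```
# Create a two-disk RAID 1 (mirror) across the NVMe drives
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Watch the initial sync progress
cat /proc/mdstat

# Persist the array layout so it assembles on boot
# (Debian/Ubuntu path shown; RHEL-family uses /etc/mdadm.conf)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

# Put a filesystem on it (MongoDB's production notes recommend XFS for WiredTiger)
sudo mkfs.xfs /dev/md0
```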
If this is a multi-socket system, make sure that all the drives you put in a single RAID array are connected to the same CPU socket, otherwise the performance of the array will suffer. If these are PCIe add-in NVMe disks, your server's documentation should tell you which CPU each PCIe slot is connected to. If they are U.2 or M.2 drives, you might have to dig further to find out which CPU they hang off if it's not marked or documented.
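On Linux you can usually check this from software rather than the manual. A quick sketch that reads the NUMA node of each NVMe controller from sysfs (the PCI address in the lspci example is just a placeholder):

```
# Show the NUMA node for each NVMe controller
# (-1 means the kernel has no NUMA info, e.g. a single-socket system)
for dev in /sys/class/nvme/nvme*; do
    echo "$(basename "$dev"): numa_node=$(cat "$dev/device/numa_node")"
done

# Or look up a specific device by its PCI address
lspci -vv -s 0000:3b:00.0 | grep -i numa
```

Drives that report the same NUMA node are attached to the same socket, which is what you want for one array.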
In fact, the best option (at least from my point of view) is a ZFS mirror across the two NVMe drives, which is perfectly possible with, for example, FreeBSD (and other operating systems as well). In this configuration you get a high level of protection, with no RAID controller needed at all.
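For reference, creating that mirror is essentially a one-liner on FreeBSD or on Linux with OpenZFS. A sketch, assuming FreeBSD device names nvd0/nvd1 and a pool name of "tank" (on Linux the devices would be /dev/nvme0n1 and /dev/nvme1n1):

```
# Create a mirrored pool across the two NVMe drives
zpool create tank mirror /dev/nvd0 /dev/nvd1

# Verify pool health and layout
zpool status tank

# Dataset for the database; MongoDB's production notes suggest disabling atime on the dbPath
zfs create -o atime=off tank/mongodb
```

You also get end-to-end checksumming and snapshots for free, which a hardware controller won't give you.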
In short, the answer depends on the operating system you intend or need to use.
There are Tri-Mode controllers from Broadcom, the 94xx and newer 95xx series. They support hardware RAID with 2.5" NVMe drives, but require special cables and/or a compatible backplane.
I recently ran a performance comparison with 4 Samsung PM1733 drives in RAID 10 on a Broadcom 9460-16i versus mdadm. IOPS and latency for 4k random workloads were very close. The 9460 showed lower CPU utilization, but mdadm performed quite well.
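If you want to run the same kind of comparison on your own array, fio is the usual tool. A sketch of a 4k random read test (the device path, queue depth, and job count are placeholders to tune for your hardware; reads are non-destructive, but don't point a write test at a device with data on it):

```
fio --name=randread4k \
    --filename=/dev/md0 \
    --ioengine=libaio \
    --direct=1 \
    --rw=randread \
    --bs=4k \
    --iodepth=32 \
    --numjobs=4 \
    --runtime=60 \
    --time_based \
    --group_reporting
```

Run it against the hardware array and the mdadm array with identical parameters and compare the reported IOPS, latency percentiles, and CPU usage.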