Here is the setup: we have a server that presents a service to the customer (or the "outside world", if you will). Internally, the services and functions are distributed over several virtual machines of different roles. We use KVM for virtualisation and libvirt to manage it. The hardware has two NICs, both of which are passed through to one of the virtual machines (called the gateway-vm) by means of Single Root I/O Virtualization (Intel VT-d / AMD IOMMU).
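For illustration, the passthrough in the gateway-vm's libvirt domain XML looks roughly like this (the PCI address is a placeholder; the real one depends on the hardware):

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
    </hostdev>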
We use Open vSwitch on the underlying host operating system to create an internal network with private addresses between all VMs and the host. We chose this because we had problems binding a Linux bridge to a virtual interface: as opposed to binding it to a physical interface, the command was accepted, but network communication was no longer possible.
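Roughly, the internal network is created like this (bridge name and host address are examples, not our literal configuration):

    # Create the internal bridge on the host
    ovs-vsctl add-br br-internal
    # Give the host itself an address on the internal network
    ip addr add 10.10.1.2/24 dev br-internal
    ip link set br-internal up

The VMs are attached to this bridge via libvirt, using <virtualport type='openvswitch'/> in their interface definitions.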
That works fine: a proxy server in the gateway-vm exposes all the internal services to the outside world.
We even have several of these machines.
For redundancy, we would like one of the internal virtual machines (not the gateway-vm, of course; let's call it the database-vm) to talk to the database-vm of another such server. To that end we make sure that the internal private subnets do not collide, e.g. 10.10.1.0/24 and 10.10.2.0/24, with the database-vms at 10.10.1.5 and 10.10.2.5.
Our approach is to set up a GRE tunnel with IPsec between the two gateway-vms, which already works: 10.10.1.1 can ping 10.10.2.1 and vice versa.
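For reference, the tunnel is set up along these lines (the public endpoint addresses are placeholders):

    # On the gateway of the first server
    ip tunnel add gre1 mode gre local 198.51.100.10 remote 203.0.113.10 ttl 255
    ip link set gre1 up
    ip route add 10.10.2.0/24 dev gre1

with the mirrored configuration (local/remote swapped, route to 10.10.1.0/24) on the other side, and IPsec protecting the GRE traffic between the public endpoints.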
Now we are stuck: the database-vm on one server can neither talk to nor ping the gateway-vm of the other server (and vice versa), so ultimately the two database-vms cannot talk to each other.
What would be the proper way to achieve this? We are not sure whether we need ip_forwarding on either of the gateways, and what the routes should look like. Or should it all be one big subnet without routing?
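To make the question concrete, this is the kind of configuration we are considering, though we do not know whether it is the right approach (addresses as above):

    # On the gateway-vm of the first server (10.10.1.1): forward between
    # the internal bridge and the tunnel
    sysctl -w net.ipv4.ip_forward=1

    # On the database-vm (10.10.1.5): route the remote subnet via the local gateway
    ip route add 10.10.2.0/24 via 10.10.1.1

and the mirror image of this on the other server.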
We are currently using Open vSwitch v1.7.1, kernel 3.2.6, libvirt v0.10.2, and QEMU 1.1.1.