I'm thinking about using dynamic routing (OSPF or RIP) over OpenVPN tunnels. Right now I have a few offices connected in a full mesh, but that isn't a scalable solution as we add more locations. I would also like to avoid a situation where a lot of internal traffic is affected when one of the two VPN termination points I plan to use goes down.
Do you have a similar configuration working in production? If so, which routing daemon did you use: Quagga, or something else? Did you encounter any problems?
Thanks!
I have implemented something along these lines before, but my setup was fairly complicated, perhaps too much so. I am currently investigating a simpler solution influenced by the approach described at http://www.linuxjournal.com/article/9915, but in the meantime I will describe what I have built.
One option that has worked quite well for me in the past is to build the OpenVPN tunnels using tap devices instead of tun devices. This encapsulates Ethernet over the tunnel instead of layer 3, and it lets you work around the inherent limitation of OpenVPN maintaining its own routing table separate from the kernel's. The downside is that you incur a lot of overhead tunneling this way... imagine TCP over Ethernet over an SSL-encrypted transport... you get the idea. The upside is that it has worked and scaled out horizontally fairly well.
Assuming your VPN servers and clients are Linux endpoints (I have only tested on Linux), you can create a new virtual bridge interface and attach the tap interface to the bridge to get your layer 3 connectivity. In my topology, I gave each VPN server its own 10.x.0.0/16 subnet and also deployed a local DHCP server to assign addresses to connecting clients. The DHCP server needs to be there because OpenVPN is no longer aware of IP addresses; it is tunneling Ethernet frames. Clients run dhclient over the VPN interface after connecting to obtain an address, and this is all managed by connect scripts tied to the OpenVPN configuration.
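For illustration, a minimal ISC dhcpd.conf for one such server might look like this (the 10.1.0.0/16 subnet, router address, and lease times are hypothetical placeholders):

    # /etc/dhcp/dhcpd.conf (sketch): run dhcpd bound to the bridge,
    # e.g. "dhcpd vpnbr0", so it answers clients coming in over the tap
    subnet 10.1.0.0 netmask 255.255.0.0 {
        range 10.1.1.1 10.1.255.254;
        option routers 10.1.0.1;    # this VPN server's bridge address
        default-lease-time 600;
        max-lease-time 7200;
    }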
Once you have IP addresses on both sides via DHCP, you can use a dynamic routing protocol to advertise routes between connected clients. I have used Quagga in the past, and it works quite reliably.
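As a rough sketch, a minimal Quagga ospfd.conf on one of the VPN servers could look like the following (the router ID and subnet are hypothetical, following the per-server 10.x.0.0/16 scheme above):

    ! /etc/quagga/ospfd.conf (sketch): advertise this server's /16 and
    ! learn the other sites' routes across the bridged tunnels
    router ospf
     ospf router-id 10.1.0.1
     network 10.1.0.0/16 area 0.0.0.0
     redistribute connected
    !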
Example server configuration using tap (a sketch; the port, certificate paths, and filenames are placeholders to adapt):
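    # server.conf (sketch): bridged tap server
    port 1194
    proto udp
    dev tap0
    ca /etc/openvpn/ca.crt
    cert /etc/openvpn/server.crt
    key /etc/openvpn/server.key
    dh /etc/openvpn/dh2048.pem
    # DHCP-proxy mode: no address pool is defined here; clients
    # broadcast DHCP over the bridge and the local dhcpd answers
    server-bridge
    keepalive 10 120
    persist-key
    persist-tun
    verb 3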
Example commands to add the tap interface to a new bridge (interface names match the vpnbr0 bridge discussed below; the bridge address is a placeholder):
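    # create a persistent tap device and enslave it to a new bridge
    openvpn --mktun --dev tap0
    brctl addbr vpnbr0
    brctl addif vpnbr0 tap0
    ifconfig tap0 0.0.0.0 promisc up
    # give the bridge itself an address in this server's subnet
    ifconfig vpnbr0 10.1.0.1 netmask 255.255.0.0 up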
Example teardown commands (reversing the setup above):
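    # tear down the bridge and tap device in reverse order
    ifconfig vpnbr0 down
    brctl delif vpnbr0 tap0
    brctl delbr vpnbr0
    openvpn --rmtun --dev tap0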
Once you have the vpnbr0 bridge interface, you can run a DHCP server on it or assign IP addresses manually, and then treat it like any other Ethernet interface. You will probably want to make additional changes to adjust the MTU, and you might try different protocols and encryption options until you find the right balance between efficiency and security. I don't have any good numbers to offer anymore on overall throughput, and there are a whole lot of moving parts here.
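These are the usual OpenVPN knobs for that tuning (the values shown are starting points, not tested recommendations):

    # MTU and cipher tuning (sketch; values are only starting points)
    tun-mtu 1500        # MTU of the tap device
    fragment 1400       # internally fragment datagrams larger than this (UDP only)
    mssfix 1400         # clamp TCP MSS so payloads fit after encapsulation
    cipher AES-128-CBC  # lighter cipher; weigh efficiency against security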
If I had it to do over again, I would stick with tun devices in OpenVPN, and I would follow the instructions in the article linked in the first paragraph to update the Linux kernel's routing table whenever OpenVPN's internal address table changes. This would eliminate DHCP from the stack, reduce tunneling overhead, and allow clients to connect and operate without participating in dynamic routing.
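OpenVPN's learn-address hook fires on exactly those internal table changes, so a small script along these lines could do the mirroring (a sketch; the tun0 interface name is an assumption, and the details may differ from what the article describes):

    #!/bin/sh
    # learn-address hook (sketch): mirror OpenVPN's internal address
    # table into the kernel routing table, where Quagga can pick it up.
    # Enable in server.conf with:  learn-address /etc/openvpn/learn-address.sh
    # OpenVPN invokes it as: <add|update|delete> <address> [common-name]
    OP="$1"; ADDR="$2"
    case "$OP" in
      add|update) ip route replace "$ADDR" dev tun0 ;;
      delete)     ip route del "$ADDR" dev tun0 ;;
    esac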
We currently have multiple instances of OpenVPN AS running, with static routes pointing to each one, and we assigned a /24 subnet to each OpenVPN server. Right now users are manually pointed at a specific server, but you could use a variety of technologies to direct users to the correct one.
The only issue here is that if an OpenVPN server goes down, users need to connect to another server to restore their traffic. This is because we are redistributing a static route pointing at each OpenVPN server, since OpenVPN AS doesn't support OSPF.
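To illustrate the failure mode, the upstream router configuration looks roughly like this in Quagga syntax (all addresses are hypothetical); the static routes stay in the routing table even when a server is down, so traffic keeps flowing toward the dead server until users reconnect elsewhere:

    ! zebra.conf (sketch): one static /24 per OpenVPN AS instance
    ip route 10.8.1.0/24 192.0.2.10
    ip route 10.8.2.0/24 192.0.2.20
    !
    ! ospfd.conf (sketch): advertise those statics into the IGP
    router ospf
     redistribute static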
There are open source routers that support OpenVPN, such as Vyatta, but we prefer the web interface of OpenVPN AS.