I have a couple of servers at Linode. I'm trying to set them up so that I can VPN into one of the machines and then reach all the other machines over the private Linode network. Public access to private services (SSH, etc.) would then be restricted to only those who have VPN access.
Note: I have no firewalls running on these servers yet.
root@internal:~# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
internal server (running the OpenVPN server):
eth0 Link encap:Ethernet HWaddr f2:3c:91:db:68:b4
inet addr:23.239.17.12 Bcast:23.239.17.255 Mask:255.255.255.0
inet6 addr: 2600:3c02::f03c:91ff:fedb:68b4/64 Scope:Global
inet6 addr: fe80::f03c:91ff:fedb:68b4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:80780 errors:0 dropped:0 overruns:0 frame:0
TX packets:102812 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:14317079 (14.3 MB) TX bytes:17385151 (17.3 MB)
eth0:1 Link encap:Ethernet HWaddr f2:3c:91:db:68:b4
inet addr:192.168.137.64 Bcast:192.168.255.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:172.20.1.1 P-t-P:172.20.1.2 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:2318 errors:0 dropped:0 overruns:0 frame:0
TX packets:1484 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:174573 (174.5 KB) TX bytes:170941 (170.9 KB)
Comments on the above:
- eth0 is the public interface
- eth0:1 is the interface to the private network
- The VPN tunnel works correctly: from a client connected to the VPN, I can ping 172.20.1.1 and 192.168.137.64 (the OpenVPN directives behind this are sketched just after this list).
- net.ipv4.ip_forward=1 is set on this server
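I haven't pasted my OpenVPN config, but for the addressing above the relevant server-side directives would be roughly the following (a sketch; cert/key and other directives omitted):

dev tun
# "server" hands out the 172.20.1.0/24 tunnel subnet and puts 172.20.1.1 on tun0
server 172.20.1.0 255.255.255.0
# push the Linode private network (192.168.128.0/17) so clients route it over the VPN
push "route 192.168.128.0 255.255.128.0"

(The ip_forward setting is made persistent with net.ipv4.ip_forward=1 in /etc/sysctl.conf.)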
database server (nix03):
root@nix03:~# ifconfig
eth0 Link encap:Ethernet HWaddr f2:3c:91:73:d2:cc
inet addr:173.230.140.52 Bcast:173.230.140.255 Mask:255.255.255.0
inet6 addr: 2600:3c02::f03c:91ff:fe73:d2cc/64 Scope:Global
inet6 addr: fe80::f03c:91ff:fe73:d2cc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:12348 errors:0 dropped:0 overruns:0 frame:0
TX packets:44434 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1166666 (1.1 MB) TX bytes:5339936 (5.3 MB)
eth0:1 Link encap:Ethernet HWaddr f2:3c:91:73:d2:cc
inet addr:192.168.137.63 Bcast:192.168.255.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Comments on the above:
- eth0 is the public interface
- eth0:1 is the interface to the private network
- I can ping the internal server on the private interface (192.168.137.64).
Current problem
I want to be able to reach the database server through the VPN. From my VPN client (my laptop at the office), I'd like to be able to ping 192.168.137.63. However, that currently fails.
In my attempts to troubleshoot, I decided to approach it from the db server side and see whether I could ping the VPN tunnel endpoint on the internal server (172.20.1.1). I realized that I would need to set up a route on the database server to tell it where to send packets destined for the 172.20.1.0/24 network, so I did that:
root@nix03:~# ip route add 172.20.1.0/24 via 192.168.137.64
root@nix03:~# ip route list
default via 173.230.140.1 dev eth0
172.20.1.0/24 via 192.168.137.64 dev eth0
173.230.140.0/24 dev eth0 proto kernel scope link src 173.230.140.52
192.168.128.0/17 dev eth0 proto kernel scope link src 192.168.137.63
root@nix03:~# ip route get 172.20.1.1
172.20.1.1 via 192.168.137.64 dev eth0 src 192.168.137.63
cache
So, based on the above, I think that when I ping 172.20.1.1, my server should send the packets to 192.168.137.64 (the internal server). Since IP forwarding is enabled there and 172.20.1.1 is that server's own tun0 address, it should accept the packet arriving on eth0:1 and reply.
But, as you might have guessed, pinging 172.20.1.1 from nix03 (db server) does not work.
I did some packet capturing to see which MAC address my ICMP packets were getting sent to:
root@nix03:~# tcpdump -i eth0 -e icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
16:41:39.623759 f2:3c:91:73:d2:cc (oui Unknown) > f2:3c:91:db:68:b4 (oui Unknown), ethertype IPv4 (0x0800), length 98: 192.168.137.63 > 172.20.1.1: ICMP echo request, id 3324, seq 33653, length 64
root@nix03:~# arp
Address HWtype HWaddress Flags Mask Iface
192.168.137.64 ether f2:3c:91:db:68:b4 C eth0
And that tells me the packets should be getting to the internal server; at least, they are being addressed to the right NIC. However, when I run tcpdump on eth0 and eth0:1 of the internal server, I don't see any ICMP packets coming in from the db server.
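For reference, the capture on the internal server is roughly the following (a sketch of the invocation):

# Watch for the db server's ICMP on the internal server's private side.
# eth0:1 is just an alias of eth0, so a single capture on eth0 covers both.
tcpdump -ni eth0 icmp and host 192.168.137.63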
What else can I try? Thanks in advance.
Update #1
Routing table for "internal" server:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gw-li686.linode 0.0.0.0 UG 0 0 0 eth0
23.239.17.0 * 255.255.255.0 U 0 0 0 eth0
172.20.1.0 172.20.1.2 255.255.255.0 UG 0 0 0 tun0
172.20.1.2 * 255.255.255.255 UH 0 0 0 tun0
192.168.128.0 * 255.255.128.0 U 0 0 0 eth0
I ended up having to add a NAT rule to the internal server. I'm not sure it's necessary, but it is what worked.
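The exact rule isn't reproduced above, but for this topology it would be something along these lines (a sketch; subnets taken from the interface output above):

# Masquerade VPN-client traffic (172.20.1.0/24) as the internal server's own
# private address when it heads out toward the Linode private network, so
# replies come straight back to this box and no other host needs a route
# for the VPN subnet.
iptables -t nat -A POSTROUTING -s 172.20.1.0/24 -d 192.168.128.0/17 -o eth0 -j MASQUERADE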
Answer

I encountered the same problem and came to the conclusion that Linode is not well suited for this kind of VPN configuration.
First of all: what you tried to do (setting up a route from 192.168.137.63, eth0:1 on nix03, to 172.20.1.1, tun0 on internal) is indeed correct and works in non-Linode setups. I described the same setup on the Linode forums and got a reply from an ex-Linode employee telling me that Linode forbids that kind of setup.
Moreover, even though NATting VPN traffic to the internal network as you did is indeed another correct approach, keep in mind that the 192.168.128.0/17 subnet is not private to you: it is shared by all Linode customers with VMs in the same datacenter as you. Try nmap to check what I'm saying:
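For example, a ping sweep of one slice of that range (a sketch; any part of 192.168.128.0/17 will do) should turn up hosts that are not yours:

# -sn = ping scan only, no port scan; sweeping the whole /17 also works but is slow
nmap -sn 192.168.137.0/24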
So, in the Linode case, if you really want public access to private services (SSH, etc.) to be restricted to only those who have VPN access, you need to carefully set up your firewall to allow access only from specific IP addresses, because the subnet is "private" only in the sense of being limited to Linode customers in that datacenter.
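For example (a sketch with addresses from the question: 172.20.1.0/24 is the VPN subnet and 192.168.137.64 is the internal server's private address; adjust to your own hosts):

# On nix03: allow SSH only from the VPN subnet and from the internal server,
# then drop SSH from everywhere else, including the rest of the shared
# 192.168.128.0/17 range and the public interface.
iptables -A INPUT -p tcp --dport 22 -s 172.20.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 192.168.137.64 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP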