I installed FortiClient VPN on my Azure VM. As soon as I connect the VPN, my Remote Desktop connection gets terminated and I can no longer reach the VM unless I restart it. I suspect the VPN overwrote my routing table or something similar. Is there a way to differentiate the connections somehow, so the VM knows that RDP should not be broken once this VPN is connected? Has anyone faced anything similar? Some Checkpoint VPN connections behave in a similar way.
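My current guess is that the VPN client installs a new default route that swallows the RDP return traffic. If so, a workaround I am considering is pinning a persistent host route for the address I RDP from, via the VM's original gateway, before connecting the VPN (both addresses below are placeholders I would substitute):

```
:: Elevated command prompt on the Azure VM (Windows).
:: 203.0.113.10 = the public IP I RDP from; 10.0.0.1 = the VM's default gateway.
route -p ADD 203.0.113.10 MASK 255.255.255.255 10.0.0.1
```

A host route like this is more specific than any default route the VPN adds, so it should keep winning after the tunnel comes up.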
I've set up my OpenVPN server, but I have this problem.
Clients connect in this order:
- client_A connects successfully to the server using its own key.
- client_B connects to the server using client_A's key and gets the same IP as client_A.
When I try to ping client_A, packets go to client_B instead of client_A.
I don't have duplicate-cn in my server config.
How can I prevent this behavior? I want to kick client_B immediately and keep only client_A.
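One approach I am considering is the management interface: with e.g. `management 127.0.0.1 7505` in the server config, a specific session can be killed by its real source address instead of by CN, so client_B could be dropped without touching client_A (the port and client_B's source socket below are placeholders):

```
# Assumes the server config contains: management 127.0.0.1 7505
# "status" lists connected sessions with their real address:port;
# "kill host:port" drops only that session.
printf 'kill 198.51.100.7:41932\nquit\n' | nc 127.0.0.1 7505
```

Of course, since the server cannot tell two clients presenting the same certificate apart, I assume the long-term fix is to revoke the leaked key (crl-verify) and issue client_B its own certificate.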
I set up an OpenVPN server and client with the VPN subnet 10.99.0.0/20, but the client ends up with a /24 netmask.
Server:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: thcclnohiyi2frl: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
link/ether 46:ee:65:f4:78:a7 brd ff:ff:ff:ff:ff:ff
inet 10.99.0.1/20 brd 10.99.15.255 scope global thcclnohiyi2frl
valid_lft forever preferred_lft forever
310: eth0@if311: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:16 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.18.0.22/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
Client:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if1384: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 8e:ec:e1:90:78:d3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.244.2.239/24 brd 10.244.2.255 scope global eth0
valid_lft forever preferred_lft forever
4: kengine: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
link/ether 0a:36:9d:4a:67:cb brd ff:ff:ff:ff:ff:ff
inet 10.99.0.2/24 brd 10.99.0.255 scope global kengine
valid_lft forever preferred_lft forever
Server config:
port 10021
proto tcp-server
reneg-sec 0
explicit-exit-notify 0
dev-type tap
dev thcclnohiyi2frl
ca ca.crt
cert bke-vpn.crt
key bke-vpn.key
dh dh.pem
key-direction 0
<tls-auth>
</tls-auth>
server 10.99.0.0/20 255.255.240.0
route-up scripts/thcclnohiyi2frl-fw-up.sh
down scripts/thcclnohiyi2frl-fw-down.sh
script-security 3
client-config-dir ccd/thcclnohiyi2frl
client-to-client
keepalive 20 60
comp-lzo
persist-key
persist-tun
status /var/log/openvpn/thcclnohiyi2frl/openvpn-status.log
log-append /var/log/openvpn/thcclnohiyi2frl/openvpn.log
verb 3
mute 20
Client config:
client
dev kengine
dev-type tap
reneg-sec 0
proto tcp-client
remote xxx.xx.xx.xxx 10021
resolv-retry infinite
nobind
<ca>
<key>
<cert>
remote-cert-tls server
key-direction 1
script-security 3
keepalive 10 60
persist-key
persist-tun
comp-lzo
verb 3
pull-filter ignore "route-gateway"
Can anyone help me understand why this happens and how to fix it?
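In case it helps narrow this down: my understanding is that in TAP mode the netmask the client applies comes from the ifconfig the server pushes, and that it can be forced per client from the client-config-dir. A sketch of what I could try in a ccd file (the file name must match the client's certificate CN; the address is the one the client already gets):

```
# ccd/thcclnohiyi2frl/<client-CN>
# In TAP mode, ifconfig-push takes the client address and a netmask:
ifconfig-push 10.99.0.2 255.255.240.0
```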
I have found some old threads regarding this, but they seem to be several years old, e.g. openVPN - Split-tunneling DNS priority.
I'm on Windows.
I'm connected to a local network with local DNS.
I use OpenVPN to connect to another network; this network has resources available at *.example.com.
Is it even possible to have one DNS server for *.example.com and another for everything else? I know I could set up a local DNS server on my machine to solve it, or enter all the hostnames of example.com in my local hosts file.
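One thing I have read about but not yet tried: on Windows 8 and later, the Name Resolution Policy Table (NRPT) can send queries for a specific suffix to a specific resolver without running a local DNS server. A sketch, assuming the VPN-side DNS server is 10.0.0.53 (a placeholder):

```
# Elevated PowerShell: route only *.example.com queries to the VPN DNS;
# everything else keeps using the normal resolvers.
Add-DnsClientNrptRule -Namespace ".example.com" -NameServers "10.0.0.53"
# Inspect (or later remove) the rule:
Get-DnsClientNrptRule
```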
I have a setup pretty similar to this, except the LAN clients are behind a DHCP-relaying router. The outermost router forwards traffic to the OpenVPN server on port 1194, and I can connect clients successfully, routing traffic going into the VPN server out through its own NAT. My VPN virtual IP range is 172.31.0.0/24:
+-------------------------+
(public IP)| |
{INTERNET}=============={ Router |
| |
| LAN switch |
+------------+------------+
| (192.168.5.1)
|
| +-----------------------+
| | |
| | OpenVPN | eth0: 192.168.5.96/24
+--------------{eth0 server | tun0: 172.31.0.0/24
| | |
| | {tun0} |
| +-----------------------+
|
+--------+-----------+
| Router B |
| Other LAN clients |
| |
| 192.168.1.0/24 |
| (internal net) |
+--------------------+
Connecting as a VPN client from outside the network, I am therefore able to reach the internet as well as all the other clients connected to the first router, which hosts its own DHCP (192.168.5.0/24). But when I try to access the second router's inner LAN, I get the following response to pings:
PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.
From 172.31.0.1 icmp_seq=1 Destination Host Unreachable
The OpenVPN server is hosted on a box with restricted access, so I can only retrieve the .conf files through the web UI, which displays only a limited amount of information. Connecting from the client gives me the following output:
Thu Dec 29 13:36:30 2016 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Thu Dec 29 13:36:30 2016 Socket Buffers: R=[87380->131072] S=[16384->131072]
Thu Dec 29 13:36:30 2016 Attempting to establish TCP connection with [AF_INET]<public ip>:1194 [nonblock]
Thu Dec 29 13:36:31 2016 TCP connection established with [AF_INET]<public ip>:1194
Thu Dec 29 13:36:31 2016 TCPv4_CLIENT link local: [undef]
Thu Dec 29 13:36:31 2016 TCPv4_CLIENT link remote: [AF_INET]<public ip>:1194
Thu Dec 29 13:36:31 2016 TLS: Initial packet from [AF_INET]<public ip>:1194, sid=1081d793 4873f1e6
Thu Dec 29 13:36:31 2016 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Thu Dec 29 13:36:32 2016 VERIFY OK: depth=1, CN=*, OU=RV320, O=*., L=*, C=*, ST=*
Thu Dec 29 13:36:32 2016 VERIFY OK: depth=0, C=*, OU=*, CN=*
Thu Dec 29 13:36:32 2016 Data Channel Encrypt: Cipher 'AES-256-CBC' initialized with 256 bit key
Thu Dec 29 13:36:32 2016 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Thu Dec 29 13:36:32 2016 Data Channel Decrypt: Cipher 'AES-256-CBC' initialized with 256 bit key
Thu Dec 29 13:36:32 2016 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Thu Dec 29 13:36:32 2016 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
Thu Dec 29 13:36:32 2016 [com] Peer Connection Initiated with [AF_INET]<public ip>:1194
Thu Dec 29 13:36:35 2016 SENT CONTROL [com]: 'PUSH_REQUEST' (status=1)
Thu Dec 29 13:36:35 2016 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,dhcp-option DNS 192.168.1.3,dhcp-option DNS 192.168.1.10,dhcp-option DOMAIN <company>.LOCAL,route 172.31.0.0 255.255.255.0,topology net30,ping 10,ping-restart 120,ifconfig 172.31.0.6 172.31.0.5'
Thu Dec 29 13:36:35 2016 OPTIONS IMPORT: timers and/or timeouts modified
Thu Dec 29 13:36:35 2016 OPTIONS IMPORT: --ifconfig/up options modified
Thu Dec 29 13:36:35 2016 OPTIONS IMPORT: route options modified
Thu Dec 29 13:36:35 2016 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
Thu Dec 29 13:36:35 2016 ROUTE_GATEWAY <client ip>/255.255.255.240 IFACE=eth1 HWADDR=*
Thu Dec 29 13:36:35 2016 TUN/TAP device tun0 opened
Thu Dec 29 13:36:35 2016 TUN/TAP TX queue length set to 100
Thu Dec 29 13:36:35 2016 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
Thu Dec 29 13:36:35 2016 /sbin/ip link set dev tun0 up mtu 1500
Thu Dec 29 13:36:35 2016 /sbin/ip addr add dev tun0 local 172.31.0.6 peer 172.31.0.5
Thu Dec 29 13:36:35 2016 /etc/openvpn/update-resolv-conf.sh tun0 1500 1559 172.31.0.6 172.31.0.5 init
dhcp-option DNS 192.168.1.3
dhcp-option DNS 192.168.1.10
dhcp-option DOMAIN <company>.LOCAL
Illegal option -x
Thu Dec 29 13:36:35 2016 /sbin/ip route add <public ip>/32 via <client ip>
Thu Dec 29 13:36:35 2016 /sbin/ip route add 0.0.0.0/1 via 172.31.0.5
Thu Dec 29 13:36:35 2016 /sbin/ip route add 128.0.0.0/1 via 172.31.0.5
Thu Dec 29 13:36:35 2016 /sbin/ip route add 172.31.0.0/24 via 172.31.0.5
Thu Dec 29 13:36:35 2016 Initialization Sequence Completed
My clients (Linux boxes) have IP forwarding enabled, and their routing tables look like this when connected from the outside:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.31.0.5 128.0.0.0 UG 0 0 0 tun0
0.0.0.0 <client ip> 0.0.0.0 UG 0 0 0 eth1
<public ip> <client ip> 255.255.255.255 UGH 0 0 0 eth1
128.0.0.0 172.31.0.5 128.0.0.0 UG 0 0 0 tun0
<client ip> 0.0.0.0 255.255.255.240 U 1 0 0 eth1
172.31.0.0 172.31.0.5 255.255.255.0 UG 0 0 0 tun0
172.31.0.5 0.0.0.0 255.255.255.255 UH 0 0 0 tun0
I've also tried setting up a static route as suggested at https://community.openvpn.net/openvpn/wiki/BridgingAndRouting, but without any luck.
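From the "Destination Host Unreachable" reply coming back from 172.31.0.1, my reading is that the OpenVPN server itself has no route to 192.168.1.0/24, and Router B likewise has no return route for 172.31.0.0/24. If that is right, the setup would need roughly the following pieces (Router B's outer address on the 192.168.5.0/24 segment is a placeholder, assumed 192.168.5.2):

```
# 1. In the OpenVPN server config: push the second LAN to VPN clients.
push "route 192.168.1.0 255.255.255.0"

# 2. On the OpenVPN server box: forward that LAN via Router B's outer address.
ip route add 192.168.1.0/24 via 192.168.5.2

# 3. On Router B: a static return route for the VPN subnet, pointing at
#    the OpenVPN server's LAN address:
#      172.31.0.0/24 via 192.168.5.96
```

Step 3 is the part I cannot verify from the restricted web UI; without it, replies from 192.168.1.0/24 would go to Router B's default gateway instead of back through the tunnel.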