I have a very simple scenario: an Azure VNet with subnets 10.140.1.0/24 (GatewaySubnet, gateway SKU VpnGw2 generation 1) and 10.140.10.0/24 (VirtualMachineSubnet), and an on-prem network with 10.190.0.0/16.
I have a working Site-to-Site connection to the on-prem network: I can ping 10.190.x.x addresses from a VM in my Azure VirtualMachineSubnet (IP 10.140.10.4). But as soon as I introduce a NAT rule, this no longer works. The on-prem device only allows traffic from my VirtualMachineSubnet (plus a small extension of that range), but I want to widen this on my side, hence the NAT.
I tried to reduce this to the simplest possible NAT rule, one that does not actually translate anything: Static, EgressSnat, InternalMapping 10.140.10.4/32, ExternalMapping 10.140.10.4/32, meaning there should be no change in IP.
The moment I link this NAT rule to the S2S connection, the pinging stops working. What's going on?
It looks like the moment I link any NAT rule to the connection, the connection stops working.
Note: route-based gateway, no policy-based traffic selectors, no BGP. I also tried translating the whole VirtualMachineSubnet to itself, with the same result. I even tried completely random subnets that are not in use at all, yet pinging from my VM subnet still stops working. Adding an Ingress rule that maps the on-prem IP range 1:1 doesn't work either.
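For reference, the no-op rule described above can be reproduced with the Azure CLI roughly as follows. This is a sketch: resource names (MyRG, MyVpnGateway, etc.) are placeholders, and the NAT-rule parameters on `vpn-connection` may require the full resource ID of the rule and a recent CLI version.

```shell
# Create a static egress SNAT rule that maps the VM's IP to itself,
# i.e. a no-op translation. All names are placeholders.
az network vnet-gateway nat-rule add \
  --resource-group MyRG \
  --gateway-name MyVpnGateway \
  --name NoOpSnat \
  --type Static \
  --mode EgressSnat \
  --internal-mappings 10.140.10.4/32 \
  --external-mappings 10.140.10.4/32

# Link the rule to the S2S connection; this is the step after which
# pings from the VM subnet stop working. --egress-nat-rule may need
# the rule's full resource ID rather than its name.
az network vpn-connection update \
  --resource-group MyRG \
  --name MyS2SConnection \
  --egress-nat-rule NoOpSnat
```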
I've seen this when using Windows Server RRAS as the on-prem device. If that's your setup, creating an internal interface and an external (NAT-enabled) interface in RRAS gives you two-way communication.
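As a rough sketch of the RRAS side (run on the on-prem RRAS server; the interface names "Internal" and "External" are placeholders for your actual adapters), the internal/external NAT pairing can be configured with netsh:

```shell
rem Enable the NAT routing protocol in RRAS.
netsh routing ip nat install

rem Mark the LAN-facing adapter as private (no translation)
rem and enable full NAT on the public-facing adapter.
netsh routing ip nat add interface "Internal" private
netsh routing ip nat add interface "External" full
```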
Also, you could get away with using the Basic SKU (the cheapest) virtual network gateway, which is useful if you are just testing things out.
Were you able to spot a root cause for this? I see similar behavior: after setting up NAT rules and linking them in Azure, the connection that was working before no longer comes up.