I have a Linux EC2 instance with two NICs (eth0 and eth2). Both NICs have public IPs attached and are able to get out to the internet. This Linux instance is acting as a gateway node for me, forwarding traffic from EC2 instances in private subnets that don't have internet access.
I am able to forward traffic from an EC2 instance in a private subnet to this gateway instance in the public subnet. All of that traffic is received on eth0. Now I would like it to go out to the internet via eth2 and not eth0, because I want the outside world to see eth2's IP as the source IP.
Performing these steps helps me achieve this:
ip route add 10.0.0.0/23 dev eth2 table 2 # 10.0.0.0/23 is the gateway's subnet CIDR
ip rule add from <private ec2 instance's ip> table 2 # route traffic from the private instance using table 2
ip route add default via 10.0.0.1 dev eth2 table 2 # default route out via eth2 for table 2
ip route flush cache
iptables -A FORWARD --in-interface eth0 -j ACCEPT # accept forwarded traffic arriving on eth0
iptables --table nat -A POSTROUTING --out-interface eth2 -j MASQUERADE # masquerade (SNAT) traffic leaving via eth2 so it uses eth2's address
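For anyone debugging a setup like this, a few read-only commands make it easy to confirm the policy routing and NAT state (these are standard iproute2/iptables invocations, nothing specific to my setup):
ip rule show # list policy rules in evaluation order (lowest preference first)
ip route show table 2 # confirm table 2 holds the subnet route and the eth2 default route
iptables -t nat -L POSTROUTING -n -v # packet counters show whether the MASQUERADE rule is being hit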
With this setup, when I curl something outside AWS from the EC2 instance in the private subnet (via the gateway node), I see what I expect to see: the outside world sees the request coming from eth2's IP. However, the curl requests take a really long time and sometimes even time out. I would say about 5 out of 10 requests succeed.
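For reference, a loop like the one below can be run from the private instance to see both the source IP the outside world observes and how often requests stall; checkip.amazonaws.com is just one example endpoint that echoes back the caller's IP, and the 10-second cap is arbitrary:
for i in $(seq 1 10); do
  # print the observed public source IP, or "timeout" if the request exceeds 10 seconds
  curl -s --max-time 10 https://checkip.amazonaws.com || echo "timeout"
done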
What am I missing here?
Thanks Kay
Apparently, the order of the policy rules matters a lot. I was able to control the order by specifying a preference, like this:
ip rule add from <private ec2 instance's ip> preference 512 table 2 # preference 512 places this rule ahead of higher-numbered rules
Once I got the ordering right, I was able to egress via a specific interface correctly.
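For context, this is generic Linux policy-routing behavior rather than anything EC2-specific: rules are evaluated in ascending preference order, and the main routing table is only consulted by the built-in rule at preference 32766, so a rule at preference 512 wins for any traffic it matches. ip rule show makes the ordering visible; the output below is only illustrative and the private instance's IP is a placeholder:
ip rule show
# 0:      from all lookup local
# 512:    from 10.0.1.25 lookup 2     <- the custom rule, evaluated before main
# 32766:  from all lookup main
# 32767:  from all lookup default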