(Before getting into the details: I'm presenting this problem using Apache and SSH as examples, but it is not specific to those services; the same problem occurs with both TCP- and UDP-based protocols.)
I have a multilinked, multihomed server running Ubuntu 9.04, with eth0 connected to an outside network and eth1 connected to an inside network. The outside network is presented to the "rest of the world", and the inside network contains all of the developers' workstations and workhorse servers. There is a firewall blocking traffic from the "rest of the world" to the inside network, but not blocking outgoing requests.
$ /sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 00:30:18:a5:62:63
inet addr:xxx.yyy.159.36 Bcast:xxx.yyy.159.47 Mask:255.255.255.240
[snip]
eth1 Link encap:Ethernet HWaddr 00:02:b3:bd:03:29
inet addr:xxx.zzz.109.65 Bcast:xxx.zzz.109.255 Mask:255.255.255.0
[snip]
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
xxx.yyy.159.32 0.0.0.0 255.255.255.240 U 0 0 0 eth0
xxx.zzz.109.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
0.0.0.0 xxx.yyy.159.33 0.0.0.0 UG 100 0 0 eth0
Apache is listening on port 80 and sshd is listening on port 22:
$ netstat --tcp -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:www *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
[snip]
From my development machine on the inside, xxx.zzz.109.40, I can connect to the inside address and all is well. From the outside I can connect to the outside address and everything works like it should.
But for some kinds of testing I would like to connect to the outside address from my development machine, yet the server refuses the connection. My guess is that it consults its routing tables and, seeing that the incoming data comes from an address that should arrive on eth1 but is actually arriving on eth0, drops it, probably as a security precaution.
Is there a way I can relax this restriction?
The odd thing is that this used to work on 8.04, but does not work on 8.10 or 9.04, so at some point during the last year the kernel started doing some extra checking. For the connection to work, the return path needs to be the same as the source path, which means that packets from my development machine arriving on eth0 would have to go back out on eth0 to be routed back to my machine.
Here is a diagram; there is no NAT anywhere.
Assuming you don't have any iptables rules that would prevent this, you need to disable return-path filtering. You can do this using:
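The command itself appears to have been lost from this answer; a minimal sketch, assuming the standard Linux rp_filter sysctl, would be:

```shell
# Disable reverse-path filtering globally (assumption: the standard
# rp_filter knob; on many kernels the effective setting per interface
# is the stricter of "all" and the interface-specific value)
sysctl -w net.ipv4.conf.all.rp_filter=0
```

To make the setting survive a reboot, the equivalent line can go in /etc/sysctl.conf.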
You can also use a particular interface name instead of all and there is also a default, which would affect newly created interfaces.
You haven't shown your firewall config, but I'll bet there's a rule in there that's doing something untoward with filtering connections to the external IP address from inside. A badly screwed up NAT config could also be at fault, but I'm having trouble envisaging something so pathological resulting from an ordinary attempt at configuration.
Just to clarify one point, too: the mere fact that a packet arrives on one interface while being addressed to the IP address of a different interface will not, by itself, cause the kernel to reject that packet. Whatever's happening, it's not the kernel's fault.
Also, a tcpdump of traffic (both on the internal and external interfaces of the server, and on the client) would be diagnostically useful.
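For example, a hypothetical capture on the server (substitute the real addresses for the obfuscated ones from the question):

```shell
# Watch for the test connection on each of the server's interfaces
tcpdump -ni eth0 host xxx.zzz.109.40 and tcp port 80
tcpdump -ni eth1 host xxx.zzz.109.40 and tcp port 80
```

If SYNs appear on eth0 but no SYN-ACKs leave on either interface, something on the server is eating them; if the SYN-ACKs leave on eth1, you have an asymmetric-routing problem.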
What is the default gateway for your development machine? That gateway (assuming a Layer 3 switch or router) should know how to reach the other interface of the server.
You should have something like
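The routes this answer had in mind were lost; a sketch, assuming the inside gateway lives at xxx.zzz.109.1 (a hypothetical address), might look like:

```shell
# On the development machine: default route via the inside gateway
ip route add default via xxx.zzz.109.1

# On that gateway: a route to the server's outside subnet,
# via the server's inside interface
ip route add xxx.yyy.159.32/28 via xxx.zzz.109.65
```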
This should make things work. If it is not sufficient, enable IP forwarding on machine xxx.zzz.109.65 using
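The command was lost here; the usual way on Linux (an assumption, not this answer's original text) is:

```shell
# Enable IPv4 forwarding at runtime
sysctl -w net.ipv4.ip_forward=1
```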
Also edit /etc/sysctl.conf and enable ip forwarding there too.
Make sure the iptables FORWARD chain of the filter table on xxx.zzz.109.65 allows packet forwarding for your development machine.
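A hedged sketch of such rules, assuming the development machine's address from the question:

```shell
# Allow forwarded traffic to and from the development machine
iptables -A FORWARD -s xxx.zzz.109.40 -j ACCEPT
iptables -A FORWARD -d xxx.zzz.109.40 -j ACCEPT
```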
I'm guessing you're using that box as your Internet NAT router via iptables NAT. Obviously, if that isn't the case then this won't help; otherwise you'll probably need something like:
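The rule was lost from this answer; a sketch of the kind of exemption meant, assuming the addresses from the question and that the SNAT/MASQUERADE rule lives in the nat table's POSTROUTING chain:

```shell
# Skip source NAT for inside-to-outside packets destined for the
# server's own external IP; must come before the MASQUERADE/SNAT rule
iptables -t nat -I POSTROUTING -s xxx.zzz.109.0/24 -d xxx.yyy.159.36 -j ACCEPT
```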
That will skip the source natting that you're doing for packets travelling from inside to outside for anything destined for the external IP of that server.
Just to be anal, I would not call the above network configuration multihomed. It's merely forwarding between two networks within the same AS. These days, multihomed really means connected to two or more neighbouring ASes, or at least having multiple routes to the same upstream AS, in the case of a leaf network with a single upstream connectivity provider.
Lots of useful ideas have been given already. I would: a) double-check it's not the firewall/NAT — add a -j LOG rule before your REJECT and DROP rules and see if anything suspicious gets caught; b) run tcpdump on both interfaces (-i any) and look for your packets.
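A sketch of both steps (the rule placement and log prefix are assumptions; substitute your real addresses):

```shell
# a) Log matching packets before they can hit a REJECT/DROP rule
iptables -I INPUT -p tcp --dport 80 -j LOG --log-prefix "ext-ip-test: "

# b) Watch every interface for the test traffic
tcpdump -ni any host xxx.zzz.109.40 and tcp port 80
```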
BTW, what exactly do you mean by "the server is refusing the connection request"? Do you see some (immediate) refusal message or does the connection just time out?
This has got to be a problem with your firewall rules. Most likely the box's own external IP is getting lumped in with 'all external traffic' and so isn't being allowed back through to the internal client. Try putting an exception in the firewall rules for that IP address; that is, explicitly allow the webserver's external IP address to send traffic to the internal networks.
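A hypothetical sketch of such an exception on the firewall (chain and placement are assumptions):

```shell
# Explicitly allow the webserver's external IP to reach the inside network
iptables -I FORWARD -s xxx.yyy.159.36 -d xxx.zzz.109.0/24 -j ACCEPT
```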
More debugging will require you to post your firewall rules.