I am trying to set up a web server (nginx) that needs outbound internet access while also being attached to an ELB, but I am having trouble getting this working.
Details:
- 3 public subnets
- a
- b
- c
- 3 private subnets
- a
- b
- c
- 1 ELB
- This has the three public subnets included in its default pool
- 1 NAT instance in a public subnet
The various guides I have found are useful for getting the public subnets hooked up to the ELB by pointing the subnets' route tables at the IGW, but these setups do not take into consideration the instances' need for outbound internet access.
When I change the route table for the instance's single NIC from the IGW to the NAT instance, I get internet access but lose the ability to connect to the ELB; vice versa, when I update the route table to use the IGW, I lose outbound internet access.
I could allocate an EIP and give it access that way, but I was hoping to avoid exposing this server to the internet via a public address if possible.
What I am thinking is that I can either 1) add a second NIC to the web servers that need internet access, so they have a leg out via the NAT instance and also a leg in the public subnet to connect to the load balancer, or 2) use the NAT instance as the gateway for my network.
Has anybody set up something similar to this?
You're missing a point, here.
If a machine has a public IP, it goes in a public subnet and uses the IGW as its default route.
If it doesn't have a public IP, it goes in a private subnet and uses a NAT instance as its default route.
That's it. End of story.
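In route-table terms, that rule looks like this (the CIDR block and the gateway/interface IDs below are placeholders, not values from your setup):

```
# Route table associated with the public subnets (ELB nodes, NAT instance):
Destination     Target
10.0.0.0/16     local           # intra-VPC traffic
0.0.0.0/0       igw-xxxxxxxx    # default route -> Internet Gateway

# Route table associated with the private subnets (web servers):
Destination     Target
10.0.0.0/16     local           # intra-VPC traffic
0.0.0.0/0       eni-xxxxxxxx    # default route -> NAT instance's network interface
```

Each subnet gets exactly one route table, so a given instance never has to choose between the IGW and the NAT instance.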
If you have web servers without public addresses, behind an ELB... then the ELB itself goes in the public subnet, but the instances go in the private subnet.
The subnets where an ELB is provisioned have nothing to do with the subnets where the instances it is balancing are provisioned.
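As a sketch of what that looks like with the AWS CLI (classic ELB; the load balancer name, subnet IDs, and instance IDs are all hypothetical):

```shell
# Create the ELB in the PUBLIC subnets -- these are the subnets
# where the ELB's own nodes live, nothing more.
aws elb create-load-balancer \
    --load-balancer-name my-web-elb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --subnets subnet-pubaaaaa subnet-pubbbbbb subnet-pubccccc

# Register the web servers, which sit in the PRIVATE subnets.
# The ELB reaches them over the VPC's internal network.
aws elb register-instances-with-load-balancer \
    --load-balancer-name my-web-elb \
    --instances i-0aaaaaaaa i-0bbbbbbbb
```

These commands need live AWS credentials, so treat them as a configuration sketch rather than something to paste verbatim.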
It's counterintuitive, but true nonetheless.
A VPC isn't a LAN, so cross-subnet traffic does not "go through a router" in the same sense it would on a LAN; there's no inefficiency with this approach.