We use a service whose API rejects requests unless the source IP has been whitelisted in advance. They only give us 3 slots, which is a problem because we have more than 3 machines that need to use the API.
What is the most common technique to work around this issue?
Note: I'm not trying to do anything against the Terms & Conditions of the third-party API. We use ResellerClub, and when I contacted them to ask for more slots they replied:
> I request you to kindly route your servers to a few set of IPs.
Hence this question.
Thoughts:
- I was thinking we could solve the problem by running a proxy of sorts that acts as a man-in-the-middle: instead of making API requests to the third party directly, we make them to our proxy, which bounces each request on to the third party, so that in their eyes all requests come from one IP. Is there common software for doing this kind of thing? It seems simpler than the ideas below, but am I wrong?
- Is using "a NAT instance" something I should be looking into? e.g. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html . It looks complicated - is there not a simpler solution? (Running an extra instance with extra networking complexity is a shame.)
- Since we use Docker, could Weave be relevant?
- Could we attach a static IP to the VPC gateway? I saw it's possible with the AWS Storage Gateway (source) - not sure about a regular VPC internet gateway (IGW), though.
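To make the first idea concrete, here is a minimal sketch of such a bounce proxy in Python (an illustration only: the upstream URL is a placeholder, there is no TLS, auth, or error handling, and only GET is relayed):

```python
# Minimal sketch of the "bounce" proxy idea: run this on the one
# whitelisted machine; the other machines call it instead of the API.
# UPSTREAM is a placeholder -- substitute the real API base URL.
import http.server
import urllib.request

UPSTREAM = "https://api.example.com"  # hypothetical third-party endpoint

class RelayHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Replay the incoming path + query string against the upstream
        # API, so the upstream only ever sees this relay's IP address.
        with urllib.request.urlopen(UPSTREAM + self.path) as upstream:
            body = upstream.read()
        self.send_response(upstream.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

if __name__ == "__main__":
    http.server.HTTPServer(("0.0.0.0", 8080), RelayHandler).serve_forever()
```

A forward proxy like Squid or tinyproxy does the same job off the shelf; the point of the sketch is just that "make them to our proxy which bounces the request" is a few dozen lines, not a big system.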
Our architecture: we use AWS and have our instances in a VPC behind an ELB. We frequently bring up new instances without knowing their IP addresses in advance, and we run identical software on every machine: CoreOS, with our app in Docker containers.
A fairly common infrastructure is one where none of the actual application servers have public IPv4 addresses: they sit in an RFC 1918 private network range behind a load balancer, and any outgoing request they make is either routed through a NAT device (so it leaves from the NAT's address) or relayed through an explicit proxy. Either way, the third party only ever sees a small, fixed set of IPs.
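For context, the core of what a Linux NAT box does is just two commands - a sketch, assuming `eth0` is the public-facing interface (this is essentially what Amazon's stock NAT AMI sets up for you):

```shell
# Enable packet forwarding, then rewrite the source address of
# forwarded packets to the NAT box's own IP.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```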
I thought I would post an update, since the project is now completed successfully using a NAT instance.
Now that the NAT instance is all set up, the initial feeling of complexity has passed and it feels quite simple - even cleaner than before, because being forced into private subnets is a security boost.
The official NAT instance setup instructions from AWS worked well: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html . We used the AMI that Amazon provides for booting the NAT instance. Having gone through the process, I realised how "industry standard" it is - maybe even "best practice".
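The essential routing change from those instructions can be sketched with the AWS CLI (the IDs below are placeholders; adjust to your own route table and instance):

```shell
# Point the private subnet's default route at the NAT instance, so all
# outbound API traffic leaves from its one Elastic IP.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --instance-id i-0123456789abcdef0

# Source/destination checking must be disabled on the NAT instance,
# or it will drop traffic that is not addressed to itself.
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-source-dest-check
```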
The disadvantages:
- It's one more instance to run and maintain (a `t2.small` isn't very expensive, and the stock AMI doesn't need modifying, so it isn't a huge maintenance burden).
- SSH access to the private instances now hops through the NAT instance, but if you configure `.ssh/config` and read up on `ProxyCommand` you can make things 100% transparent, so that simply using `ssh server1` works.
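For reference, a hypothetical `~/.ssh/config` along those lines (host names, users and addresses are all made up):

```
# "nat" is the NAT instance's public address; "server1" is an instance
# in the private subnet, reachable only by hopping through "nat".
Host nat
    HostName 203.0.113.10
    User ec2-user

Host server1
    HostName 10.0.1.20
    User core
    ProxyCommand ssh -W %h:%p nat
```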