I'm trying to design a system that allows for multiple public endpoints that funnel into a single web service. The web service must be able to determine which endpoint was the intended destination of the request. Here's a little sample configuration that might fit the bill:
In this system, the "reverse proxies" (for lack of a better term) add an HTTP header to the incoming requests before they hit the web service. Otherwise, the proxies are entirely transparent to the request and response.
We're a Windows shop using IIS7/WCF.
The goal is 1) to maintain only a single web service, rather than one per domain, and 2) to decouple domain/web site management from the business logic in the web service. That is, if we know that context will always be specified with a key in the HTTP header, then we don't have to worry about domains changing or the specific content of Request.Headers["HOST"].
My questions are: is this a reasonable approach? If so, is there an app out there that will do the job of the "reverse proxies"? (Squid? IIS itself?)
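If you do go the reverse-proxy route on your stack, IIS7 itself can play that role once the Application Request Routing (ARR) and URL Rewrite extensions are installed. Here's a rough sketch of what the rule on one of the public endpoint sites might look like — the `X-Endpoint` header name, the `endpoint-a` value, and the `internal-service` backend address are all placeholders, not anything your service requires:

```xml
<!-- web.config on one public endpoint site (IIS7 + ARR + URL Rewrite).
     X-Endpoint, endpoint-a, and internal-service are illustrative names. -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Proxy to shared web service" stopProcessing="true">
        <match url="(.*)" />
        <serverVariables>
          <!-- An HTTP_ server variable set here becomes an X-Endpoint
               request header on the proxied request -->
          <set name="HTTP_X_ENDPOINT" value="endpoint-a" />
        </serverVariables>
        <action type="Rewrite" url="http://internal-service/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

Note that URL Rewrite only lets a rule set a server variable like `HTTP_X_ENDPOINT` if it's been added to the allowed server variables list (in applicationHost.config or via IIS Manager), so that's a one-time setup step per server.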
Thank you for the help!
It sounds to me as though what you're trying to do is have a single app that does 2 wholly separate things. At a certain point there's going to be overlap in the code base and functionality, but I don't know your app well enough to help you with that. I have 2 possible solutions for you:
I would try to separate the view-related code from your controllers and models if you're going in the MVC direction. That would give you a cleaner separation of the business logic. One possible way to do that is to put your views into 2 separate directories and then include code from a shared 3rd directory. That gives you one shared library that handles the backend logic while cleanly separating the presentation logic.
I wouldn't be afraid of the HTTP Host header. It's required by HTTP/1.1 and all modern browsers send it. Heck, virtual hosting is entirely dependent on it, and you'd be hard-pressed to find an IIS or Apache admin who would tell you not to use virtual hosts in production. The big downside, of course, if you're doing the header check on the application side, is that you can run into some pretty ugly if/case statements. All a reverse-proxy setup does here is look at the Host header and add another header alongside it — so you're really just adding more headers and more moving parts to your architecture with the reverse proxying.
Firstly, your HTTP Host field should be preserved by your reverse proxy as it passes the request through. If it's not, then your proxy is wrongly (and weirdly!) configured.
Secondly, passing extra headers and identifying traffic by them is the standard, accepted way to identify traffic handled by a load balancer. For example, that's how load balancers are usually configured to tell web servers that traffic came in over SSL — typically with an X-Forwarded-Proto header — since the SSL connection itself terminates at the balancer and the web server can't see it otherwise.
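To make that concrete: the usual SSL-offload convention is an X-Forwarded-Proto header added by the balancer, and on the IIS side a URL Rewrite rule can act on it. This is a sketch assuming the URL Rewrite module is installed and your balancer sets that header:

```xml
<!-- Redirect plain-HTTP traffic that arrived via the load balancer to HTTPS.
     Assumes the balancer sets X-Forwarded-Proto on every request. -->
<rule name="Redirect to HTTPS behind load balancer" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTP_X_FORWARDED_PROTO}" pattern="^http$" />
  </conditions>
  <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
</rule>
```

The same pattern — match on a header the balancer or proxy injected — is exactly what your web service would do with whatever custom header your front-end tier adds.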
Finally, I think you may be overcomplicating your problem. Realistically, how often do domain names change compared to the content behind them?! That's not really how the web is meant to work... There's nothing wrong with your application serving content based on the domain name in the HTTP Host header; it's very common practice.
One way to do this would be to have a small array of cloud load balancers, each aimed at your web service. Rackspace provides LBaaS, and possibly others do as well.
http://www.rackspace.com/cloud/cloud_hosting_products/loadbalancers/
Full disclosure: I work for Rackspace, though not in sales.