Some time in the future, I may need to build a dedicated SSL farm (as described in Making applications scalable with Load Balancing) or something similar to handle lots of SSL traffic. While it's not an immediate issue for me, I'd like to plan a little bit ahead. So my question is:
Is it more cost-effective to use dedicated hardware for this, or can I reuse application servers, maybe with a hardware add-on card? Or is it better to have this integrated into the load balancers (contrary to what the above-mentioned article stated in 2006)?
A few links to specific hardware would be nice, too - I currently don't really know where to start looking.
AFAIK the article still stands.
If you really need a farm with several load-balanced SSL reverse proxies and a fair few web/application servers behind them, I would suggest looking at a blade solution. That's not cheaper than simple 1U rackmount servers, but it will save you some rack space. Most major server manufacturers do blade solutions (Dell, HP, IBM, etc.). Some links: IBM | Dell | HP
I would build the load balancers from Linux servers (redundant pairs connected via Heartbeat, see LVS project), and have dedicated little networks for the proxy traffic and the traffic from the second load balancer to the web/application servers.
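As a rough sketch of the LVS side (Heartbeat would fail the virtual IP over between the two directors; all addresses below are placeholders):

    # On the active director: define a virtual HTTPS service, round-robin
    ipvsadm -A -t 192.0.2.1:443 -s rr
    # Register the real SSL proxies behind the virtual IP (NAT forwarding)
    ipvsadm -a -t 192.0.2.1:443 -r 10.0.0.10:443 -m
    ipvsadm -a -t 192.0.2.1:443 -r 10.0.0.11:443 -m
    # Verify the resulting table
    ipvsadm -L -n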
The most cost-effective solution is NGINX as a reverse proxy; its price/performance beats most hardware solutions such as the F5 Networks BIG-IP 6900.
My NGINX config: http://gist.github.com/553235
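In case the gist moves, here is a stripped-down sketch of that kind of config (certificate paths, hostnames, and backend addresses are placeholders):

    # Terminate SSL here and pass plain HTTP to the app servers
    upstream backend {
        server 10.0.0.10:8080;
        server 10.0.0.11:8080;
    }

    server {
        listen 443 ssl;
        server_name www.example.com;
        ssl_certificate     /etc/nginx/ssl/example.crt;
        ssl_certificate_key /etc/nginx/ssl/example.key;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }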
Fig. 2 in the article you linked shows the state-of-the-art way to build an SSL farm.
How to build your farm, and what it will cost, depends on your needs.
Terminating SSL on the load balancer is probably the cheaper option today (even with a dedicated load balancer such as a Cisco CSS, Cisco ACE, or F5 BIG-IP, though it still depends on the manufacturer).
The load balancer can then do L7 balancing, since it sees the unencrypted data, so you won't need two layers of load balancing plus a set of SSL reverse proxies. That reduces cost: less hardware to buy, less rack space, and so on.
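For example, if you build the balancer yourself with something like NGINX, an L7 rule on the decrypted traffic is just a location match (upstream names, paths, and certificate paths below are made up):

    # SSL terminates here, so routing can look inside the request
    upstream static_pool { server 10.0.0.20:80; }
    upstream app_pool    { server 10.0.0.30:80; }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/ssl/site.crt;
        ssl_certificate_key /etc/nginx/ssl/site.key;

        # L7 decisions on the decrypted request:
        location /static/ { proxy_pass http://static_pool; }
        location /        { proxy_pass http://app_pool; }
    }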
However, SSL termination on the load balancer doesn't scale very well, so if your load balancer starts to be overloaded by SSL, you will have a problem. With a dedicated device you would have to upgrade it, which is expensive; if you built your own load balancer from a server, you would have to offload SSL onto a new dedicated server.
Putting the SSL card in the application servers can be an option if L4 load balancing is enough and your application delivers high throughput at low CPU usage.
What I mean is: a hardware SSL card is expensive, so you want to keep it as busy as possible.
With dedicated SSL termination hardware, the card is used as much as possible. If the card sits in an application server and that application has low throughput, the card will be idle most of the time - for example, a card that can handle thousands of handshakes per second is wasted behind an application serving a few hundred requests per second. But if the application is fast, light on CPU, and high in throughput, SSL termination on the server with a dedicated card can be an option. That is generally not the case, though. It also hurts high availability, since each card is tied to the fate of its server.
I'm assuming you are talking about HTTP traffic here (there's a big difference between stateful and stateless protocols).
The problem is that to get the best performance you want SSL session resumption to work - which favours a sticky-session approach - but if your sessions are too sticky, you won't have any failover. The big expensive boxes from F5, Cisco et al. can cope with that, but it's difficult to do across commodity boxes running (for instance) stunnel.
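You can check whether resumption actually survives your balancing setup with openssl's -reconnect option, which reconnects several times and reports "Reused" or "New" for each handshake (the hostname is a placeholder):

    # "Reused" lines mean the cached SSL session was accepted;
    # "New" means a full (expensive) handshake happened every time.
    openssl s_client -connect www.example.com:443 -reconnect < /dev/null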
I still think the best solution to most load-balancing problems is round-robin DNS: failure detection happens at the only place a failure can be reliably detected (the client), and that is also where the failover is implemented. It provides server affinity but still allows failover of requests (note that it does not support resumption of in-flight requests, but I've yet to come across anything that supports that for HTTP).
One other thing to bear in mind is that Microsoft's keep-alive support for HTTP over SSL is different from what everyone else implements. This is not just an OpenSSL thing - other vendors give the same advice. Given the additional overhead of SSL negotiation and the huge pay-off of keep-alives for HTTP traffic, it may be worth considering MS ISA for SSL termination - although I'm only guessing that the software can be configured that way, and I've never been impressed by the product's scalability/reliability. So if I had lots of money to spend, I'd probably look at MS ISA for SSL termination, but without Microsoft's clustering software, moving the failover elsewhere (e.g. to the client!).
For a cheap solution, terminate the SSL on the webserver boxes themselves, with round-robin DNS in front. Add lots of webservers. Optionally use a cryptographic accelerator card (not an SSL-capable network card) in each webserver for additional oomph.
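Round-robin DNS is just multiple A records for the same name - for example, in a BIND zone file (placeholder addresses):

    ; clients rotate through these and retry another
    ; address if the one they picked is down
    www    IN    A    192.0.2.10
    www    IN    A    192.0.2.11
    www    IN    A    192.0.2.12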
For a very fast solution - (possibly) multiple MS ISA nodes addressed via round-robin DNS, talking to an LVS cluster of webservers.
HTH
HTTPS traffic can generate a very high load due to the encryption requirements. There are add-in cards that let you offload SSL encryption/decryption to purpose-built hardware. As mentioned above, you can terminate SSL on a load balancer, which reduces costs because (at least for the F5) these devices come with SSL offload hardware built in. Alternatively, such cards can be purchased and installed directly in your server, though I'm not sure how this would work with VMware. Compression can also be offloaded to a load balancer.
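If you do put such a card in a server, it's worth checking that OpenSSL actually sees and uses it - something along these lines (the engine id depends on the card's driver):

    # List the crypto engines OpenSSL knows about and test availability
    openssl engine -t
    # Compare software RSA speed against the card's engine
    openssl speed rsa1024
    openssl speed -engine <engine-id> rsa1024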