I currently have a site running on an Nginx platform with PHP-FPM and APC. It is performing very well with the tests I have been trying.
I would now like to add failover functionality, and since I can't afford a hardware load balancer, I was looking at using HAProxy.
This is more a theoretical question: will the two Nginx servers be able to serve more pages than HAProxy can pass through, meaning that HAProxy becomes a potential bottleneck?
It's unlikely that HAProxy will become your bottleneck, as it simply routes connections and doesn't do all the things that web servers typically have to do.
However, you may want to ensure that a solitary HAProxy instance can't become a single point of failure.
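To illustrate how simple the routing layer is, here is a minimal haproxy.cfg sketch that balances two Nginx backends with health checks. The hostnames, IPs, ports, and timeouts are assumptions for illustration, not taken from your setup:

```
# Minimal HAProxy sketch: round-robin across two Nginx backends.
# IPs and ports are hypothetical - adjust to your environment.
global
    maxconn 4096

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend nginx_pool

backend nginx_pool
    balance roundrobin
    option httpchk GET /
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```

With `check` enabled, HAProxy probes each backend and stops sending traffic to one that fails, which is the failover behaviour you're after.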
Out of the box, HAProxy uses bi-directional routing: if you have it in your stack, the responses to any requests it passes return back through it. There is a way (which I've never tried, so I can't offer any insight) to make it transparent, wherein the original client IP is preserved on the request it forwards. However, this involves recompiling the kernel, which is probably not the route you want to go down. If you want to explore it, google 'tproxy'.
I use HAProxy to balance Plone client backends in an nginx-squid-haproxy-zope_clients stack and have not experienced any bottlenecks attributable to HAProxy. As vesterday stated, HAProxy just uses an in-memory ruleset to route traffic. Provided your server(s) have enough resources and you are not dipping into swap, HAProxy will be the least of your concerns; even then, I would look at other culprits first.
HAProxy will be the least of your concerns: it just routes requests. Although you will sometimes see error messages coming from it, upon careful inspection the problems usually originate from the backend servers (capacity issues, availability, application errors, etc.).
And in my experience: I have had a failover setup for HAProxy for 5 years (and counting), and it has never been needed! On a funny note, if you're paranoid like me, you can set up failover on all layers! (I use keepalived for that.)
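To make the keepalived approach concrete, here is a minimal sketch for the active node of an active/passive HAProxy pair. The interface name, virtual IP, and priority are assumptions; the backup node would use `state BACKUP` and a lower priority:

```
# keepalived.conf on the primary HAProxy node (sketch, values hypothetical).
vrrp_script chk_haproxy {
    script "pidof haproxy"      # fail over if the HAProxy process dies
    interval 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0              # hypothetical interface name
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        10.0.0.100              # floating VIP that clients connect to
    }
    track_script {
        chk_haproxy
    }
}
```

Clients point at the floating VIP; if the primary node or its HAProxy process fails, VRRP moves the VIP to the backup node, so HAProxy itself is no longer a single point of failure.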