I've been asked to look into placing a firewall between a webserver (Debian/Apache/PHP) in the DMZ and a backend MySQL database to achieve "isolation". Right now, iptables is running on the MySQL server and permits only TCP ports 22 and 3306, for SSH and MySQL respectively. However, this is apparently not good enough, and a hardware firewall has been recommended.
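For reference, a rough equivalent of the current policy in iptables, with the obvious software-only tightening of restricting each port by source address (all addresses below are placeholders I've made up, not our real ones):

```shell
# Placeholder addresses: 192.0.2.0/24 = admin subnet, 192.0.2.10 = webserver.
iptables -F INPUT
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# SSH from the admin subnet only
iptables -A INPUT -p tcp -s 192.0.2.0/24 --dport 22 -j ACCEPT
# MySQL from the webserver only
iptables -A INPUT -p tcp -s 192.0.2.10 --dport 3306 -j ACCEPT
```

Part of what I'm unsure about is what the hardware firewall adds beyond this.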
Looking at Cisco's ASA 5505, for example, the maximum throughput is 150 Mbps, which seems like quite a step down from the gigabit throughput the webserver and MySQL server currently enjoy on the same GbE switch.
Is this a concern? I can't give you hard numbers right now, but assume a typical form-driven, data-entry CRUD webapp with perhaps 100 concurrent user sessions at any time.
If this is impossible to determine without real throughput numbers, can anyone suggest methods of measuring? I was thinking of grabbing JMeter, simulating some load, and measuring bandwidth with ntop on a port mirror of the MySQL server's switch port (or perhaps on the MySQL server itself).
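As a cheaper first pass than a full JMeter run, I suppose I could just sample the kernel's byte counters on the MySQL box while the app is under normal use; a rough sketch (interface name and interval are placeholders):

```shell
#!/bin/sh
# Rough receive-rate sampler using the kernel's per-interface byte
# counters in sysfs. Interface name and interval are placeholders.
IFACE=${1:-lo}
INTERVAL=${2:-2}
CTR=/sys/class/net/$IFACE/statistics/rx_bytes

START=$(cat "$CTR")
sleep "$INTERVAL"
END=$(cat "$CTR")

# byte delta -> megabits per second
awk -v s="$START" -v e="$END" -v t="$INTERVAL" \
    'BEGIN { printf "%.2f Mbps\n", (e - s) * 8 / t / 1000000 }'
```

Run it against the interface MySQL listens on during peak hours and the answer to "do we exceed 150 Mbps?" falls out directly.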
EDIT:
I bolded the item about the Gigabit Ethernet, which has a theoretical throughput of 125 MB/s, whereas the Cisco 5505 has a maximum throughput of 150 Mbps (~18.75 MB/s), and that's before accounting for NAT, ACL parsing, etc. (although I can't see NAT or ACL parsing being a big deal for a one-node network). Even so, the firewall would definitely be a potential bottleneck between the webserver and the MySQL server, seeing as a good RAID 1 setup with high-quality SAS disks and other server components should be able to push at least 50-75 MB/s.
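The arithmetic behind those figures, for anyone checking (dividing Mbps by 8 to get MB/s):

```shell
# Convert nominal link/firewall speeds from megabits/s to megabytes/s.
awk 'BEGIN {
  printf "GbE line rate:  %6.2f MB/s\n", 1000 / 8  # 1000 Mbps
  printf "ASA 5505 rated: %6.2f MB/s\n", 150 / 8   # 150 Mbps
}'
```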
What about two NICs on the webserver, one on the DMZ and one on the LAN?
Edit: Since the answer was accepted I am putting more details.
The webserver is necessarily public-facing; the idea is to firewall it so that only ports 80 and 443 are publicly accessible. Internally, it can then communicate with the database server on a LAN interface. This also has the advantage of putting your public traffic on a separate interface from your internal traffic. It's a very common configuration and provides extra security because public and internal traffic are physically separated, rather than relying on a firewall alone.
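A sketch of what that could look like in /etc/network/interfaces on the Debian box (interface names and addresses are purely illustrative, not taken from your setup):

```
# eth0: DMZ-facing interface -- only 80/443 reachable through the front firewall
auto eth0
iface eth0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1

# eth1: LAN-facing interface -- talks to the MySQL server only
auto eth1
iface eth1 inet static
    address 10.0.0.10
    netmask 255.255.255.0
```

MySQL then binds to the LAN side and is never routable from the DMZ at all.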
I'm not quite sure where you're getting your maximum-throughput numbers, because Cisco's website tells a different story (150 Mbps).
It's a 100 Mb Ethernet connection, and of course your real-world throughput will depend entirely on numerous factors, including what kind of filtering you have configured on the ASA. The advantage of having the ASA there is that you can add the AIP-SSC card and get intrusion prevention/detection as well.
You could always try the ASA 5505 from a vendor that allows returns. I can't speak to your exact throughput, as I only run 5510s and 5520s at work; I use a 5505 at home and see no issues with throughput, but of course it's just me and my family.
Yes, it would be a bottleneck, and if you want to handle 1 Gbps line speed, you will probably need a bigger firewall.
However, do you really need to run at 1 Gbps today? It might be a future requirement, but if you are currently only using, say, 5 Mbps, you'd still have plenty of headroom for now.
On the switch connecting the SQL and web servers, you could use an SNMP poller to retrieve port-utilisation counters from the MIBs and see how much bandwidth you really need. We use Cacti at work as it was free, quick, and easy to set up. We can monitor switchport utilisation when we anticipate or experience performance issues, and use the evidence to decide what to do next.
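If you want to sanity-check by hand what Cacti graphs for you, the utilisation math on two ifInOctets samples taken a known interval apart is just this (the counter values below are invented for illustration):

```shell
# Two SNMP ifInOctets samples (c1, c2) taken t seconds apart on a port
# with the given speed in bits/s; prints average utilisation over the window.
awk -v c1=120000000 -v c2=195000000 -v t=300 -v speed=100000000 \
    'BEGIN { printf "%.1f%% utilised\n", (c2 - c1) * 8 / t / speed * 100 }'
```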
You're going to need to hook up some monitoring (there should be munin packages that get it all going easy cheesy) and get an idea of what you really need.
If you find you really are pushing beyond 100 Mbps, then you're simply going to need a faster firewall (or even something like a couple of OpenBSD boxes in failover with CARP + pfsync).
The throughput of the ASA 5505 is 150 Mbit/s. I don't see the benefit security-wise; this was probably decided by someone who doesn't know much about firewalling.