I currently have a dedicated web and database server running in datacenter (DC) 1. I'm about to expand with another web server in datacenter 2 that will talk to the DB in DC 1.
The latency between the two datacenters is about 2-5 ms. Is that acceptable for a high-traffic website (400,000+ visitors a day)?
Maybe(TM). It certainly won't be helping your performance, and I'd be wary of having my machines spread out like that just from the potential failure modes it introduces, but it's probably not your biggest performance problem.
On the other hand, it's amazing what hosting providers are capable of if pushed, so I'd tell your provider that their proposed solution is unacceptable for your needs, and if they can't provision another server in the same network as your existing machines, then you'll take your business elsewhere. Chances are they'll miraculously find some space close by.
2-5ms is not too bad imho. Then again, it also depends on how your application handles database calls.
More interesting, though, might be to look at fault tolerance: you could set up an identical solution in DC 2 so that you have a DB server in both datacenters, letting you recover quickly from a disaster in either one. A 2-5 ms delay is very appropriate for database replication, and if you use MySQL you might even be able to set up multi-master replication, so each web server talks to its nearest DB server for better performance.
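As a minimal sketch of the multi-master idea (assuming MySQL; the server IDs and auto-increment settings here are illustrative and would need tuning for your setup), each server is configured as both a master and a replica of the other:

```ini
# my.cnf on the DC 1 server (mirror it on DC 2 with server-id = 2, offset = 2)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
# Avoid primary-key collisions when both masters accept writes:
auto_increment_increment = 2   # step by the number of masters
auto_increment_offset    = 1   # 1 on the DC 1 master, 2 on the DC 2 master
```

Be aware that multi-master setups trade consistency guarantees for availability, so test your application's write patterns against it before relying on it.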
Though there is nothing wrong with pushing your service provider to commit to an acceptable level of performance, you also need to face the facts. If the datacenters are 300 km apart, the first 2 ms is simply due to the fact that not even fiber-optic WAN lines exceed the speed of light (roughly 300,000 km/s). Ping time is a round-trip time, so 300 km apart actually means 600 km travelled (300 km each way). That takes 2 ms at the speed of light, and no hosting provider can tamper with a physical constant.
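The arithmetic above is easy to check (the 300 km figure is the hypothetical distance from the answer; real fiber carries light at roughly two-thirds of c, so actual floors are higher):

```python
SPEED_OF_LIGHT_KM_PER_S = 300_000  # ~c in vacuum; light in fiber is ~2/3 of this

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time imposed by the speed of light."""
    round_trip_km = 2 * distance_km  # ping measures there AND back
    return round_trip_km / SPEED_OF_LIGHT_KM_PER_S * 1000

print(min_rtt_ms(300))  # 2.0 (ms) -- before any router, switch, or fiber overhead
```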
It isn't really possible to answer your question -- you haven't provided any measurements of the impact on your site, and you also haven't defined what your performance goals are.
But one important pointer: many webapp frameworks use a blocking-reads strategy, i.e. while the framework is making SQL queries, the webapp thread in question is blocked and unavailable for other work. So you could see thread counts, and with them memory and CPU use, on your front-end servers increase significantly because of the added SQL query latency.
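A back-of-the-envelope sketch of why this adds up (all the numbers here are hypothetical, not measurements from your site):

```python
# Hypothetical: 30 sequential queries per page, each picking up 3 ms of extra RTT.
queries_per_page = 30
added_latency_s = 0.003

extra_wait_per_page_s = queries_per_page * added_latency_s  # ~0.09 s of pure waiting

# Little's law: threads tied up = arrival rate x time each request holds a thread.
requests_per_second = 100
extra_busy_threads = requests_per_second * extra_wait_per_page_s

print(f"{extra_wait_per_page_s * 1000:.0f} ms extra per page, "
      f"~{extra_busy_threads:.0f} extra threads blocked on the DB")
```

With blocking I/O, those extra threads sit idle in the pool but still cost memory and scheduler overhead, which is where the front-end resource increase comes from.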
Some webapp frameworks, like ASP.NET MVC, provide an asynchronous I/O interface, which could alleviate the above problem. But async I/O is not used by default, and re-architecting your site code to use it would almost certainly be too expensive.
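To illustrate what async I/O buys you here, a small sketch (using Python's asyncio rather than ASP.NET, with `asyncio.sleep` standing in for an async DB driver call such as aiomysql): independent queries overlap, so a page pays roughly one round trip instead of one per query.

```python
import asyncio
import time

async def fake_query(latency_s: float = 0.003) -> None:
    # Stand-in for an async DB call; the sleep simulates a 3 ms round trip.
    await asyncio.sleep(latency_s)

async def render_page(n_queries: int = 10) -> None:
    # Independent queries issued concurrently: total wait ~ one RTT, not n RTTs.
    await asyncio.gather(*(fake_query() for _ in range(n_queries)))

start = time.perf_counter()
asyncio.run(render_page())
elapsed = time.perf_counter() - start
print(f"{elapsed * 1000:.1f} ms")  # a few ms, versus ~30 ms if run serially
```

This only helps when queries are actually independent, which is exactly the kind of re-architecting the answer above warns is expensive.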
Why not just migrate your old servers to the new DC, and set up a nice private VLAN for all your servers in the new DC?
That seems rather high to me, and while I've had databases on different networks, they've always been in the same physical location, so latency wasn't an issue. Depending on how many queries per request you're servicing on average (and how little caching you're doing in the Web/application tier), that could really start to add up. I would want to test that thoroughly while simulating load that's appropriate for your usage patterns.
Is there a reason why you're setting up a web server in a different datacenter? If it's for redundancy, perhaps setting up replication to a local database in datacenter 2 would be more appropriate.