I have read multiple times (although I can't find it right now) that data centers go to great lengths to make sure that all servers have exactly the same time, including, but not limited to, worrying about leap seconds.
Why is it so important that servers have the same time? And what are the actual tolerances?
Security
In general, timestamps are used in various authentication protocols to help prevent replay attacks, where an attacker can reuse an authentication token he was able to steal (e.g. by sniffing the network).
Kerberos authentication does exactly this, for instance. In the version of Kerberos used in Windows, the default tolerance is 5 minutes.
This is also how the time-based one-time password systems used for two-factor authentication work, such as Google Authenticator, RSA SecurID, etc. In these cases the tolerance is usually around 30-60 seconds.
Without the time being in sync between client and server, it would not be possible to complete authentication. (This restriction is removed in the newest versions of MIT Kerberos, by having the requester and KDC determine the offset between their clocks during authentication, but these changes occurred after Windows Server 2012 R2 and it will be a while before you see it in a Windows version. But some implementations of 2FA will probably always need synchronized clocks.)
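To illustrate how directly those one-time codes depend on the clock, here is a minimal sketch of the TOTP algorithm (RFC 6238) that Google Authenticator-style apps implement; the shared secret and the one-step tolerance window below are illustrative values, not anyone's production configuration.

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = int(at // step)                     # both sides derive this from their own clock
    msg = struct.pack(">Q", counter)              # 8-byte big-endian time counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, candidate: str, window: int = 1, step: int = 30) -> bool:
    """Accept codes from the current step and +/- `window` steps (~30-60 s of tolerance)."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret, now + drift * step), candidate)
               for drift in range(-window, window + 1))

# A client whose clock is two minutes fast generates a code the server will reject:
secret = b"illustrative-shared-secret"
skewed_code = totp(secret, time.time() + 120)
print(verify(secret, skewed_code))   # False: four steps away, outside the +/- 1 step window
```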
Administration
Having clocks in sync makes it easier to work with disparate systems. For instance, correlating log entries from multiple servers is much easier if all systems have the same time. In these cases you can usually work with a tolerance of 1 second, which NTP will provide, but ideally you want the times to be as closely synchronized as you can afford. PTP, which provides much tighter tolerances, can be much more expensive to implement.
Mainly, it's so that you can correlate incidents from logs on different devices. Suppose you have a security incident where someone accesses your database through your web server -- you want the timestamps on your firewall, your load balancer, your web server and your database server to all match up so that you can find the logs on each device that relate to the incident. Ideally, you'd like everything to be within a few milliseconds. And it needs to be in sync with the actual external time, so that you can also correlate your logs with third-party logs if that should become necessary.
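As a toy illustration of that point (the device names and log lines here are invented), merging the logs from several machines and sorting by timestamp only reconstructs the real order of events if the clocks agree:

```python
from datetime import datetime

# Hypothetical log entries from three devices involved in the same request.
logs = {
    "firewall":  [("2023-05-01T12:00:00.120", "ACCEPT tcp 203.0.113.7 -> 10.0.0.5:443")],
    "webserver": [("2023-05-01T12:00:00.150", "GET /login from 203.0.113.7")],
    # The database server's clock is 30 seconds slow, so its entry sorts into the wrong place.
    "dbserver":  [("2023-05-01T11:59:30.400", "SELECT * FROM users WHERE name='admin'")],
}

merged = sorted(
    (datetime.fromisoformat(ts), host, line)
    for host, entries in logs.items()
    for ts, line in entries
)

for ts, host, line in merged:
    print(f"{ts.isoformat()}  {host:10s}  {line}")

# The database query appears to happen *before* the firewall accepted the connection,
# which makes the incident timeline impossible to reconstruct from timestamps alone.
```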
Not only is it important from an administration perspective, but having clocks in sync may be important for application-level correlation too. This depends on how the solution is designed and how the running applications obtain timestamps for the transactions they work with. I have seen transaction validation fail because an application was running on a server with too much offset (it was about 20 seconds in the future) compared to the others it was interacting with.
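To illustrate the kind of transaction validation I saw fail, here is a minimal sketch; the 10-second tolerance is an arbitrary example value, not taken from any particular product. The receiving application simply rejects anything stamped too far in the future relative to its own clock:

```python
import time

MAX_FUTURE_SKEW = 10.0   # seconds; arbitrary example tolerance

def validate_transaction(txn_timestamp, now=None):
    """Reject transactions stamped too far in the future by the sending system."""
    now = time.time() if now is None else now
    return txn_timestamp <= now + MAX_FUTURE_SKEW

# A transaction stamped by a server running ~20 seconds fast fails validation:
print(validate_transaction(time.time() + 20))   # False
print(validate_transaction(time.time()))        # True
```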
Also, if you are virtualizing on, for example, a VMware ESXi server, and the time of a VM is not in sync with that of the hypervisor, then an action such as vMotion may re-sync the VM clock with the hypervisor's, which in turn can lead to unpredictable results if the time difference is big enough.
I do not know what the actual tolerances are, because I think they depend a lot on the types of systems involved, but generally speaking it is achievable to keep the servers in a datacenter within less than one second of offset from one another.
Since you mentioned leap seconds it should be noted that they require particularly difficult handling.
They're usually added by injecting an extra second as 23:59:60, which is problematic if you validate timestamps with 0-59 as the valid range for the minutes and seconds fields. The alternative of repeating 23:59:59, making it two seconds long, isn't much better, since that will confuse anything that is timing-sensitive down to the second.
Google actually came up with a good solution a while back that does not seem to have been widely adopted yet: they apply a leap "smear", spreading the extra second over a period of time, with the whole process managed by their NTP servers. They published a blog post about it back in 2011; it makes for interesting reading and seems relevant to this question.
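As a rough sketch of the smear idea (the 24-hour window matches Google's later public description of their smear, as I understand it; their original 2011 post used a slightly different curve, and the linear ramp here is a simplification), the smeared clock runs a little slow so that by the end of the window it has absorbed the whole extra second without ever showing 23:59:60:

```python
from datetime import datetime, timezone

# Leap second inserted at the end of 2016-12-31 (a real leap second).
LEAP_END = datetime(2017, 1, 1, 0, 0, 0, tzinfo=timezone.utc).timestamp()
WINDOW = 24 * 3600                    # smear window in seconds (noon to noon UTC)
SMEAR_START = LEAP_END - WINDOW / 2   # centred on the leap second

def smeared_fraction(unix_time: float) -> float:
    """Fraction of the extra second already absorbed at `unix_time` (linear ramp)."""
    if unix_time <= SMEAR_START:
        return 0.0
    if unix_time >= SMEAR_START + WINDOW:
        return 1.0
    return (unix_time - SMEAR_START) / WINDOW

def smeared_time(unix_time: float) -> float:
    """Time a smearing NTP server reports: the naive (non-leap-aware) time minus the fraction."""
    return unix_time - smeared_fraction(unix_time)

# Halfway through the window the smeared clock is 0.5 s behind the naive clock;
# by the end it is a full second behind, which is exactly the leap second that was never stepped in.
for t in (SMEAR_START, SMEAR_START + WINDOW / 2, SMEAR_START + WINDOW):
    print(smeared_time(t) - t)   # 0.0, -0.5, -1.0
```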
Whenever timestamps are involved, de-synchronized devices can create logical incoherences. For example: A sends a query to B, and B's reply comes back with a timestamp earlier than that of the query, possibly causing A to ignore it.
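A tiny sketch of that failure mode (the check and the 2-second offset are invented for the example):

```python
import time

def accept_reply(query_sent_at: float, reply_timestamp: float) -> bool:
    """A naive client that discards replies stamped before its own query was sent."""
    return reply_timestamp >= query_sent_at

query_sent_at = time.time()
# B's clock is 2 seconds behind A's, so B stamps its reply "in A's past".
reply_timestamp = time.time() - 2.0
print(accept_reply(query_sent_at, reply_timestamp))   # False: a perfectly valid reply is dropped
```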
I agree with all the points above. I want to throw in a few more thoughts.
Some databases, such as Cassandra, rely heavily on timestamps: that is how they resolve concurrent writes (last write wins).
Clocks that disagree between nodes can completely mess up the database.
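To make that concrete, here is a toy last-write-wins store in the style of Cassandra's timestamp-based conflict resolution (the keys, values and clock offsets are invented for the example): a node whose clock runs ahead can make its write win over a write that actually happened later.

```python
import time

class LastWriteWinsStore:
    """Toy key-value store that resolves conflicts by cell timestamp, Cassandra-style."""

    def __init__(self):
        self._cells = {}   # key -> (timestamp, value)

    def write(self, key, value, timestamp):
        current = self._cells.get(key)
        if current is None or timestamp > current[0]:
            self._cells[key] = (timestamp, value)   # the newer timestamp wins

    def read(self, key):
        return self._cells[key][1]

store = LastWriteWinsStore()
now = time.time()

# Node A's clock is 20 seconds fast; it writes first (in real time).
store.write("user:42:email", "old@example.com", timestamp=now + 20)

# Node B writes 5 seconds later in real time, with a correct clock...
store.write("user:42:email", "new@example.com", timestamp=now + 5)

# ...but the earlier write still wins, and the newer update is silently lost.
print(store.read("user:42:email"))   # old@example.com
```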