This is a software design question.
I used to work from the following rule of thumb for speed:
cache memory > memory > disk > network
Each step is roughly 5-10 times slower than the one before it (e.g. cache memory is 10 times faster than main memory).
Now it seems that gigabit Ethernet has lower latency than local disk. So maybe reads from a large remote in-memory DB are faster than local disk reads. This feels like heresy to an old-timer like me. (I just spent some time building a local on-disk cache to avoid network round trips - hence my question.)
Does anybody have any experience / numbers / advice in this area?
And yes, I know the only real way to find out is to build and measure, but I was wondering about the general rule.
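In that spirit, here's a minimal sketch of the kind of measurement I mean (Python; the host/port in the network test is a placeholder for whatever in-memory service you'd actually hit, and the disk timing will mostly reflect the OS page cache unless you defeat it):

```python
import os
import socket
import tempfile
import time


def time_disk_read(size=4096, trials=100):
    """Average time to open and read a small local file.

    Caveat: without dropping the OS page cache this mostly measures
    cached reads, so treat it as a lower bound on disk latency.
    """
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(size))
        path = f.name
    start = time.perf_counter()
    for _ in range(trials):
        with open(path, "rb") as fh:
            fh.read()
    elapsed = time.perf_counter() - start
    os.unlink(path)
    return elapsed / trials


def time_network_round_trip(host, port, trials=100):
    """Average request/response time over one open TCP connection.

    Assumes an echo-style service (or an in-memory DB you can 'ping')
    is listening at host:port -- both are placeholders here.
    """
    with socket.create_connection((host, port)) as sock:
        start = time.perf_counter()
        for _ in range(trials):
            sock.sendall(b"ping\n")
            sock.recv(64)
        elapsed = time.perf_counter() - start
    return elapsed / trials


if __name__ == "__main__":
    print(f"avg local disk read:    {time_disk_read() * 1e6:.1f} us")
    # Replace with the address of a real in-memory DB / echo server on your LAN.
    print(f"avg network round trip: {time_network_round_trip('10.0.0.2', 9999) * 1e6:.1f} us")
```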
edit:
This is the interesting data from the top answer:
Round trip within same datacenter 500,000 ns
Disk seek 10,000,000 ns
This is a shock for me; my mental model was that a network round trip is inherently slow. And it's not - it's roughly 20x faster than a disk 'round trip'.
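The back-of-the-envelope arithmetic on those two figures is just a unit conversion and a ratio:

```python
# The two figures quoted above, in nanoseconds.
network_rtt_ns = 500_000        # round trip within same datacenter
disk_seek_ns = 10_000_000       # one disk seek

print(f"network round trip: {network_rtt_ns / 1e6:.1f} ms")         # 0.5 ms
print(f"disk seek:          {disk_seek_ns / 1e6:.1f} ms")           # 10.0 ms
print(f"disk seek is {disk_seek_ns / network_rtt_ns:.0f}x slower")  # 20x
```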
Jeff Atwood posted a very good blog post on the topic: http://blog.codinghorror.com/the-infinite-space-between-words/