I was wondering the other day why one wouldn't place all htdocs-related files on a RAM disk.
It seems to me that this would greatly reduce the time it takes to look up and read files from disk, improving performance, especially for websites with a high concurrency rate.
Is there a package that implements this transparently for read operations, while making sure write operations are safely stored on disk? If not, is there a reason not to do it?
Yes there is: it's part of the operating system.
All modern operating systems have what is called a filesystem cache. This is the portion of RAM not currently needed by applications, which the kernel uses to store recently accessed files. It also holds recently written data until it is periodically flushed to disk.
When an application needs the RAM, the kernel transparently gives it up.
Because modern operating systems already cache stuff in memory as needed, and can usually manage this process more efficiently if left alone to get on with it. All you're doing with a RAM disk is creating your own manual version of this cache.
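For reference, the "manual version" of the cache would look something like this on Linux, using tmpfs. This is just a sketch; the mount point, size, and htdocs path are arbitrary examples:

```shell
# Create a 256 MB RAM disk at /mnt/ramdisk (requires root; path and size are examples)
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=256m tmpfs /mnt/ramdisk

# Copy the static files in; note everything here is lost on unmount or reboot,
# so writes would still need to be synced back to real disk separately
cp -r /var/www/htdocs/. /mnt/ramdisk/
```

Note that this hands a fixed chunk of RAM to the RAM disk whether it is needed or not, whereas the kernel's page cache grows and shrinks automatically as applications demand memory.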
Since most websites are dynamic, accelerating access to just the static data (htdocs) does not help that much. Most production sites do, however, use technologies like Memcached or MemcacheDB to cache their dynamic content.
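The usual way such caches are used is the cache-aside idiom: check the cache first, and only do the expensive dynamic work on a miss. A minimal sketch in Python, with a plain dict standing in for a real Memcached client (the function names and TTL here are illustrative, not from any particular framework):

```python
import time

# Stand-in for a Memcached client: a dict of key -> (value, expiry time).
# A real site would use a client library talking to a memcached server.
cache = {}
TTL_SECONDS = 60

def render_page(path):
    """Pretend to do the expensive dynamic work (DB queries, templating)."""
    return f"<html>rendered {path}</html>"

def get_page(path):
    """Cache-aside: try the cache first, fall back to rendering and store."""
    entry = cache.get(path)
    if entry is not None:
        value, expires = entry
        if time.monotonic() < expires:
            return value          # cache hit: skip the expensive work
        del cache[path]           # entry expired: evict it
    value = render_page(path)     # cache miss: do the work once
    cache[path] = (value, time.monotonic() + TTL_SECONDS)
    return value
```

The point is the same as with the filesystem cache: the expensive step (here rendering, there disk I/O) runs once, and repeat requests are served from RAM.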