I have a pretty simple question...
Why is the MySQL query cache disabled by default?
In most packaged versions of MySQL I've come across, the default values are:

    query_cache_type = 1
    query_cache_size = 0

This essentially disables the query cache by default, since no memory is allocated to it.
There are many other buffers and limits that are set to sensible default values. Since the query cache is completely transparent to applications, why have it disabled?
I find myself enabling the query cache and the slow query log every time I deploy a new MySQL box, so I'm just curious about the logic behind the defaults.
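For context, this is roughly what I drop into my.cnf on a new box (the sizes, path, and threshold are illustrative, not recommendations):

    [mysqld]
    # query cache
    query_cache_type = 1
    query_cache_size = 64M
    query_cache_limit = 1M      # don't cache result sets larger than this

    # slow query log
    slow_query_log      = 1
    slow_query_log_file = /var/log/mysql/mysql-slow.log
    long_query_time     = 2     # seconds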
It's not always disabled by default (it depends on the version and distributor), but there is a good reason for it: enabling the query cache is not always better for performance. Every write to a table invalidates all cached results that reference that table, and at larger cache sizes pruning (pushing less-used entries out of memory to make way for new ones) takes longer. When invalidation and pruning take longer than the queries themselves would take to execute, you've got serious problems.
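If you do enable it, the Qcache_% status counters show whether pruning is becoming a problem; a quick check (the counter names are standard, but what counts as "too much" churn is up to you):

    -- How big the cache is and whether it's enabled
    SHOW GLOBAL VARIABLES LIKE 'query_cache%';

    -- Runtime behaviour of the cache
    SHOW GLOBAL STATUS LIKE 'Qcache%';
    -- Qcache_hits          : result sets served straight from the cache
    -- Qcache_inserts       : result sets added to the cache
    -- Qcache_lowmem_prunes : entries evicted to make room; a steadily
    --                        rising value means the cache is churning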
It's not a silver bullet for performance problems either. From here:
The most obvious (caveat-heavy) reason for disabling the query cache by default is that the distributor is deflecting potential flak: it's a setting that can cause performance problems depending on the application.
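If only a handful of queries in your application actually benefit, one way to limit the downside is DEMAND mode, where nothing is cached unless the statement opts in. A sketch (the my.cnf size is illustrative, and the table names below are made up):

    [mysqld]
    query_cache_type = 2      # DEMAND: cache only statements that ask for it
    query_cache_size = 32M

    -- opt an individual statement in:
    SELECT SQL_CACHE id, name FROM customers WHERE id = 42;
    -- or, if running with query_cache_type = 1, keep one out:
    SELECT SQL_NO_CACHE COUNT(*) FROM orders;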
Here is a good primer on the MySQL Query Cache.