We are using Automatic Memory Management on Oracle 11.1.0.7. Last Friday the Shared Pool unexpectedly increased from ~600MB to ~850MB. The memory came from the buffer cache, which went from ~400MB to ~150MB. Needless to say, the system is doing significantly more disk reads than before. So far I haven't been able to prove that this was an incorrect change, but it sure doesn't seem right to me. I'd like to know what caused it so I can hopefully reverse it.
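To see when and why AMM shifted the memory, it may help to look at the resize operations the instance has recorded. A minimal sketch, assuming v$memory_resize_ops is available on 11.1 and that its columns are named as shown here:

--Recent memory resize operations recorded under AMM (view/columns assumed).
SELECT start_time, component, oper_type, oper_mode,
       ROUND(initial_size/1024/1024) initial_mb,
       ROUND(final_size/1024/1024)   final_mb,
       status
FROM   v$memory_resize_ops
ORDER  BY start_time DESC;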
I ran an AWR comparison of the periods before and after the change and noticed that the Shared Pool growth was almost entirely in the sql area. I looked at the SQL stored there using queries such as the following, but didn't find anything that would have changed recently:
--Replace anything in quotes and numbers with % so that statements not using
--bind variables group together.
SELECT COUNT(*),
       TRUNC(SUM(sharable_mem)/1024/1024) sharable_mb,
       REGEXP_REPLACE(REGEXP_REPLACE(
           sql_text, '.\''.+\''', '''%'''), '[0-9]+', '%') || ''';' sql
FROM   v$sql
GROUP  BY REGEXP_REPLACE(REGEXP_REPLACE(
           sql_text, '.\''.+\''', '''%'''), '[0-9]+', '%')
HAVING SUM(sharable_mem) > 2*1024*1024
ORDER  BY TRUNC(SUM(sharable_mem)/1024/1024) DESC;
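For comparison with the AWR figures, the current breakdown of the shared pool can also be pulled straight from v$sgastat. This sketch just lists the largest areas (the 10MB cutoff is arbitrary):

--Largest shared pool areas right now (e.g. "sql area", "free memory").
SELECT name, ROUND(bytes/1024/1024) mb
FROM   v$sgastat
WHERE  pool = 'shared pool'
AND    bytes > 10*1024*1024
ORDER  BY bytes DESC;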
I also tried flushing the shared pool. The system doesn't give any of the shared pool back to the buffer cache, perhaps because the shared pool fills up again too quickly after the flush.
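Roughly, this is the sequence I mean: flush, then check the component sizes again (the component names and the use of v$memory_dynamic_components are assumptions on my part; v$sga_dynamic_components should show similar information):

ALTER SYSTEM FLUSH SHARED_POOL;

--Check whether AMM actually shrank the shared pool or grew the buffer cache.
SELECT component,
       ROUND(current_size/1024/1024) current_mb,
       ROUND(min_size/1024/1024)     min_mb,
       ROUND(max_size/1024/1024)     max_mb
FROM   v$memory_dynamic_components
WHERE  component IN ('shared pool', 'DEFAULT buffer cache');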
I also tried setting a minimum size for the buffer cache to force memory back out of the shared pool. On my test system this method lets me increase the buffer cache by about 100MB, but on my live system it won't grow even 3MB before giving me an error indicating there isn't enough memory to perform the operation.
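By "setting a minimum size" I mean that under AMM a non-zero db_cache_size acts as a floor rather than a fixed size. Something like the following, where the 400M value is just illustrative:

--Set a floor under the buffer cache; AMM can still grow it above this.
ALTER SYSTEM SET db_cache_size = 400M SCOPE=BOTH;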
The alert log shows "Sweep Incident[77273]: completed" around the time the change happened, but I couldn't find much information about it on Metalink. It also shows that DIA0 was restarted around that time, though I don't see how that could be related.
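In case it matters, the incident and trace files should sit under the ADR directories reported by v$diag_info; the NAME values in the filter below are what I believe they are called on 11.1:

--Locate the ADR directories holding the incident and alert/trace files.
SELECT name, value
FROM   v$diag_info
WHERE  name IN ('ADR Home', 'Diag Incident', 'Diag Trace');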
Here are several potentially unrelated notes. The AWR shows almost twice as many pin requests for every namespace in the Library Cache as before the problem started. The database was upgraded from 10.2.0.4 about a month ago. The number of sessions increased from ~200 to ~250 about a week before the problem occurred.
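For the pin numbers I was comparing the Library Cache Activity sections of the two AWR reports; the same counters (cumulative since instance startup) can be pulled from v$librarycache, e.g.:

--Cumulative library cache activity by namespace since instance startup.
SELECT namespace, gets, pins, reloads, invalidations
FROM   v$librarycache
ORDER  BY pins DESC;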
I'm leaving an instance restart as a last resort since I don't know if it will help.
Any suggestions would be appreciated.