I'm dealing with a database system that can have thousands of tables. The problem is, when I try to back it up using pg_dump, I sometimes get the following error:
pg_dump: WARNING: out of shared memory
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
pg_dump: The command was: LOCK TABLE public.foo IN ACCESS SHARE MODE
I could of course increase the max_locks_per_transaction setting. But the problem is that the number of tables can vary, and I don't want to have to revisit this setting every time there is a failure (assuming we notice the failure at all, given that this runs from a cron job!).
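For reference, my reading of the docs is that the shared lock table has room for roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) locks, and pg_dump takes an ACCESS SHARE lock on every table it dumps, so the two numbers can be compared directly. A quick check (assuming only the public schema gets dumped):

-- Compare the number of tables pg_dump would lock against the lock table capacity.
SELECT
    (SELECT count(*) FROM pg_tables WHERE schemaname = 'public')  AS tables_to_lock,
    current_setting('max_locks_per_transaction')::int
      * (current_setting('max_connections')::int
         + current_setting('max_prepared_transactions')::int)     AS lock_table_capacity;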
What would be the best way to tackle this problem? Currently I'm working on a Perl script that will list all the tables and then call pg_dump in "chunks" to keep a limit on the number of table locks, but I bet I could do better.
If you want a consistent backup, you must increase max_locks_per_transaction.
Doing it in chunks from a script will make the backup inconsistent if you have concurrent access, which is probably not what you want.
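If you go that route: on 9.4 or later you can change the setting with ALTER SYSTEM instead of editing postgresql.conf by hand. The value below is only an illustration (size it to your table count), and a restart is still required because max_locks_per_transaction can only be changed at server start:

ALTER SYSTEM SET max_locks_per_transaction = 1024;
-- takes effect only after the server is restarted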
Your other option is to use PITR and do a filesystem-level backup. That will not take out any locks at all in the database.
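A minimal sketch of that approach, with placeholder paths: enable continuous WAL archiving in postgresql.conf, then take base backups with pg_basebackup, which copies the cluster at file level without taking table locks:

# postgresql.conf
wal_level = replica          # 'archive' or 'hot_standby' on releases before 9.6
archive_mode = on
archive_command = 'cp %p /backups/wal/%f'

# base backup; -X stream also copies the WAL needed to make the copy consistent
pg_basebackup -D /backups/base/$(date +%F) -Ft -z -X stream -P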