I regularly save and later restore a smallish PostgreSQL database that is used for testing. Its data is updated as a result of the tests, then a new dump must be made, and the dumps are used to recreate the database in a well-defined state.
I noted that the dump (using pg_dump -Fc database) only takes a few seconds, but the restore (pg_restore -d database) takes about a minute. This seems weird. I would have expected both to take about the same time (assuming both tasks are I/O-bound).
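For concreteness, the cycle looks roughly like this ("testdb" and the dump file name are placeholders for the actual names, and the exact reset step may differ):

    pg_dump -Fc testdb > testdb.dump     # takes a few seconds
    dropdb testdb && createdb testdb     # reset to an empty database
    pg_restore -d testdb testdb.dump     # takes about a minute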
Is there some problem with the restore? Could I maybe make it faster? Or is it normal for restore to take much longer than dump? (And if yes, then why?)
The dump file is usually about 3-4 MiB; the DBMS is PostgreSQL 8.4, running on a Pentium 4 at 3 GHz with 1 GiB RAM under Ubuntu Linux.
The content of an index is not part of the backup, only the definition of the index, and that takes just a few bytes. When the index is created during the restore and all the data gets indexed, it becomes much bigger. That takes time; how much depends on your situation.
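To get a feeling for how much index data has to be rebuilt, you can compare a table's heap size with its total size including indexes; a quick check (the table name "mytable" is a placeholder, and pg_total_relation_size also counts TOAST data):

    # heap size vs. everything else (indexes + TOAST) for one table
    psql -d database -c "SELECT pg_size_pretty(pg_relation_size('mytable')) AS table_size, pg_size_pretty(pg_total_relation_size('mytable') - pg_relation_size('mytable')) AS index_and_toast_size;"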
pg_restore does have an option for concurrent restore (as of version 8.4): use

    --jobs=number-of-jobs
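For example (4 is only an example value; match it to the number of CPU cores/disks, and note that parallel restore requires a custom-format dump, which pg_dump -Fc already produces):

    pg_restore -d database --jobs=4 database.dump

On a single-core machine like a Pentium 4 the gain may be small, since the extra jobs mostly help when index builds can run on separate cores.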
For a restore, the database has to do a lot of additional work that the dump does not. Some things come to mind immediately: every index is rebuilt from the freshly loaded table data, constraints (primary keys, unique, foreign keys) are re-checked, and all the inserted rows go through the write-ahead log, whereas the dump only has to read the tables sequentially. Not sure if this amounts to that time difference, though.
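If this is a throwaway test database, a few postgresql.conf settings commonly make restores noticeably faster; a sketch with example values (fsync = off risks data loss after a crash, so only use it on a database you can rebuild from the dump anyway):

    fsync = off                    # skip forced flushes to disk
    synchronous_commit = off       # don't wait for the WAL flush at commit
    maintenance_work_mem = 128MB   # more memory for index builds
    checkpoint_segments = 16       # fewer checkpoints during the bulk load (pre-9.5 setting)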