We use pg_dump nightly to take a snapshot of our database. For a long time we did this with a simple command:
pg_dump -Fc database_name
This takes about an hour and produces a file of 30+ GB.
How can we speed things up?
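One common way to cut the wall-clock time is to switch from the custom format to the directory format, which supports parallel worker processes. A sketch, assuming PostgreSQL 9.3 or newer and enough local disk for the archive (the output path is a placeholder):

```shell
# Directory-format archive (-Fd) allows parallel dump workers (-j);
# -Z 1 uses the fastest compression level, trading file size for speed.
pg_dump -Fd -j 4 -Z 1 -f /backups/database_name.dump database_name
```

The restore side can be parallelized the same way with `pg_restore -j 4 -d database_name /backups/database_name.dump`.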
We're facing a situation where we need to consolidate a few small Postgres instances into a bigger one.
I can't figure out how to replicate the "old" db's new data to the "new" db when the replacement happens.
I'll try to simplify it:
old db instance name is X
new db instance is Y
X has 10 GB of data, and it takes ~15 minutes to dump & restore
in the meantime, X receives more data (1-2 MB)
**How do I make X replicate data to Y so this data won't get lost?**
I have this small script for dump & restore
## remote db to remote db
PGPASSWORD=$source_pass pg_dump -h $source_host -d $source_db_name -U $source_user -p $source_port > test.sql
psql postgres://$dest_user:$dest_pass@$dest_host:$dest_port/postgres -c "CREATE DATABASE ${source_db_name}" \
-c "CREATE USER ${source_user} WITH PASSWORD '${source_pass}'" \
-c "ALTER USER ${source_user} WITH CREATEDB" \
|| echo "database already exists"
psql postgres://$dest_user:$dest_pass@$dest_host:$dest_port/$source_db_name < test.sql \
&& echo "Loaded data successfully!" || echo "Couldn't load data"
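One way to avoid losing the writes that land during the dump/restore window is logical replication instead of a one-shot data dump. A sketch, assuming PostgreSQL 10+, `wal_level=logical` on X, network access from Y to X, and that Y starts with the schema only (e.g. restored from `pg_dump --schema-only`); the publication and subscription names are placeholders:

```shell
# On the old instance X: publish all tables.
psql "postgres://$source_user:$source_pass@$source_host:$source_port/$source_db_name" \
  -c "CREATE PUBLICATION migrate_pub FOR ALL TABLES"

# On the new instance Y: subscribe. The initial table sync copies the
# existing 10 GB, then changes stream continuously, so nothing is lost.
psql "postgres://$dest_user:$dest_pass@$dest_host:$dest_port/$source_db_name" \
  -c "CREATE SUBSCRIPTION migrate_sub
        CONNECTION 'host=$source_host port=$source_port dbname=$source_db_name user=$source_user password=$source_pass'
        PUBLICATION migrate_pub"
```

Once Y has caught up, point the application at Y and drop the subscription; the cutover gap shrinks from ~15 minutes to seconds.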
I'm trying to create a cronjob that creates db backups every night.
My crontab has the job:
* * * * * /home/user/scripts/backup.sh
(have it set to go off every min for testing)
In backup.sh, I have:
docker exec -it dbContainer pg_dump -U username -d dbName > /home/user/backups/testingBackup.sql
The file is always empty:
-rw-rw-r-- 1 user user 0 Jul 14 08:39 testingBackup.sql
However, if I run the script myself by typing /home/user/scripts/backup.sh, the file is not empty:
-rw-rw-r-- 1 user user 30813 Jul 14 08:45 testingBackup.sql
I feel like something is off with permissions, but everything is done as "user". I haven't done anything as root, such as sudo crontab, sudo /home/user/backups/testingBackup.sql, etc.
I am confused as to why in one scenario, the resulting file is empty, and in the other, it is not.
Thanks for any help
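For what it's worth, a likely culprit here is not permissions but the `-it` flag: cron jobs run without a TTY, so `docker exec -t` fails before pg_dump ever writes anything. A sketch of backup.sh with that flag dropped (container, user, and paths taken from the question):

```shell
#!/bin/sh
# Cron provides no TTY, so -it must be removed. Also use an absolute
# path to the docker binary, since cron's PATH is minimal.
/usr/bin/docker exec dbContainer pg_dump -U username -d dbName \
  > /home/user/backups/testingBackup.sql
```

Running `docker exec` without `-i` is also fine here, since pg_dump reads nothing from stdin.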
We just upgraded our PostgreSQL servers to v13. We heavily use pg_dump on Ubuntu 18 systems to transfer data between databases. After upgrading the servers, pg_dump would complain about a version mismatch.
Easy enough, I installed postgresql-client-13 from the Apt Postgres repo (http://apt.postgresql.org/pub/repos/ap). pg_dump still complained about a version mismatch, so I uninstalled postgresql-client-11. After that, pg_dump complained that "PostgreSQL version 11 is not installed".
No amount or ordering of reinstalling and removing PostgreSQL clients clears up this error. Can anyone point me in the right direction to solving this issue?
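For context, on Debian/Ubuntu `/usr/bin/pg_dump` is not the real binary: it is the pg_wrapper script from postgresql-client-common, which picks a version based on the configured clusters, and the "version 11 is not installed" error comes from that wrapper still pointing at an old cluster. A sketch of two ways around it (the `13/main` cluster name is an assumption):

```shell
# Bypass the wrapper entirely by calling the versioned binary directly:
/usr/lib/postgresql/13/bin/pg_dump --version

# Or tell the wrapper explicitly which cluster to use:
pg_dump --cluster 13/main -Fc database_name
```

If the wrapper keeps choosing version 11, `pg_lsclusters` shows which clusters it knows about and may reveal a stale 11 entry.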
I am trying to run this command
sudo pg_dump -U bm_clients -Z 9 -v baydb | aws s3 cp - s3://thebay.com/bay.dump.gz
The output is as follows:
pg_dump: reading extensions
pg_dump: identifying extension members
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: [archiver (db)] query failed: ERROR: permission denied for relation provider_seq
pg_dump: [archiver (db)] query was: LOCK TABLE londiste.provider_seq IN ACCESS SHARE MODE
When connecting to the database and checking the permissions, I find it as the chief user.
I am also not able to find londiste in the \dt output.
I also have run
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO bm_clients;
GRANT
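The failing LOCK is on a table in the `londiste` schema (created by the Skytools/Londiste replication tooling), not in `public`, which is why the GRANT above doesn't help and why `\dt`, which only lists tables on the search_path, doesn't show it. A sketch of two possible fixes; which is appropriate depends on whether that replication metadata needs to be in the backup:

```shell
# Option 1: exclude the londiste schema from the dump entirely (-N):
sudo pg_dump -U bm_clients -Z 9 -N londiste -v baydb | aws s3 cp - s3://thebay.com/bay.dump.gz

# Option 2: as a superuser, grant bm_clients read access to that schema:
# GRANT USAGE ON SCHEMA londiste TO bm_clients;
# GRANT SELECT ON ALL TABLES IN SCHEMA londiste TO bm_clients;
```

Excluding the schema is usually safe for a plain backup, since Londiste's queue state is specific to the live replication setup.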