How do you free up PostgreSQL connections that aren't properly being closed by client applications?
I have a data-mining app that spawns multiple processes, all connecting to a local PostgreSQL 9.1 database to retrieve data. It runs fine for a few hours, but then dies with the error:
FATAL: remaining connection slots are reserved for non-replication superuser connections
Researching this suggests it is most likely caused by the app not properly closing its connections. However, even after the app is killed, these connections are never freed. Isn't there some sort of timeout after which PostgreSQL will automatically close a connection?
I also tried increasing Postgres's max_connections from 100 to 200, but restarting gave me the error:
2014-02-23 10:51:15 EST FATAL: could not create shared memory segment: Invalid argument
2014-02-23 10:51:15 EST DETAIL: Failed system call was shmget(key=5432001, size=36954112, 03600).
2014-02-23 10:51:15 EST HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter. You can either reduce the request size or reconfigure the kernel with larger SHMMAX. To reduce the request size (currently 36954112 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.
The PostgreSQL documentation contains more information about shared memory configuration.
My system is Ubuntu 12.04 and has 8GB of memory, and all other PG settings are defaults, so I'm not sure why it thinks the system doesn't have enough memory.
I then tried using pgbouncer to pool and reuse connections. This seemed to work a little better, but even this eventually ran out of connections, giving me the error:
ERROR: no more connections allowed
How do I further diagnose and fix this problem?
You can increase the maximum number of connections by raising your kernel's shared memory limits, but if the underlying problem is that your connections are never being closed, you should really fix that instead. If the software is out of your control and is buggy in that it fails to close its connections, you can kill the leaked sessions periodically with a cron job like:
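The original query is missing here; a sketch of what such a cron job might look like on PostgreSQL 9.1, where the relevant `pg_stat_activity` columns are `procpid` and `current_query` (the database name `mydb`, the `postgres` user, and the 5-minute threshold are placeholders to adapt):

```
# crontab entry: every 5 minutes, terminate backends idle for more than 5 minutes
*/5 * * * * psql -U postgres -d mydb -c "SELECT pg_terminate_backend(procpid) FROM pg_stat_activity WHERE current_query = '<IDLE>' AND now() - query_start > interval '5 minutes';"
```

`pg_terminate_backend()` sends SIGTERM to the backend process, so the connection slot is released even though the client never disconnected cleanly.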
That's what I do to kill leaked connections from a similarly buggy piece of software.
Alternatively, you may be able to run your buggy software through a connection pooler that has similar functionality for killing idle connections, such as pgpool.
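Since you already tried pgbouncer, note that its limits and idle handling are configurable; `ERROR: no more connections allowed` is what pgbouncer reports when `max_client_conn` is exhausted. A sketch of the relevant settings (all values below are illustrative, not recommendations, and `mydb` is a placeholder):

```
; pgbouncer.ini -- illustrative values only
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
pool_mode = transaction        ; return the server connection to the pool at transaction end
max_client_conn = 500          ; how many client connections pgbouncer itself will accept
default_pool_size = 20         ; server connections kept per database/user pair
server_idle_timeout = 60       ; close server connections idle longer than this (seconds)
```

With `pool_mode = transaction`, many leaky client connections can share a small number of real server connections, which sidesteps the server-side `max_connections` limit entirely.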
Note: newer versions of PostgreSQL (9.2 and later) have slightly different column names in `pg_stat_activity`: `procpid` became `pid`, and `current_query` was split into `state` and `query`.
For newer versions of PostgreSQL:
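The query for newer versions is missing here; a sketch of the 9.2+ equivalent, using `pid`, `state`, and `state_change` (the `myuser` filter and 5-minute threshold are placeholders):

```
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND usename = 'myuser'
  AND now() - state_change > interval '5 minutes';
```

Filtering on `usename` (the connecting role) keeps the cron job from killing sessions belonging to other applications.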
The above will help you terminate your idle connections. I had the same issue, but it turned out to be a problem with how my Flask app and SQLAlchemy were connecting to the database.
*usename is not a typo
See the PostgreSQL documentation on managing kernel resources. This should help you increase the kernel's shared memory limits.
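On Ubuntu 12.04 the limits can be raised with sysctl; the values below are examples for a machine with 8 GB of RAM (assumptions to tune, not recommendations):

```
# SHMMAX: max size of a single shared memory segment, in bytes (here 4 GB)
sudo sysctl -w kernel.shmmax=4294967296
# SHMALL: total shared memory allowed, in pages (here 4 GB / 4 kB pages)
sudo sysctl -w kernel.shmall=1048576

# Persist the settings across reboots
echo "kernel.shmmax=4294967296" | sudo tee -a /etc/sysctl.conf
echo "kernel.shmall=1048576"    | sudo tee -a /etc/sysctl.conf
```

Your failed restart asked for only ~37 MB (the `shmget` request of 36954112 bytes), so the default SHMMAX of 32 MB on that kernel is the likely culprit, not an actual lack of RAM.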
Also look at the tcp_keepalives_* options in postgresql.conf.
With keepalives enabled, the server detects clients that have vanished (for example, a killed process or a dropped network link) and closes their connections automatically, per the TCP keepalive mechanism. Note that this will not terminate connections whose clients are still alive but merely idle.
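A sketch of those settings in postgresql.conf (values are examples, not recommendations; zero means "use the OS default"):

```
# postgresql.conf -- example keepalive tuning
tcp_keepalives_idle = 60       # seconds of inactivity before the first keepalive probe
tcp_keepalives_interval = 10   # seconds between unanswered probes
tcp_keepalives_count = 5       # lost probes before the connection is declared dead
```

With these example values, a vanished client's connection slot would be reclaimed roughly 60 + 10 x 5 = 110 seconds after it goes silent.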