I have a WSGI Python app running via Gunicorn:
CONFIG = {
    'bind': "unix:{}".format(os.path.join(RUN_DIR, "brain.sock")),
    'preload_app': False,
    # supervisord requires that the child process runs in the foreground
    'daemon': False,
    ...
}
It receives HTTP requests from Nginx via a Unix socket file:
server {
    ...
    location / {
        proxy_pass http://unix:$root/run/brain.sock:;
        ...
    }
}
Gunicorn is run via Supervisord:
[program:myapp]
command = venv/bin/gunicorn -c gunicorn.conf.py myapp.wsgi:application
...
I am looking for a way to deploy my app without downtime or a long wait: each worker can take up to 30 seconds to fill its cache.
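To detect when a freshly started instance is warm before switching traffic to it, I could poll it over its socket. Here is a minimal sketch using only the standard library; it assumes a hypothetical /health URL that the app only answers with 200 once its cache is filled (not something my app currently exposes):

import socket
import time

def wait_until_warm(socket_path, timeout=60.0):
    """Poll the app over its Unix socket until it answers HTTP 200.

    Assumes a hypothetical /health endpoint that only returns 200 once the
    worker's cache is filled; with several workers, a single 200 only proves
    that one of them is warm.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.settimeout(2.0)
                s.connect(socket_path)
                s.sendall(b"GET /health HTTP/1.0\r\nHost: localhost\r\n\r\n")
                status_line = s.recv(1024).split(b"\r\n", 1)[0]
                if b" 200" in status_line:
                    return True
        except OSError:
            pass  # socket missing or connection refused: instance not ready yet
        time.sleep(1.0)
    return False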
My idea is to deploy like this (a rough sketch of the steps follows the list):
Start a second Gunicorn instance with the new code, listening on another socket file.
Wait until the new app has started and all of its caches are filled.
Rename the new socket file so it sits at the path Nginx proxies to. Nginx would still be sending requests to the old socket.
Shut down the old Gunicorn running the old app version. Nginx will see that the old socket is closed and will open new connections to the socket at the same location.
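A rough sketch of those steps as a Python script, assuming the new instance is defined as a separate supervisord program (myapp_new and brain-new.sock are hypothetical names, not part of my current setup) and that the switch can rely on os.rename() being atomic when both paths are on the same filesystem:

import os
import subprocess

RUN_DIR = "/var/run/myapp"                          # assumption: same RUN_DIR as in gunicorn.conf.py
LIVE_SOCK = os.path.join(RUN_DIR, "brain.sock")     # the path Nginx proxies to
NEW_SOCK = os.path.join(RUN_DIR, "brain-new.sock")  # socket of the new instance (hypothetical name)

# 1. Start a second Gunicorn with the new code (hypothetical supervisord program name).
subprocess.run(["supervisorctl", "start", "myapp_new"], check=True)

# 2. Wait until the new instance responds, i.e. its caches are filled
#    (wait_until_warm is the probe sketched above).
if not wait_until_warm(NEW_SOCK):
    raise SystemExit("new instance never became healthy, aborting the switch")

# 3. Atomically move the new socket onto the path Nginx uses; rename() replaces
#    the old path in one step, so Nginx never sees a missing socket file.
os.rename(NEW_SOCK, LIVE_SOCK)

# 4. Stop the old Gunicorn; supervisord sends SIGTERM, which Gunicorn treats as
#    a graceful shutdown, so requests it has already accepted can still finish.
subprocess.run(["supervisorctl", "stop", "myapp"], check=True)

The same socket swap could presumably be done by hand with mv; the script only automates the ordering of the steps.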
Will this work?
Am I reinventing the wheel?