Context
I'm trying to write a deployment script around the following idea (a rough sketch follows the list):
1. Start queuing new incoming requests while waiting for current requests to finish
2. Wait for all current requests to finish (I think this is called "draining")
3. Run the app-specific deployment script
4. Process all requests that were queued in step 1 and get haproxy back to normal; no incoming connections should be dropped by haproxy, and if a client times out, that is acceptable
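Concretely, the machinery I have in mind looks roughly like the sketch below (Python, stdlib only). It assumes a stats socket at /var/run/haproxy.sock configured with level admin, uses the drain/ready pair from candidate 1 under "Question" purely as a placeholder for whichever command turns out to be right, and the deploy script path is made up:

```python
# Sketch of the deployment flow. Assumptions: stats socket at
# /var/run/haproxy.sock with "level admin", the server is mybackend/myserver,
# and /usr/local/bin/deploy-myapp.sh is a made-up path.
import socket
import subprocess
import time

HAPROXY_SOCKET = "/var/run/haproxy.sock"

def haproxy_cmd(cmd: str) -> str:
    """Send a single runtime API command to the HAProxy stats socket."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(HAPROXY_SOCKET)
        sock.sendall((cmd + "\n").encode())
        return sock.makefile().read()

def current_sessions(backend: str, server: str) -> int:
    """Read scur (current sessions) for one server from 'show stat' CSV output."""
    for line in haproxy_cmd("show stat").splitlines():
        fields = line.split(",")
        if fields[:2] == [backend, server]:
            return int(fields[4])  # column 5 of the CSV is scur
    raise RuntimeError(f"{backend}/{server} not found in 'show stat' output")

# 1. Stop sending new requests to the server; exactly which command belongs
#    here (drain vs. maxconn 0) is what the question below is asking.
haproxy_cmd("set server mybackend/myserver state drain")

# 2. Wait until in-flight requests have finished ("draining").
while current_sessions("mybackend", "myserver") > 0:
    time.sleep(1)

# 3. Run the app-specific deployment step (made-up path).
subprocess.run(["/usr/local/bin/deploy-myapp.sh"], check=True)

# 4. Let new/queued requests through again.
haproxy_cmd("set server mybackend/myserver state ready")
```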
Question
Given this context, I can find a number of ways to implement this in the haproxy docs:
1. set server mybackend/myserver state drain, followed by set server mybackend/myserver state ready
2. set maxconn frontend myfrontend 0, followed by set maxconn frontend myfrontend 1000
3. set maxconn server mybackend/myserver 0, followed by set maxconn server mybackend/myserver 1000
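Whichever pair is correct, the deployment script would issue it the same way over the stats socket, reusing the haproxy_cmd helper from the sketch above:

```python
# Each candidate is a pair of runtime commands wrapped around the deploy
# (the wait-for-drain and the app deploy happen between the two calls of a pair).

# Candidate 1
haproxy_cmd("set server mybackend/myserver state drain")
haproxy_cmd("set server mybackend/myserver state ready")

# Candidate 2
haproxy_cmd("set maxconn frontend myfrontend 0")
haproxy_cmd("set maxconn frontend myfrontend 1000")

# Candidate 3
haproxy_cmd("set maxconn server mybackend/myserver 0")
haproxy_cmd("set maxconn server mybackend/myserver 1000")
```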
Which of these is the correct way to implement this?
More context
This is probably related to https://serverfault.com/a/450983/117598, but the following from the haproxy docs is making me want to re-confirm:
Sets the maximum per-process number of concurrent connections to <number>. It is equivalent to the command-line argument "-n". Proxies will stop accepting connections when this limit is reached. [..]
vs. another snippet that seems to conflict:
The "maxconn" parameter specifies the maximal number of concurrent connections that will be sent to this server. If the number of incoming concurrent requests goes higher than this value, they will be queued, waiting for a connection to be released. [..]
Expand your design considerations beyond the network proxy.
As mentioned in a blog post based on that answer you linked, the usual way of doing things involves data conversions with backward compatibility:
During the transition, consider if your application should pause processing. Incoming data could be stored in a queue or database, and worked once processing was resumed.
An advantage this has over holding network connections open is the luxury of taking your time; network timeouts may be very short.
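A minimal sketch of that idea, assuming nothing about your stack (all names are hypothetical, and sqlite3 only stands in for whatever queue or database you already have): the application flips into a maintenance mode before the deploy, persists incoming work instead of processing it, and works through the backlog afterwards.

```python
# Sketch of the "pause and queue" pattern. All names are hypothetical;
# sqlite3 is only a stand-in for a durable queue or database.
import json
import sqlite3

db = sqlite3.connect("pending_work.db")
db.execute("CREATE TABLE IF NOT EXISTS pending (id INTEGER PRIMARY KEY, payload TEXT)")

MAINTENANCE = True   # toggled by the deployment script (e.g. via a flag file)

def handle_request(payload: dict) -> str:
    if MAINTENANCE:
        # Don't hold the client's connection open; persist the work and ack it.
        db.execute("INSERT INTO pending (payload) VALUES (?)", (json.dumps(payload),))
        db.commit()
        return "accepted, will be processed after the deploy"
    return process(payload)

def process(payload: dict) -> str:
    return f"processed {payload}"

def drain_pending() -> None:
    """Run after the deploy: work through everything queued while paused."""
    rows = db.execute("SELECT id, payload FROM pending ORDER BY id").fetchall()
    for row_id, payload in rows:
        process(json.loads(payload))
        db.execute("DELETE FROM pending WHERE id = ?", (row_id,))
    db.commit()
```

The deployment script then only needs to toggle the maintenance flag around whatever it does with haproxy, and call drain_pending() once the new version is up.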