I'm a bit of an HAProxy newbie. I've got three Docker containers, one running HAProxy with the following config:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    stats socket /var/run/haproxy.sock mode 600 level admin
    # daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    balance source

listen stats :80
    stats enable
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth test:test1234
    balance roundrobin
    #option forwardfor
    default_backend myserv-legacy

backend myserv-legacy
    cookie SERVERID insert indirect preserve
    server myserv-A ${MYSERVA_PORT_8080_TCP_ADDR}:8080 cookie A check
    server myserv-B ${MYSERVB_PORT_8080_TCP_ADDR}:8080 cookie B check
The other two containers are running a webapp on Tomcat.
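For context, the ${...} variables in the backend are the address variables Docker injects via legacy container links, so they are resolved once, when the HAProxy container starts. A sketch of how the containers might have been launched (the image names and port flags here are my assumptions, not from the post):

$ sudo docker run -d --name myservA my-tomcat-image
$ sudo docker run -d --name myservB my-tomcat-image
$ sudo docker run -d --name haprox --link myservA:myservA --link myservB:myservB -p 80:80 my-haproxy-image

With those links in place, the haprox container sees environment variables such as MYSERVA_PORT_8080_TCP_ADDR holding myservA's IP at the moment haprox started.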
I brought the two servers down with sudo docker stop myservA myservB, and although I've since started them again and can connect to them through their exposed ports, they both show as DOWN in HAProxy with an L4TOUT in 2000ms.
Any clue why they wouldn't be showing up as available?
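A way to confirm what HAProxy itself sees is the admin socket declared in the config above; a minimal sketch, assuming socat is available inside the haprox container:

$ echo "show stat" | sudo socat stdio /var/run/haproxy.sock | cut -d, -f1,2,18

The three columns are the proxy name, server name, and status, so myserv-A and myserv-B should show up as DOWN while the problem is occurring.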
Edit:
If I run

$ sudo docker stop haprox && sudo docker start haprox

(haprox is the name of my HAProxy container), then my servers are available again...
I was just about to ask whether the value of the address variables was changing, heh.
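You can check that directly: on the default bridge network, Docker may hand a container a different IP when it is stopped and started, so the linked address baked into HAProxy's config goes stale. For example, run this before and after the stop/start and compare:

$ sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' myservA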
The way I've been seeing this handled in dynamic containerized environments is to use a service-discovery tool like etcd or Consul to help the load balancer find the backends; it looks like Consul has a tool specifically for the HAProxy use case.
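A minimal sketch of that approach using consul-template (the service name myserv, the template path, and the reload command are my assumptions): you keep the backend section in a template that consul-template re-renders whenever the registered instances change.

haproxy.ctmpl:

backend myserv-legacy
    cookie SERVERID insert indirect preserve{{ range service "myserv" }}
    server {{ .ID }} {{ .Address }}:{{ .Port }} cookie {{ .ID }} check{{ end }}

$ consul-template -template "haproxy.ctmpl:/etc/haproxy/haproxy.cfg:service haproxy reload"

Each time a Tomcat container registers with or drops out of Consul, the server lines are regenerated with the current addresses, so HAProxy no longer depends on values frozen at container start.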