I have one server. On this server I have one HAProxy container and two node containers. I am trying to split traffic up as follows:
https://mydomain -> 1st node container (x.x.x.x:8080)
https://mydomain:81 -> 2nd node container (x.x.x.x:8080)
My config:
global
    log 127.0.0.1 local0 notice
    maxconn 2048
    tune.ssl.default-dh-param 2048
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    timeout connect 5000
    timeout client 10000
    timeout server 10000

listen stats
    bind *:1988 ssl crt /srv/ssl.io.pem
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /haproxy
    stats auth myuser:mypassword

frontend http-in-beta
    bind *:81 ssl crt /srv/ssl.io.pem
    acl host_mydomain_com_beta hdr_beg(host) -i mydomain.com
    use_backend mydomain_beta if host_mydomain_com_beta

frontend http-in
    bind *:80
    redirect scheme https code 301 if !{ ssl_fc } # redirect all traffic to https

frontend https-in
    bind *:443 ssl crt /srv/ssl.io.pem
    acl host_mydomain_com hdr_beg(host) -i mydomain.com
    use_backend mydomain_cluster if host_mydomain_com

backend mydomain_beta
    balance roundrobin
    option forwardfor
    server mydomain_beta 172.17.0.110:8080

backend mydomain_cluster
    balance roundrobin
    option forwardfor
    server mydomain_node_s1 172.17.0.109:8080
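To rule out a plain syntax problem, the file can be checked with HAProxy's built-in check mode (the path here assumes the usual /etc/haproxy/haproxy.cfg location):

# parse the config without starting the proxy; exits non-zero on errors
haproxy -c -f /etc/haproxy/haproxy.cfg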
Now the issue I'm getting is that sometimes it works and sometimes I receive a 503. It's frustrating!
It feels like ports 80 and 81 are somehow colliding?
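One way to check whether something really is fighting over those ports is to look at which process owns each listening socket (this assumes iproute2's ss is installed; netstat -tlnp gives the same information):

# show the owning process for every listening TCP socket on 80, 81 and 443
ss -tlnp | grep -E ':(80|81|443) '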
Matthew Jones's answer in this post answered my question:
https://stackoverflow.com/questions/13994629/haproxy-random-http-503-errors
I was running not one but TEN HAProxy instances, each probably with its own config file.
I have NO idea how this happened; I did update the config around 10 times, but I always used
service haproxy restart
Weird... I used something like the following to find the processes and kill them:
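# list every running haproxy process ([h] keeps grep itself out of the results)
ps -ef | grep '[h]aproxy'

# kill them all, then start a single clean instance
pkill haproxy
service haproxy start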