We are facing the following problem: how to run RabbitMQ in a Docker Swarm with persistent data.
Currently we have the following setup in place:
- Docker Swarm with 3 nodes
- GlusterFS as a replicated filesystem across all nodes
- RabbitMQ with Consul, image: gavinmroy/alpine-rabbitmq-autocluster
This works fine most of the time, but now we need durable queues to persist data.
We have tried using --hostname or setting RABBITMQ_NODENAME; then we get a subdirectory for every started node, like "rabbit@CONTAINERID". The problem: when the containers are restarted, a new folder is used to persist the data (new container ID). Any suggestions for a working setup that still uses the Docker Swarm features?
Although this has been dormant for a while: This is what we use.
docker-compose.yaml:
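The original compose file is not included here, so the following is a minimal sketch of the approach the answer describes: fixed hostnames, data directories on the GlusterFS mount, and global deployment constrained by node labels. Service names, the image tag, and the volume paths are assumptions, not the exact file from the linked repository.

```yaml
# Hypothetical sketch -- names, image, and paths are assumptions
version: "3.7"

services:
  rabbitmq-01:
    image: gavinmroy/alpine-rabbitmq-autocluster
    hostname: rabbitmq-01                 # fixed hostname -> stable data directory
    networks:
      - rabbitmq
    volumes:
      - /mnt/glusterfs/rabbitmq-01:/var/lib/rabbitmq   # persistent data on GlusterFS
      - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf:ro
      - ./definitions.json:/etc/rabbitmq/definitions.json:ro
    deploy:
      mode: global                        # one task per matching node
      placement:
        constraints:
          - node.labels.rabbitmq1 == true
  # rabbitmq-02 / rabbitmq-03 are identical except for the hostname,
  # the volume path, and the node label constraint.

networks:
  rabbitmq:
    driver: overlay
```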
rabbitmq.conf:
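Again, the actual config is not shown, but a sketch using RabbitMQ's Consul peer discovery backend could look like this; "consul.server" is the hostname mentioned below, and the definitions path matches the mount assumed in the compose sketch.

```ini
# Hedged sketch -- host and paths are assumptions from this answer
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_consul
cluster_formation.consul.host = consul.server
cluster_formation.consul.svc_addr_auto = true

# Import the exported definitions (users, vhosts, durable queues) on boot
load_definitions = /etc/rabbitmq/definitions.json
```

Note that the rabbitmq_peer_discovery_consul plugin must be enabled for this backend to work.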
definitions.json as exported from https://your-rabbitmq-admin-ui/api/definitions
Notice that the RabbitMQ services are deployed in "global" mode and then constrained to node labels.
We have three nodes in our cluster which bear the "rabbitmqX" label like so:
One with "rabbitmq1": "true", the other with "rabbitmq2": "true", the last with "rabbitmq3": "true".
Thus a single RMQ instance is deployed onto each of those three nodes, and we end up with a 3-node RMQ cluster with one node on each of the three Docker Swarm members.
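The labels themselves are set on the Swarm nodes from a manager; the node names below are placeholders for your own:

```
docker node update --label-add rabbitmq1=true swarm-node-1
docker node update --label-add rabbitmq2=true swarm-node-2
docker node update --label-add rabbitmq3=true swarm-node-3
```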
A Consul is in place, reachable at "consul.server" for the RMQ nodes to use as the clustering backend.
This fixes the problem of RMQ using the container hostnames (which change on every restart) to name their data directories, as the hostnames are fixed as "rabbitmq-0X" in the docker-compose file.
Hope this helps whoever comes by!
Find this at https://github.com/olgac/rabbitmq