colinmollenhour / mariadb-galera-swarm

MariaDb Galera Cluster container based on official mariadb image which can auto-bootstrap and recover cluster state.
https://hub.docker.com/r/colinmollenhour/mariadb-galera-swarm
Apache License 2.0

Node Command wrong service names in docker-compose #14

Closed davidhiendl closed 6 years ago

davidhiendl commented 7 years ago

Shouldn't the service names in the node command in docker-compose.yml be prefixed with the stack name?

According to the documentation:

"node" - Join an existing cluster. Takes as a second argument a comma-separated list of IPs or hostnames to resolve which are used to build the --wsrep_cluster_address option for joining a cluster.

So when I execute docker stack deploy -c docker-compose.yml mysql, the service names should be prefixed with 'mysql_', because that is what the stack deployment does: command: node mysql_seed,mysql_node

Referenced section from docker-compose.yml:

  node:
    image: colinmollenhour/mariadb-galera-swarm
    environment:
...
    command: node seed,node
...
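
For reference, one way to confirm how the service names end up prefixed after deployment (here assuming the stack was deployed as mysql) would be:

docker stack services mysql
# the NAME column should list the prefixed services, e.g. mysql_seed and mysql_node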
colinmollenhour commented 7 years ago

So would that be "${COMPOSE_PROJECT_NAME}_node,${COMPOSE_PROJECT_NAME}_seed" or is there some other variable? I haven't used Docker swarm mode recently...

davidhiendl commented 7 years ago

Didn't know that existed; it doesn't seem to be working for me either. I'm currently solving this with envsubst in a bash script like this:

deploy.sh


#!/bin/bash

STACK_NAME=$1

if [ -z "${STACK_NAME}" ]; then
    echo "Missing argument: stack name"
    echo "Usage: ./deploy.sh stackname"
    exit 1
fi

# render the template with the stack name, deploy the rendered file, then clean up
export STACK_NAME
envsubst < "docker-compose.yml" > "docker-compose.processed.yml"

CMD="docker stack deploy -c docker-compose.processed.yml ${STACK_NAME}"
echo $CMD
$CMD

rm docker-compose.processed.yml


docker-compose.yml

... command: ["node", "${STACK_NAME}_seed,${STACK_NAME}_node"] ...

colinmollenhour commented 7 years ago

I poked around on docker/docker and there were a few feature requests for these environment variables (or something similar) that appear to have been effectively shot down... I don't know of a better alternative to modifying the compose file offhand, unfortunately.

cpjolly commented 7 years ago

To use this in a real application, one needs to extend the docker-compose.yml file to add the application services etc., so editing docker-compose.yml is always going to be necessary.

I use a shell script to semi-automate the multi-step install that is needed to first launch the galera cluster and then the application service. In this script I set the necessary environment variables that I reference in the script and in the docker-compose file. These variables include the name of the stack, the mysql_user and mysql_database.

With this approach, for example, the node command in docker-compose.yml becomes command: node ${STACKNAME}_seed,${STACKNAME}_node

This keeps the changes to docker-compose.yml to a minimum.
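
A minimal sketch of the kind of wrapper script described above, assuming the compose file is still rendered with envsubst before deployment (all variable values here are hypothetical):

#!/bin/bash
# set the variables referenced in docker-compose.yml, render it, then deploy
export STACKNAME=myapp
export MYSQL_USER=app_user
export MYSQL_DATABASE=app_db
envsubst < docker-compose.yml > docker-compose.processed.yml
docker stack deploy -c docker-compose.processed.yml "${STACKNAME}"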

davidhiendl commented 7 years ago

Sure, that's basically what I do with the envsubst bash script. But sometimes you do not extend the stack; instead you allow access via the network when deploying multiple stacks / small applications that do not require dedicated MySQL nodes. My current cluster spans ~50 servers, but over half of the workload uses a central, multi-tenant MySQL database.

Basically, I think how to solve this is a question of opinion and is ultimately up to the user, but it would have been nice to be able to fully automate and error-proof the docker-compose file (my opinion; I personally dislike ANY possible source of human error).

You can link the MySQL network to other stacks by defining the network as external in the other stack's docker-compose.yml:

...
services:
    some-service:
        image: ...
        networks:
          - mysql_net
...
networks:
    mysql_net:
        external: true
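
If it helps, you can verify that the overlay network created by the MySQL stack exists before deploying the second stack (assuming the stack was deployed as mysql and its compose file names the network net):

docker network ls --filter name=mysql_net
# should list mysql_net with the overlay driver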
cpjolly commented 7 years ago

@DavidHiendl,

I agree that automating as much as possible is the best way to reduce errors, which is why I have no hard-coded assumptions in my docker-compose files, only environment variables, that are populated from an application specific script.

A little off-topic, but something I have struggled with: do you have a solution for automating the creation of the different application databases and database users, or is there just one database on your Galera cluster?

davidhiendl commented 7 years ago

@cpjolly Yeah, we have lots of databases. Some stick around (prod), but many are temporary in nature for the CI/CD pipeline. We use the Ant "SQL Task", integrated into the CI/CD pipeline, to back up, modify and restore a database snapshot and create the users required for the application.
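
With the plain mysql client instead of Ant, the equivalent provisioning step would look roughly like this (all names, passwords and the GRANT scope are placeholders):

mysql -h "${DB_HOST}" -u root -p"${MYSQL_ROOT_PASSWORD}" <<'SQL'
CREATE DATABASE IF NOT EXISTS ci_job_example;
CREATE USER IF NOT EXISTS 'ci_app'@'%' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON ci_job_example.* TO 'ci_app'@'%';
FLUSH PRIVILEGES;
SQL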