sorintlab / stolon

PostgreSQL cloud native High Availability and more.

HashiCorp Nomad integration. #101

Open sgotti opened 8 years ago

sgotti commented 8 years ago

The sentinels and proxies should be able to run inside Nomad with the docker/rkt drivers. Since with Docker (in the default configuration, which uses the Docker bridge network) the externally reachable IP and port differ from the container's, new --advertise-address and --pg-advertise-address options will be needed.
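
A rough sketch of what such a keeper invocation could look like behind the Docker bridge (hypothetical at this point; flag names follow the proposal above and all addresses/ports are placeholders):

# bind on the container's bridge address, but advertise the host-mapped
# address and port so the other cluster components can reach PostgreSQL
stolon-keeper \
  --cluster-name=stolon-cluster \
  --store-backend=consul \
  --pg-listen-address=172.17.0.2 \
  --pg-advertise-address=10.0.0.5 \
  --pg-advertise-port=32768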

For the keeper, since it needs persistent data, we should wait for hashicorp/nomad#150

jsierles commented 8 years ago

Is Nomad support still on the table?

sgotti commented 8 years ago

@jsierles Yes, but I'm waiting on Nomad persistent volumes (and also on the evolution of Nomad's networking model).

jsierles commented 8 years ago

0.5 looks like it will have preliminary volume override support.

What networking model changes need to happen?

c4milo commented 7 years ago

I believe this can be revisited: 0.5.x allows using Docker's volume support, as @jsierles mentioned. Generic volume support is supposed to land in 0.6.0. It would also be reasonable to do it with host networking first.
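
As a sketch of what that could look like with the Docker driver (image and paths are placeholders, and the Nomad client needs docker.volumes.enabled set):

task "keeper" {
  driver = "docker"
  config {
    image = "myregistry/infra/postgres-keeper:9.6.3"
    # host path : container path
    volumes = [
      "/var/lib/stolon:/data/postgres",
    ]
  }
}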

LordFPL commented 7 years ago

Hello, many thanks for this tool :)

For information, I'm currently testing it under Nomad... and all seems to be OK with this:

job "postgresclust" {
  datacenters = ["dc1"]
  type = "service"
  priority = 30

  update {
    stagger = "60s"
    max_parallel = 1
  }

  constraint {
    distinct_hosts = true
  }

  group "postgresclust" {
    count = 3
    task "sentinel" {
      driver = "raw_exec"
      config {
        command = "stolon-sentinel"
        args = [
          "--cluster-name=stolon-cluster",
          "--store-backend=consul",
        ]
      }
      artifact {
        source = "http://mystorage/bin/stolon-v0.6.0-linux-amd64/stolon-sentinel"
      }
      service {
        name = "stolon-sentinel"
        tags = [
          "postgres",
        ]
      }
      logs {
        max_files     = 2
        max_file_size = 10
      }
      resources {
        cpu = 200
        memory = 300
        network {
          mbits = 100
        }
      }
    }
    task "keeper" {
      driver = "docker"
      config {
        image = "myregistry/infra/postgres-keeper:9.6.3"
        network_mode = "host"
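        # host networking sidesteps the Docker bridge address mismatch noted
        # at the top of this issue: PostgreSQL binds directly on the node IP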
        args = [
          "--cluster-name=stolon-cluster",
          "--store-backend=consul",
          "--data-dir=/data/postgres",
          "--pg-listen-address=${attr.unique.network.ip-address}",
          "--pg-port=${NOMAD_PORT_postgresnode}",
          "--pg-su-password=supassword",
          "--pg-repl-username=repluser",
          "--pg-repl-password=replpassword",
          "--pg-bin-path=/usr/lib/postgresql/9.6/bin/",
        ]
        volumes = [
          "/local/postgres:/data/postgres",
          "/etc/localtime:/etc/localtime:ro"
        ]
      }
      user = "postgres"
      service {
        name = "stolon-sentinel"
        tags = [
          "postgres",
        ]
      }
      logs {
        max_files     = 2
        max_file_size = 10
      }
      resources {
        cpu = 400
        memory = 1000
        network {
          mbits = 100
          port "postgresnode" {}
        }
      }
    }
    task "proxy" {
      driver = "raw_exec"
      config {
        command = "stolon-proxy"
        args = [
          "--cluster-name=stolon-cluster",
          "--store-backend=consul",
          "--listen-address=${attr.unique.network.ip-address}",
          "--port=5432",
        ]
      }
      artifact {
        source = "http://mystorage/bin/stolon-v0.6.0-linux-amd64/stolon-proxy"
      }
      service {
        name = "stolon-proxy"
        tags = [
          "postgres",
        ]
      }
      logs {
        max_files     = 2
        max_file_size = 10
      }
      resources {
        cpu = 200
        memory = 300
        network {
          mbits = 100
        }
      }
    }
  }
}

Just 3 things to do before launch:

And of course, run stolonctl init first ;)
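
For reference, with this job's settings the init would look roughly like:

stolonctl --cluster-name=stolon-cluster --store-backend=consul init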

stolonctl status
=== Active sentinels ===

ID              LEADER
17eeb35e        false
2afc367e        true
4bfd8962        false

=== Active proxies ===

ID
282b8fde
53240b6e
c121b388

=== Keepers ===

UID             PG LISTENADDRESS        HEALTHY PGWANTEDGENERATION      PGCURRENTGENERATION
5600ba68        xxxxxxx:33793           true    2                       2
6bb9f682        xxxxxxx:29111           true    15                      15
c782f104        xxxxxxx:45772           true    4                       4

=== Cluster Info ===

Master: 6bb9f682

===== Keepers tree =====

6bb9f682 (master)
├─5600ba68
└─c782f104

In front, I have keepalived providing a floating VIP.
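
A minimal keepalived sketch for such a floating VIP (interface, router id, and VIP below are placeholders, not the actual setup):

vrrp_instance STOLON_VIP {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.100/24
    }
}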

The only thing I still have to do in this Nomad file is to change the user for the sentinel and proxy (no need to run as root, I think).

Hope it can help with a Nomad integration... I will test it more next week.

LordFPL commented 7 years ago

A little update: IMHO it's better to split the Nomad file into three parts:
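
Presumably one job per component; a sketch of the assumed split (file names and layout are illustrative only):

# sentinel.nomad
job "stolon-sentinel" { ... }

# keeper.nomad (needs nodes with persistent local storage)
job "stolon-keeper" { ... }

# proxy.nomad (can be scaled independently)
job "stolon-proxy" { ... }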

codekoala commented 7 years ago

@LordFPL thank you for describing your setup. I'm interested in learning more about it after you split things into different parts. Have you noticed any other possible tweaks in the last two days?

LordFPL commented 7 years ago

Hi @codekoala, all seems to be OK; the tweaks are mainly on Postgres now, as stolon is only there for availability. I don't have much time at the moment, so tests are mainly with pgbench, and now I'm installing iRODS on it. Since all my needs are pretty simple, I feel confident ;)

scalp42 commented 5 years ago

Does anyone know how to pass a Consul ACL token in this scenario so that Stolon can access the KV?

sgotti commented 5 years ago

@scalp42 please ask on gitter or the mailing list (it's not related to this issue). BTW, you should just export the CONSUL_HTTP_TOKEN env var before starting all the stolon components. If this doesn't work, please open a new issue with the steps to reproduce it.
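
For what it's worth, in a Nomad job this could be set per task with the env stanza (token value is a placeholder):

task "sentinel" {
  env {
    # picked up by the Consul client library used by stolon
    CONSUL_HTTP_TOKEN = "REPLACE_WITH_ACL_TOKEN"
  }
}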