hashicorp / boundary-reference-architecture

Example reference architecture for a high availability Boundary deployment on AWS.
https://boundaryproject.io
Mozilla Public License 2.0

Uncertain what to specify in public_cluster_addr for k8s running in private subnet #35

Open neolunar7 opened 3 years ago

neolunar7 commented 3 years ago

Hi, I'm looking at boundary-reference-architecture/deployment/kube/kubernetes/boundary_config.tf, and I'm wondering what to specify for public_cluster_addr in the controller configuration, and for address, controllers, and public_addr in the worker configuration.

The configmap.yaml I'm using is below. I'm running my Kubernetes cluster in an AWS private subnet, so I have no idea what to specify for public_cluster_addr on the controller. I also believe the example runs the controller and worker in the same pod, so I assumed the worker's address, controllers, and public_addr should all be localhost. Is that correct? (By the way, I'm using a Helm chart I made to implement the /kubernetes part, since the example is in Terraform and I prefer Helm.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: boundary-config
data:
  boundary.hcl: |
    disable_mlock = true
    controller {
        name = "kubernetes-controller"
        description = "A controller for a kubernetes demo!"
        database {
            url = "env://BOUNDARY_PG_URL"
        }
        public_cluster_addr = "boundary-controller.boundary.svc.cluster.local"
    }
    worker {
        name = "kubernete-worker"
        description = "A worker for a kubernetes demo"
        address = "localhost"
        controllers = ["localhost"]
        public_addr = "localhost"
    }
    listener "tcp" {
        address = "0.0.0.0"
        purpose = "api"
        tls_disable = true
    }
    listener "tcp" {
        address = "0.0.0.0"
        purpose = "cluster"
        tls_disable = true
    }
    listener "tcp" {
        address = "0.0.0.0"
        purpose = "proxy"
        tls_disable = true
    }
    kms "aead" {
        purpose = "root"
        aead_type = "aes-gcm"
        key = "sP1fnF5Xz85RrXyELHFeZg9Ad2qt4Z4bgNHVGtD6ung="
        key_id = "global_root"
    }
    kms "aead" {
        purpose = "worker-auth"
        aead_type = "aes-gcm"
        key = "8fZBjCUfN0TzjEGLQldGY4+iE9AkOvCfjh7+p0GtRBQ="
        key_id = "global_worker-auth"
    }
    kms "aead" {
        purpose = "recovery"
        aead_type = "aes-gcm"
        key = "8fZBjCUfN0TzjEGLQldGY4+iE9AkOvCfjh7+p0GtRBQ="
        key_id = "global_recovery"
    }

This configuration seems to be wrong, as I get a connection error like the one below when I try to access Redis using the example:

❯ boundary connect -exec redis-cli -target-id ttcp_er1Yy3ROiI -- -h http://boundary.dev.mydomain.cloud -p 80
Could not connect to Redis at http://boundary.dev.mydomain.cloud:80: nodename nor servname provided, or not known
not connected>
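
As a side note on the redis-cli invocation itself (separate from the controller/worker question): with -exec, the Boundary CLI can substitute the local proxy endpoint into the wrapped command via templates such as {{boundary.ip}} and {{boundary.port}}, and redis-cli's -h flag expects a bare hostname or IP rather than an http:// URL. A sketch of that invocation, reusing the target ID from the command above:

```
# Sketch: let boundary connect inject the local proxy endpoint;
# redis-cli's -h expects a hostname/IP, not a URL.
boundary connect -exec redis-cli -target-id ttcp_er1Yy3ROiI -- \
  -h {{boundary.ip}} -p {{boundary.port}}
```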
alexkim-avant commented 3 years ago

Hello, any updates on this question? I had the same questions for my configuration as well.

malnick commented 3 years ago

The public cluster address is the address advertised to workers as the way to reach your controllers. We do this so the controllers can live behind a well-known domain name or elastic IP address, which often points at a load balancer, to ensure high availability of the controller nodes: https://www.boundaryproject.io/docs/configuration/controller#public_cluster_addr
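
To make that concrete, here is a minimal sketch of how the controller and worker stanzas might look when the cluster listener sits behind a load balancer. The DNS names (boundary-controller-nlb.example.com, boundary-worker-nlb.example.com) are assumptions for illustration, not values from this repo; 9201 and 9202 are Boundary's default cluster and proxy ports.

```
# Hypothetical controller config: workers reach the controllers through
# a load balancer, so advertise that address, not the pod's own IP.
controller {
  name        = "kubernetes-controller"
  description = "A controller for a kubernetes demo!"
  database {
    url = "env://BOUNDARY_PG_URL"
  }
  # Advertised to workers; 9201 is the default cluster port.
  public_cluster_addr = "boundary-controller-nlb.example.com:9201"
}

# Hypothetical worker config: `controllers` is the address the worker
# dials out to, so it should point at the same load-balanced address.
worker {
  name        = "kubernetes-worker"
  description = "A worker for a kubernetes demo"
  # Address clients use to reach this worker's proxy listener.
  public_addr = "boundary-worker-nlb.example.com:9202"
  controllers = ["boundary-controller-nlb.example.com:9201"]
}
```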