Open neolunar7 opened 3 years ago
Hello, any updates on this question? I have the same question about my configuration.
The public cluster address is the address advertised to the workers as a means of connecting to your controllers. We do this so the controllers can live behind a well-known domain name or elastic IP address, which often points to a load balancer that ensures high availability of the controller nodes: https://www.boundaryproject.io/docs/configuration/controller#public_cluster_addr
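For reference, a minimal controller stanza along those lines might look like the following sketch. The names, addresses, and port are hypothetical placeholders, not values from the reference architecture:

```hcl
controller {
  name        = "kubernetes-controller"
  description = "Example controller (hypothetical values)"

  # Address advertised to workers for the worker-to-controller (cluster)
  # connection. Point this at a stable DNS name or elastic IP, typically
  # a load balancer in front of the controller nodes.
  public_cluster_addr = "boundary.example.com:9201"

  database {
    # Connection string supplied via environment variable.
    url = "env://BOUNDARY_PG_URL"
  }
}
```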
Hi, I'm looking at boundary-reference-architecture/deployment/kube/kubernetes/boundary_config.tf, and I'm curious what to specify as public_cluster_addr for the controller, and as address, controllers, and public_addr in the worker configuration. The configmap.yaml I'm using is below.

I'm running my Kubernetes cluster in an AWS private subnet, so I have no idea what to specify as public_cluster_addr for the controller. Also, I believe the example runs the controllers and workers in the same pod, so I thought the worker's address, controllers, and public_addr should be localhost. Is that correct? (By the way, I implemented the /kubernetes part with a Helm chart I made, since the example is in Terraform and I prefer Helm.)
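The same-pod, localhost-based setup described above could be sketched as the following worker stanza. This is an illustration under that assumption, not a confirmed configuration from the reference architecture:

```hcl
worker {
  name = "kubernetes-worker"

  # With the controller in the same pod, the worker can dial it on
  # localhost over the cluster port.
  controllers = ["127.0.0.1"]

  # public_addr is the address clients use to reach the worker's proxy.
  # localhost only works for in-pod/in-cluster access; clients outside
  # the cluster would need a reachable Service or LoadBalancer address.
  public_addr = "127.0.0.1"
}
```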
This configuration seems to be wrong, as I'm getting the connection error below when I try to access Redis using the example.