shipperizer opened this issue 1 year ago
The issue seems to be related to the proxy-config property in the Juju controller configuration; see this thread for more info: https://chat.charmhub.io/charmhub/pl/w3y9mwzyxirpirha6cb6rp4jcr
Same issue here. It's hard to understand when/why it started happening.
What does your controllers.yaml look like? pylibjuju should pick the proxy config up if it's in there; otherwise we'll need to improve connect_controller to accept some kwargs for the proxy. Currently, controllers.yaml looks like the only way to get that: https://github.com/juju/python-libjuju/blob/b56963442a6deffe46ce1fb5cb41970344263649/juju/client/connector.py#L143
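For context, here is a minimal sketch of how a connection is normally made with python-libjuju (controller name taken from the controllers.yaml below; the proxy-config block in that file is what should drive the kubernetes-port-forward proxy, since there is currently no kwarg to pass a proxy in directly):

```python
import asyncio

from juju.controller import Controller


async def main():
    controller = Controller()
    # connect() resolves the named controller through the local Juju
    # client files (controllers.yaml, accounts.yaml), including any
    # proxy-config block found there.
    await controller.connect("microk8s-localhost")
    print(await controller.list_models())
    await controller.disconnect()


asyncio.run(main())
```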
The definition for my k8s cloud looks like:
```yaml
microk8s-localhost:
  uuid: 351b04f0-19b5-4ddf-86d5-5a2497dda9ce
  api-endpoints: ['10.152.183.40:17070']
  dns-cache: {localhost: [127.0.0.1]}
  ca-cert: SNIP
  cloud: microk8s
  region: localhost
  type: kubernetes
  agent-version: 3.5.0
  controller-machine-count: 1
  active-controller-machine-count: 0
  machine-count: 1
  proxy-config:
    type: kubernetes-port-forward
    config:
      api-host: 127.0.0.1:16443
      ca-cert: SNIP
      namespace: controller-microk8s-localhost
      remote-port: "17070"
      service: controller-service
      service-account-token: SNIP
```
I tried manually editing it, replacing 127.0.0.1 with the IP I get from `microk8s config`, but that made no difference whatsoever.
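As a sanity check, a minimal sketch to dump what pylibjuju actually reads from that file (assuming the default Juju data directory; `FileJujuData` honours the `JUJU_DATA` environment variable):

```python
# Print the proxy-config block as python-libjuju sees it.
from juju.client.jujudata import FileJujuData

jujudata = FileJujuData()
controller = jujudata.controllers()["microk8s-localhost"]
print(controller.get("proxy-config"))
```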
I'm having issues running a simple test while using the `ops_test` fixture: it tries to connect to `127.0.0.1:16443`, while my local k8s config points to a completely different IP. Stack trace and execution below.

My Juju environment seems to be working just fine.
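For reference, a minimal sketch of the kind of test that hits this (the charm name here is hypothetical):

```python
# test_smoke.py -- requires pytest-operator; run with `pytest`.
import pytest


@pytest.mark.abort_on_fail
async def test_deploy(ops_test):
    # ops_test builds its Model connection through python-libjuju,
    # which is where the attempt to reach 127.0.0.1:16443 originates.
    await ops_test.model.deploy("some-charm")  # hypothetical charm name
    await ops_test.model.wait_for_idle()
```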