Closed juditnovak closed 1 year ago
`github-pr-86a89-lxd:testing on unspecified cloud` seems to hint you need to also specify the `--cloud` switch.
Dear @addyess, thanks very much for your response and the advice.
Unfortunately, it's impossible to use the `--model` and `--cloud` switches together :-(
And specifying the cloud still leaves me with the same problem -- I can't specify the model name :-(
Perhaps the issue with using `--cloud` and `--model` simultaneously is that this block ignores the `--cloud` argument when connecting to an existing model. Note the `None` there. I bet this is something we could patch and try.
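The suspicion above can be sketched in plain Python. This is a hedged illustration, not pytest-operator's actual code, and `pick_cloud` is a hypothetical helper name: the cloud only matters when a model is being created, so a code path that hard-codes `None` for the cloud when connecting to an existing model silently drops the `--cloud` value.

```python
from typing import Optional

# Hypothetical helper illustrating the suspected behaviour: the --cloud
# value is only meaningful at model-creation time, so the connect path
# discussed above (which hard-codes None) effectively ignores it.
def pick_cloud(existing_model: Optional[str], cloud: Optional[str]) -> Optional[str]:
    """Return the cloud to use when creating a model, or None when
    connecting to an existing model, where the cloud is never consulted."""
    if existing_model is not None:
        return None  # connecting: cloud is dropped, mirroring the `None` noted above
    return cloud     # creating: the --cloud switch is honoured
```

Under this sketch, `pick_cloud("testing", "lxd")` returns `None`, which would match the observation that adding `--cloud` made no difference.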
I see your point and gave it a quick try locally. However, there's no change; the error is the same :-(
@juditnovak do you have an example of this "all going fine" on k8s? My understanding of the `--model` switch is that it uses an existing model. I looked in your pipeline and I don't see that model already existing -- only that you've bootstrapped an LXD controller. Is it really important to use an existing model, or is it OK for `ops_test` to create the model for you by omitting the `--model` switch entirely?
Yes, this pipeline: https://github.com/canonical/mongodb-k8s-operator/actions/runs/5623068946/job/15240177776?pr=178#step:8:1
I run the tests using the `--model=testing` parameter, which allowed me to switch to the model in the next step and replay the debug logs.
Nice -- on this linked pipeline I see that after bootstrapping the controller, a model named `testing` is created.
I've never seen this before -- but that's because the `actions-operator` GitHub action randomly makes this model, and that's why it works on k8s. Perhaps this issue should be linked to `actions-operator` rather than `pytest-operator`.
https://github.com/charmed-kubernetes/actions-operator/blob/main/src/bootstrap/index.ts#L272-L280
I don't think so... I can use the `--model` switch locally (precisely as demonstrated in the pipeline) on a k8s Multipass instance without any issues. No GitHub action involved.
However, I can't do the same on an LXD Multipass instance... The failure is exactly as on the corresponding "failure" pipeline.
In fact, I don't think it's the GitHub action that's supposed to create the model name, but the test framework (I could dig in for a more precise answer)... The model name is generated exactly the same way locally as on the pipeline.
So, maybe this issue could be titled: "Feature: reuse or create a model name if provided via the `--model` switch".
Unless there's some kind of magic hidden in pylibjuju's Kubernetes client that creates a model if it doesn't already exist during the `model.connect` phase, there's nothing in `ops_test` currently to do this. One should first create a juju model before trying to connect to it as an existing model.
The `ops_test` docs fail to mention that the model must first exist -- we could also improve those docs to clarify that the juju model must already exist, otherwise connecting to that model will fail.
Thank you very much :-) :-) :-)
On k8s the model is definitely created if it didn't exist. The issue only appears when using LXD.
To say it another way -- on Kubernetes something special is going on, but on LXD or any machine-based cloud (OpenStack, AWS, vSphere, GCE, Azure...) or metal (MAAS, Equinix), the model must first be there before we can connect to it.
All right! Thanks very much, understood :-)
Resolved with the 0.29.0 release.
When using the `--model` switch on k8s, all goes fine. However, in an LXD environment I'm getting the following error (both locally and on pipelines):

Environment:

Are there any hints or known workarounds available perhaps? Thank you.