canonical / istio-operators

Charmed Istio

test: trust kubeflow-volumes #382

Closed · DnPlas closed this 4 months ago

DnPlas commented 4 months ago

The kubeflow-volumes charm now needs to be trusted at deployment time.

https://github.com/canonical/kubeflow-volumes-operator/pull/125 refactored kubeflow-volumes, which now needs to be deployed with --trust.
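
For context, trusting a Kubernetes charm at deployment time means passing --trust to juju deploy, which grants the charm's service account broader, cluster-scoped permissions. A minimal sketch of the equivalent call with python-libjuju (the channel name here is an assumption for illustration, not taken from this PR):

```python
# Minimal sketch (not from this PR): deploying kubeflow-volumes with trust
# via python-libjuju. trust=True is the programmatic equivalent of
# `juju deploy kubeflow-volumes --trust`.
import asyncio

from juju.model import Model


async def main():
    model = Model()
    await model.connect()  # connect to the currently active controller/model
    try:
        await model.deploy(
            "kubeflow-volumes",
            channel="latest/edge",  # illustrative channel, adjust as needed
            trust=True,  # without this, the charm hits RBAC "forbidden" errors
        )
        await model.wait_for_idle(apps=["kubeflow-volumes"], status="active")
    finally:
        await model.disconnect()


asyncio.run(main())
```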

ca-scribner commented 4 months ago

This change LGTM, but I don't understand why the CI is failing. It feels unrelated to this PR.

ca-scribner commented 4 months ago

I can't reproduce this issue locally, but https://github.com/canonical/charmcraft/issues/1456 seems to report the same thing. I tried pushing an update to the charmcraft.yaml file to force a newer pip version; let's see how that goes.

ca-scribner commented 4 months ago

I'm not sure how to fix this. Pinning to charmcraft 2.2 and trying to pin pip didn't work, so I've reverted those attempts.

ca-scribner commented 4 months ago

Also, the errors seem to be intermittent and only happen during the integration tests. The publish step, which also builds these charms, has always succeeded for me.

DnPlas commented 4 months ago

Hey @ca-scribner, I'm not sure why you were looking at pip and other dependencies; this PR is trying to fix the permission errors that kubeflow-volumes was throwing in a previous CI run:

unit-kubeflow-volumes-0: 10:50:31 ERROR unit.kubeflow-volumes/0.juju-log ingress:2: Failed to compute status for kubernetes:auth during assessment of prerequisites for container:kubeflow-volumes.  Got err: customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:test-istio:kubeflow-volumes" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope

Because the kubeflow-volumes charm was rewritten as a sidecar charm, we need to trust it in any tests that deploy it.
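
For illustration, here is a rough sketch of the kind of integration-test change this implies, assuming pytest-operator; the test name, channel, and timeout are placeholders, not the exact code in this repo:

```python
# Illustrative sketch only, assuming pytest-operator; not the exact test in
# this PR. The point is trust=True on the kubeflow-volumes deploy, which
# grants its service account the cluster-scoped RBAC (e.g. listing
# customresourcedefinitions) that the error above reports as forbidden.
import pytest

KUBEFLOW_VOLUMES = "kubeflow-volumes"


@pytest.mark.abort_on_fail
async def test_deploy_kubeflow_volumes(ops_test):
    await ops_test.model.deploy(
        KUBEFLOW_VOLUMES,
        channel="latest/edge",  # placeholder channel
        trust=True,
    )
    await ops_test.model.wait_for_idle(
        apps=[KUBEFLOW_VOLUMES], status="active", timeout=10 * 60
    )
```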

ca-scribner commented 4 months ago

@DnPlas check out the failures in the original CI runs, like here: something intermittent is happening. Agreed that this PR did not cause them, but it is affected by them.

ca-scribner commented 4 months ago

The CI for this PR also failed during the merge. I don't think we should have merged it yet, since it had a good chance of failing CI and not publishing the charms.