Closed BlueSeph28 closed 1 year ago
Now it's working: I added the label `openpolicyagent.org/policy: rego`
to the policies and now it works!
Maybe it's worth adding to the README.md that the labels are necessary now; from reading it I thought they were optional. Can you confirm whether that's right or just a coincidence?
I'm using the default namespace `opa`, and just adding the label to the policies makes it work. ✨
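For reference, a minimal sketch of what a labeled policy ConfigMap could look like. The name `example-policy` and the rego rule are hypothetical placeholders; the label key/value and the `opa` namespace are the ones mentioned above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-policy    # hypothetical name, for illustration only
  namespace: opa          # the default namespace mentioned above
  labels:
    # the label that made discovery work, per this thread
    openpolicyagent.org/policy: rego
data:
  example.rego: |
    package example

    default allow := false
```

An existing configmap can also be labeled in place with `kubectl label configmap example-policy -n opa openpolicyagent.org/policy=rego`.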
Hello
It is already in the README, which explains that kube-mgmt detects config maps with policies or data when they contain specific labels.
https://github.com/open-policy-agent/kube-mgmt/blob/master/README.md#policies-and-data-loading
It's still interesting why this worked in previous versions, though. Did we change something with regard to this?
Yeah, same thought; that's the reason I didn't close the issue. It was working without labels until 4.1.1.
I'm trying to upgrade my OPA and kube-mgmt stack in a k8s cluster. I'm not using a chart; all resources are deployed separately with terraform.
I was using OPA 0.38.1 and I managed to upgrade it to 0.57.1; it works as is.
The issue is when I try to upgrade kube-mgmt from 2.0.1 to a version greater than 4.1.1.
When I use 4.1.1 everything works and my configmaps with rego policies work as expected; the annotation
`openpolicyagent.org/policy-status: {"status":"ok"}`
is also in place. When I upgrade to 6.0.0, all configmaps are ignored. I'm not sure if there is something I'm missing, a new annotation or a new type of connection; I didn't find anything in the docs.
All configmaps are created in the OPA namespace. I'm expecting all configmaps to be discovered and annotated as ok or in error, but I don't get any annotation.
kube-mgmt 6.0.0 logs
kube-mgmt 4.1.1 logs