Closed: willemm closed this 1 month ago
Attention: Patch coverage is 0% with 9 lines in your changes missing coverage. Please review.
Project coverage is 28.04%. Comparing base (a3d4371) to head (d5737e5). Report is 11 commits behind head on main.
| Files | Patch % | Lines |
|---|---|---|
| internal/server/kubernetes_api_workflow.go | 0.00% | 6 Missing :warning: |
| internal/server/kubernetes_api.go | 0.00% | 3 Missing :warning: |
Description
Removes the default behavior where, if kubernetes-namespace is not specified, it falls back to whatever namespace the controller is running in, and changes the workflowID format to namespace/name so workflows can be located across namespaces.
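As a rough sketch of the namespace/name ID format described above (the helper name `splitWorkflowID` and the `tink-system` fallback are illustrative assumptions, not the actual code in this PR):

```go
package main

import (
	"fmt"
	"strings"
)

// splitWorkflowID is a hypothetical helper showing how a namespace/name
// workflow ID could be resolved. IDs without a slash fall back to a
// default namespace (here "tink-system"), preserving the old behavior.
func splitWorkflowID(id, defaultNS string) (namespace, name string) {
	if ns, n, ok := strings.Cut(id, "/"); ok {
		return ns, n
	}
	return defaultNS, id
}

func main() {
	ns, name := splitWorkflowID("machines/worker-1", "tink-system")
	fmt.Println(ns, name) // machines worker-1

	ns, name = splitWorkflowID("worker-2", "tink-system")
	fmt.Println(ns, name) // tink-system worker-2
}
```

With an ID of this shape, the server can look up the Workflow object in the namespace encoded in the ID instead of assuming its own namespace.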
Why is this needed
https://github.com/tinkerbell/cluster-api-provider-tinkerbell/issues/385
With this change, you can create hardware and workflow resources in different namespaces.
Fixes: #
How Has This Been Tested?
We have a cluster-api setup where we're adding some bare metal nodes to a cluster. With this change, the workflows that previously only worked from the tink-system namespace now also work from a different namespace. I also tested the old working setup and that still works as well. The change is minimal, so it shouldn't impact much. I haven't tested if the --kube-namespace setting would restrict it to one namespace again.
How are existing users impacted? What migration steps/scripts do we need?
No migration steps are needed, unless users have multiple instances of tink-server running in different namespaces, or have another reason why they specifically don't want resources in a different namespace to be picked up.
This could probably be avoided by having the Helm chart add the kube-namespace argument to the deployment and pull its value from the downward API, but it seems to me that defaulting to watching all namespaces would be preferable for most users.
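For reference, the downward-API approach mentioned above could look roughly like this (a hypothetical deployment snippet, not part of the actual chart; only the `--kube-namespace` flag comes from this PR's discussion):

```yaml
# Sketch: pass the pod's own namespace to tink-server via the
# downward API, restoring the single-namespace behavior per instance.
containers:
  - name: tink-server
    args:
      - --kube-namespace=$(POD_NAMESPACE)
    env:
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
```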
Checklist:
I have: