Closed: invidian closed this issue 6 months ago.
The CAPZ controller does not require credentials. In fact, the controller credentials will be deprecated in a future version.
Each cluster should have its own identity reference. Please take a look at https://capz.sigs.k8s.io/topics/multitenancy.html. wdyt?
> Each cluster should have its own identity reference. Please take a look at https://capz.sigs.k8s.io/topics/multitenancy.html. wdyt?
This makes it, IMO, even more important to do sanity checks if it's potentially up to the user to provide the credentials. Is there anything like this in place?
What would you think about validating the cluster by checking the ENV for creds and, if they are missing, requiring the cluster to have an identity ref?
@nader-ziada wdyt?
The identityRef is part of the workload cluster definition, so that check would happen at the creation time of that cluster, which would not help with the problem described here in this issue.
The problem with checking the credentials at the time of creating the management cluster is that it would require an extra call to Azure to validate their values, which would be an extra step that some would argue is not needed; otherwise, just checking for the existence of some random values would not necessarily be valuable.
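For illustration, here is a minimal sketch of what that extra call to Azure could look like, using the azidentity package. The helper name, timeout, and ARM scope are assumptions made for the example; this is not existing CAPZ code:

```go
// Hypothetical sanity check, not CAPZ code: try to acquire an ARM token with the
// service principal from the environment and fail fast if that is not possible.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

func credsSanityCheck(ctx context.Context) error {
	cred, err := azidentity.NewClientSecretCredential(
		os.Getenv("AZURE_TENANT_ID"),
		os.Getenv("AZURE_CLIENT_ID"),
		os.Getenv("AZURE_CLIENT_SECRET"),
		nil,
	)
	if err != nil {
		return fmt.Errorf("building credential: %w", err)
	}

	ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel()

	// This token request is the "extra call to Azure": it proves the service
	// principal exists and the secret is valid, but not that it has any access
	// to the target subscription.
	_, err = cred.GetToken(ctx, policy.TokenRequestOptions{
		Scopes: []string{"https://management.azure.com/.default"},
	})
	if err != nil {
		return fmt.Errorf("credentials appear invalid: %w", err)
	}
	return nil
}

func main() {
	if err := credsSanityCheck(context.Background()); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("Azure credentials appear usable")
}
```

Even a check like this only proves the service principal can obtain a token, not that it can reach the target subscription, which is part of why the value of the extra step is debatable.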
> The identityRef is part of the workload cluster definition, so that check would happen at the creation time of that cluster, which would not help with the problem described here in this issue.
I was just saying we check for a valid set of env vars, i.e. that AZURE_TENANT_ID, AZURE_CLIENT_ID, and AZURE_CLIENT_SECRET are not "" when a workload cluster is being created. If there isn't a valid set of env vars and no identityRef, reject the cluster.
I agree we would have a tough time figuring out if creds are valid, but we have an easy time figuring out if they are present. If we know they are not present, then we know that a workload cluster will not be able to be deployed if there is no identityRef.
I think that will be a common error case until identityRef is the only way.
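A rough sketch of that presence check, with a stubbed-out spec type and made-up names; the real check would presumably live in the AzureCluster validating webhook, and this is not actual CAPZ code:

```go
// validateCredentialSource is a hypothetical webhook-style check: reject an
// AzureCluster that has no identityRef when the controller was started without
// the environment credentials it would need to fall back on.
package main

import (
	"errors"
	"fmt"
	"os"
)

// azureClusterSpecStub stands in for the relevant part of the real AzureCluster
// spec; only the field needed for this sketch is included.
type azureClusterSpecStub struct {
	IdentityRefName string // empty when no identityRef is set
}

var errNoCredentials = errors.New("cluster would have no usable Azure credentials")

func validateCredentialSource(spec azureClusterSpecStub) error {
	if spec.IdentityRefName != "" {
		return nil // per-cluster identity is set; nothing else to check
	}

	// No identityRef: the controller-level env credentials must at least be present.
	for _, v := range []string{"AZURE_TENANT_ID", "AZURE_CLIENT_ID", "AZURE_CLIENT_SECRET"} {
		if os.Getenv(v) == "" {
			return fmt.Errorf("no identityRef set and %s is empty: %w", v, errNoCredentials)
		}
	}
	return nil
}

func main() {
	// Example: a cluster with neither an identityRef nor env credentials is rejected.
	if err := validateCredentialSource(azureClusterSpecStub{}); err != nil {
		fmt.Println("rejected:", err)
	}
}
```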
> I think that will be a common error case until identityRef is the only way.
yes, makes sense
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Wouldn't this function be validating the credentials as requested?
> Wouldn't this function be validating the credentials as requested?
That's a decent short-term solution, though I think something built more deeply into the controller itself would be nice as well.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@invidian we already validate the existence of the AZURE_CLIENT_ID and AZURE_CLIENT_SECRET env vars. Additionally, CAPZ expects AZURE_CLIENT_SECRET to be set as a Kubernetes secret as well. Do you think that checking that will improve the experience?
@shysank is it validated by the controller itself or only via Tilt/clusterctl?
Validation is (will be) done on the client (Tilt); the controller will return an error if it can't find the secret.
Hmm, based on my recent development work, I still think the situation could be improved. Right now, if one passes wrong credentials (for example with no access to the subscription at all) and tries to create a cluster, the cluster cannot even be removed, which is quite annoying as one has to remove the finalizers by hand to get rid of the objects.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/kind feature
Describe the solution you'd like
Right now, if one deploys CAPZ using Tilt and forgets to specify some credentials via environment variables, the CAPZ controller runs as usual until you try deploying a cluster, which then fails.
It would be nice to do some kind of check on the credentials and either reject cluster operations or crash if the credentials are wrong. This would provide a better experience for people deploying and developing CAPZ.
Environment: