Open · CecileRobertMichon opened this issue 3 years ago
/assign
Noting that there's a similar issue in #1865. I kept the behavior the same when refactoring but I'll come back to it when this issue gets fixed.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
@CecileRobertMichon Any luck on this issue?
I have not been able to work on this one. Will unassign for now in case you or someone else wants to pick it up.
/unassign /help
@CecileRobertMichon: This request has been marked as needing help from a contributor.

Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be updated in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
I can take this, but I need some help: what resources should I look at, what could my approach be, and is there some example code showing where the ID change would go? 😺
We might want to hold off on this one until after the switch to ASO (https://github.com/kubernetes-sigs/cluster-api-provider-azure/issues/3402). @nojnhuh wdyt?
I think the main difference between the SDK and ASO for this would be where we get the resource IDs from, but otherwise I don't see ASO affecting this much, since it mostly comes down to how and where that data lands in CAPZ resources.
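To make that concrete, here is a minimal, self-contained sketch using hypothetical stand-in types (sdkSubnet, asoSubnet), not the real SDK or ASO API types: the only thing that changes between the two paths is where the ID string is read from, not how CAPZ would record it.

```go
package main

import "fmt"

// Hypothetical stand-ins for an SDK response and an ASO resource;
// the real types live in the Azure SDK and azure-service-operator APIs.
type sdkSubnet struct{ ID *string }

type asoSubnet struct {
	Status struct{ ID *string }
}

// With the SDK, the resource ID comes back on the API response.
func idFromSDK(s sdkSubnet) string {
	if s.ID == nil {
		return ""
	}
	return *s.ID
}

// With ASO, the resource ID is read from the ASO object's status once the
// ASO controller has reconciled the resource.
func idFromASO(s asoSubnet) string {
	if s.Status.ID == nil {
		return ""
	}
	return *s.Status.ID
}

func main() {
	id := "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>"
	sdk := sdkSubnet{ID: &id}
	var aso asoSubnet
	aso.Status.ID = &id
	// Either way, CAPZ ends up with the same ID string to record.
	fmt.Println(idFromSDK(sdk) == idFromASO(aso)) // true
}
```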
/kind feature
Describe the solution you'd like:
Currently, some of the services (for example, subnets and route tables) set the Azure resource ID of existing resources in the spec as part of Reconcile(). There are a few problems with this:

1. The ID field is part of the spec, not the status, yet it is not user configurable: if a user were to set an ID, it would be overwritten by the controller. This field should either not be in the spec or be clearly identified as read-only.
2. The implementation is inconsistent across the codebase; some resources have this while others don't. Furthermore, even when the ID is set, we don't use it most of the time, instead reconstructing the ID from scratch when needed (e.g. the route table ID in the subnet service).
3. Modifying the spec mid-reconcile can cause bugs if some of the previous fields are not respected, as seen in #1589.
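To illustrate point 1, here is a minimal sketch (hypothetical type and field names, not the actual CAPZ API) of moving the Azure resource ID out of the user-facing spec and into a controller-owned status field:

```go
package v1beta1 // illustrative API package, not the real CAPZ one

// SubnetSpec holds only the fields a user is expected to configure.
type SubnetSpec struct {
	// Name is the name of the subnet.
	Name string `json:"name"`
	// CIDRBlocks are the address prefixes assigned to the subnet.
	CIDRBlocks []string `json:"cidrBlocks,omitempty"`
}

// SubnetStatus holds fields the controller reports back; users never set these.
type SubnetStatus struct {
	// ID is the Azure resource ID of the subnet, populated by the
	// controller once the subnet has been created or found. Read-only.
	ID string `json:"id,omitempty"`
}
```

Reconcile() would then record the ID on the status (e.g. subnet.Status.ID = existing.ID) instead of writing it back into the spec, which also avoids the mid-reconcile spec mutation described in point 3.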
Anything else you would like to add:
Environment:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):