Closed: mkarroqe closed this issue 2 months ago
/priority backlog
@mkarroqe thanks for opening this issue. There is some previous discussion in https://github.com/kubernetes-sigs/cluster-api-provider-azure/issues/1674 that you might find relevant.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
related to #4699
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/kind bug
What steps did you take and what happened: When creating a cluster with a `.` character in the cluster name, no warning is generated that the name fails Azure's resource-name validation pattern. This leaves the cluster stuck in a failed provisioning state. During creation, all that was seen in the capz logs was the "reconciling AzureManagedControlPlane" message.
When attempting to delete the cluster, the deletion fails, and only then does the following error appear in the capz logs:
Invalid input: autorest/validation: validation failed: parameter=resourceName constraint=Pattern value="test.cluster.name" details: value doesn't match pattern ^[a-zA-Z0-9]$|^[a-zA-Z0-9][-_a-zA-Z0-9]{0,61}[a-zA-Z0-9]$
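For reference, the rejection can be reproduced outside CAPZ by matching a name against the pattern quoted in the error; the snippet below is an illustrative Go sketch, not CAPZ source:

```go
package main

import (
	"fmt"
	"regexp"
)

// Pattern copied from the autorest/validation error above: the name must
// start and end with an alphanumeric character and may otherwise contain
// only alphanumerics, hyphens, and underscores (63 characters max).
var resourceNamePattern = regexp.MustCompile(`^[a-zA-Z0-9]$|^[a-zA-Z0-9][-_a-zA-Z0-9]{0,61}[a-zA-Z0-9]$`)

func main() {
	for _, name := range []string{"test.cluster.name", "test-cluster-name"} {
		// "test.cluster.name" fails because "." is not in the allowed set.
		fmt.Printf("%q matches: %v\n", name, resourceNamePattern.MatchString(name))
	}
}
```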
The full error when deleting was retrieved with:
kubectl logs deploy/capz-controller-manager -n capz-system manager | grep test.cluster.name | grep err
What did you expect to happen: I expected there to be an error when creating the cluster, preventing me from attempting to provision in the first place.
Anything else you would like to add: I have drafted some code changes to add a condition that checks for this when the cluster is created; I will push up a PR shortly.
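For illustration only, a create-time check along these lines might look like the sketch below, assuming a simple pattern match at validation time; validateManagedClusterName and aksNamePattern are hypothetical names, not taken from the actual PR:

```go
// Hypothetical sketch of a create-time name check; not actual CAPZ code.
package sketch

import (
	"fmt"
	"regexp"
)

// Same pattern that autorest/validation enforces later during reconcile/delete.
var aksNamePattern = regexp.MustCompile(`^[a-zA-Z0-9]$|^[a-zA-Z0-9][-_a-zA-Z0-9]{0,61}[a-zA-Z0-9]$`)

// validateManagedClusterName surfaces the pattern violation when the cluster
// is created, instead of letting provisioning fail silently and the error
// only appearing at deletion time.
func validateManagedClusterName(name string) error {
	if !aksNamePattern.MatchString(name) {
		return fmt.Errorf("cluster name %q is invalid: must match %s", name, aksNamePattern.String())
	}
	return nil
}
```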
Environment:
- Kubernetes version (use kubectl version): 1.26.3
- OS (e.g. from /etc/os-release): macOS Ventura 13.3.1