Closed: lzhecheng closed this issue 11 months ago
/cc @jackfrancis @feiskyer
/assign @mboersma
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/remove-lifecycle rotten
/lifecycle active
/assign @Jont828
/reopen
cc @dtzar
@CecileRobertMichon: Reopened this issue.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/retitle Support autoscaling on self-managed MachinePool using MachinePool Machines
This should be addressed now that #3998 is merged; closing this issue out.
/kind feature
Describe the solution you'd like
Currently, autoscaling is supported only for managed (AKS) clusters; self-managed clusters should be supported as well. Issue for the managed case: https://github.com/kubernetes-sigs/cluster-api-provider-azure/issues/1669
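For context, with the cluster-autoscaler's clusterapi provider, opting a self-managed MachinePool into autoscaling would look roughly like the sketch below: min/max-size annotations on the MachinePool mark it as a scalable node group. All names, namespaces, and versions here are placeholders, not taken from this issue:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: my-cluster-mp-0          # placeholder name
  namespace: default
  annotations:
    # Opt this pool into scaling by the cluster-autoscaler clusterapi provider
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
spec:
  clusterName: my-cluster
  replicas: 1                    # starting size; the autoscaler adjusts this
  template:
    spec:
      clusterName: my-cluster
      version: v1.28.0           # placeholder Kubernetes version
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfig
          name: my-cluster-mp-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AzureMachinePool
        name: my-cluster-mp-0
```

The autoscaler itself would then run with --cloud-provider=clusterapi pointed at the management cluster, scaling spec.replicas within the annotated bounds.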
I am adding autoscaling CI for cloud-provider-azure with CAPZ. The original job was defined with aks-engine: https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes-sigs/cloud-provider-azure/cloud-provider-azure-config.yaml#L422-L483. The current cluster template: https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/tests/k8s-azure/manifest/linux-autoscaler.json
Config is:
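As a minimal sketch only (the job name, image tag, and entrypoint script below are hypothetical placeholders, not the actual config), a CAPZ-based periodic prow job for this could look like:

```yaml
periodics:
- name: cloud-provider-azure-autoscaling-capz   # hypothetical job name
  interval: 24h
  decorate: true
  extra_refs:
  - org: kubernetes-sigs
    repo: cloud-provider-azure
    base_ref: master
  spec:
    containers:
    - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest-master  # placeholder tag
      command:
      - runner.sh
      args:
      - ./tests/k8s-azure/run-autoscaler-e2e.sh   # hypothetical entrypoint script
```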
related: https://github.com/kubernetes-sigs/cloud-provider-azure/issues/919
Anything else you would like to add:
Environment:

- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):