Open andrewrynhard opened 3 years ago
Ah, interesting. I don't see any reason we shouldn't support this use case.
I think the biggest blocker right now is that we only allow configuration of the ProjectID on the PacketCluster; if we allowed overriding it on the PacketMachine, then I think we could successfully decouple the need for a PacketCluster.
Sounds reasonable to me!
@detiber Any idea when this could land? We could help and contribute the work if that helps. I would love to show this off at Kubecon!
@andrewrynhard biggest blocker right now is getting https://github.com/kubernetes-sigs/cluster-api-provider-packet/pull/269 across the line and wrapped up, which has been fighting me with continued edge cases creeping up, and I'd like to avoid having to rebase additional changes in and complicate it further.
More than happy to accept PRs for the changes needed against my fork/branch to include with that PR if you don't want to wait for it to merge first, though.
We are working on this migration as well. No particular rush on our side to make this PR within the next couple of weeks. We can wait. Thanks!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
isn't the requested functionality an explicit non-goal [1] of the cluster-api? Won't there be fundamental cluster-api impossibilities? Or am I misunderstanding the requested functionality?
[1]: https://github.com/kubernetes-sigs/cluster-api/blob/46df46a9b6d3efe44d88148105fe63380ac531bd/docs/scope-and-objectives.md -- "non-goal: To manage a single cluster spanning multiple infrastructure providers."
/lifecycle frozen
It's complicated. It has been discussed upstream in various ways. While it's not something that the community would necessarily recommend without caveats, things should not be so tightly coupled that it should prevent one from doing so.
For example, consider the case of a pod-based control plane: the infrastructure provider for the control plane would be quite different from the infrastructure provider one would use for the worker nodes.
There is a similar ask in the Tinkerbell and Rancher community Slacks.
Is it on the CAP<provider> roadmap to support provisioning of hybrid clusters where, say, control plane nodes are VMs provisioned via vSphere and worker nodes are bare metal provisioned via Tinkerbell?
@richardcase:
this isn't on the <provider> roadmap specifically. However, the ability to create mixed-provider clusters is being discussed more generally within CAPI and a feature group has been formed. The goal is that this will be supported at some point in the future across different providers.
https://mobile.twitter.com/fruit_case/status/1555554512529653761?s=61&t=zAZ8pZ78GnN59Qr_4a0cXA
There are changes that could be made (i.e. removing the direct coupling between the machine reconcilers and a specific infra provider for the cluster) to enable this before the feature group makes recommendations/changes. This is what we did in our demo with the capmvm and capbyoh providers... but we didn't upstream these changes.
User Story

As a user I would like to manage packet machines with `cluster-api-provider-packet`, but join them to a control plane that is not in Equinix Metal. The idea is that I can use Sidero for bare metal, and then burst out to Equinix Metal when I need to.

Detailed Description

I want to create a `MachineDeployment` for `PacketMachine`s, and have them join a non-`PacketCluster` cluster. The `atl2` cluster reference is a Sidero-based cluster.

/kind feature
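The manifest that originally accompanied this issue is not preserved in the thread. A minimal sketch of what such a `MachineDeployment` might look like, assuming a Talos bootstrap config (since Sidero is in play); the names, replica count, versions, and bootstrap kind are illustrative assumptions, not the issue author's actual manifest:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: equinix-workers        # illustrative name
spec:
  clusterName: atl2            # the Sidero-based cluster from the user story
  replicas: 3                  # illustrative
  selector:
    matchLabels: {}
  template:
    spec:
      clusterName: atl2
      version: v1.22.0         # illustrative Kubernetes version
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: TalosConfigTemplate   # assumed bootstrap provider; others would work
          name: equinix-workers
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: PacketMachineTemplate   # Packet-backed machines joining the atl2 cluster
        name: equinix-workers
```

The key point is `clusterName: atl2` on a deployment whose `infrastructureRef` is a `PacketMachineTemplate`: the machines are provisioned in Equinix Metal but join a cluster whose control plane lives elsewhere.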