kubernetes-sigs / cluster-api-provider-cloudstack

A Kubernetes Cluster API Provider implementation for Apache CloudStack.
https://cluster-api-cloudstack.sigs.k8s.io/
Apache License 2.0

Feature: Support externally managed cluster infrastructure #307

Closed hrak closed 6 months ago

hrak commented 1 year ago

Adds support for the ResourceExternallyManaged predicate

Issue #, if available:

Description of changes:

This PR implements the CAEP linked below, allowing cluster infrastructure to be externally managed. This makes it possible, for example, to use an external (possibly already existing) control plane while still leveraging CAPI/CAPC for the deployment of workers using MachineDeployments.

CloudStackClusters marked with the `cluster.x-k8s.io/managed-by` annotation are skipped during reconciliation.
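A minimal sketch of what that skip could look like in the CloudStackCluster reconciler, assuming the upstream cluster-api `util/annotations` helper and illustrative import paths (the actual wiring in this PR may differ):

```go
// Sketch only; not the exact code from this PR.
package controllers

import (
	"context"

	"sigs.k8s.io/cluster-api/util/annotations"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	// Import path is illustrative.
	infrav1 "sigs.k8s.io/cluster-api-provider-cloudstack/api/v1beta2"
)

type CloudStackClusterReconciler struct {
	client.Client
}

func (r *CloudStackClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	csCluster := &infrav1.CloudStackCluster{}
	if err := r.Client.Get(ctx, req.NamespacedName, csCluster); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Infra clusters carrying the cluster.x-k8s.io/managed-by annotation are
	// owned by an external controller, so CAPC must not reconcile them.
	if annotations.IsExternallyManaged(csCluster) {
		return ctrl.Result{}, nil
	}

	// ... normal reconciliation continues here ...
	return ctrl.Result{}, nil
}
```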

Related links:

- 📖 CAEP: Add support for infrastructure cluster resources to be managed externally
- ✨ Add externally managed annotation and predicate

Testing performed:

- `make test-sanity`
- `make test`
- tested with kind-based dev management cluster

With the following CloudStackCluster excerpt:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: CloudStackCluster
metadata:
  name: hrak-test-cluster
  namespace: default
  annotations:
    cluster.x-k8s.io/managed-by: "external"
```

Observed CAPC waiting for ready status on the CloudStackCluster, and the MachineDeployment waiting for ready status on the CloudStackCluster.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

k8s-ci-robot commented 1 year ago

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: hrak

Once this PR has been reviewed and has the lgtm label, please assign jweite-amazon for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

- **[OWNERS](https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/blob/main/OWNERS)**

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
netlify[bot] commented 1 year ago

Deploy Preview for kubernetes-sigs-cluster-api-cloudstack ready!

| Name | Link |
|------|------|
| Latest commit | 58f7ba701ee39a7e2e5429c5a50ab6dcc358e9ee |
| Latest deploy log | https://app.netlify.com/sites/kubernetes-sigs-cluster-api-cloudstack/deploys/6560969b7a9b830008fb869e |
| Deploy Preview | https://deploy-preview-307--kubernetes-sigs-cluster-api-cloudstack.netlify.app |
To edit notification comments on pull requests, go to your Netlify site configuration.

k8s-ci-robot commented 1 year ago

Hi @hrak. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
hrak commented 1 year ago

Turning this into a draft since it's not working the way it's supposed to, due to the way we handle failure domains. This probably means that whatever is doing the external management will also have to add a failure domain to the InfraCluster (CloudStackCluster) object when marking it as Ready. I will add some documentation to the PR on how to use this feature.
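For context, a rough sketch of what that external manager might have to do, not part of this PR; the field names (`Status.Ready`, `Status.FailureDomains`) and import paths reflect my reading of the v1beta2 API and should be verified against the CRD:

```go
// Rough sketch, assumptions noted above: an external controller marking a
// CloudStackCluster ready while also populating a failure domain.
package external

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"

	// Import path is illustrative.
	infrav1 "sigs.k8s.io/cluster-api-provider-cloudstack/api/v1beta2"
)

func markClusterReady(ctx context.Context, c client.Client, csCluster *infrav1.CloudStackCluster) error {
	patch := client.MergeFrom(csCluster.DeepCopy())

	// Populate at least one failure domain before (or when) flipping Ready,
	// otherwise machine placement has nothing to schedule against.
	csCluster.Status.FailureDomains = clusterv1.FailureDomains{
		"example-zone": clusterv1.FailureDomainSpec{ControlPlane: true},
	}
	csCluster.Status.Ready = true

	return c.Status().Patch(ctx, csCluster, patch)
}
```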

chrisdoherty4 commented 1 year ago

Feel free to reassign when ready.

/unassign chrisdoherty4

chrisdoherty4 commented 1 year ago

/uncc chrisdoherty4

weizhouapache commented 12 months ago

/run-e2e -c 4.18

blueorangutan commented 12 months ago

@weizhouapache a Jenkins job has been kicked off to run tests with the following parameters:

blueorangutan commented 12 months ago

Test Results: (tid-347)
Environment: kvm Rocky8 (x3), Advanced Networking with Management Server Rocky8
Kubernetes Version: v1.27.2
Kubernetes Version upgrade from: v1.26.5
Kubernetes Version upgrade to: v1.27.2
CloudStack Version: 4.18
Template: ubuntu-2004-kube
E2E Test Run Logs: https://github.com/blueorangutan/capc-prs/releases/download/capc-pr-ci-cd/capc-e2e-artifacts-pr307-sl-347.zip

[PASS] When testing affinity group Should have host affinity group when affinity is anti
[PASS] When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
[PASS] When testing with custom disk offering Should successfully create a cluster with a custom disk offering
[PASS] When testing app deployment to the workload cluster with network interruption [ToxiProxy] Should be able to create a cluster despite a network interruption during that process
[PASS] When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
[PASS] When testing with disk offering Should successfully create a cluster with disk offering
[PASS] When testing resource cleanup Should create a new network when the specified network does not exist
[PASS] with two clusters should successfully add and remove a second cluster without breaking the first cluster
[PASS] When testing horizontal scale out/in [TC17][TC18][TC20][TC21] Should successfully scale machine replicas up and down horizontally
[PASS] When testing MachineDeployment rolling upgrades Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
[PASS] When testing machine remediation Should replace a machine when it is destroyed
[PASS] When testing app deployment to the workload cluster with slow network [ToxiProxy] Should be able to download an HTML from the app deployed to the workload cluster
[PASS] When testing multiple CPs in a shared network with kubevip Should successfully create a cluster with multiple CPs in a shared network
[PASS] When the specified resource does not exist Should fail due to the specified account is not found [TC4a]
[PASS] When the specified resource does not exist Should fail due to the specified domain is not found [TC4b]
[PASS] When the specified resource does not exist Should fail due to the specified control plane offering is not found [TC7]
[PASS] When the specified resource does not exist Should fail due to the specified template is not found [TC6]
[PASS] When the specified resource does not exist Should fail due to the specified zone is not found [TC3]
[PASS] When the specified resource does not exist Should fail due to the specified disk offering is not found
[PASS] When the specified resource does not exist Should fail due to the compute resources are not sufficient for the specified offering [TC8]
[PASS] When the specified resource does not exist Should fail due to the specified disk offer is not customized but the disk size is specified
[PASS] When the specified resource does not exist Should fail due to the specified disk offer is customized but the disk size is not specified
[PASS] When the specified resource does not exist Should fail due to the public IP can not be found
[PASS] When the specified resource does not exist When starting with a healthy cluster Should fail to upgrade worker machine due to insufficient compute resources
[PASS] When the specified resource does not exist When starting with a healthy cluster Should fail to upgrade control plane machine due to insufficient compute resources
[PASS] When testing subdomain Should create a cluster in a subdomain

Summarizing 3 Failures:

[Fail] When testing affinity group [It] Should have host affinity group when affinity is pro 
/jenkins/workspace/capc-e2e-new/test/e2e/common.go:331

[Fail] When testing app deployment to the workload cluster [TC1][PR-Blocking] [It] Should be able to download an HTML from the app deployed to the workload cluster 
/jenkins/workspace/capc-e2e-new/test/e2e/deploy_app.go:111

[Fail] When testing Kubernetes version upgrades [It] Should successfully upgrade kubernetes versions when there is a change in relevant fields 
/root/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.12/framework/cluster_helpers.go:143

Ran 28 of 29 Specs in 8074.304 seconds
FAIL! -- 25 Passed | 3 Failed | 0 Pending | 1 Skipped
--- FAIL: TestE2E (8074.31s)
FAIL
k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:

- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:

- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:

- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot commented 7 months ago

@k8s-triage-robot: Closed this PR.

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/pull/307#issuecomment-2094348128):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages PRs according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
>
> You can:
> - Reopen this PR with `/reopen`
> - Mark this PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
weizhouapache commented 7 months ago

/reopen

k8s-ci-robot commented 7 months ago

@weizhouapache: Reopened this PR.

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/pull/307#issuecomment-2094366571):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:

- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot commented 6 months ago

@k8s-triage-robot: Closed this PR.

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/pull/307#issuecomment-2146076599):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages PRs according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
>
> You can:
> - Reopen this PR with `/reopen`
> - Mark this PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.