ovh / terraform-provider-ovh

Terraform OVH provider
https://registry.terraform.io/providers/ovh/ovh/latest/docs
Mozilla Public License 2.0

ovh_cloud_project_kube kubeconfig #203

Open rienafairefr opened 3 years ago

rienafairefr commented 3 years ago

Hi, it seems to me the kubeconfig attribute of ovh_cloud_project_kube is not fetched from the API; it is only fetched once, at resource creation. If the kubeconfig is compromised and therefore reset in the dashboard, the Terraform state of the resource is not refreshed.

Terraform Version

v0.13.7, but probably other versions as well

Affected Resource(s)

ovh_cloud_project_kube

Expected Behavior

The .kubeconfig attribute of the ovh_cloud_project_kube resource should reflect the cluster's current kubeconfig.

Actual Behavior

The .kubeconfig attribute stays the same as when the cluster was first deployed.

Steps to Reproduce

  1. Create an ovh_cloud_project_kube resource (a minimal config sketch is shown below).
  2. Reset the kubeconfig, e.g. in the dashboard.
  3. terraform refresh/apply > no change in the resource state.
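For reference, a minimal config sketch for step 1; the project ID, cluster name and region below are placeholders, not taken from the original report:

```hcl
resource "ovh_cloud_project_kube" "my_cluster" {
  service_name = var.project_id # placeholder: your OVH public cloud project ID
  name         = "my-cluster"
  region       = "GRA5"         # placeholder region
}

# The attribute discussed in this issue: it is only populated at creation time.
output "kubeconfig" {
  value     = ovh_cloud_project_kube.my_cluster.kubeconfig
  sensitive = true
}
```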
yanndegat commented 3 years ago

Hi @rienafairefr

You're correct, but this is more an issue with the design of the API than with the provider resource itself. There's no way to detect this kind of change, as the kubeconfig can only be retrieved at cluster creation.

Since a reset of the cluster triggers a complete cluster re-creation, it's equivalent to a terraform destroy/apply in your scenario.

Maybe we could map the "reset" API endpoint to a specific Terraform resource (e.g. ovh_cloud_project_kube_reset) which would only trigger a reset, so you could taint it on demand and retrieve the kubeconfig from this new kind of resource.

I can't see any other way to manage this specific case.

cc @mhurtrel ?

mhurtrel commented 3 years ago

Note there are 2 API calls with different use cases and behaviour: one resets only the kubeconfig (POST .../kubeconfig/reset) and one resets the whole cluster (POST .../reset).

Both these calls are supposed to be used in very specific situations, and I would not consider them daily actions, as both may have an impact on your application availability and could be avoided by using RBAC and keeping the cluster in good shape.

rienafairefr commented 3 years ago

Yes, ultimately that super-admin kubeconfig is definitely not meant to be disseminated; it should rather be used to create RBAC objects, with kubeconfigs derived from those RBAC objects being the ones actually handed out. In this case it was a test/dev env, so no harm done.
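(Not from the original comment: a rough sketch of that pattern, assuming the Kubernetes provider is already configured with the admin kubeconfig. All names are illustrative.)

```hcl
# Hypothetical example: a namespaced service account with an edit-level binding,
# created once with the admin kubeconfig; a kubeconfig built from this service
# account's token can then be distributed instead of the super-admin one.
resource "kubernetes_service_account" "deployer" {
  metadata {
    name      = "deployer"
    namespace = "apps"
  }
}

resource "kubernetes_role_binding" "deployer_edit" {
  metadata {
    name      = "deployer-edit"
    namespace = "apps"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "edit"
  }
  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.deployer.metadata[0].name
    namespace = "apps"
  }
}
```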

The API used by the dashboard can retrieve the kubeconfig even well after creation, so that's why I was confused when the API in the provider stubbornly refused to refresh. Seeing "refreshing the state of module.x.ovh_cloud_project_kube.yyy" and not getting any refresh done is unexpected, I'd say. This means the dashboard at https://www.ovh.com/manager/public-cloud/#/pci/projects/ is using a non-public API, I guess?

Yes, an ovh_cloud_project_kube_kubeconfig resource might make sense, as you described @yanndegat, passing it the ovh_cloud_project_kube id. On creation of the ovh_cloud_project_kube, the linked ovh_cloud_project_kube_kubeconfig would expose its kubeconfig attribute; the provider would call POST /cloud/project/{serviceName}/kube/{kubeId}/kubeconfig/reset when we taint the ovh_cloud_project_kube_kubeconfig, and POST /cloud/project/{serviceName}/kube/{kubeId}/reset would only be called when tainting/re-creating the actual ovh_cloud_project_kube.
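Something like this, purely hypothetical (this resource does not exist in the provider today, and the names are placeholders):

```hcl
# Hypothetical shape of the proposed resource, not an existing provider API.
resource "ovh_cloud_project_kube_kubeconfig" "main" {
  service_name = var.project_id # placeholder
  kube_id      = ovh_cloud_project_kube.my_cluster.id
}

# Tainting it would call .../kubeconfig/reset and re-fetch the credentials:
#   terraform taint ovh_cloud_project_kube_kubeconfig.main
output "kubeconfig" {
  value     = ovh_cloud_project_kube_kubeconfig.main.kubeconfig
  sensitive = true
}
```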

yanndegat commented 3 years ago

Well, the provider is supposed to be "dumb" and only map API endpoints as 1-to-1 resources. There are some very specific cases where this is not true, but if there is to be some business logic, it has to be implemented as an API endpoint first and then mapped in Terraform.

But in the end you could end up in a situation where you have multiple resources defined in your recipe:

```hcl
resource "ovh_cloud_project_kube" "cluster" {}
resource "ovh_cloud_project_kube_reset" "fullreset" {}
resource "ovh_cloud_project_kube_kubeconfig_reset" "kubeconfig_reset" {}
```

But then you would need to know which one has to be output.

BTW: looking at the way AWS EKS or GCP's Kubernetes Engine are mapped in Terraform, I can't see any logic implemented to reset the cluster auth config. Maybe this logic has to be kept & managed outside Terraform.

yanndegat commented 3 years ago

In AWS EKS, there's a data source to retrieve the kubeconfig auth info:

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth
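For comparison, the usual EKS pattern looks roughly like this (the cluster name is a placeholder):

```hcl
data "aws_eks_cluster" "this" {
  name = "my-eks-cluster" # placeholder
}

data "aws_eks_cluster_auth" "this" {
  name = "my-eks-cluster" # placeholder
}

# Auth info is read fresh on every plan/apply instead of being frozen in state.
provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```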

bullshit commented 2 years ago

Hi, is there a GitHub project where we can open an issue so the behaviour of the API changes, or would it be okay for the maintainers to just use the reset kubeconfig endpoint in a resource?

ElliotG commented 1 year ago

Just wanted to add a +1 for getting the API expanded so that there is a data source for pulling the kubeconfig. Without it, it is extremely brittle/obnoxious to use the Kubernetes/Helm providers in Terraform.
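In the meantime, one workaround sketch (not an official recommendation, and it still goes stale after a reset, as described above) is to parse the raw kubeconfig attribute captured at creation and feed it to those providers:

```hcl
locals {
  # Parse the raw YAML kubeconfig string stored in state at cluster creation.
  kubeconfig = yamldecode(ovh_cloud_project_kube.my_cluster.kubeconfig)
}

# Assumes the standard kubeconfig layout with certificate-based auth.
provider "kubernetes" {
  host                   = local.kubeconfig.clusters[0].cluster.server
  cluster_ca_certificate = base64decode(local.kubeconfig.clusters[0].cluster["certificate-authority-data"])
  client_certificate     = base64decode(local.kubeconfig.users[0].user["client-certificate-data"])
  client_key             = base64decode(local.kubeconfig.users[0].user["client-key-data"])
}
```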