I'm facing the same issue with TargetGroupBinding. However, if you change apiVersion to elbv2.k8s.aws like this, at least you'll get a different error:
resource "kubernetes_manifest" "targetbinding" {
provider = kubernetes-alpha
manifest = {
apiVersion = "elbv2.k8s.aws"
kind = "TargetGroupBinding"
metadata = {
name = aws_lb_target_group.targets.name
namespace = kubernetes_deployment.app.metadata.0.namespace
}
spec = {
serviceRef = {
name = kubernetes_service.service.metadata.0.name
port = kubernetes_service.service.spec.0.port.0.port
}
targetType = "ip"
targetGroupARN = aws_lb_target_group.targets.arn
}
}
}
```
Error: Failed to determine GroupVersionResource for manifest

  on test.tf line 223, in resource "kubernetes_manifest" "targetbinding":
 223: resource "kubernetes_manifest" "targetbinding" {

no matches for kind "targetgroupbinding" in group ""
```
I have not been able to fix this so far.
@mogggggg Thanks for reporting this.
TL;DR: We're working on a permanent solution to this, but in the meantime setting preserveUnknownFields to false in the CRD works around this problem. The rationale for this is explained below.
Since provider version 0.3.0 we've made radical changes to the way it generates the plan. The central change is that we now use the OpenAPI definitions published by the cluster to ensure the structure of the resource's state is consistent even when the user specifies only a subset of the possible attributes. This is needed in order for Terraform to correctly maintain the state of the resource throughout its lifecycle.
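As an aside, you can see exactly what the cluster is advertising by dumping the aggregated OpenAPI v2 document yourself. A quick sketch (assuming jq is available; the filter simply greps the definition keys for the elbv2 group):

```sh
# List the definition keys that the API server publishes for the elbv2.k8s.aws
# group in its aggregated OpenAPI v2 document; you can then look up the matching
# definition to see whether it actually contains any properties.
kubectl get --raw /openapi/v2 | jq -r '.definitions | keys[] | select(test("elbv2"))'
```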
The issue that you are seeing here is that the OpenAPI definition advertised by the cluster API for the TargetGroupBinding.v1beta1.elbv2.k8s.aws resource is essentially empty. This is strange, because I had a quick look at this CRD and noticed they define a schema correctly in it, but then also set preserveUnknownFields to true. I'm not sure why the authors chose to set this since the CRD does have a proper schema, but setting it to true causes the API server to advertise no schema for the CRD. The workaround, as mentioned, is to set it to false. The permanent solution, on our side, is to source the schema information directly off the CRD in the case of CR resources, rather than going through the cluster API.
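For anyone who just wants the workaround command, something along these lines should do it (this assumes the CRD name targetgroupbindings.elbv2.k8s.aws, which is what the aws-load-balancer-controller manifests install):

```sh
# Turn off preserveUnknownFields on the installed CRD so the API server
# starts advertising the CRD's schema in its OpenAPI document again.
kubectl patch crd targetgroupbindings.elbv2.k8s.aws \
  --type=merge -p '{"spec":{"preserveUnknownFields":false}}'
```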
@CrawX elbv2.k8s.aws is not a valid apiVersion value (it's missing the version part).
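In other words, the manifest needs the full group/version string; given the TargetGroupBinding.v1beta1.elbv2.k8s.aws definition mentioned above, that would presumably be:

```hcl
apiVersion = "elbv2.k8s.aws/v1beta1"
```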
I also have the same issue. In my case I have the following kubernetes_manifest resource:

```hcl
resource "kubernetes_manifest" "vertical-pod-autoscaler" {
  provider = kubernetes-alpha

  manifest = {
    "apiVersion" = "autoscaling.k8s.io/v1"
    "kind"       = "VerticalPodAutoscaler"
    ...
```
Terraform plan fails with the following error:

```
╷
│ Error: No valid OpenAPI definition
│
│   with kubernetes_manifest.vertical-pod-autoscaler,
│   on vpa.tf line 4, in resource "kubernetes_manifest" "vertical-pod-autoscaler":
│    4: resource "kubernetes_manifest" "vertical-pod-autoscaler" {
│
│ Resource VerticalPodAutoscaler.v1.autoscaling.k8s.io does not have a valid OpenAPI definition in this cluster.
│
│ Usually this is caused by a CustomResource without a schema.
```
The cluster does have a verticalpodautoscalers API resource with kind VerticalPodAutoscaler, but its resource name is verticalpodautoscalers:

```
kubectl api-resources --api-group=autoscaling.k8s.io
NAME                     SHORTNAMES   APIVERSION              NAMESPACED   KIND
verticalpodautoscalers   vpa          autoscaling.k8s.io/v1   true         VerticalPodAutoscaler
```
It looks like kubernetes-alpha checks the kind instead of the CRD resource name. In my example it checks Resource VerticalPodAutoscaler.v1.autoscaling.k8s.io instead of Resource verticalpodautoscalers.v1.autoscaling.k8s.io. You should use the API resource name instead of the kind when validating the OpenAPI definition.
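For what it's worth, a quick way to check what schema (if any) the CRD itself carries, assuming the conventional CRD name for that group:

```sh
# Print each served version of the VPA CRD together with the top-level type of
# its openAPIV3Schema; an empty value suggests no structural schema is defined.
kubectl get crd verticalpodautoscalers.autoscaling.k8s.io \
  -o jsonpath='{range .spec.versions[*]}{.name}{" -> "}{.schema.openAPIV3Schema.type}{"\n"}{end}'
```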
Version 0.4.0 was just released which brings support for non-structural CR / CRDs. Please give it a try and let us know if this case is still an issue.
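If you pin provider versions, a minimal required_providers block to pick up that release might look like this (assuming the hashicorp/kubernetes-alpha registry source):

```hcl
terraform {
  required_providers {
    # 0.4.0 is the release that adds support for non-structural CRs / CRDs.
    kubernetes-alpha = {
      source  = "hashicorp/kubernetes-alpha"
      version = ">= 0.4.0"
    }
  }
}
```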
> Version 0.4.0 was just released which brings support for non-structural CR / CRDs. Please give it a try and let us know if this case is still an issue.
With earlier versions I had the same issues as mentioned above by others. With 0.4.0 these errors went away and resource creation is working as expected. Thanks.
Thanks @alexsomesan, it's working for me now!
Terraform, Provider, Kubernetes versions
Steps to Reproduce
I'm having trouble creating CRD resources using this provider. In the latest version (0.3.2) I get the error discussed above, but the same code works correctly in 0.2.1.
To reproduce, you can install the aws-load-balancer-controller CRDs:

```
kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
```
Then try and create a TargetGroupBinding object:
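(The manifest below is a sketch with placeholder values; as mentioned, none of the AWS resources need to actually exist for this to reproduce.)

```hcl
resource "kubernetes_manifest" "targetbinding" {
  provider = kubernetes-alpha

  manifest = {
    apiVersion = "elbv2.k8s.aws/v1beta1"
    kind       = "TargetGroupBinding"
    metadata = {
      name      = "example"
      namespace = "default"
    }
    spec = {
      serviceRef = {
        name = "example-service" # placeholder, does not need to exist
        port = 80
      }
      targetType     = "ip"
      # Placeholder ARN; the target group does not need to exist for planning.
      targetGroupARN = "arn:aws:elasticloadbalancing:us-east-1:000000000000:targetgroup/example/0123456789abcdef"
    }
  }
}
```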
I'm running the steps above in a Kind cluster to perform some automated testing; none of the AWS resources need to actually exist for it to work. Let me know if I can provide any more info!