Closed: dotdate closed this issue 4 years ago
If you previously applied these resources with e.g. kubectl, you have to first import them into the Terraform state using terraform import. More info: https://registry.terraform.io/providers/kbst/kustomization/latest/docs#imports
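With this provider, the import target is an individual kustomization_resource instance keyed by its resource id. Schematically, the command looks like this (the resource address and the `<resource-id>` placeholder are illustrative; the exact id format is described in the linked docs):

```sh
terraform import 'kustomization_resource.example["<resource-id>"]' '<resource-id>'
```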
I created those resources via Terraform and kustomization. The first run works. I then made some changes in other things and wanted to reapply via terraform apply.
I suggest you check terraform state list and also terraform workspace list. I don't know what you changed in between; maybe you deleted your state?
Saw this just now, but your resource references data.kustomization.current while your data source is called example in the code you showed above.
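For reference, the data source name and every reference to it must match. A minimal consistent pair, using a data source named example, would look like this:

```hcl
data "kustomization" "example" {
  path = "minio/"
}

# references must use the same name, "example" here:
resource "kustomization_resource" "minio" {
  for_each = data.kustomization.example.ids
  manifest = data.kustomization.example.manifests[each.value]
}
```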
I have no reason to believe this fundamental part of the provider is broken. There are plenty of tests, and I use the provider extensively as part of the Kubestack Terraform GitOps framework. So I must assume this is an error on your end. If you can provide a complete configuration that shows the error, I'm happy to take a look. Until then there is nothing I can do.
I included the kustomization stuff in the main.tf; here is a snippet of the parts relevant to this topic:
```hcl
terraform {
  required_version = "0.13.4"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>2.30.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~>1.3.1"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 1.13.2"
    }
    kustomization = {
      source  = "kbst/kustomization"
      version = "0.2.2"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.9.1"
    }
  }
}

[...]

provider "kustomization" {
}

[...]

data "kustomization" "current" {
  # using the workspace name to select the correct overlay
  path = "minio/"
}

resource "kustomization_resource" "minio" {
  for_each = data.kustomization.current.ids
  manifest = data.kustomization.current.manifests[each.value]
}
```
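Conceptually, the for_each above treats data.kustomization.current.ids as a set of keys into the manifests map, creating one resource instance per id. A Python sketch of that pairing (the ids and manifest strings below are hypothetical placeholders, not actual provider output):

```python
# Hypothetical stand-ins for the provider's two outputs:
# ids is a collection of resource ids, manifests a map keyed by those ids.
ids = [
    "v1/Namespace/minio-operator",
    "apps/v1/Deployment/minio-operator/minio-operator",
]
manifests = {i: f"<manifest for {i}>" for i in ids}

# for_each creates one kustomization_resource per id; each.value is the id,
# and manifest = manifests[each.value] selects the matching document.
resources = {each_value: manifests[each_value] for each_value in ids}

print(len(resources))  # one resource instance per id
```

If the data source returns no ids, for_each creates nothing, which is why an empty data source silently produces zero kustomization_resource entries in state.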
And this is what my folder tree looks like for the mentioned files:
Output of terraform state list:

```
~/git/gitlab-devops.covis.de/covis/azure-qa: terraform state list
data.kustomization.current
azurerm_application_gateway.agw01
azurerm_kubernetes_cluster.aks01
azurerm_postgresql_configuration.postgresql-conf-01
azurerm_postgresql_database.tigerdb
azurerm_postgresql_server.postgresql01
azurerm_private_dns_zone.privatedns-zone-01
azurerm_private_dns_zone_virtual_network_link.privatedns-zone-vlink-01
azurerm_private_endpoint.privateendpoint-01
azurerm_public_ip.pip01
azurerm_resource_group.rg01
azurerm_role_assignment.ra3
azurerm_role_assignment.ra4
azurerm_role_assignment.ra5
azurerm_role_assignment.ra6
azurerm_role_assignment.ra7
azurerm_role_assignment.ra8
azurerm_storage_account.sa01
azurerm_storage_container.scont01
azurerm_subnet.appgwsubnet
azurerm_subnet.kubesubnet
azurerm_virtual_network.vnet01
helm_release.aad-pod-identity01
helm_release.agw-ingress01
helm_release.pgadmin-01
helm_release.redis-01
helm_release.velero01
kubernetes_secret.postgressecret
```
Output of terraform workspace list (we don't use this feature):

```
~/git/gitlab-devops.covis.de/covis/azure-qa: terraform workspace list
workspaces not supported
```
kustomization.yaml:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Configure repo and tag of MinIO Operator Image
images:
  - name: minio/k8s-operator
    newName: minio/k8s-operator
    newTag: v3.0.28

namespace: minio-operator

resources:
  - namespace.yaml
  - service-account.yaml
  - cluster-role.yaml
  - cluster-role-binding.yaml
  - crds/minio.min.io_tenants.yaml
  - service.yaml
  - deployment.yaml
```
I also strongly believe that the problem is on my side, but I don't see where. Thank you!
I don't see an error in your configuration. But the resources are clearly not in the state; there are none starting with kustomization_resource.minio. I don't know why they are not in the state, but I'd really try importing the existing resources into the state.
To debug more, try a manual kustomize build minio/ and check the output. You can also look at the state of the data source like this: terraform state show data.kustomization.current. That should show the resources the data source includes, which the for_each should then loop over.
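The two debugging steps above can be run from the configuration directory (assuming kustomize and terraform are on the PATH):

```sh
# render the overlay exactly as the data source would
kustomize build minio/

# inspect what the data source recorded in state
terraform state show data.kustomization.current
```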
Thanks to you we could figure out the problem!

It was really a state problem. The deployed MinIO resources did not match what the state said we wanted to create. We don't know exactly how this happened, but since we also deploy manually, this can happen.
For all others who run into this problem: terraform destroy
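A sketch of that recovery sequence (destructive: this deletes everything Terraform manages in this configuration before recreating it, so only do this if the managed resources are disposable):

```sh
terraform destroy
terraform apply
```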
Thanks for the feedback. I'll close the issue, seems there is nothing else to do here.
Hi,

I hope I'm not misunderstanding something, but on a rerun via Terraform I get an error message that the resources already exist. Is it not possible to make incremental changes?
Do I understand correctly that the data is loaded by the kustomization data source and the creation of resources is handled by kustomization_resource? From my understanding, Terraform cannot see changes made by kustomize and doesn't keep the kustomize state in the state file.
Error: