hashicorp / terraform-provider-nomad

Terraform Nomad provider
https://registry.terraform.io/providers/hashicorp/nomad/latest
Mozilla Public License 2.0

Provider produced inconsistent result after apply #253

Open wallacepf opened 2 years ago

wallacepf commented 2 years ago

Terraform Version

Terraform v1.0.9
on linux_amd64

Nomad Version

1.1.6

Provider Configuration

terraform {
    required_providers {
        nomad = {
            version = "~> 1.4.15"
        }
    }
}

provider "nomad" {
    address = "http://192.168.86.27:4646"
}

Affected Resource(s)

  * nomad_job

Terraform Configuration Files

terraform {
    required_providers {
        nomad = {
            version = "~> 1.4.15"
        }
    }
}

provider "nomad" {
    address = "http://192.168.86.27:4646"
}

resource "nomad_namespace" "cicd" {
    name = "cicd"
    description = "Namespace for CICD"
}

resource "nomad_job" "gl-runner" {
    jobspec = file("${path.module}/jobs/gl-runner.nomad")
    hcl2 {
        enabled = true
        vars = {
            "datacenters" = "[\"NUC\"]",
            "namespace" = nomad_namespace.cicd.name
        }
    }
}
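
(Side note: since the hcl2 vars map only takes string values, the hand-escaped list above can equivalently be built with Terraform's jsonencode; the following is a sketch of the same resource rewritten that way:)

resource "nomad_job" "gl-runner" {
    jobspec = file("${path.module}/jobs/gl-runner.nomad")
    hcl2 {
        enabled = true
        vars = {
            # jsonencode(["NUC"]) produces the string ["NUC"],
            # equivalent to the hand-escaped "[\"NUC\"]" above
            "datacenters" = jsonencode(["NUC"]),
            "namespace"   = nomad_namespace.cicd.name
        }
    }
}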
variable "datacenters" {
    type = list(string)
}

variable "namespace" {
    type = string
}

job "gitlab-runners" {
  datacenters = var.datacenters
  type        = "service"
  namespace = var.namespace

  group "runners" {
      task "runners" {
          driver = "docker"
          config {
              image = "gitlab/gitlab-runner:alpine"
              volumes = [
                  "/srv/gitlab-runner/config:/etc/gitlab-runner"
              ]
          }
      }
  }
}

Debug Output

https://gist.github.com/wallacepf/7bf4863122f539159bd1bb4329cf67b1

Expected Behavior

The resource gets deployed within the specified namespace on Nomad.

Actual Behavior

Error message:

Error: Provider produced inconsistent result after apply
When applying changes to nomad_job.gl-runner, provider "provider[\"registry.terraform.io/hashicorp/nomad\"]" produced an unexpected new value: Root resource was present, but now absent. This is a bug in the provider, which should be reported in the provider's own issue tracker.

Steps to Reproduce

  1. terraform apply (I'm currently using TFC4B with remote agents running on Nomad)

lgfa29 commented 2 years ago

Hi @wallacepf 👋

I'm not sure how TFC4B works, but would it be possible to set the TF_LOG_PROVIDER environment variable and reproduce the issue again?

Thanks!
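
(For reference, a local run with provider logging enabled looks roughly like this; with TFC4B remote agents the variable would instead go in the agent's environment:)

export TF_LOG_PROVIDER=TRACE    # provider-scoped logs; levels: TRACE, DEBUG, INFO, WARN, ERROR
terraform apply 2> provider-trace.log    # Terraform writes logs to stderr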

wallacepf commented 2 years ago

Of course! I'll do that during the weekend.

wallacepf commented 2 years ago

Here you will find the provider's debug information: https://gist.github.com/wallacepf/d7b85d8e657797a70f43dd55b6f84022

(Logging level was set to TRACE)

It seems like the variables I'm passing inside the hcl2 block aren't being taken into account when the job runs.

I don't know if what I'm facing is related to this: https://github.com/hashicorp/nomad/issues/11149 (the error message looks the same).
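
(A quick way to rule out the jobspec itself is to plan the job with the Nomad CLI directly, passing the same HCL2 variables; this sketch assumes the CLI can reach the cluster at the address from the provider config:)

export NOMAD_ADDR=http://192.168.86.27:4646
# pass the same variables the provider's hcl2 block would pass
nomad job plan -var 'datacenters=["NUC"]' -var 'namespace=cicd' jobs/gl-runner.nomad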

lgfa29 commented 2 years ago

Thanks for the logs @wallacepf.

I still can't seem to reproduce it though 🤔

Are you updating an existing job? Or did you maybe add the variable blocks after first registering the job without them?

MagicRB commented 1 year ago

I'm hitting something similar: if I set region in the job spec to a non-local region, the provider hits the inconsistent-result error. IMO it's because it doesn't check the correct region when verifying the deployment. A fix would be to allow us to specify region and namespace at the nomad_job resource level; having multiple aliased providers is cumbersome (see the sketch below).
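
(For context, the aliased-provider workaround mentioned above looks roughly like this; the alias, region, and job names here are made up for illustration:)

# one provider block per target region, distinguished by alias
provider "nomad" {
    alias   = "remote"
    address = "http://192.168.86.27:4646"
    region  = "some-remote-region"
}

resource "nomad_job" "example" {
    provider = nomad.remote    # pin this job to the aliased provider
    jobspec  = file("${path.module}/jobs/example.nomad")
}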

lgfa29 commented 1 year ago

Ah thanks for the extra info @MagicRB.

Unfortunately, exposing these fields in the nomad_job resource is not really doable (see https://github.com/hashicorp/terraform-provider-nomad/issues/125#issuecomment-674251130 for more details). I will try to reproduce this again and see how we can fix it.

MagicRB commented 1 year ago

Maybe a solution would be to detect the mismatch and give a meaningful error message. That would at least tell people how to fix it.

Mileshin commented 2 months ago

Hi, any updates? I've encountered the same error. My Terraform version is:

$ terraform version
Terraform v1.8.2
on linux_amd64
+ provider registry.terraform.io/hashicorp/nomad v1.4.20
+ provider registry.terraform.io/hashicorp/null v3.1.1

My Nomad job file:

variable "namespace" {
  type = string
}

job "nginx-test" {
  datacenters = ["dc1"]
  namespace = var.namespace
  constraint {
    attribute = "${node.class}"
    value = "common"
  }
  type = "system"
  group "nginx" {
    network {
      port "http" {
        static = 2987
        to = 80
      }
    }

    service {
      name = "nginx"
      port = "http"
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.26.0-alpine"
        ports = ["http"]
      }
    }
  }
}

Terraform config:

resource "nomad_job" "nginx-test" {
  jobspec    = file("${path.module}/jobspecs/nginx-test.hcl")
  hcl2 {
    enabled = true
    vars = {
      namespace = nomad_namespace.system-dev.name
    }
  }
}