jfrog / terraform-provider-xray

Terraform provider to manage JFrog Xray
https://jfrog.com/xray/
Apache License 2.0

Moving policy with same name from one watch to another fails #125

Closed: heryxpc closed this issue 1 year ago

heryxpc commented 1 year ago

Describe the bug: I updated my configuration to move one policy to a new watch. terraform apply appeared to destroy it, but when the API tried to create it again, it failed with a 409 error:

module.watch-all-repos-xxx.xray_security_policy.critical_cvss: Destroying... [id=security-policy-critical-cvss]
module.watch-critical-all-repos-xxx.xray_security_policy.critical_cvss: Creating...
module.watch-all-repos-xxx.xray_security_policy.high_severity: Modifying... [id=security-policy-high-severity]
module.watch-all-repos-xxx.xray_security_policy.critical_cvss: Destruction complete after 0s
module.watch-all-repos-xxx.xray_security_policy.high_severity: Modifications complete after 0s [id=security-policy-high-severity]
module.watch-all-repos-xxx.xray_watch.all-repos: Modifying... [id=watch-all-repos-xxx]
module.watch-all-repos-xxx.xray_watch.all-repos: Modifications complete after 1s [id=watch-all-repos-xxx]
╷
│ Error: 
│ 409 POST https://artifactory-staging.xxx.net/xray/api/v2/policies
│ {"error":"Policy already exists"}
│ 
│   with module.watch-critical-all-repos-xxx.xray_security_policy.critical_cvss,
│   on .terraform/modules/watch-critical-all-repos-xxx/ops/terraform/modules/xray-security/watch-critical/main.tf line 10, in resource "xray_security_policy" "critical_cvss":
│   10: resource "xray_security_policy" "critical_cvss" {
│ 
╵

The policy security-policy-critical-cvss used to exist in the watch watch-all-repos-xxx (I'm using Terraform modules to hold the configuration). I was moving it to a new watch named watch-critical-all-repos-xxx.

My CI first ran terraform plan and then terraform apply. The plan output stated that the policy would be destroyed:

module.watch-all-repos-xxx.xray_security_policy.critical_cvss will be destroyed
resource "xray_security_policy" "critical_cvss" {
...
Plan: 2 to add, 4 to change, 1 to destroy.

But, as shown above, it fails during apply. I'm using provider version 1.11.1.

Former configuration:

terraform {
  required_providers {
    xray = {
      source  = "registry.terraform.io/jfrog/xray"
      version = ">= 1.11.1, <= 1.12"
    }
  }
}
resource "xray_security_policy" "high_severity" {
  name        = "security-policy-high-severity"
  description = "Security policy to alert on High severity vulnerabilities"
  type        = "security"

  rule {
    name     = "rule-high-severity"
    priority = 1

    criteria {
      min_severity          = "High"
    }

    actions {
      webhooks                           = []
      mails                              = []
      notify_watch_recipients            = true

      block_download {
        unscanned = false
        active    = false
      }
    }
  }
}

resource "xray_security_policy" "critical_cvss" {
  name        = "security-policy-critical-cvss"
  description = "Security policy to flag artifacts with Critical CVSS score 10.0 vulnerabilities"
  type        = "security"

  rule {
    name     = "cvss-ten"
    priority = 1

    criteria {

      cvss_range {
        from = 10.0
        to   = 10.0
      }
    }

    actions {
      webhooks                           = []
      mails                              = []
      notify_watch_recipients            = true

      block_download {
        unscanned = false
        active    = false
      }
    }
  }
}

resource "xray_watch" "all-repos" {
  name        = "watch-critical-all-repos-lyft"
  description = "Watch policy violations for top critical vulnerabilities in all repositories"
  active      = true

  watch_resource {
    type = "all-repos"

    filter {
      type  = "regex"
      value = ".*"
    }
  }

  assigned_policy {
    name = xray_security_policy.high_severity.name
    type = "security"
  }

  assigned_policy {
    name = xray_security_policy.critical_cvss.name
    type = "security"
  }

  watch_recipients = ["vuln-mgmt@xxx.com"]
}

New configuration:

terraform {
  required_providers {
    xray = {
      source  = "registry.terraform.io/jfrog/xray"
      version = ">= 1.11.1, <= 1.12"
    }
  }
}
resource "xray_security_policy" "high_severity" {
  name        = "security-policy-high-severity"
  description = "Security policy to alert on High severity vulnerabilities"
  type        = "security"

  rule {
    name     = "rule-high-severity"
    priority = 1

    criteria {
      min_severity          = "High"
    }

    actions {
      webhooks                           = []
      mails                              = []
      block_release_bundle_distribution  = false
      fail_build                         = false
      notify_watch_recipients            = false

      block_download {
        unscanned = false
        active    = false
      }
    }
  }
}

resource "xray_watch" "all-repos" {
  name        = "watch-critical-all-repos-lyft"
  description = "Watch policy violations for top critical vulnerabilities in all repositories"
  active      = true

  watch_resource {
    type = "all-repos"

    filter {
      type  = "regex"
      value = ".*"
    }
  }

  assigned_policy {
    name = xray_security_policy.high_severity.name
    type = "security"
  }

  watch_recipients = ["vuln-mgmt@xxx.com"]
}

Artifactory version: 7.41.7
Xray version: 3.41.5
Terraform: 1.0.2


Expected behavior: The policy should be destroyed in the original watch and a new one with the same name created in the new watch.

Additional context: We use Atlantis to run Terraform as part of our pipeline.

alexhung commented 1 year ago

@heryxpc If I read your message correctly, you are moving the policy resource from one module, watch-all-repos-xxx, to another, watch-critical-all-repos-xxx?

Unless you use the Terraform moved block or the terraform state mv CLI command, Terraform won't know this is a move operation and will try to do exactly what you described: delete the policy resource in watch-all-repos-xxx and create the same policy as a new resource in watch-critical-all-repos-xxx. However, since Terraform doesn't know they are the same resource, it will try to perform both operations at the same time, in parallel. This is, I think, the cause of the 409 error.
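
For illustration, a minimal sketch of the moved block approach, using the module addresses from your output. This assumes Terraform 1.1 or later (where moved blocks are available) and that the root configuration is allowed to refer to resources inside both module calls; module sources pulled from a remote location may not support this, in which case state mv is the fallback:

# Declared in the root configuration: tells Terraform the resource was
# renamed/moved rather than destroyed and recreated, so no delete + create
# of the same policy name happens.
moved {
  from = module.watch-all-repos-xxx.xray_security_policy.critical_cvss
  to   = module.watch-critical-all-repos-xxx.xray_security_policy.critical_cvss
}

Alternatively, on Terraform versions that predate moved blocks (you reported 1.0.2), the equivalent move can be done in state before the apply, with something like terraform state mv 'module.watch-all-repos-xxx.xray_security_policy.critical_cvss' 'module.watch-critical-all-repos-xxx.xray_security_policy.critical_cvss'.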