yuripallada-groupm closed this issue 1 month ago.
I tried this out and reproduced the behavior with the following program, after running `pulumi new gcp-yaml`:
```yaml
name: gpcuptimecheck
runtime: yaml
resources:
  uptime-check:
    type: gcp:monitoring:UptimeCheckConfig
    properties:
      displayName: API Uptime Check
      timeout: 30s
      httpCheck:
        path: /healthcheck
        port: 443
        useSsl: true
        validateSsl: true
        acceptedResponseStatusCodes:
          - statusClass: STATUS_CLASS_2XX
      period: 300s
      monitoredResource:
        type: uptime_url
        labels:
          host: our.domain.com
```
When looking at the preview, if I select details, I see a diff showing that `monitoredResource.labels` is changing.
I was able to work around this with the `ignoreChanges` resource option:
```yaml
options:
  ignoreChanges:
    - monitoredResource.labels
```
Full program:
```yaml
name: gpcuptimecheck
runtime: yaml
resources:
  uptime-check:
    type: gcp:monitoring:UptimeCheckConfig
    properties:
      displayName: API Uptime Check
      timeout: 30s
      httpCheck:
        path: /healthcheck
        port: 443
        useSsl: true
        validateSsl: true
        acceptedResponseStatusCodes:
          - statusClass: STATUS_CLASS_2XX
      period: 300s
      monitoredResource:
        type: uptime_url
        labels:
          host: our.domain.com
    options:
      ignoreChanges:
        - monitoredResource.labels
```
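Note that `ignoreChanges` is a blunt instrument here: it will also mask intentional edits to `monitoredResource.labels` (for example, pointing the check at a different host), so you'd have to remove the option temporarily to push such a change.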
Moving this to the GCP repository to see if this is a bug that needs to be fixed or if there's a better approach to working around it.
TL;DR: The `uptime_url` type for `monitoredResource` requires `project_id` to be specified.
Hey @yuripallada-groupm, thanks for reporting here. The behaviour here is much less than ideal, but it looks like GCP expects all labels for the monitored resource type to be specified, as per https://stackoverflow.com/a/75826651 and https://cloud.google.com/monitoring/api/resources#tag_uptime_url
In this case the provider seems to auto-fill your project ID, and that then produces a diff on your next `pulumi up`. You should be able to work around the issue by specifying the project ID yourself in the `monitoredResource`, like so:
```yaml
name: gpcuptimecheck
runtime: yaml
resources:
  uptime-check:
    type: gcp:monitoring:UptimeCheckConfig
    properties:
      displayName: API Uptime Check
      timeout: 30s
      httpCheck:
        path: /healthcheck
        port: 443
        useSsl: true
        validateSsl: true
        acceptedResponseStatusCodes:
          - statusClass: STATUS_CLASS_2XX
      period: 300s
      monitoredResource:
        type: uptime_url
        labels:
          host: our.domain.com
          project_id: pulumi-development
```
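If you'd rather not hardcode the project, here's a minimal sketch of an alternative, assuming the `gcp:organizations:getProject` invoke (which resolves the project configured on the default provider) suits your setup:
```yaml
variables:
  # Assumption: gcp:organizations:getProject returns the project
  # configured on the default provider (the gcp:project config key).
  currentProject:
    fn::invoke:
      function: gcp:organizations:getProject
      arguments: {}
```
You would then set `project_id: ${currentProject.projectId}` in the `monitoredResource` labels instead of a literal value.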
This looks like an upstream issue, which is likely masked in TF since it refreshes by default.
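As an aside, you can make Pulumi mirror that behaviour by refreshing on every update, either by passing `--refresh` to `pulumi up` or via the project file; a minimal sketch of the latter, assuming the project-level `options.refresh` setting:
```yaml
# Pulumi.yaml
name: gpcuptimecheck
runtime: yaml
options:
  # Refresh state before every update, mirroring Terraform's
  # refresh-by-default behaviour (same as passing --refresh).
  refresh: always
```
This only hides the perpetual diff by reading the auto-filled label back into state; the project_id workaround above addresses the cause directly.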
Confirmed it's an upstream issue: https://github.com/hashicorp/terraform-provider-google/issues/18038
I'm a co-worker of @yuripallada-groupm (who is on leave right now).
@justinvp thanks for the temporary workaround; this indeed prevents the resources from being recreated. @VenelinMartinov specifying the `project_id` works as well; we'll stick with that for now.
Thanks for your help!
Resolved upstream in https://github.com/hashicorp/terraform-provider-google/issues/18038
What happened?
We have a `gcp:monitoring:UptimeCheckConfig` in our Pulumi YAML program, which successfully deploys an uptime check on GCP. However, every time we run `pulumi up` it recreates this uptime check in GCP, while nothing about the resource changes. This is the resource in our program:
The `${apiDomain}` variable contains the domain of our API (e.g. `api.ourdomain.com`).
The pipeline outputs:
```
++ gcp:monitoring:UptimeCheckConfig uptime-check created replacement (2s) [diff: ~monitoredResource]
```
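As a side note, `pulumi preview --diff` prints the full property-level diff behind the `~monitoredResource` marker, which may help narrow down exactly which field is changing.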
The monitored resource hasn't changed between executions, so we're not sure why Pulumi insists on recreating the resource. Additionally, we have tried hardcoding the `${apiDomain}` variable, but that didn't solve the problem.
We have also compared the state files between deployments and found that none of the input parameters changed; only the `id`, `name`, and `uptimeCheckId` output values changed (as expected, since the resource is recreated every time).
Do you have an idea how we can prevent this from happening?
Example
Run `pulumi up`; then run `pulumi up` again. The uptime check in GCP will be replaced.
Output of `pulumi about`
pulumi version: 3.113.3
pulumi GCP plugin version: 7.19.0
Contributing
Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).