rollbar / terraform-provider-rollbar

Terraform provider for Rollbar

Bug: Root resource was present, but now absent #359

Closed stoyanzhekov closed 10 months ago

stoyanzhekov commented 1 year ago

This is happening during `terraform apply` from time to time, and the 'fix' I use is to run `terraform apply` at least one more time.

bug

ghost commented 1 year ago

Hey @stoyanzhekov, can you confirm which Terraform version you're using?

bwmetcalf commented 1 year ago

We are seeing this issue with tf v1.3.6 and rollbar provider 1.12.0. Applying multiple times does not fix the issue for us.

│ Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to
│ module.path.module.rollbar_project_token[0].rollbar_project_access_token.default,
│ provider "provider[\"registry.terraform.io/rollbar/rollbar\"]" produced an unexpected new value: Root resource was
│ present, but now absent.
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

bwmetcalf commented 1 year ago

We were able to work around this by importing the token from Rollbar. The resource created the token, but it seems the provider doesn't like the response it's getting from the Rollbar API.
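
Roughly what that looks like (the resource address is the one from the error above; the `<project_id>/<access_token>` import ID format is only an assumption here, so check the provider documentation for the exact format):

```sh
# Bring the token that Rollbar already created into Terraform state, so the
# next apply sees it as present instead of failing.
# NOTE: the import ID shown is illustrative; the provider docs define the
# real format expected by rollbar_project_access_token.
terraform import \
  'module.path.module.rollbar_project_token[0].rollbar_project_access_token.default' \
  '<project_id>/<access_token>'
```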

pawelsz-rb commented 1 year ago

So this problem is documented and well summarized here: https://support.hashicorp.com/hc/en-us/articles/1500006254562-Provider-Produced-Inconsistent-Results

The article says the main reason for this error is missing retry logic for the read operation. But we have retry logic for all operations, set globally on the REST API. @bwmetcalf, what response are you getting from the Rollbar API that Terraform did not like?

bwmetcalf commented 1 year ago

It's not clear what the response is, but this occurred again just now with the versions above. Is there anything I can provide to help troubleshoot this?

stoyanzhekov commented 1 year ago

> Hey @stoyanzhekov, can you confirm which Terraform version you're using?

Sorry for the late response. Our version is Terraform v1.2.2.

stoyanzhekov commented 1 year ago

The workaround for us is to use the `-target` flag: I first apply the Rollbar resources one by one, and after that the rest.
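
For example (the resource addresses are placeholders for whatever your configuration defines):

```sh
# Apply the Rollbar resources one at a time first...
terraform apply -target='rollbar_project.this'
terraform apply -target='rollbar_project_access_token.access_token'
# ...then apply the rest of the configuration
terraform apply
```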

ghost commented 1 year ago

Hey @stoyanzhekov, I'm glad that the workaround solves your issue. Are you sure you're using 1.2.2? We didn't release that version; we released 1.2.0 on 2021-09-10 and 1.3.0 on 2021-09-14. We double-checked that we have retry logic for all read operations, and everything looks good. Can you collect debug logs if this issue comes up again? This article explains how to collect them: https://developer.hashicorp.com/terraform/internals/debugging
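
For reference, a minimal way to capture those logs during a failing run (these are standard Terraform environment variables; the file name is just an example):

```sh
# Write full debug output to a file for the next run
export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform-debug.log
terraform apply
```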

stoyanzhekov commented 1 year ago

Our terraform version is v1.2.2 and rollbar provider version is v1.12.0

bwmetcalf commented 1 year ago

This issue continues to be a problem for us. We are about to upgrade terraform from v1.3.6 to a later version and will report back. We are using rollbar provider v1.12.0. The only workaround we've found is to import the rollbar token that gets created into state.

ghost commented 1 year ago

Hey @bwmetcalf, thanks for sharing more details about this case. Configuring the project access token doesn't work, but if you import it, it works just fine. Do I understand your workaround correctly?

bwmetcalf commented 1 year ago

Correct. The token gets created, but for some reason Terraform throws the error I posted at the beginning of this thread. We can import the token that was created and then everything works. This is impacting us because for every new microservice we create, we also create a corresponding Rollbar integration/token, and we have to manually intervene in our CI pipelines to complete the terraform apply jobs.

ghost commented 1 year ago

Thanks for the confirmation, I'll take this back to the team.

ghost commented 1 year ago

We found the reason behind this issue. We plan to release a fix soon.

ghost commented 1 year ago

We released an API fix a few weeks ago that resolves this issue. Feel free to reopen it in case you still bump into it.

Omicron7 commented 1 year ago

@rollbar-bborsits We just ran into this issue again.

Terraform 1.5.7, terraform-provider-rollbar 1.13.0

rollbar_project_access_token.access_token: Creating...
╷
│ Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to rollbar_project_access_token.access_token,
│ provider "provider[\"registry.terraform.io/rollbar/rollbar\"]" produced an
│ unexpected new value: Root resource was present, but now absent.
│ 
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.

stefanoaurilio commented 12 months ago

Hey @rollbar-bborsits, we ran into the issue again with the rollbar_project_access_token resource. Here is the terraform plan with the detailed logs:

```
[DEBUG] provider.terraform-provider-rollbar_v1.13.0: {"level":"debug","args":{"name":"stage_post_server_item","scopes":["post_server_item"],"status":"enabled","rate_limit_window_size":60,"rate_limit_window_count":150},"token":{"Name":"stage_post_server_item","project_id":,"access_token":"","Scopes":["post_server_item"],"Status":"enabled","rate_limit_window_size":60,"rate_limit_window_count":150,"date_created":1695910994,"date_modified":1695910994,"cur_rate_limit_window_count":0,"cur_rate_limit_window_start":1695910994},"time":"2023-09-28T16:23:14+02:00","message":"Successfully created new project access token"}

[DEBUG] provider.terraform-provider-rollbar_v1.13.0: {"level":"debug","accessToken":"***","time":"2023-09-28T16:23:14+02:00","message":"Reading resource project access token"}

[DEBUG] provider.terraform-provider-rollbar_v1.13.0: {"level":"debug","projectID":636775,"token":"***","time":"2023-09-28T16:23:14+02:00","message":"Reading project access token"}

[DEBUG] provider.terraform-provider-rollbar_v1.13.0: {"level":"debug","projectID":636775,"time":"2023-09-28T16:23:14+02:00","message":"Listing project access tokens"}

[DEBUG] provider.terraform-provider-rollbar_v1.13.0: {"level":"warn","projectID":636775,"token":"***","time":"2023-09-28T16:23:14+02:00","message":"Could not find matching project access token"}

[DEBUG] provider.terraform-provider-rollbar_v1.13.0: {"level":"debug","accessToken":"***","time":"2023-09-28T16:23:14+02:00","message":"Token not found on Rollbar - removed from state"}

[DEBUG] State storage *statemgr.Filesystem declined to persist a state snapshot

[ERROR] vertex "rollbar_project_access_token.rollbarProjectToken" error: Provider produced inconsistent result after apply
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to rollbar_project_access_token.rollbarProjectToken,
│ provider "provider[\"registry.terraform.io/rollbar/rollbar\"]" produced an unexpected
│ new value: Root resource was present, but now absent.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
```

The token is correctly created, but as you can see in the logs, it is not recognized later during the Terraform state update. This is impacting us because for every new microservice we create, we also create a corresponding Rollbar integration/token and have to manually fix our CI pipelines to complete the terraform apply jobs.
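
Until this is fixed, one way to keep a pipeline unblocked is to combine the workarounds mentioned earlier in the thread: retry the apply once, and treat a persistent failure as a signal to import the already-created token. A rough, hypothetical sketch (the file names and grep pattern are illustrative):

```sh
#!/usr/bin/env bash
# Hypothetical CI sketch: re-run apply once if the provider reports the
# "inconsistent result" error, since the token usually already exists on the
# Rollbar side. If it still fails, fall back to importing the token manually.
set -o pipefail

if ! terraform apply -auto-approve 2>&1 | tee apply.log; then
  if grep -q 'Provider produced inconsistent result after apply' apply.log; then
    terraform apply -auto-approve   # a second attempt often reconciles state
  else
    exit 1
  fi
fi
```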

ghost commented 12 months ago

Hey, we expected this error to disappear after the API bug fix in August. I'll talk to the Terraform maintainer about what we can do with this and keep you updated. I appreciate your debug logs; they will help a lot in finding out what causes the issue.

ghost commented 12 months ago

@stefanoaurilio @Giaco9 @Omicron7 I recently concluded some internal discussions. We completed some database maintenance during the last few weeks that caused a slight replication lag in our databases, which resulted in inconsistent API responses. Because Terraform reads a resource back right after creating it, it's essential that the endpoints it uses return up-to-date data. We made some of our endpoints replication-lag safe, but some are still affected. I filed a bug ticket to make all endpoints used by Terraform replication-lag safe, and I'll make sure to update you when it's out. We plan to make a general fix for this issue in the future, but that requires a more considerable architectural change.

ghost commented 10 months ago

We completed the reinforcement of several endpoints, so this issue shouldn't come up again.