Closed by oleg-codaio 3 years ago
Thanks for filing this. If you'd like to try an implementation, I'd say go ahead. I'm open to pairing (online or IRL), but my time is extra limited right now as I'm focusing on my day job.
As to:
Is it possible to specifically set rate limits on this module?
As you said, this seems like a general issue with Terraform; I'm worried that adding extra complexity here would make things less debuggable/maintainable.
As a matter of fact, maybe it's something Stripe implemented for large(r) customers? Have you asked them if this rate limit was configurable by any chance?
Thanks again, I'll keep this open!
@kabesan Is this what you've done here https://github.com/kabesan/terraform-provider-stripe/commit/0792516a1daf18723dda9b7e04395a2fa76cde4d ? Do you think we could use this?
@franckverrot can we reopen this? I'm encountering the same issue. I checked out https://github.com/kabesan/terraform-provider-stripe/commit/0792516a1daf18723dda9b7e04395a2fa76cde4d and it seems to fix the issue for me.
I can make a PR to merge those changes into main.
To speed up diffs in Terraform, we run `terraform plan`/`apply` with higher parallelism using the `-parallelism` flag. Unfortunately, that can occasionally trigger rate-limit errors from the Stripe provider. This can happen on both test-mode and live instances, depending on the parallelism. Meanwhile, we're using AWS, and I suppose the rate limits there are more generous, since even 200x parallelism isn't a problem.
Is it possible to specifically set rate limits on this module? I know this might be more of a general issue with Terraform than specifically this provider (i.e., if Terraform allowed setting the parallelism on a per-module/resource basis, see https://github.com/hashicorp/terraform/issues/14258), but I wanted to log it anyway. I'm not super familiar with the implementation, but maybe we could add some kind of queueing or randomized exponential retry logic to this provider.