adam-kulagowski opened 1 month ago
The best option here is to use Terraform's built-in support for adding a delay: the time_sleep resource from the hashicorp/time provider.
See the documentation here: time_sleep.
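Something along these lines might work (a rough, untested sketch; the resource names, counts, and the 30s duration are just placeholders, and it assumes the openai provider is already configured):

```hcl
# Hypothetical sketch: create the projects, pause, then create the
# resources that depend on them, using the hashicorp/time provider.
resource "openai_project" "example" {
  name  = "example-${count.index}"
  count = 10
}

# Wait 30 seconds once all projects exist.
resource "time_sleep" "wait_after_projects" {
  depends_on      = [openai_project.example]
  create_duration = "30s"
}

# Anything that should start only after the delay can depend on the
# time_sleep resource.
resource "openai_project_service_account" "example" {
  project_id = openai_project.example[count.index].id
  name       = "example key"
  role       = "member"
  count      = 10

  depends_on = [time_sleep.wait_after_projects]
}
```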
Give this a try first. If it doesn't work, I can try adding exponential backoff to work around the issue.
I'll close this issue for now, but please reopen it if you can't resolve the problem.
I've been looking into this further, and I think I'm going to implement exponential backoff in the provider to resolve the rate-limit issue. This will take me some time, though, maybe a few weeks.
Can you add a Terraform configuration that triggers a rate-limit error? Then I can implement the required exponential-backoff retry around the API calls.
Sure, first thing in the morning (CET). As a side note: time_sleep is not a solution here (at least to my understanding of Terraform), as I'm hitting the limiter when creating only two resources (each with count = 100). So yes, I can put a delay between the project and service_key creation, but a delay within each iteration is what would really help here.
```hcl
terraform {
  required_providers {
    openai = {
      source = "skyscrapr/openai"
    }
  }
}

variable "ai_token" {}

provider "openai" {
  admin_key       = var.ai_token
  organization_id = "org-XXXXXXXXXXXXXXXXXXXXXX"
}

resource "openai_project" "ac2_ws" {
  name  = "ac2-${count.index}"
  count = 80
}

resource "openai_project_service_account" "ac2-keys" {
  project_id = openai_project.ac2_ws[count.index].id
  name       = "ac2 test"
  role       = "member"
  count      = 80
  depends_on = [openai_project.ac2_ws]
}
```
As promised: the code. The idea is to create one project for each user, with a key assigned to that project. We expect 70 users, so 80 sounds reasonable :)
Great, thanks for sending that through. I'll set up a test to run this config and verify that the retry mechanism works. It will take me a few days.
Hi,
Is there a way to put a configurable delay between each API call within a resource iteration? For example, when I'm creating a project resource with count = 100 or higher, I often hit the OpenAI API rate limit (which is not configurable). Rerunning Terraform multiple times eventually finishes the job, but that feels like a workaround rather than a solution 😊