Oh, I did not notice this one yet, and I am not sure I fully understand what you mean. Does Claude or OpenAI return the wrong results even when an ID name is set? In the send_batch() functions I typically have something like this, where implicit names are generated, which should ensure that IDs are created for each LLMMessage if names are missing:
custom_id <- names(.llms)[i]
if (is.null(custom_id) || custom_id == "" || is.na(custom_id)) {
  custom_id <- paste0(.id_prefix, i)  # Generate a custom ID if the name is missing
  names(.llms)[i] <<- custom_id       # Assign the generated ID as the name in .llms
}
And this in the fetch_batch() functions:
results_by_custom_id <- purrr::set_names(results_list,
                                         sapply(results_list, function(x) x$custom_id))

updated_llms <- lapply(names(.llms), function(custom_id) {
  result <- results_by_custom_id[[custom_id]]
  if (!is.null(result) && result$result$type == "succeeded") {
    assistant_reply <- result$result$message$content$text
    llm <- add_message(llm = .llms[[custom_id]],
                       role = "assistant",
                       content = assistant_reply,
                       meta = extract_response_metadata(result$result$message))
    return(llm)
  } else {
    warning(sprintf("Result for custom_id %s was unsuccessful or not found", custom_id))
    return(.llms[[custom_id]])
  }
})
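To illustrate why this restores the input order even if the API returns results shuffled, here is a minimal standalone sketch (toy data rather than actual tidyllm objects):

.llms <- list(msg_1 = "first", msg_2 = "second", msg_3 = "third")

# Pretend the API returned the results out of order
results_list <- list(
  list(custom_id = "msg_3", text = "reply to third"),
  list(custom_id = "msg_1", text = "reply to first"),
  list(custom_id = "msg_2", text = "reply to second")
)

results_by_custom_id <- purrr::set_names(results_list,
                                         sapply(results_list, function(x) x$custom_id))

# Indexing by names(.llms) walks the *input* order, not the API's return order
sapply(names(.llms), function(id) results_by_custom_id[[id]]$text)
#> msg_1 "reply to first"  msg_2 "reply to second"  msg_3 "reply to third"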
I am wondering what is causing the error. Do you have an example of a batch that was stitched together in the wrong order? By the way, do you want to add yourself as a contributor to the DESCRIPTION file?
Or is it just the sort order for fetch_batch()? The current code should use the order of the batches from send_batch(), or am I missing something?
Oh I see! I didn't realize the order was already handled in fetch_batch() 👍. Sorry for the confusion.
I do have another question about the batch API. Because of the enqueued token limit for the OpenAI batch API, I divide the messages into chunks. I then use cron to run a piece of code every hour that checks whether the current chunk has completed and, if so, submits the next chunk, until no chunks are left. This should work fine, as OpenAI writes that "Once a batch job is completed, its tokens are no longer counted against that model's limit." But I keep receiving failed batch jobs with the error message "Enqueued token limit reached." I checked that there was no other unfinished batch at that time. The job can only be submitted without failing after waiting a few hours.
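Roughly, the hourly cron job runs a script along these lines (a simplified sketch; submit_chunk() and chunk_is_done() are placeholders for the actual batch send/check calls, and batch_state.rds just tracks progress):

# Run hourly via cron, e.g.: 0 * * * * Rscript submit_next_chunk.R
state <- readRDS("batch_state.rds")  # list(chunks, next_chunk, last_batch)

previous_done <- is.null(state$last_batch) || chunk_is_done(state$last_batch)

if (previous_done && state$next_chunk <= length(state$chunks)) {
  # Completed batches should free their enqueued tokens, so submitting
  # the next chunk here should stay under the enqueued token limit.
  state$last_batch <- submit_chunk(state$chunks[[state$next_chunk]])
  state$next_chunk <- state$next_chunk + 1
  saveRDS(state, "batch_state.rds")
}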
Have you ever experienced this issue? It seems that OpenAI doesn't reset the enqueued token count in a timely manner.
I appreciate the offer about the contributor list! My contribution was just a small bug fix, so I’m not sure it qualifies. But I’m happy to leave it to your judgment.
I have not experienced this issue yet, but it seems to be relatively common. My guess is that OpenAI really does take some time to reset enqueued token limits. Perhaps moving to a higher usage tier might help, but that is relatively expensive. I also had another issue with batches where they sometimes just expire for specific models.
I appreciate your bug fix, your feedback, and the new tests. I'll add you with person("Jia", "Zhang", , , role = "ctb").
Thanks!
Hi Eduard,
I noticed that both the OpenAI and Anthropic batch APIs don't guarantee the order in which results are returned, which causes problems for code like the example in the article Classifying Texts with tidyllm. Of course one can manually name the list when creating the llm messages (see the sketch below), but that is easy to overlook. Do you think it would be a good idea to add request IDs as a required parameter of the batch functions, or to create IDs implicitly and sort the results based on those implicit IDs? Thanks!
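To show what I mean by manually naming the list, here is a rough sketch, assuming the usual purrr + llm_message() pattern from the article (the object and ID names are just examples):

texts <- c(doc_1 = "first text", doc_2 = "second text")  # names become the custom IDs
msgs  <- purrr::map(texts, function(txt) llm_message(paste("Classify this text:", txt)))
names(msgs)  # "doc_1" "doc_2" -- names carry over, so results can be matched back by name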