64bit / async-openai

Rust library for OpenAI
https://docs.rs/async-openai
MIT License

Advanced error handling for OpenAI-like LLM APIs #180

Closed · louis030195 closed this 8 months ago

louis030195 commented 8 months ago

Hey, I'm using async-openai through the perplexity.ai API like so:

client.chat().create_stream(request)

And getting:

Error: stream failed: Stream ended

I suspect this is a rate limit from the perplexity.ai API, but I don't know how to get more information about the error, e.g. by reading the raw response. My second guess is that Perplexity announces rate limits differently (a 429 with a different body?), which is why it isn't detected as such by your lib.
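(Aside: if Perplexity does return a standard 429 but with a non-OpenAI response body, a caller could at least classify the failure from the status code alone. A minimal generic sketch, not async-openai's actual detection logic; only standard HTTP status codes are assumed:)

```rust
/// Generic sketch: coarsely classify an HTTP status code from an
/// OpenAI-compatible provider. Not async-openai's real logic.
fn classify(status: u16) -> &'static str {
    match status {
        429 => "rate limited",
        401 | 403 => "auth error",
        500..=599 => "server error",
        _ => "other",
    }
}

fn main() {
    // A 429 would be the rate-limit case suspected above.
    println!("{}", classify(429));
}
```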

while let Some(result) = self.stream.next().await {
    match result {
        Ok(response) => {
            // ... handle the streamed chunk
        }
        Err(e) => {
            // `e` is an OpenAIError -- how do I get the raw error?
            println!("Error: {}", e);
            // Return the error to the caller
            return Err(e);
        }
    }
}