64bit / async-openai

Rust library for OpenAI
https://docs.rs/async-openai
MIT License

I get the following error when trying to use model gpt-4 #128

Closed: TheBuilderJR closed this issue 1 year ago

TheBuilderJR commented 1 year ago

Works with gpt-3.5-turbo but not gpt-4. The error message doesn't help much.

error: stream failed: Invalid status code: 404 Not Found

Code

use async_openai::{
    types::{ChatCompletionRequestMessageArgs, CreateChatCompletionRequestArgs, Role},
    Client,
};
use futures::StreamExt;
use rocket::{post, response::stream::TextStream, serde::json::Json};
use serde::Deserialize;

// JSON body: the prompt content plus an optional model override.
#[derive(Deserialize)]
struct ChatInput {
    content: String,
    model: Option<String>,
}

#[post("/stream", format = "json", data = "<chat_input>")]
async fn stream(chat_input: Json<ChatInput>) -> TextStream![String] {
    let client = Client::new();
    // Default to gpt-3.5-turbo when the request doesn't specify a model.
    let model = chat_input.model.clone().unwrap_or_else(|| "gpt-3.5-turbo".to_string());

    println!("{}", model);
    // Debugging override: shadows the parsed model so every request uses gpt-4-32k.
    let model = "gpt-4-32k".to_string();
    // Build a single-user-message chat request.
    let request = CreateChatCompletionRequestArgs::default()
        .model(model)
        // .max_tokens(512u16)
        .messages([
            ChatCompletionRequestMessageArgs::default()
                .content(chat_input.content.clone())
                .role(Role::User)
                .build()
                .unwrap(),
        ])
        .build()
        .unwrap();

    dbg!(&request);

    // Open the completion stream; .unwrap() panics if the request can't start.
    let mut stream = client.chat().create_stream(request).await.unwrap();

    TextStream! {
        while let Some(result) = stream.next().await {
            match result {
                Ok(response) => {
                    // Each streamed chunk carries a delta; forward any new content tokens.
                    for chat_choice in &response.choices {
                        if let Some(ref content) = chat_choice.delta.content {
                            yield content.clone();
                        }
                    }
                }
                Err(err) => {
                    // This is where the opaque "Invalid status code: 404" surfaces.
                    yield format!("error: {}", err);
                }
            }
        }
    }
}
64bit commented 1 year ago

OpenAI docs might provide more info on why "not found"

Does this apply to you?

GPT-4 is currently accessible to those who have made at least one successful payment through our developer platform.
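
One way to check this from code (a sketch using the models endpoint; it assumes the ListModelResponse shape of the async-openai version in this thread): list the models the API key can access and look for a gpt-4 id.

use async_openai::Client;

// Sketch: list the models this API key can see. If no "gpt-4" id shows up,
// chat requests against it will 404 exactly like the report above.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    let models = client.models().list().await?;
    let has_gpt4 = models.data.iter().any(|m| m.id.starts_with("gpt-4"));
    println!("gpt-4 available to this key: {}", has_gpt4);
    Ok(())
}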

TheBuilderJR commented 1 year ago

Yes, it did. Paying for $5 worth of credits fixed it!

I think ideally we'd have better error messages, but I guess that's more on OpenAI.

One thing that would be nice is if model were an enum instead of a string!
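
A hypothetical caller-side sketch of that idea (the Model enum here is made up, not part of async-openai): constrain model names at compile time while still handing the API the strings it expects.

// Hypothetical wrapper type; async-openai itself accepts any string.
#[derive(Clone, Copy)]
enum Model {
    Gpt35Turbo,
    Gpt4,
    Gpt432k,
}

impl Model {
    // Map each variant to the model-id string the API expects.
    fn as_str(self) -> &'static str {
        match self {
            Model::Gpt35Turbo => "gpt-3.5-turbo",
            Model::Gpt4 => "gpt-4",
            Model::Gpt432k => "gpt-4-32k",
        }
    }
}

fn main() {
    // Usage: pass Model::Gpt4.as_str() to .model(...) instead of a raw literal.
    assert_eq!(Model::Gpt4.as_str(), "gpt-4");
}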

64bit commented 1 year ago

Yeah, there's an opportunity for better errors from the streaming endpoint, like OpenAIError::ApiError from the non-streaming counterpart. PRs welcome :)
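
For reference, a sketch of what that looks like on the non-streaming side (assuming the Client and OpenAIError shapes of the version in this thread; ApiError wraps the JSON error body OpenAI returns):

use async_openai::{error::OpenAIError, types::CreateChatCompletionRequest, Client};

// Sketch: the non-streaming endpoint returns OpenAIError, so a 404 for an
// inaccessible model can surface the API's own message, not just a status code.
async fn complete_or_report(client: &Client, request: CreateChatCompletionRequest) {
    match client.chat().create(request).await {
        // Print the returned choices on success.
        Ok(response) => println!("{:?}", response.choices),
        // The structured API error: message, type, and code from the response body.
        Err(OpenAIError::ApiError(api_err)) => eprintln!("API error: {}", api_err.message),
        // Transport or deserialization failures.
        Err(other) => eprintln!("request failed: {}", other),
    }
}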

The model is not an enum for a couple of reasons: