Mintplex-Labs / anything-llm

The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more.
https://anythingllm.com
MIT License

[BUG]: can't use openai realtime api #2551

Closed japen0617 closed 3 weeks ago

japen0617 commented 3 weeks ago

How are you running AnythingLLM?

Local development

What happened?

I chose the gpt-4o-realtime model in the workspace. When I try to use it, the chat fails with the following error:

Could not respond to message. 404 This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?

Are there known steps to reproduce?

No response

timothycarambat commented 3 weeks ago

Correct, don't use a realtime model as your workspace model. We don't currently support it, since the realtime API cannot handle chat message pairs.
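To illustrate the mismatch, here is a minimal sketch (not AnythingLLM code; the helper name is hypothetical): the v1/chat/completions endpoint expects an array of role/content message pairs, while realtime models use a separate WebSocket-based API and will 404 on that endpoint, so a simple model-ID guard can catch them before a request is made.

```javascript
// The shape of a standard chat completions payload: ordered
// role/content message pairs, which realtime models do not accept.
const chatPayload = {
  model: "gpt-4o",
  messages: [
    { role: "user", content: "Hello" },
    { role: "assistant", content: "Hi! How can I help?" },
    { role: "user", content: "Summarize my document." },
  ],
};

// Hypothetical guard: current OpenAI realtime model IDs contain
// the substring "realtime", so they can be screened out by ID.
function supportsChatCompletions(modelId) {
  return !modelId.includes("realtime");
}

console.log(supportsChatCompletions("gpt-4o")); // true
console.log(supportsChatCompletions("gpt-4o-realtime-preview")); // false
```

A check like this is a heuristic on model naming, not an official capability flag, so it would need updating if OpenAI's naming scheme changes.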

lwiart commented 1 week ago

Hello,

Same for me. I just installed AnythingLLM (1.6.9), chose OpenAI, and pasted an API key, but I found the cause:

[screenshot]

Is this normal?

PS: other than that, AnythingLLM is awesome!

timothycarambat commented 6 days ago

While realtime models are available in the dropdown (we don't filter the model list), support for them still needs to be built out, since the realtime API works quite differently from the traditional request/response and chunked-streaming API. That is why it is broken.

In the interim, switch to any model that works in the traditional sense (basically any model that isn't realtime).
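Until realtime support is built out, one workaround on the integration side would be to filter realtime models out of the picker entirely. A minimal sketch, assuming model IDs follow OpenAI's current naming (the list below is illustrative; a real list would come from the /v1/models endpoint):

```javascript
// Example model IDs as they might come back from a models listing.
const availableModels = [
  "gpt-4o",
  "gpt-4o-mini",
  "gpt-4o-realtime-preview",
  "gpt-4o-mini-realtime-preview",
];

// Keep only models usable with v1/chat/completions by excluding
// IDs that contain "realtime".
const chatModels = availableModels.filter((id) => !id.includes("realtime"));

console.log(chatModels); // ["gpt-4o", "gpt-4o-mini"]
```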