japen0617 closed this issue 3 weeks ago.
Correct, don't use a realtime model as your model. We don't currently support it, since it cannot handle chat message pairs.
Hello,
Same for me: I just installed AnythingLLM (1.6.9), chose OpenAI, and pasted an API key. But I found out why:
Is this normal?
PS: other than that, AnythingLLM is awesome!
While the realtime model is available in the dropdown (we don't filter the model list), support for it still needs to be built out since it works quite differently from the traditional API, which is why it is broken. Realtime uses a different protocol than the normal request/chunked-response API.
In the interim, swap to another model that works in the traditional sense (basically any model that isn't realtime); see the sketch below.
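For reference, this is roughly the kind of call AnythingLLM makes for OpenAI chat models. A minimal sketch using the openai Node SDK, assuming an OPENAI_API_KEY environment variable and gpt-4o as a stand-in non-realtime model:

```ts
import OpenAI from "openai";

// Assumes OPENAI_API_KEY is set in the environment.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  // Any non-realtime chat model works with the standard
  // /v1/chat/completions endpoint that AnythingLLM relies on.
  const response = await openai.chat.completions.create({
    model: "gpt-4o", // stand-in for "any model that isn't realtime"
    messages: [{ role: "user", content: "Hello from AnythingLLM" }],
  });
  console.log(response.choices[0].message.content);
}

main();
```

Swapping the workspace model to anything that answers this endpoint should make chats work again.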
How are you running AnythingLLM?
Local development
What happened?
I chose the gpt-4o-realtime model in the workspace. Then I tried to use it, but it shows the following error: "Could not respond to message. 404 This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?"
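The same 404 can be reproduced outside AnythingLLM by sending a realtime model ID straight to the chat completions endpoint. A minimal sketch with the openai Node SDK, assuming OPENAI_API_KEY is set and using gpt-4o-realtime-preview as the model ID (the exact ID shown in the dropdown may differ):

```ts
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  try {
    // Realtime models are not served by /v1/chat/completions,
    // so this request is expected to fail with a 404.
    await openai.chat.completions.create({
      model: "gpt-4o-realtime-preview", // exact dropdown ID may differ
      messages: [{ role: "user", content: "Hello" }],
    });
  } catch (err) {
    if (err instanceof OpenAI.APIError) {
      // Prints: 404 This is not a chat model and thus not supported ...
      console.error(err.status, err.message);
    } else {
      throw err;
    }
  }
}

main();
```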
Are there known steps to reproduce?
No response