Closed: NightMachinery closed this issue 3 weeks ago.
I will check it out. The plz in the stack trace looks interesting. Maybe @ahyatt knows what it means?
plz is how we now do requests; it uses curl and actually parses to spec, instead of the pretty hacky things we were doing before.
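For a rough sense of what that means in practice, here is a minimal sketch of a synchronous request through plz; the URL is just a placeholder, and the :as keyword is used as documented in plz, as far as I understand it.

```elisp
;; Minimal sketch: a synchronous GET through plz, which shells out to curl
;; under the hood.  The URL is a placeholder; :as 'string asks plz to return
;; the raw response body as a string.
(require 'plz)
(plz 'get "https://api.example.com/v1/models" :as 'string)
```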
@NightMachinery, can you share how you set up your provider, and what operation you are doing?
Pinging @r0man, in case he knows what's going on just from the stack trace.
Looks like a mismatch of the content type returned in the response and the registered one. I need more information. Which provider? Are you using a proxy? Can you somehow provide the response of this request? Is there a hidden plz buffer that contains the response?
@NightMachinery I had a closer look at the stack trace (I was on mobile yesterday). It seems you were using a proxy (gost/2.11.5).
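As a quick check of whether the proxy itself is altering the response, here is a hedged sketch; it assumes the proxy is picked up through the conventional environment variables that curl honors, which may not match how gost was actually configured here.

```elisp
;; Hedged diagnostic sketch: curl (which plz runs) honors the standard proxy
;; environment variables, so clearing them inside Emacs bypasses the proxy
;; for subsequent requests.  Adjust if the proxy is configured differently.
(setenv "http_proxy" nil)   ; passing nil removes the variable
(setenv "https_proxy" nil)
(setenv "all_proxy" nil)
```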
Could you please try to:
If this still does not work, it would be good to get the response. I think you can find it in a hidden buffer called plz-request-xxxx, where the x's are some random number.
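Since hidden buffer names start with a space and are omitted from the usual switch-to-buffer completions, a small helper along these lines can locate them; my/find-plz-request-buffers is a hypothetical name, not something provided by plz.

```elisp
;; Hypothetical helper for locating hidden plz request buffers.  Hidden
;; buffers have names beginning with a space, so they do not appear in the
;; normal `switch-to-buffer' completions.
(require 'seq)
(defun my/find-plz-request-buffers ()
  "Return all live buffers whose names contain \"plz-request\"."
  (seq-filter (lambda (buf)
                (string-match-p "plz-request" (buffer-name buf)))
              (buffer-list)))

;; Usage: evaluate the function, then inspect the first match, e.g.
;; (pop-to-buffer (car (my/find-plz-request-buffers)))
```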
So, I just tried the gost proxy locally with Ollama/OpenAI, and it seems to work with the changes in the main branch of llm. @ahyatt Let's cut a 16.1 release with the proxy changes?
Evaluating these two files indeed solved the issue!
More info: I could not find any plz buffer in switch-to-buffer. Here is the registered provider ellama-provider:
#s(llm-openai-compatible nil nil nil "REDACTED" "llama3-70b-8192" "llama3-70b-8192" "https://api.groq.com/openai/v1")
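For reference, a configuration roughly like the following would produce a provider of that shape; the constructor keywords (:key, :chat-model, :url) are assumptions based on the llm package's OpenAI-compatible provider struct, and the key is a placeholder.

```elisp
;; Hedged sketch of an ellama setup against Groq's OpenAI-compatible endpoint;
;; keyword names are assumed from llm's make-llm-openai-compatible constructor.
(setopt ellama-provider
        (make-llm-openai-compatible
         :key "REDACTED"
         :chat-model "llama3-70b-8192"
         :url "https://api.groq.com/openai/v1"))
```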
Hi @NightMachinery, ok, perfect. Thanks for reporting back. Let's wait for @ahyatt to cut a release; then this should be resolved. To switch to hidden/invisible buffers, see https://www.emacswiki.org/emacs/InvisibleBuffers. But we don't need this now.
Closing this, as the issue is resolved in the new release of llm. @ahyatt, thank you for calling in @r0man. @r0man, thank you for fixing the issue. @NightMachinery, thank you for the report.
I get this error when I try to use ellama on a new installation of Emacs. I can't really figure out anything from this stack trace ...