Closed — vanstanian closed this issue 6 months ago
Hi @vanstanian
The most probable cause is the lack of HTTP/2 protocol support in your local server. I had the same issue with LM Studio. I have patched the code so that the HttpClient uses the HTTP/1.1 protocol version.
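For reference, a minimal sketch of such a change, assuming the plugin builds its client with `java.net.http.HttpClient` (class name and timeout are illustrative, not the actual plugin code):

```java
import java.net.http.HttpClient;
import java.time.Duration;

public class Http11Client {

    // Pin the client to HTTP/1.1 so local servers without HTTP/2
    // support (e.g. LM Studio) accept the connection; by default
    // HttpClient tries to negotiate HTTP/2 first.
    public static HttpClient create() {
        return HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1)
                .connectTimeout(Duration.ofSeconds(30))
                .build();
    }

    public static void main(String[] args) {
        System.out.println(create().version());
    }
}
```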
Thank you!
/w.
It works nicely! This will be a great step for your project. Thank you!
Hello,
I've been investigating this plugin a bit, and I've found that it's possible to use it with virtually any model apart from OpenAI's GPTs, as long as that model runs on a server that complies with OpenAI's REST API. To check, I tried my model through Postman with the request generated by this plugin
(I had to remove the "assist" message, because with it the request failed... though for a different reason: Mistral seems to need all the messages filled in...)
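As an illustration, here is a rough sketch of such an OpenAI-compatible chat request built with `java.net.http` — the endpoint URL, model name, and message contents are assumptions for a generic local server, and every message carries non-empty content since Mistral appears to reject empty ones:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class LocalChatRequest {

    // Hypothetical local endpoint; adjust host/port to your server.
    static final String ENDPOINT = "http://localhost:1234/v1/chat/completions";

    // Builds an OpenAI-style chat completion request. Each message in
    // the array has non-empty content, which some backends require.
    public static HttpRequest build(String userMessage) {
        String body = """
                {"model": "mistral",
                 "messages": [
                   {"role": "system", "content": "You are a helpful assistant."},
                   {"role": "user", "content": "%s"}
                 ]}""".formatted(userMessage);
        return HttpRequest.newBuilder()
                .uri(URI.create(ENDPOINT))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }
}
```

This only constructs the request; sending it (and the exact endpoint path) depends on the server you run.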
and everything seems fine, but when I tried this plugin with this config
I got this
I don't know if I've missed some configuration that would allow Eclipse to make the request, but I don't know how to work around this bug.
I hope someone can help me, and I'd be grateful for any response.
Regards.