itsme2417 / PolyMind

A multimodal, function calling powered LLM webui.
GNU Affero General Public License v3.0

is it possible to use this with ollama? #9

Closed Kreijstal closed 7 months ago

Kreijstal commented 8 months ago

I tried changing the port to the one that ollama serve listens on, and I get this:

Begin streamed GateKeeper output.
[2024-03-09 02:44:33,539] ERROR in app: Exception on / [POST]
Traceback (most recent call last):
  File "/home/kreijstal/.local/lib/python3.10/site-packages/requests/models.py", line 971, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.10/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 5 (char 4)

During handling of the above exception, another exception occurred:
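For reference, the offsets in that error ("line 1 column 5 (char 4)") are exactly what json.loads() produces when the response body is a plain-text error page rather than JSON, for example the "404 page not found" body Ollama's server returns for paths it does not serve. A minimal sketch of that failure mode, assuming such a body:

```python
import json

# PolyMind expects a JSON body from a llama.cpp-style completion endpoint.
# If the server on that port does not serve the requested path, the body
# may be a plain-text error page instead of JSON, e.g.:
body = "404 page not found"

try:
    json.loads(body)
except json.JSONDecodeError as e:
    # The leading "404" parses as a bare number, then the decoder trips
    # over the remaining text:
    print(e)  # Extra data: line 1 column 5 (char 4)
```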
itsme2417 commented 8 months ago

Ollama appears to provide a custom API plus only /v1/chat/completions. PolyMind only supports llama.cpp's official server, TabbyAPI, and OpenAI-compatible /v1/completions endpoints, so unless you can get Ollama to serve a standard completions endpoint, no.
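A quick way to check whether a given backend actually exposes a standard /v1/completions endpoint is to probe it directly. This is only a sketch: the base URL and model name below are placeholders, not PolyMind configuration values.

```python
import requests

# Placeholder base URL; llama.cpp's server and TabbyAPI serve an
# OpenAI-compatible /v1/completions here, while Ollama (at the time of
# this thread) only exposed /v1/chat/completions.
BASE_URL = "http://127.0.0.1:8080"

resp = requests.post(
    f"{BASE_URL}/v1/completions",
    json={
        "model": "local-model",   # placeholder model name
        "prompt": "Hello, world",
        "max_tokens": 16,
        "stream": False,
    },
    timeout=30,
)

print(resp.status_code)
# A backend PolyMind can talk to returns JSON with a "choices" list;
# a 404 or a non-JSON body means the endpoint is not supported.
print(resp.json().get("choices") if resp.ok else resp.text)
```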

Kreijstal commented 4 months ago

https://github.com/ollama/ollama/pull/5209 So it's possible now, right?!
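Assuming that PR lands and Ollama's OpenAI-compatibility layer gains a /v1/completions endpoint, the same probe as above should succeed against Ollama's default address. This is a hypothetical sketch; "llama3" is a placeholder for whatever model you have pulled locally.

```python
import requests

# Ollama listens on port 11434 by default; /v1/completions assumes the
# OpenAI-compatible completions endpoint from ollama/ollama#5209 is available.
resp = requests.post(
    "http://127.0.0.1:11434/v1/completions",
    json={"model": "llama3", "prompt": "Hello", "max_tokens": 16, "stream": False},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```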