szczyglis-dev / py-gpt

Desktop AI Assistant powered by GPT-4, GPT-4 Vision, GPT-3.5, DALL-E 3, Langchain, Llama-index, chat, vision, voice control, image generation and analysis, autonomous agents, code and command execution, file upload and download, speech synthesis and recognition, access to Web, memory, prompt presets, plugins, assistants & more. Linux, Windows, Mac.
https://pygpt.net
MIT License

Local Ollama Configuration #59

Open BumblingBen opened 3 weeks ago

BumblingBen commented 3 weeks ago

Hey Marcin, I'm loving the concept of this application; it pretty much ticks every box I can think of. With continued TLC you could give Tony's JARVIS a run for his money.

Now that I've buttered you up, here's the question I've been wrestling with for a couple of hours: could you provide some concise steps for connecting py-gpt to a local Ollama instance? I wouldn't usually ask, but I've been reading the docs, exploring the code (no expert, but happy to learn), and digging through the options in the application, and I still can't get it to work. That's presuming I've interpreted the docs correctly and it is actually possible.
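
In case it helps narrow things down, this is the kind of minimal Langchain-only check I had in mind, just a sketch: it assumes Ollama is serving on the default port 11434 and that a model named llama2 has already been pulled, so adjust to your setup.

```python
# Sanity check outside of py-gpt: can Langchain reach the local Ollama server?
# Assumes the default port (11434) and a pulled "llama2" model; swap in
# whatever model you actually have. Depending on your langchain version the
# import may instead be `from langchain.llms import Ollama`.
from langchain_community.llms import Ollama

llm = Ollama(model="llama2", base_url="http://localhost:11434")
print(llm.invoke("Say hello in one short sentence."))
```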

When I save the changes, select Langchain mode, and pick the model I created, I get this error:

Exception: Serializable.__init__() takes 1 positional argument but 2 were given
Type: TypeError
Message: Serializable.__init__() takes 1 positional argument but 2 were given
Traceback:   File "core\chain\__init__.py", line 60, in call
  File "core\chain\chat.py", line 69, in send
  File "core\chain\chat.py", line 62, in send
  File "provider\llms\ollama.py", line 46, in chat

Any advice would be greatly appreciated.