mateusz / cherryberry

Your friendly AI text adventure!

Ollama API support #9

Open bgreene2 opened 7 months ago

bgreene2 commented 7 months ago

This adds the ability to point at a model hosted on an Ollama installation instead of running the model within the app.

  1. Added two command-line arguments (`ollama-host`, `ollama-model`)
  2. Added an Ollama Modelfile to create the psyfighter model
  3. Updated the README with instructions
  4. Added requirements.txt to make installing dependencies with pip easier

Tested on my system, a PC running Windows 11 with WSL. The Ollama installation runs in Windows, and the app was run in WSL.
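For context, what these flags enable boils down to a plain HTTP call to the Ollama server's `/api/generate` endpoint instead of loading the model in-process. Here is a minimal illustrative sketch, not the PR's actual code; the helper names are made up, and 11434 is assumed to be the default Ollama port:

```python
import json
import urllib.request

def build_generate_request(host, model, prompt):
    # Hypothetical helper: assemble the URL and JSON body for
    # Ollama's /api/generate endpoint (stream=False for a single reply).
    url = f"http://{host}/api/generate"
    payload = {"model": model, "prompt": prompt, "stream": False}
    return url, json.dumps(payload).encode("utf-8")

def generate(host, model, prompt):
    # Send the request and return the generated text from the
    # "response" field of Ollama's JSON reply.
    url, body = build_generate_request(host, model, prompt)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With flags like `--ollama-host localhost:11434 --ollama-model psyfighter`, the app would call something like `generate("localhost:11434", "psyfighter", prompt)` rather than running llama.cpp locally.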

mateusz commented 7 months ago

Thanks - this works well! However, I've been experimenting downstream with grammars (GBNF), and Ollama doesn't yet support guided generation.

I might look at Outlines instead of GBNF; it integrates with llama.cpp, but I'm unsure about Ollama.
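For readers unfamiliar with the feature being discussed: GBNF is llama.cpp's grammar format for constraining token sampling so the model can only emit strings the grammar accepts. A tiny illustrative grammar (made up for this example, not from this repo) that forces a one-word verdict plus a reason might look like:

```
# root is the start symbol; the output must match it exactly
root   ::= answer " because " reason
answer ::= "yes" | "no"
reason ::= [a-z ]+
```

This is the capability Ollama's API does not currently expose, which is why the grammar-based approach can't be used through this PR's code path yet.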
