TheComamba / UnKenny

A FoundryVTT module providing NPCs with artificial intelligence.
MIT License

Add support for Ollama #76

Open KTheMan opened 3 weeks ago

KTheMan commented 3 weeks ago

I have an Ollama instance set up on an external server and think it would be a great, easy fit for this type of add-on if I could hook it in. It opens up the use of many more models with better hardware, if the GM has access, IMO.

KingphilAusti commented 3 weeks ago

So maybe a clearly defined interface to an external model could solve this.

I don't know if we can build this as an "easy to use for everyone" solution, but we could provide an interface that connects to some external model via a few lines of code.
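
A minimal sketch of what such an interface could look like, assuming all chat requests get funneled through one connector class; the names here (`ExternalModelConnector`, `getCompletion`) are placeholders rather than existing UnKenny code, and the endpoint path follows the OpenAI-style `/chat/completions` convention:

```javascript
// Hypothetical connector; any backend (OpenAI, Ollama, ...) only needs to implement getCompletion().
class ExternalModelConnector {
  constructor({ baseURL, apiKey, model }) {
    this.baseURL = baseURL // e.g. 'http://localhost:11434/v1' for a local Ollama
    this.apiKey = apiKey   // Ollama ignores the key, but OpenAI-style servers expect one
    this.model = model     // e.g. 'llama3'
  }

  // messages: [{ role: 'system' | 'user' | 'assistant', content: string }]
  async getCompletion(messages) {
    const response = await fetch(`${this.baseURL}/chat/completions`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({ model: this.model, messages }),
    })
    if (!response.ok) {
      throw new Error(`Model backend returned ${response.status}`)
    }
    const data = await response.json()
    return data.choices[0].message.content
  }
}
```

Swapping backends would then only mean constructing the connector with a different `baseURL` and `model`.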

KTheMan commented 2 weeks ago

Yup, pretty much what I was thinking; it shouldn't need much, as Ollama exposes an OpenAI-compatible completion endpoint.

From https://ollama.com/blog/openai-compatibility:

```javascript
import OpenAI from 'openai'

const openai = new OpenAI({
  baseURL: 'http://localhost:11434/v1/',

  // required but ignored
  apiKey: 'ollama',
})

const chatCompletion = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'llama3',
})
```

I did a bit of the work over at https://github.com/KTheMan/UnKenny. I haven't gotten far enough to get a proper response, but it does hit the Ollama endpoint.
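
For anyone reproducing this, a minimal standalone test that logs the full response object might look like the sketch below; the base URL and model name are assumptions that need to match the actual Ollama server, and the error handling is only illustrative:

```javascript
import OpenAI from 'openai'

// Point the official OpenAI client at the Ollama server instead of api.openai.com.
const openai = new OpenAI({
  baseURL: 'http://localhost:11434/v1/', // replace with the external server's address
  apiKey: 'ollama', // required by the client, ignored by Ollama
})

try {
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama3', // must match a model pulled on the Ollama server
  })
  // Dump the whole object to see exactly what the endpoint returns.
  console.log(JSON.stringify(chatCompletion, null, 2))
  console.log(chatCompletion.choices[0].message.content)
} catch (error) {
  // Network failures, CORS rejections, or unknown model names end up here.
  console.error('Ollama request failed:', error)
}
```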

KingphilAusti commented 2 weeks ago

Ah, I see. Going for a fork right now was also what I would have suggested. It should be enough to change openai-api, as you did. What is the error you get? Or what does your response look like?