wickercar / foundry-ai-text-importer

A FoundryVTT Module that uses GPT to import monster stats from plaintext into the virtual tabletop software
MIT License

Modification to use local LLM models #1

Open jrbuda opened 6 months ago

jrbuda commented 6 months ago

Is it possible to modify the module to use a local LLM instead of OpenAI? I believe pointing the endpoint in this file somewhere else could do it: https://github.com/wickercar/foundry-ai-text-importer/blob/main/src/module/monster-parser/llm/openaiModels.ts. Maybe a field on the settings page could set the variable that goes there? Something like Ollama lets a user run a local LLM that speaks the same API as GPT, so the module would receive identical-looking output. I plan on tinkering with it in my local environment to test this.
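
For context, here is a minimal sketch of what that endpoint override in `openaiModels.ts` could look like, assuming the module uses the official `openai` npm client; the helper name and parameters below are made up for illustration, not the module's actual code:

```ts
// Sketch only — not code from this repo. Shows how the OpenAI client could be
// pointed at an alternative, OpenAI-compatible endpoint (e.g. a local server).
import OpenAI from 'openai';

/**
 * Build an LLM client. If `endpointOverride` is a non-empty URL
 * (e.g. an Ollama or text-generation-webui server exposing the OpenAI API),
 * requests go there instead of api.openai.com.
 */
export function buildLLMClient(apiKey: string, endpointOverride?: string): OpenAI {
  return new OpenAI({
    apiKey,
    baseURL: endpointOverride || undefined, // undefined → default OpenAI endpoint
    dangerouslyAllowBrowser: true,          // Foundry modules run client-side
  });
}
```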

jrbuda commented 6 months ago

I wanted to put an enhancement label on this one but I don't see a way to do it.

wickercar commented 6 months ago

Hey @jrbuda! Great callout; this is definitely a feature that should be on the roadmap.

Thanks for tinkering with it. Let me know if you have any luck, and I can probably take a look at it this week!

wickercar commented 6 months ago

@jrbuda Hey! Any update here on the local LLM work? Just wondering if I should get cracking on it myself or if you've got something cooking. Thanks again for the feature request.

Trahloc commented 6 months ago

> @jrbuda Hey! Any update here on the local LLM work? Just wondering if I should get cracking on it myself or if you've got something cooking. Thanks again for the feature request.

FYI, the only thing needed is a field for an alternative URL to use instead of OpenAI's: if it's blank, use OpenAI; if it's filled in, use the custom URL. Oobabooga's Text Generation WebUI is compatible with OpenAI's API. If you want to test against one, I can set up a public URL for you. I'm on Discord with the username trahloc right now (I see you posted that ~2 min ago).
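
A rough sketch of that "blank = OpenAI, filled in = custom URL" idea, using Foundry's settings API; the module id and setting key here are hypothetical and only for illustration (`Hooks` and `game` are the standard Foundry VTT globals):

```ts
// Hypothetical module id and setting key, used only to illustrate the idea.
const MODULE_ID = 'foundry-ai-text-importer';

Hooks.once('init', () => {
  game.settings.register(MODULE_ID, 'llmEndpointOverride', {
    name: 'Alternative LLM endpoint URL',
    hint: 'Leave blank to use OpenAI, or enter the URL of an OpenAI-compatible server (e.g. text-generation-webui or Ollama).',
    scope: 'world',
    config: true,
    type: String,
    default: '',
  });
});

// Wherever the client is created, read the setting and fall back to OpenAI if blank.
export function resolveBaseURL(): string | undefined {
  const override = game.settings.get(MODULE_ID, 'llmEndpointOverride') as string;
  return override.trim() !== '' ? override : undefined; // undefined → default OpenAI endpoint
}
```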

jrbuda commented 6 months ago

@Trahloc I was unable to test it since I was having that package install issue. I planned on messing with it today or this weekend, though. I already have a local LLM running, with Mixtral and Mistral set up behind Ollama to act essentially as an OpenAI proxy. I have a Discord bot that already hits my local LLM, so I should know how to formulate the config/URL to hit it correctly.
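
For reference, this is roughly what a request against such a setup could look like, assuming a recent Ollama build that exposes the OpenAI-compatible `/v1` route; the port is the Ollama default and the model name is just whatever has been pulled locally, not anything from this repo:

```ts
// Rough example of pointing the OpenAI client at a local Ollama server.
import OpenAI from 'openai';

async function testLocalModel(): Promise<void> {
  const localClient = new OpenAI({
    apiKey: 'ollama', // Ollama ignores the key, but the client requires a value
    baseURL: 'http://localhost:11434/v1', // default Ollama port, OpenAI-compatible route
    dangerouslyAllowBrowser: true,
  });

  const completion = await localClient.chat.completions.create({
    model: 'mixtral', // any locally pulled model, e.g. mixtral or mistral
    messages: [{ role: 'user', content: 'Parse this monster stat block: ...' }],
  });

  console.log(completion.choices[0].message.content);
}
```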

jrbuda commented 6 months ago

Now that I have the module installed, I am finding that I can't modify the code the way I wanted to. I'm not sure how to carry forward with testing, and unfortunately I haven't built a module locally before either. Let me do some research, but don't wait up on my behalf.

wickercar commented 6 months ago

@jrbuda thanks for the updates! I'll add dev instructions to the README by tomorrow to help unblock you here.