Mantella is a Skyrim and Fallout 4 mod which allows you to naturally speak to NPCs using Whisper (speech-to-text), LLMs (text generation), and Piper / xVASynth / XTTS (text-to-speech).
Could the documentation be updated to clearly outline the steps for setting up local LLMs?
I spent around two hours trying to figure out why there was no "Large Language Model" tab in the Skyrim MOD section for Mantella. Additionally, the web configuration doesn't mention that you have to manually enter the service URL when Mantella starts up in order to connect an OpenAI-compatible backend such as Ollama; it also seems I had to add `--extensions openai` to CMD_FLAGS.txt.
This information seems to be missing from the documentation, and more clarity here would save users significant time and effort!
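For reference, here is a rough sketch of the two pieces the docs could cover. The values below are what I believe are the defaults; the exact flags and URL depend on which local backend you run:

```
# CMD_FLAGS.txt (text-generation-webui): enables its OpenAI-compatible API
--extensions openai

# URL to enter in Mantella's LLM service field at startup
# text-generation-webui default: http://127.0.0.1:5000/v1
# Ollama default:                http://localhost:11434/v1
```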