After experimenting with this tool to categorize Actual transactions using local LLMs via Ollama, I'm finding the base models to be somewhat lacking in accuracy. I've created a new model on top of Llama 3.2 using Open WebUI workspaces that categorizes my transactions better; however, I can't seem to get the Open WebUI endpoint working with this tool.
Open WebUI should be able to pass along chat completions via its API, but no matter how I format the OPENAI_BASE_URL variable, I get a 405 error when the container starts processing transactions. Would it be possible to look into this further as one way to support fine-tuned local models?
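In case it helps narrow this down: a 405 usually means the request reached the server but hit a path that doesn't accept that method, which points at how the base URL and the chat-completions path get joined. The sketch below is only my assumption about how an OpenAI-compatible client builds the request URL (the port and the `/api` prefix are guesses for a default Open WebUI install, not verified against this tool's code); it shows which full URL each candidate OPENAI_BASE_URL value would produce:

```python
def chat_completions_url(base_url: str) -> str:
    """Mimic how OpenAI-style clients typically build the request URL:
    take the base URL as given and append '/chat/completions'."""
    return base_url.rstrip("/") + "/chat/completions"

# Candidate OPENAI_BASE_URL values for a default Open WebUI install.
# Port 3000 and the path prefixes are assumptions for illustration only.
for base in [
    "http://localhost:3000",      # no prefix -> path may not exist on the server
    "http://localhost:3000/api",  # Open WebUI's API prefix (assumed)
    "http://localhost:3000/v1",   # the prefix many OpenAI-compatible servers use
]:
    print(base, "->", chat_completions_url(base))
```

If the tool appends a path segment the Open WebUI server doesn't serve (or vice versa), that mismatch could explain the 405 regardless of how the variable is formatted.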
Thanks in advance!