Closed anjok closed 3 days ago
I'm interested in this as well. I'll have to put time aside to mess with it on my Pi, but I would love to see this added.
A "Custom" option for Knowledge Graph has been added in the latest release, which lets you enter a custom endpoint.
Given privacy and cost concerns, it would be nice if we could use self-hosted LLMs.
I downloaded LM Studio, installed "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF", and started the server on port 1234.
This is the minimal fix needed to have Vector use it:
Making it actually user-configurable wouldn't require much more work, but more than I can currently put in.
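The actual patch isn't included in this thread, but for context: LM Studio's local server exposes an OpenAI-compatible API, so pointing a client at it is mostly a matter of swapping the base URL. A minimal sketch of what a request to it looks like (the endpoint path and model name are LM Studio defaults, not anything from Vector's code; `build_chat_request` is a hypothetical helper for illustration):

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions protocol.
# Port 1234 is the LM Studio default; Vector's real config keys may differ.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt, model="lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF"):
    """Build an OpenAI-style chat completion request for the local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Hello from my Pi!")
# Sending it requires LM Studio actually running on localhost:1234:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the API shape matches OpenAI's, any client that lets you override the base URL should work the same way.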
It was a ton of work to get this to actually run, as I don't think the builds on macOS are correct, but I may just be doing something wrong... why is it using `/root` in the builds, and why is `sudo` needed to actually build? Also, there's no default `brew` user. And finally, it couldn't find my `vosk_api.h` file, and I had to symlink/copy a lot of stuff around.