Open · simonw opened this issue 1 year ago
It would be neat if the gpt4all Python library handled this for me.
It looks like the desktop app stores models here:
~/Library/Application Support/nomic.ai/GPT4All/
On my computer:
% ls ~/Library/Application\ Support/nomic.ai/GPT4All/ | cat
ggml-mpt-7b-chat.bin
ggml-replit-code-v1-3b.bin
incomplete-GPT4All-13B-snoozy.ggmlv3.q4_0.bin
incomplete-ggml-gpt4all-j-v1.3-groovy.bin
incomplete-orca-mini-7b.ggmlv3.q4_0.bin
localdocs_v0.db
log-prev.txt
log.txt
orca-mini-3b.ggmlv3.q4_0.bin
test_write.txt
% ls ~/.cache/gpt4all | cat
ggml-gpt4all-j-v1.3-groovy.bin
ggml-mpt-7b-chat.bin
ggml-replit-code-v1-3b.bin
ggml-vicuna-13b-1.1-q4_2.bin
ggml-vicuna-7b-1.1-q4_2.bin
nous-hermes-13b.ggmlv3.q4_0.bin
orca-mini-3b.ggmlv3.q4_0.bin
orca-mini-7b.ggmlv3.q4_0.bin
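In the meantime, something like this should work for loading the desktop app's copies directly. The model_path and allow_download arguments are documented in the gpt4all Python bindings, but treat this as an untested sketch rather than a recipe:

from pathlib import Path

from gpt4all import GPT4All

# Directory the GPT4All desktop app downloads models into on macOS
app_models = Path.home() / "Library" / "Application Support" / "nomic.ai" / "GPT4All"

# Point the bindings at the app's copy instead of ~/.cache/gpt4all, and
# disable downloading so a mistyped name fails instead of re-fetching
model = GPT4All(
    "orca-mini-3b.ggmlv3.q4_0.bin",
    model_path=str(app_models),
    allow_download=False,
)
print(model.generate("Say hello", max_tokens=20))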
What do you think about letting a user specify an individual model's path? If whatever/io.datasette.llm/paths.json exists and there's a key for the model, use that path instead of the default.
Or eventually an overall config.json that contains everything I've changed from the defaults. For example (with a proper schema, of course):
{
  "default": "gpt-4",
  "paths": {
    "llamawizardorcamanatee-1234": "/Users/username/models/whatever.ext",
    "alpacawarlockdugong-3": "/Users/username/.cache/hugging_face/hub/whoknows.why"
  },
  "globals": {
    "temperature": 0.9,
    "hello": "hiya",
    "etc": "etc..."
  }
}
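Concretely, the lookup could be tiny. A sketch, where resolve_model_path and the config directory are illustrative rather than anything llm actually ships:

import json
from pathlib import Path

# Illustrative config location; "whatever/" above stands in for
# wherever llm keeps its per-user state
CONFIG_DIR = Path.home() / "io.datasette.llm"

def resolve_model_path(model_id: str, default: Path) -> Path:
    """Return the user-configured path for model_id, falling back to default."""
    paths_file = CONFIG_DIR / "paths.json"
    if paths_file.exists():
        paths = json.loads(paths_file.read_text())
        if model_id in paths:
            return Path(paths[model_id])
    return default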
That's a good path forward. I've been exploring a version of that in this issue:
A config.json would be great. I had to modify the code manually to store the models in another directory. Thanks for this plugin.
It'd also be nice to be able to set this with an env var; model duplicates from using multiple clients get heavy very fast.
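A sketch of the env var idea; LLM_GPT4ALL_MODELS is a made-up name, not something the plugin reads today:

import os
from pathlib import Path

# Use the (hypothetical) variable when set, else the current default
env_dir = os.environ.get("LLM_GPT4ALL_MODELS")
models_dir = Path(env_dir) if env_dir else Path.home() / ".cache" / "gpt4all"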
Great library, thanks for writing it.
On Linux the models are stored in ~/.local/share/nomic.ai/GPT4All/.
Right now I'm just symlinking it:
cd ~/.cache/
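# this deletes any models that only exist in ~/.cache/gpt4all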
rm -r gpt4all
ln -s ~/.local/share/nomic.ai/GPT4All/ gpt4all
https://twitter.com/gwthompson/status/1679166570440843264