simonw / llm

Access large language models from the command-line
https://llm.datasette.io
Apache License 2.0

How to install models offline? #152

Open JohnTravolski opened 1 year ago

JohnTravolski commented 1 year ago

I would like to use llm completely offline. I can install the llm python package offline with:

pip download llm

and then running this on the offline computer:

pip install --no-index --find-links "$(convert-path($pwd))" .\llm-0.6.1-py3-none-any.whl
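For completeness, the two steps above can be combined so that llm and every one of its dependencies land in a single folder (the folder name here is my own choice); pip download already resolves and fetches dependencies:

```shell
# On the online machine: download llm plus all of its dependency wheels
pip download llm -d llm-packages

# On the offline machine (after copying the llm-packages folder over):
# install by name, letting pip resolve everything from the local folder
pip install --no-index --find-links llm-packages llm
```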

However, I get stuck once I want to install the models offline. To do that, I have to run:

llm install llm-gpt4all

then, when I actually specify the model I want to use (such as with llm -m ggml-vicuna-7b-1 "Five cute names for a pet penguin"), the .bin file gets downloaded here:

%userprofile%\.cache\gpt4all

I assume I can just copy the .bin model files from the .cache\gpt4all folder to the offline computer (though maybe I'm wrong, I'm not sure). However, I don't know how to replicate what llm install llm-gpt4all does completely offline. I used a drive-monitoring program, and it looks like it adds a number of files under the Python folder in the Program Files directory. For example:

[screenshot: files added under the Python installation directory]

It's not as easy to distinguish which of those are important compared to the single .bin file in the .cache folder.
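One approach I'm considering, assuming llm install is essentially a wrapper around pip install that runs in the same Python environment llm itself lives in (I haven't verified this on Windows, and the folder name below is my own choice):

```shell
# On the online machine: grab the plugin and all of its dependencies
pip download llm-gpt4all -d offline-plugins

# Copy the offline-plugins folder to the offline machine, then install
# from it without touching the network, into the same Python environment
# that llm is installed in:
pip install --no-index --find-links offline-plugins llm-gpt4all

# Finally, copy the previously downloaded .bin model files into
# %userprofile%\.cache\gpt4all so the plugin can find them.
```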

Do you have any advice on how I can install these offline?

simonw commented 1 year ago

I'm afraid I don't have a Windows development machine, so I can only take guesses at how to do this!

It may also be worth trying out the new llm-mlc plugin, though I haven't tried that on Windows myself either: https://github.com/simonw/llm-mlc