ollama / ollama

Get up and running with Llama 3.2, Mistral, Gemma 2, and other large language models.
https://ollama.com
MIT License

Pulling Multiple Models at Once #4351

Open gusanmaz opened 5 months ago

gusanmaz commented 5 months ago

It would be nice if Ollama could pull multiple models in one go.

Today, I tried to run

ollama pull llava-phi3 llava-llama3 llama3-gradient phi3 moondream codeqwen

and it gave the following error:

Error: accepts 1 arg(s), received 6
taozhiyuai commented 5 months ago

open as many terminals as you can
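The multi-terminal suggestion amounts to running several pulls concurrently; this can also be done from one script. A minimal Python sketch (the `pull_parallel` helper, its worker count, and the example model names are illustrative, not part of Ollama):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def pull_parallel(models, workers=3, runner=subprocess.run):
    """Run several `ollama pull` commands concurrently, like multiple terminals.

    Returns a dict mapping each model name to its pull's exit code.
    """
    def pull(model):
        return model, runner(["ollama", "pull", model]).returncode

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(pull, models))

# Example: pull_parallel(["llava-phi3", "moondream", "codeqwen"])
```

Note that concurrent pulls share the same network connection, so more workers does not necessarily mean faster downloads.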

gusanmaz commented 5 months ago

> open as many terminals as you can

I ended up typing this command:

ollama pull llava-phi3; ollama pull llava-llama3; ollama pull llama3-gradient; ollama pull phi3; ollama pull moondream; ollama pull codeqwen

but it would be convenient if the pull command could accept multiple models.

kbump commented 5 months ago

Plus one on this. I can open multiple terminals, but I would have to come back and re-run the commands for larger models whose downloads fail. Those fail either because the other downloads are saturating the connection, or because the connection is just slow in general. Multi-model download with a recovery/restart option would be a nice added feature.

taozhiyuai commented 5 months ago

> > open as many terminals as you can
>
> I ended up typing this command:
>
> ollama pull llava-phi3; ollama pull llava-llama3; ollama pull llama3-gradient; ollama pull phi3; ollama pull moondream; ollama pull codeqwen
>
> but it would be convenient if the pull command could accept multiple models.

Save all these commands into a .sh file and run it (a .bat file on Windows); it will download them all one by one.

If you do not know how to use .sh or .bat files, do some research on Google. It is not difficult.

gusanmaz commented 5 months ago

> > > open as many terminals as you can
> >
> > I ended up typing this command:
> >
> > ollama pull llava-phi3; ollama pull llava-llama3; ollama pull llama3-gradient; ollama pull phi3; ollama pull moondream; ollama pull codeqwen
> >
> > but it would be convenient if the pull command could accept multiple models.
>
> Save all these commands into a .sh file and run it (a .bat file on Windows); it will download them all one by one.
>
> If you do not know how to use .sh or .bat files, do some research on Google. It is not difficult.

Thank you for your suggestions. However, I'm curious about how you concluded that I might not know how to create script files based on my previous messages.

While I am aware that I can manage without this feature, I think that allowing the pull command to accept multiple models could be a small but useful enhancement to Ollama.

zanderlewis commented 5 months ago

You could create a Python script that takes the model names as input (using argparse) and pulls each model provided.
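A minimal sketch of that idea (the function names and structure below are my own, not an official tool):

```python
import argparse
import subprocess

def build_parser():
    """Parser accepting one or more model names as positional arguments."""
    parser = argparse.ArgumentParser(
        description="Pull several Ollama models in sequence.")
    parser.add_argument("models", nargs="+",
                        help="model names to pass to `ollama pull`")
    return parser

def pull_all(models, runner=subprocess.run):
    """Pull each model in turn; return the names that failed."""
    failed = []
    for model in models:
        if runner(["ollama", "pull", model]).returncode != 0:
            failed.append(model)
    return failed

# Usage from a script: pull_all(build_parser().parse_args().models)
```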

chapterjason commented 5 months ago

Bash, anyone?

#!/usr/bin/env bash

MODELS="llama3:instruct
llama3:8b-instruct-q4_1
llama3:8b-instruct-q5_0
llama3:8b-instruct-q5_1
llama3:8b-instruct-q8_0
llama3:8b-instruct-q2_K
llama3:8b-instruct-q3_K_S
llama3:8b-instruct-q3_K_M
llama3:8b-instruct-q3_K_L
llama3:8b-instruct-q4_K_S
llama3:8b-instruct-q4_K_M
llama3:8b-instruct-q5_K_S
llama3:8b-instruct-q5_K_M
llama3:8b-instruct-q6_K
llama3:8b-instruct-fp16"

for MODEL in $MODELS; do
  ollama pull "$MODEL"
done

pykeras commented 4 months ago

You can run this command in your terminal or use it as a Bash script. This script will continue downloading models even if the connection drops in the middle.

MODELS=("llama3" "qwen2" "gemma2:27b")
for MODEL in "${MODELS[@]}"; do
  echo "Downloading model: $MODEL"
  while ! ollama pull "$MODEL"; do
    echo "Download failed for $MODEL. Retrying..."
    sleep 0.1
  done
  echo "Download successful for $MODEL."
done
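The loop above retries forever at a fixed 0.1-second interval. A variant with a retry cap and exponential backoff avoids hammering a dead connection; this Python helper and its defaults are my own suggestion, not part of Ollama:

```python
import subprocess
import time

def pull_with_retry(model, max_attempts=5, base_delay=2.0,
                    runner=subprocess.run, sleep=time.sleep):
    """Retry `ollama pull` with exponential backoff; True on success."""
    for attempt in range(1, max_attempts + 1):
        if runner(["ollama", "pull", model]).returncode == 0:
            return True
        if attempt < max_attempts:
            # wait 2 s, 4 s, 8 s, ... between attempts
            sleep(base_delay * 2 ** (attempt - 1))
    return False

# Example: all(pull_with_retry(m) for m in ["llama3", "qwen2", "gemma2:27b"])
```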