gusanmaz opened this issue 5 months ago
open as many terminals as you can
I ended up typing this command:
ollama pull llava-phi3; ollama pull llava-llama3; ollama pull llama3-gradient; ollama pull phi3; ollama pull moondream; ollama pull codeqwen
but it would be convenient if the pull command could accept multiple models.
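For concreteness, the requested invocation (which ollama pull does not support today) would look something like:

ollama pull llava-phi3 llava-llama3 llama3-gradient phi3 moondream codeqwen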
Plus one on this. I can open multiple terminals, but I have to come back and re-run the commands for larger models that fail. Those downloads fail either because the other downloads are hogging the connection or because the network is just slow in general. Multi-model download with recovery/restart options would be a nice added feature.
Save all these commands into a .sh file and run it (on Windows it would be a .bat file); it will download the models one by one. If you do not know how to use .sh or .bat files, do some research on Google; it is not difficult.
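As a minimal sketch of that suggestion (the file name pull_models.sh is just an example), the script is simply the pulls in sequence:

#!/usr/bin/env bash
# pull_models.sh (example name): download each model one after another
ollama pull llava-phi3
ollama pull llava-llama3
ollama pull llama3-gradient
ollama pull phi3
ollama pull moondream
ollama pull codeqwen

The Windows .bat version is the same ollama pull lines, minus the shebang.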
Thank you for your suggestions. However, I'm curious about how you concluded that I might not know how to create script files based on my previous messages.
While I am aware that I can manage without this feature, I think that allowing the pull command to accept multiple models could be a small but useful enhancement to Ollama.
You could create a Python script that takes the model names as input (using argparse) and pulls each model provided.
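Sticking with shell like the other snippets in this thread (a Python/argparse version would be a direct translation), here is a minimal sketch of that idea; the script name is made up:

#!/usr/bin/env bash
# pull_args.sh (hypothetical name): pull every model named on the command line
# usage: ./pull_args.sh llama3 qwen2 gemma2:27b
for MODEL in "$@"; do
  ollama pull "$MODEL"
done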
Bash, anyone?
#!/usr/bin/env bash
# Pull every quantization of llama3 8B instruct, one at a time.
# MODELS is a whitespace-separated string, so the unquoted $MODELS in
# the loop below deliberately relies on word splitting.
MODELS="llama3:instruct
llama3:8b-instruct-q4_1
llama3:8b-instruct-q5_0
llama3:8b-instruct-q5_1
llama3:8b-instruct-q8_0
llama3:8b-instruct-q2_K
llama3:8b-instruct-q3_K_S
llama3:8b-instruct-q3_K_M
llama3:8b-instruct-q3_K_L
llama3:8b-instruct-q4_K_S
llama3:8b-instruct-q4_K_M
llama3:8b-instruct-q5_K_S
llama3:8b-instruct-q5_K_M
llama3:8b-instruct-q6_K
llama3:8b-instruct-fp16"

for MODEL in $MODELS; do
  ollama pull "$MODEL"
done
You can run this in your terminal or save it as a Bash script. Because the loop retries until ollama pull succeeds (and a retried pull generally resumes a partially completed download), the script keeps making progress even if the connection drops in the middle.
#!/usr/bin/env bash
MODELS=("llama3" "qwen2" "gemma2:27b")

# "${MODELS[@]}" expands every element; a bare $MODELS would only give the first.
for MODEL in "${MODELS[@]}"; do
  echo "Downloading model: $MODEL"
  while ! ollama pull "$MODEL"; do
    echo "Download failed for $MODEL. Retrying..."
    sleep 0.1  # brief pause before retrying; increase if your connection needs time to recover
  done
  echo "Download successful for $MODEL."
done
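To use it, save the script under any name (pull_retry.sh below is just a placeholder), make it executable, and run it:

chmod +x pull_retry.sh
./pull_retry.sh

The while ! ollama pull ... loop only exits once a pull succeeds, which is exactly the recovery/restart behavior asked for earlier in the thread.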
It would be nice if it were possible to pull multiple models in one go in Ollama.
Today, I tried to run
and it gave the following error: