Closed — andysingal closed this issue 4 weeks ago.
You need to ensure your models are present in /root/.cache/py-llm-core/models/
You can always change the directory using the MODELS_CACHE_DIR environment variable.
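As a quick illustration of the point above, the cache location can be overridden before launching anything that uses py-llm-core. The target path here is only an example:

```shell
# Point py-llm-core at a custom model directory (example path)
export MODELS_CACHE_DIR="$HOME/models/py-llm-core"
# Create it if it does not exist yet
mkdir -p "$MODELS_CACHE_DIR"
```

The GGUF files then need to be placed in that directory instead of the default `~/.cache/py-llm-core/models`.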
Thanks, here is a follow-up question: I want to run a bash script over multiple images, but the Jupyter notebook dies on an RTX 4090.
Where I need help:

```bash
%%bash
IMG_DIR=~/Desktop/Papers/LLaVA/
for img in "${IMG_DIR}"*.jpg; do
    base_name=$(basename "$img" .jpg)
    # Define the output file name based on the image name
    output_file="${IMG_DIR}${base_name}.txt"
    # Execute the command and save the output to the defined output file
    /Users/rlm/Desktop/Code/llama.cpp/bin/llava \
        -m ../models/llava-7b/ggml-model-q5_k.gguf \
        --mmproj ../models/llava-7b/mmproj-model-f16.gguf \
        --temp 0.1 \
        -p "Describe the image in detail. Be specific about graphs, such as bar plots." \
        --image "$img" > "$output_file"
done
```
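One possible workaround (a sketch, not tested on your setup): a `%%bash` cell dies with the kernel, so the loop above can instead be saved as a standalone script and launched with `nohup` from a regular terminal, where a notebook crash cannot kill it. The llava binary and model paths below are the same assumptions as in the loop above:

```shell
# Write the loop to a script file, then run it detached from Jupyter.
cat > describe_images.sh <<'EOF'
IMG_DIR=~/Desktop/Papers/LLaVA/
for img in "${IMG_DIR}"*.jpg; do
    base_name=$(basename "$img" .jpg)
    output_file="${IMG_DIR}${base_name}.txt"
    /Users/rlm/Desktop/Code/llama.cpp/bin/llava \
        -m ../models/llava-7b/ggml-model-q5_k.gguf \
        --mmproj ../models/llava-7b/mmproj-model-f16.gguf \
        --temp 0.1 \
        -p "Describe the image in detail. Be specific about graphs, such as bar plots." \
        --image "$img" > "$output_file"
done
EOF
# Detach the job; progress and errors go to describe_images.log
nohup bash describe_images.sh > describe_images.log 2>&1 &
```

Progress can then be followed with `tail -f describe_images.log`, and the loop keeps running even if the notebook session is closed.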
On Tue, Nov 28, 2023 at 01:27, Pierre Alexandre SCHEMBRI wrote:
```python
MODELS_CACHE_DIR = config(
    "MODELS_CACHE_DIR",
    default=os.path.expanduser("~/.cache/py-llm-core/models"),
)
```
Closing stale issue.
While working on the Baklaava model:
Download:
ERROR: