zylon-ai / private-gpt

Interact with your documents using the power of GPT, 100% privately, no data leaks
https://privategpt.dev
Apache License 2.0
53.77k stars · 7.22k forks

"ValueError: Provided model path does not exist. Please check the path or provide a model url to download." #1949

Closed anamariaUIC closed 2 months ago

anamariaUIC commented 4 months ago

Also part of the error:

```
File "/mmfs1/scratch/anamaria/privateGPT2/privateGPT/privategpt/components/llm/llm_component.py", line 37, in __init__
  logger.warning(
Message: 'Failed to download tokenizer %s. Falling back to default tokenizer.'
Arguments: ('mistralai/Mistral-7B-Instruct-v0.2', OSError('You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2.
401 Client Error. (Request ID: Root=1-66513d8d-296ab36417a5838f423b6c99;83a2c024-313b-4796-a33e-6428bc744429)

Cannot access gated repo for url https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/resolve/main/config.json.
Access to model mistralai/Mistral-7B-Instruct-v0.2 is restricted. You must be authenticated to access it.'))
...
```

Not sure why it is pulling out anything from Mistral....
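For context on why this warning is non-fatal: PrivateGPT tries to download the tokenizer named in the active settings profile and, if that fails (here, a 401 from the gated Mistral repo, which is the tokenizer in the default profile), it logs the warning above and falls back to a default tokenizer. A minimal sketch of that try/warn/fall-back pattern, with a hypothetical `download_tokenizer` standing in for the real Hugging Face download (this is not the actual `llm_component.py` code):

```python
import logging

logger = logging.getLogger(__name__)


def download_tokenizer(name: str) -> str:
    """Hypothetical stand-in for the real tokenizer download, which raises
    OSError for gated repos when no valid Hugging Face token is configured."""
    if name.startswith("mistralai/"):
        raise OSError("You are trying to access a gated repo.")
    return f"tokenizer:{name}"


def set_global_tokenizer(name: str, default: str = "default-tokenizer") -> str:
    """Try the configured tokenizer; on any failure, warn and fall back."""
    try:
        return download_tokenizer(name)
    except Exception:
        logger.warning(
            "Failed to download tokenizer %s. Falling back to default tokenizer.",
            name,
        )
        return default
```

So seeing `mistralai/Mistral-7B-Instruct-v0.2` in the log suggests the running profile is still using the default tokenizer setting rather than the one you configured.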

In settings.yaml I have:

```yaml
llm:
  mode: llamacpp
  prompt_style: "llama2"
  # Should be matching the selected model
  max_new_tokens: 4000
  context_window: 8000
  tokenizer: MayaPH/GodziLLa2-70B
  temperature: 0.1  # The ...

...

llamacpp:
  llm_hf_repo_id: TheBloke/GodziLLa2-70B-GGUF
  llm_hf_model_file: godzilla2-70b.Q4_K_M.gguf

huggingface:
  embedding_hf_model_name: BAAI/bge-small-en-v1.5
  access_token: ${HUGGINGFACE_TOKEN:}
```

This error happened after running:

```shell
PGPT_PROFILES=local make run
```

The previous install commands finished without errors:

```shell
poetry install --extras "ui llms-llama-cpp embeddings-huggingface vector-stores-qdrant"
poetry run python scripts/setup
CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```
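One detail worth noting in the config above: `access_token: ${HUGGINGFACE_TOKEN:}` is a placeholder that resolves from the environment, with an empty string after the colon as the default. If `HUGGINGFACE_TOKEN` is not exported in the shell that runs `make run`, the token resolves to empty and gated downloads fail with a 401. A simplified sketch of that `${VAR:default}` resolution (not PrivateGPT's actual settings loader):

```python
import os
import re

# Matches "${VAR:default}"; group 1 is the variable name, group 2 the default.
_PLACEHOLDER = re.compile(r"\$\{(\w+):(.*)\}")


def resolve_placeholder(value: str, env=os.environ) -> str:
    """Resolve a ${VAR:default}-style placeholder against the environment."""
    m = _PLACEHOLDER.fullmatch(value)
    if not m:
        return value  # plain value, no substitution
    name, default = m.group(1), m.group(2)
    return env.get(name, default)
```

For example, `resolve_placeholder("${HUGGINGFACE_TOKEN:}", {})` yields an empty string, which is exactly the situation that produces the gated-repo error.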

jaluma commented 2 months ago

You're trying to access a gated model. Please check the HF documentation, which explains how to generate a HF token. After that, request access to the model by going to the model's repository on HF and clicking the blue button at the top. Finally, configure the HUGGINGFACE_TOKEN environment variable and all should work :)
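Building on that answer, a small pre-flight check you could run before `PGPT_PROFILES=local make run` to catch a missing token early. This is an optional sketch, not part of PrivateGPT; it only assumes that current Hugging Face user access tokens start with the `hf_` prefix:

```python
import os


def hf_token_configured(env=os.environ) -> bool:
    """Return True if HUGGINGFACE_TOKEN is set and looks like an HF token.

    Hugging Face user access tokens currently start with the 'hf_' prefix.
    """
    return env.get("HUGGINGFACE_TOKEN", "").startswith("hf_")


if __name__ == "__main__":
    if not hf_token_configured():
        print("Set HUGGINGFACE_TOKEN before running `PGPT_PROFILES=local make run`.")
```

Remember that the token alone is not enough for a gated model: you also need to click through the access request on the model page, as described above.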