Closed by natanloterio 6 months ago
Hi there,
this indeed looks weird. I am not sure why it shows Llama-2-7b even though you tried to download Mistral.
In my case, when I tried to download it without being authorized, the message is quite clear:
And then with the correct credentials:
I don't have a good explanation for why it looks different for you. Is this perhaps an older LitGPT version? Could you try the latest one?
pip install -U git+https://github.com/Lightning-AI/litgpt.git
Mistral recently added an auth step to access their models (https://huggingface.co/mistralai), so any tutorials that use them need an update to include the access token now.
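For the tutorial update, both ways of supplying the token could be shown. A minimal sketch, following the flag and variable names used elsewhere in this thread (`your_token` is a placeholder, and the `litgpt` function below is only a stub so the snippet runs standalone):

```shell
# Stub so this sketch is self-contained; remove it to use the real CLI.
litgpt() { echo "stub: litgpt $*"; }

# Option 1: set the token via the environment variable.
export HF_TOKEN=your_token
litgpt download mistralai/Mistral-7B-Instruct-v0.2

# Option 2: pass the token explicitly on the command line.
litgpt download mistralai/Mistral-7B-Instruct-v0.2 --access_token=your_token
```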
I can take that. Still weird that it shows Llama 2 there, though. Maybe a side effect of the particular access token that was used.
Instead of a stack trace + error message, we should produce a clean error message (without a stack trace) that provides clear instructions.
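One possible shape for this, sketched with a stand-in `download` function and a hypothetical `AccessDeniedError` (the real implementation would catch the corresponding error from `huggingface_hub` instead):

```python
import sys

class AccessDeniedError(Exception):
    """Hypothetical stand-in for the gated-repo error raised during download."""

def download(repo_id, token=None):
    # Stand-in for the real checkpoint download; fails without a token.
    if token is None:
        raise AccessDeniedError(repo_id)

def main(repo_id, token=None):
    try:
        download(repo_id, token)
    except AccessDeniedError as err:
        # Print clear instructions instead of the raw stack trace.
        print(
            f"Access to '{err}' is restricted. Set the HF_TOKEN=your_token "
            f"environment variable or pass --access_token=your_token, and "
            f"accept the license terms at https://huggingface.co/{err}.",
            file=sys.stderr,
        )
        return 1
    return 0
```

The key point is that the exception is caught at the CLI boundary and turned into an exit code plus guidance, rather than propagating to the interpreter's default traceback handler.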
hey @natanloterio we updated the examples so that they don't require a HF token: https://github.com/Lightning-AI/litgpt/pull/1371
the last comment still applies: we should swallow the stack trace and instead just present the message
We improved the error messages in #1373, thanks again for the feedback!
Hello guys, thank you for the support on this issue. I updated my local repo and it worked.
Regarding the divergent logs showing Llama-2-7b and Mistral, it's because I mixed up the copy and paste. Both models were giving me the same error of missing HF_TOKEN.
It seems that a short instruction on how to get an access token would be appreciated by the community, so I opened a PR.
Let me know what you think.
Cheers
I created the access token and ran the command litgpt download meta-llama/Meta-Llama-3-8B-Instruct --access_token=
The terminal output was the following:
Setting HF_HUB_ENABLE_HF_TRANSFER=1
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/Bin_LLM/bin/litgpt", line 8, in <module>
[traceback truncated]
... set the HF_TOKEN=your_token environment variable or pass --access_token=your_token; the token may not have sufficient access rights. Please visit https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct for more information.
Hi there,
the specific models also require that you accept the license terms on the individual model pages. For the model you are interested in, have you accepted these terms? E.g., if you go to https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct, there should be a form on the landing page. After accepting the terms, it should look like this:
OK, I get it. I have not been granted access to this model. I will try to get access rights. Thank you!
Steps to reproduce
python -m venv .venv
source .venv/bin/activate
pip install 'litgpt[all]'
litgpt download --repo_id mistralai/Mistral-7B-Instruct-v0.2
What I was expecting
mistralai/Mistral-7B-Instruct-v0.2
What I got
About the error
It seems that the exception is related to an invalid Hugging Face credential, but my credentials are fresh and valid.
Suggestion
Improve the error handling so it gives a clearer explanation of what must be done to get it running properly.