honne23 opened 4 months ago
I just signed up for TogetherAI to investigate this, and it looks like it's a Hugging Face issue, as you suspected. For guidance to work optimally, we try to find and load a model's tokenizer if it is open source, falling back to a more generic one only if we can't load it. This helps us accurately do things like token healing.
Mistral's Hugging Face repositories are "restricted": users must have a Hugging Face account and accept Mistral's terms and conditions before viewing the repo (e.g. https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1).
On the client, we can't load the Mistral tokenizer without this authenticated access. I think that leaves two solutions:
1) Go to Hugging Face, accept the Mistral terms and conditions, and authenticate with the Hugging Face CLI before you run guidance (https://huggingface.co/docs/huggingface_hub/main/en/guides/cli#huggingface-cli-login). In the future, I'll try to see if we can prompt/trigger this from guidance directly for models that throw this 401 error.
2) Load another tokenizer from Transformers or tiktoken and pass it into the init argument of models.TogetherAI(). This may result in slightly less optimal use of features like token healing, but will still work for you.
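Option 2 might look something like the sketch below. This is an untested illustration, not a verified recipe: it assumes the init argument of models.TogetherAI() that accepts a tokenizer is named `tokenizer`, and it uses the openly available gpt2 tokenizer purely as a stand-in for the gated Mistral one.

```python
import os

from transformers import AutoTokenizer

# Any openly available tokenizer works as a fallback; "gpt2" is purely
# illustrative. It will not tokenize identically to Mixtral's tokenizer,
# so features like token healing may be slightly less accurate.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

api_key = os.environ.get("TOGETHERAI_API_KEY")
if api_key:
    # Imported lazily so the tokenizer fallback can be exercised even
    # where guidance isn't installed.
    from guidance import models

    # Assumption: the tokenizer init argument is named `tokenizer`.
    lm = models.TogetherAI(
        "mistralai/Mixtral-8x22B-Instruct-v0.1",
        tokenizer=tokenizer,
        api_key=api_key,
    )
```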
I appreciate that these aren't incredibly satisfying answers, so I'll leave this issue open as we think of a more seamless solution here. But hopefully this unblocks your usage in the interim!
Wanted to update and say that @Harsha-Nori's solution worked for me! I'll leave it up to you whether you'd like to close this, in case you'd like to track a full solution in a separate PR.
The bug
I've been unable to use models from together.ai due to a 401 error relating to Hugging Face, despite passing in a valid API key, both as an api_key init variable and as the environment variable TOGETHERAI_API_KEY.
To Reproduce
Dependencies:
System info (please complete the following information):
Guidance Version (guidance.__version__): 0.1.12 & 0.1.13