Closed: leszekhanusz closed this issue 1 year ago
Same issue here. It also happens if I select one of the defaults in the list that's presented instead of manually specifying a HuggingFace model.
It seems like maybe the huggingface API is having trouble, because I tried to access "https://huggingface.co/api/models/facebook/opt-6.7b/tree/main" in the browser and it similarly timed out and gave a 504 error. Probably not an issue with this tool.
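For anyone who wants to check from a terminal instead of the browser, something like this prints just the HTTP status code the Hub returns (the model id is only an example):

```
curl -s -o /dev/null -w "%{http_code}\n" https://huggingface.co/api/models/facebook/opt-6.7b
```

A 504 here would confirm the problem is on the Hub side rather than in the webui.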
Does anyone have a good guide for how to manually install models? I can see where the tool tells me to put them but I'm not sure what files I should put there and in what structure. With stable diffusion it's a single model file but it seems like these models are composed of multiple files. Do I put the whole folder?
I've been having this issue since this morning. Either the API is unreachable or some address has changed. The HF status page says it's up, but the models seem to be inaccessible through their API URL.
@Cobrabb You only need the json and txt files plus the model file itself (the large .bin or .safetensors, usually the .safetensors one). Put them inside the /models folder in the main dir of ooba. Or you can try git clone https://huggingface.co/facebook/opt-6.7b inside your /models folder.
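A rough sketch of the git route, assuming git-lfs is installed and the default webui folder layout (adjust the paths to wherever your install lives):

```
# illustrative paths; the clone pulls all the config/tokenizer files plus the LFS weights
cd text-generation-webui/models
git lfs install
git clone https://huggingface.co/facebook/opt-6.7b
```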
Both HF and git clone are giving me 504; GitHub seems to have a lot of problems today too. I am able to download by opening the file page and clicking download where it says "This file is stored with Git LFS. It is too big to display, but you can still [download] it."
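If you want to script that manual download instead of clicking, the download button points at the Hub's resolve endpoint. Here `<repo>` and `<filename>` are placeholders, not real paths:

```
# e.g. <repo> = facebook/opt-6.7b, <filename> = one of the .bin/.safetensors files listed in the repo
wget https://huggingface.co/<repo>/resolve/main/<filename>
```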
It was fixed.
Describe the bug
I tried to download the TheBloke/WizardLM-7B-uncensored-GPTQ model but I received a 504 error code.
I'm not sure if I'm doing something wrong or if it's down for everybody. The HuggingFace status page seems to indicate that everything is running smoothly for now.
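For anyone reproducing this from the command line, the repo's download-model.py script should fetch the same model; this is a sketch assuming a default checkout of text-generation-webui:

```
# run from the text-generation-webui directory
python download-model.py TheBloke/WizardLM-7B-uncensored-GPTQ
```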
Is there an existing issue for this?
Reproduction
Screenshot
No response
Logs
System Info