alexdwagner closed this issue 1 year ago
Yeah I'm sure it's complicated for someone who's not a dev lol. I'm not sure why it's not working for you, but luckily a new one-click installer was just released over here. It should make things easier.
It'll ask you what brand of GPU you have (or no GPU), then you select the letter corresponding to it. When you get to the model-downloading step, select the letter for downloading another model (the letter L) and enter decapoda-research/llama-7b-hf, or the model of your choice.
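If the installer's menu gives you trouble, you can also fetch the model by hand. A minimal sketch, assuming you've cloned text-generation-webui and its bundled download-model.py script works as it does upstream (the variable names here are just for illustration):

```shell
#!/bin/sh
# Sketch of the manual download path, run from the text-generation-webui folder.
# MODEL is any Hugging Face model ID; weights land under models/<org>_<name>,
# i.e. with "/" replaced by "_".
MODEL="decapoda-research/llama-7b-hf"
TARGET="models/$(printf '%s' "$MODEL" | tr '/' '_')"

# Print the command rather than running it here, since the actual download
# is several gigabytes:
echo "python download-model.py $MODEL   # saves to $TARGET"
```

Once the download finishes, the webui should list the model under that `models/` folder on its next start.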
Make sure you're able to run and chat with the model using text-generation-webui before trying to use the bot, but once you're able to, just follow steps 2-5 here and you should be good to go!
Let me know if you run into any more issues! There's an invite to a Discord btw, and I'm in there too.
Thank you so much! I'm running the installer tonight, downloading the model right now. I joined the Discord too. I really appreciate your reply. I'm excited to try out the one-click installer!
Also, you probably already know this, but for anyone who stumbles across this issue: CUDA only runs on NVIDIA graphics cards, which aren't supported on Apple computers.
Perhaps it's way over my head to attempt to run this repo without being a dev, but this seemed like the best way to contact you, so here goes.
The terminal command below doesn't work for me:
When I run this, I get:
Any ideas on how to fix?