Open Juan-Cruz-Iturrioz opened 1 year ago
Why... why? Also, eh, WHY.
Faster CPU/GPU for trying it out before buying a decent ML machine?
I am pretty sure you need to write in English here.
> Faster CPU/GPU for trying it out before buying a decent ML machine?

Yeah, no. The binary behind it (alpaca) does not support GPU at all, so your own computer would be faster unless you use a (very) old laptop or have very little memory (this requires as little as 4–6 GB to run, apart from what your OS uses).
Heh, I have a pair of older machines that run the app, but I don't get responses back even after ten minutes. One Windows, one Mac: dual-core 2.7 GHz, 16 GB RAM.
Why? If you want to use Colab, then just use the oobabooga text-generation-webui or the llama.cpp CLI.
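For context, running the llama.cpp CLI in a Colab runtime could look roughly like the sketch below. This is an illustration, not something from this thread: the model path is a placeholder you would have to download yourself, and inside a notebook cell each command would be prefixed with `!`.

```shell
# Sketch: build llama.cpp inside a (CPU-only) Colab runtime.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make -j

# Run the CLI against a locally downloaded model.
# "models/your-model.gguf" is a placeholder, not a real file.
./main -m models/your-model.gguf -p "Hello" -n 64
```

The build and inference here are CPU-only, which matches the point above: without GPU support in the binary, Colab's free tier offers no real speed advantage over a reasonably modern local machine.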
I would like to suggest an improvement to FreedomGPT. It would be great if the model could be executed on Colab. This would allow for easy access and execution of the model in a cloud-based environment, without the need for local installations.
Please consider this suggestion and let me know if there are any plans to implement it in the future. Thank you for your hard work on Freedom GPT!