Open scenaristeur opened 1 year ago
Skimming the article: while it's very exciting in general, it looks like this would only be relevant to training a model in the first place — not to running one (i.e. it wouldn't make generating images on your CPU any faster). It might theoretically help with hypernetwork training and the like, for those who do so regularly.
Additionally — without looking further into the matter (I am not an ML engineer and am not qualified to evaluate it) — I would temper my excitement: those articles are from 2020, and if that implementation were both an easy drop-in replacement and better than GPUs in the general case, it seems at least likely that we'd be seeing a lot more of this technique by now.
(That said, while it's not particularly useful for this repo, I do see that people are still working on this — for instance there's a commercial venture, ThirdAI, that the researchers mention in a meta-repo of theirs — so it could theoretically become quite useful sooner or later. But, again, not for *running* SD.)
Is there an existing issue for this?
What would your feature do ?
Hi! Could Stable Diffusion run on a "sub-linear deep learning engine" (SLIDE) system instead of a GPU? https://news.rice.edu/news/2020/deep-learning-rethink-overcomes-major-obstacle-ai-industry with demo https://github.com/keroro824/HashingDeepLearning
Also asked on the Stable Diffusion Discord: https://discord.com/channels/1002292111942635562/1002602742667280404/1058425604950724668
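For context, the linked work speeds up dense layers by using locality-sensitive hashing to evaluate only the few neurons likely to fire, rather than the whole layer. Here's a minimal sketch of that idea (not the actual SLIDE implementation — all sizes, names, and the SimHash scheme here are illustrative assumptions):

```python
import random

random.seed(0)

# Hypothetical sizes: input dim, neurons in the layer, hash bits.
D, N, K = 16, 256, 4

# K random hyperplanes define a SimHash signature.
planes = [[random.gauss(0, 1) for _ in range(D)] for _ in range(K)]

def simhash(v):
    # One bit per hyperplane: which side of it the vector falls on.
    return tuple(sum(p * x for p, x in zip(plane, v)) >= 0 for plane in planes)

# One weight vector per neuron in the layer.
weights = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N)]

# Pre-bucket every neuron by the hash of its weight vector.
buckets = {}
for idx, w in enumerate(weights):
    buckets.setdefault(simhash(w), []).append(idx)

def sparse_forward(x):
    # Only evaluate neurons whose weights hash like the input —
    # those are the ones likely to have large dot products with it.
    active = buckets.get(simhash(x), [])
    return {i: sum(w * v for w, v in zip(weights[i], x)) for i in active}

x = [random.gauss(0, 1) for _ in range(D)]
out = sparse_forward(x)
print(f"evaluated {len(out)} of {N} neurons")
```

The savings come from skipping most of the layer, which matters most during training, where every step touches huge layers many times; at image-generation time the diffusion U-Net's work is dominated by convolutions and attention, which this bucketing trick doesn't address.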
Proposed workflow
Run on a normal CPU, instead of purchasing a GPU machine or cloud server resources.
Additional information
No response