Closed: @AlexanderYW closed this issue 1 year ago.
Hey @AlexanderYW
Unfortunately this isn't a supported use case, although in theory it's possible. Definitely read any notes in the upstream diffusers project repository, as I don't have any knowledge or experience here.
Regarding the above error, you could bypass it in a few ways:

1) Pass `{ callInputs: { "xformers_memory_efficient_attention": false } }`; this is hopefully enough.
2) Comment out all the `enable_xformers_memory_efficient_attention()` lines.
3) Use the latest `:dev` release / branch, which uses PyTorch 2 and doesn't need xformers anymore.
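As a sketch, option 1 amounts to adding that key to the request payload. The payload shape below (the `modelInputs` field, the endpoint, the prompt) is an assumption for illustration; only the `callInputs.xformers_memory_efficient_attention` key comes from the thread.

```python
import json

# Hypothetical request payload; everything except the
# callInputs.xformers_memory_efficient_attention key is illustrative.
payload = {
    "modelInputs": {"prompt": "a photo of an astronaut riding a horse"},
    "callInputs": {
        # Disable xformers so the container can run without a GPU.
        "xformers_memory_efficient_attention": False,
    },
}

print(json.dumps(payload, indent=2))

# To actually send it (assumes the `requests` package and a running container
# on an assumed host/port):
# import requests
# resp = requests.post("http://localhost:8000/", json=payload)
# print(resp.json())
```

Adjust field names to whatever your container version actually expects.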
Hope this is the only blocker, and I'll be interested to hear about your experiences. Definitely open to PRs to document and improve CPU-only support.
Thanks!
Hey @gadicc, after writing this issue I actually saw the code changes that had been added lately, noticed that the `:dev` release had changed with respect to xformers, and it actually worked.
But thanks for your response; I didn't know about the `callInputs` key :)
Hi,
I'm trying to run the project on my server, which only has a CPU. Is that possible, and if so, which parameters do I need to apply?
I'm already running the container without the `--gpus all` parameter.
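For reference, a CPU-only start is just the usual `docker run` with the GPU flag omitted; the image name and port below are assumptions, not taken from the thread.

```shell
# CPU-only run: same command as usual, but without `--gpus all`.
# Image name and port mapping are assumptions; adjust to your setup.
docker run -it -p 8000:8000 gadicc/diffusers-api
```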
I believe I'm running version 1.6.0
Here is the error I'm getting: