Closed zaccharieramzi closed 2 years ago
Actually, I just needed to add the condition `if torch.cuda.is_available():`
before loading any CUDA source.
I will submit a PR if you're interested.
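As a rough sketch, that guard might look like the following. Note this is an illustration, not the repo's actual code: the extension name and source paths here are hypothetical.

```python
import torch
from torch.utils.cpp_extension import load

fused = None
if torch.cuda.is_available():
    # Compile and load the CUDA extension only when a GPU is present;
    # on CPU-only machines `fused` simply stays None and CPU code paths
    # can be used instead.
    fused = load(
        name="fused",  # hypothetical extension name
        sources=[
            "ops/fused_bias_act.cpp",       # hypothetical path
            "ops/fused_bias_act_kernel.cu", # hypothetical path
        ],
    )
```

This keeps `import` of the module safe on CPU-only machines, since the JIT compilation step is never triggered there.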
Hi @zaccharieramzi , thanks for your interest in our work, and thank you for the useful PR! I hadn't really thought about running the experiments on CPU, since diffusion models are already slow on GPUs. Still, I believe it would be useful for researchers who currently only have access to CPUs.
In my case it's not necessarily about not having a GPU; it's more that I want to be able to write unit tests for what I am doing and potentially have these unit tests run in CI.
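For example, a CPU-only CI job can run most tests and skip the GPU-dependent ones with a `pytest` marker; this is a generic sketch, not taken from the repo, and the test bodies are placeholders:

```python
import pytest
import torch

# Marker for tests that need a GPU; they are skipped on CPU-only CI runners.
requires_cuda = pytest.mark.skipif(
    not torch.cuda.is_available(), reason="CUDA is not available"
)

def test_forward_pass_on_cpu():
    # Placeholder: a tiny forward pass that should work on any machine.
    x = torch.randn(2, 3)
    assert torch.nn.Linear(3, 4)(x).shape == (2, 4)

@requires_cuda
def test_cuda_kernels():
    # Only runs on machines where a GPU is visible.
    assert torch.cuda.device_count() >= 1
```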
Regarding the PR, feel free to merge it (I don't have access to do so).
If you are interested, I also have a branch where I refactored the code to make said unit tests easier to write.
Thanks, I've merged the PR now.
Would you mind submitting a PR about the unit tests? That would be valuable.
Hi,
I am very interested in your work, and I would like to reproduce its results and potentially extend parts of it.
I am just starting by running the retrospective inference command for real-valued data:
I am currently running into the following error when trying to run on CPU:
I tried to fix it by adding `with_cuda=torch.cuda.is_available(),` to `fused = load(...`, but got the following error:

One extra thing I did before running these commands was to also install Ninja (it is not listed in the requirements).