Open hoalarious opened 1 year ago
Not sure how it impacts performance, but at least for testing you can move `self.beta` and `self.log_beta_tilde_clipped` to your GPU on the line before the indexing, like this: `self.beta = self.beta.to(xt.device)` and `self.log_beta_tilde_clipped = self.log_beta_tilde_clipped.to(xt.device)`.
You have to do the same for the subsequent errors; after that you should be able to sample successfully.
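A minimal sketch of the workaround above. The class and tensor values here are hypothetical stand-ins (the real repo builds its schedule differently); the point is just that CPU-resident schedule tensors are moved to the input's device before indexing, which is a no-op once they are already there:

```python
import torch

class Sampler:
    """Hypothetical sketch: diffusion schedule tensors created on CPU."""

    def __init__(self, num_steps=10):
        # Placeholder beta schedule; the actual repo computes its own values.
        self.beta = torch.linspace(1e-4, 0.02, num_steps)
        self.log_beta_tilde_clipped = torch.log(self.beta.clamp(min=1e-20))

    def step(self, xt, timesteps):
        # Move schedule tensors to the same device as the input tensor
        # before indexing (avoids a CPU/GPU device-mismatch error).
        self.beta = self.beta.to(xt.device)
        self.log_beta_tilde_clipped = self.log_beta_tilde_clipped.to(xt.device)
        return self.beta[timesteps], self.log_beta_tilde_clipped[timesteps]
```

The `.to(xt.device)` calls are cheap after the first sampling step, since `Tensor.to` returns the same tensor when it is already on the target device.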
Thanks, I was hoping for a more elegant solution, but it's probably not a good idea to get tied down on it at this stage. This talking-head implementation is rather computationally intensive, taking 16 minutes for a 2-second clip. Going to test it on longer outputs to see if it gets better. I don't think that's caused by the fixes you suggested.
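For anyone looking for the more elegant fix mentioned above: the usual PyTorch pattern is to register the schedule tensors as buffers on the module, so a single `model.to(device)` moves them along with the parameters and no per-step `.to(xt.device)` calls are needed. This is a hypothetical sketch, not the repo's actual code:

```python
import torch
import torch.nn as nn

class Diffusion(nn.Module):
    """Hypothetical sketch: schedule tensors registered as module buffers."""

    def __init__(self, num_steps=10):
        super().__init__()
        # Placeholder schedule values; the actual repo computes its own.
        beta = torch.linspace(1e-4, 0.02, num_steps)
        # register_buffer makes these tensors follow .to()/.cuda() calls
        # and be saved in state_dict, without being trained as parameters.
        self.register_buffer("beta", beta)
        self.register_buffer("log_beta_tilde_clipped",
                             torch.log(beta.clamp(min=1e-20)))

    def forward(self, xt, timesteps):
        # Buffers already live on the module's device, so indexing with
        # timesteps from the same device works without manual moves.
        return self.beta[timesteps]
```

With this pattern, `Diffusion().to("cuda")` would place `beta` on the GPU automatically.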
Got a working colab demo if anyone wants to try it: https://github.com/hoalarious/diffused-heads-colab/blob/main/diffused_heads_colab.ipynb
Getting an error:
Trying to get this working on colab/kaggle.
Need to set up my local environment to debug this properly. Wondering if anyone has a fix for this in the meantime?