Hey, thanks for putting this together =).

The code is substantially faster than the PyTorch counterpart on an M1 Pro, and even faster than the CoreML version. However, I am getting identical results on every run; the random noise in the `text2image` function appears to be deterministic. Here I call the diffusion twice: https://github.com/tcapelle/stable-diffusion-tensorflow/blob/master/02_inference.ipynb

Can you also explain a bit why all these hardcoded alphas are necessary?
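For context on the determinism question: if the latent noise is drawn from a seeded generator (or a globally seeded RNG) on every call, each run starts from the same noise tensor and therefore produces the same image. A minimal sketch of the distinction, using NumPy and a hypothetical `make_latent_noise` helper (not a function from this repo):

```python
import numpy as np

def make_latent_noise(shape, seed=None):
    # A fresh Generator per call: unseeded calls give different noise,
    # while passing the same seed reproduces the exact same tensor.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape).astype("float32")

a = make_latent_noise((1, 64, 64, 4))
b = make_latent_noise((1, 64, 64, 4))
assert not np.allclose(a, b)  # unseeded: different noise each call

c = make_latent_noise((1, 64, 64, 4), seed=42)
d = make_latent_noise((1, 64, 64, 4), seed=42)
assert np.allclose(c, d)  # same seed: identical noise, identical image
```

If `text2image` always behaves like the seeded case, exposing an optional `seed` argument (defaulting to `None` for fresh noise) would give both reproducibility and variety.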
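On the alphas question: in diffusion models these tables are usually the cumulative products of `1 - beta_t` over the noise schedule, precomputed once so inference does not depend on a training config. A sketch of how such a table could be derived, assuming Stable Diffusion v1's "scaled linear" schedule (beta_start=0.00085, beta_end=0.012, 1000 steps, per the SD v1 config):

```python
import numpy as np

# Scaled-linear beta schedule: linspace in sqrt-space, then squared.
n_steps, beta_start, beta_end = 1000, 0.00085, 0.012
betas = np.linspace(beta_start**0.5, beta_end**0.5, n_steps) ** 2

alphas = 1.0 - betas
alphas_cumprod = np.cumprod(alphas)  # the kind of table ports hardcode

# Sanity checks: the products start near 1 and decay monotonically,
# controlling how much signal survives at each timestep.
assert alphas_cumprod[0] > 0.99
assert np.all(np.diff(alphas_cumprod) < 0)
```

Hardcoding the resulting values avoids recomputing the schedule and any risk of drifting from the checkpoint it was trained with, at the cost of readability.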