openai / guided-diffusion

MIT License

Difference between `convert_module_to_f16` and torch native amp #41

Closed ShoufaChen closed 2 years ago

ShoufaChen commented 2 years ago

Hello,

Why didn't you use PyTorch's built-in torch.cuda.amp for mixed precision?

What's the difference between torch.cuda.amp and your convert_module_to_f16?

Thanks in advance.

unixpickle commented 2 years ago

At the time we worked on this project, torch.cuda.amp was not yet in a stable state.