NVlabs / stylegan3

Official PyTorch implementation of StyleGAN3

Why can FP16 be used directly during training? #102

Closed (by shoutOutYangJie, 2 years ago)

shoutOutYangJie commented 2 years ago

I see that in your code you use FP16 for the last 4 layers of StyleGAN2. However, you don't activate torch.cuda.amp.
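
For context, this is a minimal, generic sketch of how torch.cuda.amp is normally enabled in a training loop (the model and tensors here are placeholders, not code from the StyleGAN repositories):

```python
import torch

# Generic AMP training step: autocast picks FP16 for eligible ops,
# and GradScaler rescales the loss to avoid FP16 gradient underflow.
model = torch.nn.Linear(512, 512).cuda()
optimizer = torch.optim.Adam(model.parameters())
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(8, 512, device='cuda')
target = torch.randn(8, 512, device='cuda')

optimizer.zero_grad()
with torch.cuda.amp.autocast():  # ops inside run in FP16 where safe
    loss = torch.nn.functional.mse_loss(model(x), target)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```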

nurpax commented 2 years ago

We don't use AMP to automatically turn on FP16 for layers; instead, we apply FP16 manually. Search for uses of "float16" in the stylegan3 codebase.
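
To illustrate the difference, here is a minimal sketch of the manual approach: the layer itself decides its dtype and casts its own weights and activations, rather than letting autocast choose. The class and argument names below (ManualFP16Conv, use_fp16, conv_clamp, force_fp32) are chosen for illustration; for the actual logic, search for "float16" in the stylegan3 network code as suggested above.

```python
import torch
import torch.nn as nn


class ManualFP16Conv(nn.Module):
    """Hypothetical conv layer sketching manual FP16: the layer casts its own
    weights and inputs to float16 when enabled, instead of relying on
    torch.cuda.amp.autocast to select the dtype."""

    def __init__(self, in_channels, out_channels, use_fp16=False, conv_clamp=256):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_channels, in_channels, 3, 3))
        self.bias = nn.Parameter(torch.zeros(out_channels))
        self.use_fp16 = use_fp16
        self.conv_clamp = conv_clamp  # clamp activations to guard against FP16 overflow

    def forward(self, x, force_fp32=False):
        # Pick the dtype explicitly; fall back to FP32 when requested or on CPU.
        dtype = torch.float16 if (self.use_fp16 and not force_fp32 and x.is_cuda) else torch.float32
        w = self.weight.to(dtype)
        b = self.bias.to(dtype)
        x = torch.nn.functional.conv2d(x.to(dtype), w, b, padding=1)
        # Clamping the output is one common safeguard when running conv layers in FP16.
        return x.clamp(-self.conv_clamp, self.conv_clamp)
```

In this style, only the layers explicitly constructed with use_fp16=True run in half precision (in StyleGAN2/3 that is the highest-resolution blocks), while the rest of the network stays in FP32, so no autocast context or loss scaler is wrapped around the training step.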