Open rqtqp opened 2 years ago
Looking at train.py, they set it like so:
```python
c.G_kwargs.num_fp16_res = c.D_kwargs.num_fp16_res = 0  # for SynthesisNetwork
c.G_kwargs.conv_clamp = c.D_kwargs.conv_clamp = None   # for each SynthesisLayer
```
You can set these after loading G in any script (do it once, before your generation loop, so you don't repeat the work):
```python
G.synthesis.num_fp16_res = 0
for name, layer in G.synthesis.named_modules():
    if hasattr(layer, 'conv_clamp'):
        layer.conv_clamp = None
        layer.use_fp16 = False  # possibly unnecessary, but the attribute exists; try with and without it
```
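To make the patching pattern above concrete without needing the real model loaded, here is a minimal sketch using stand-in classes in place of the actual StyleGAN3 modules (the class names and attributes mirror the snippet, but the implementations are mock placeholders, not the real code):

```python
# Stand-in classes mimicking the attributes touched by the patch above.
class SynthesisLayer:
    def __init__(self):
        self.conv_clamp = 256  # placeholder default; real value comes from the checkpoint
        self.use_fp16 = True

class SynthesisNetwork:
    def __init__(self):
        self.num_fp16_res = 4  # placeholder default
        self.layers = [SynthesisLayer() for _ in range(3)]

    def named_modules(self):
        # Mimics torch.nn.Module.named_modules() for this sketch.
        for i, layer in enumerate(self.layers):
            yield f'b{i}', layer

class Generator:
    def __init__(self):
        self.synthesis = SynthesisNetwork()

def force_fp32(G):
    """Apply the attribute patch from the snippet above to every synthesis layer."""
    G.synthesis.num_fp16_res = 0
    for name, layer in G.synthesis.named_modules():
        if hasattr(layer, 'conv_clamp'):
            layer.conv_clamp = None
            layer.use_fp16 = False
    return G

G = force_fp32(Generator())
```

With a real checkpoint, `G` would be the generator returned by `legacy.load_network_pkl`, and the same loop applies unchanged since the real modules expose the same attributes.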
The output of gen_video.py (screenshot from the mp4 file with a 2x2 grid; image not shown here):
Meanwhile, if I run visualizer.py with the Force FP32 option enabled, the output looks as it should. So my question: is it somehow possible to enable the force_fp32 option for gen_video.py?
Thanks.
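For what it's worth, in the official repo the flag is also plumbed through as a forward keyword (`SynthesisLayer.forward` takes `force_fp32`, and the synthesis network forwards extra keyword arguments down to its layers), so passing `force_fp32=True` at the call site in gen_video.py may be another option. A mock of that kwarg-forwarding pattern, with stand-in classes so it runs without torch or the real repo:

```python
# Stand-in classes illustrating how keyword forwarding lets a caller
# force FP32 per call; the real modules do this with **layer_kwargs.
class SynthesisLayer:
    def forward(self, x, force_fp32=False):
        # A real layer would pick its compute dtype here; we just record it.
        dtype = 'float32' if force_fp32 else 'float16'
        return f'{x}->{dtype}'

class SynthesisNetwork:
    def __init__(self):
        self.layers = [SynthesisLayer() for _ in range(2)]

    def forward(self, ws, **layer_kwargs):
        # Extra kwargs are passed through to every layer, like in the repo.
        x = ws
        for layer in self.layers:
            x = layer.forward(x, **layer_kwargs)
        return x

net = SynthesisNetwork()
out_fp16 = net.forward('ws')                   # layers run in (mock) FP16
out_fp32 = net.forward('ws', force_fp32=True)  # layers run in (mock) FP32
```

If your copy of gen_video.py matches the official repo, adding `force_fp32=True` to the synthesis call should have the same effect as the attribute patch, without mutating the loaded network.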