In the original TensorFlow implementation, before the real images are fed into the network they are processed in the training loop by a function named process_reals(), which transforms the chosen original dynamic range to the network's drange of [-1, 1]. In the PyTorch implementation I can't find where this transformation takes place, so I am forced to adjust my inputs to [0, 255] myself. Does anyone know if there is a way to change it?
I have found the answer at line 261 of the training_loop.py script:
phase_real_img = (phase_real_img.to(device).to(torch.float32) / 127.5 - 1).split(batch_gpu)
Here the data is scaled from [0, 255] to [-1, 1].
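To make the mapping concrete, here is a minimal standalone sketch of that normalization (the function name to_drange is mine, not from the repository): dividing a uint8 pixel value by 127.5 and subtracting 1 maps [0, 255] linearly onto [-1, 1].

```python
def to_drange(pixel_value: float) -> float:
    """Map a pixel value in [0, 255] to the network's dynamic range [-1, 1],
    mirroring the `x / 127.5 - 1` expression in training_loop.py."""
    return pixel_value / 127.5 - 1.0

# Endpoints and midpoint of the input range:
print(to_drange(0))      # -1.0
print(to_drange(127.5))  # 0.0
print(to_drange(255))    # 1.0
```

So as long as you feed images in [0, 255], this line handles the conversion for you; if your data is already in a different range, this is the expression to adjust.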