**Open** · jianzhang96 opened this issue 1 year ago
Hi, @jianzhang96. Thanks for your comment and using this repository.
I tried that with the MNIST dataset.
The left figure shows the signed difference `real_img - fake_img`, and the right figure shows the absolute difference `torch.abs(real_img - fake_img)`.
Indeed, using the absolute difference seems to make the output images reasonable.
I will consider adding an option to select the absolute difference. Thanks again for your comment.
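To illustrate why the signed difference can look wrong once saved as an image, here is a minimal NumPy sketch (the pixel values are illustrative, not taken from the MNIST experiment): saving utilities typically clamp to the displayable range, so the negative half of the residual is discarded, while min-max normalization can instead shift the remaining values toward white.

```python
import numpy as np

# Illustrative pixel values in [0, 1]; not from the actual models.
real_img = np.array([0.2, 0.8, 0.5])
fake_img = np.array([0.7, 0.3, 0.5])

signed = real_img - fake_img              # [-0.5, 0.5, 0.0]
absolute = np.abs(real_img - fake_img)    # [ 0.5, 0.5, 0.0]

# Image-saving utilities usually clamp to [0, 1] before writing,
# so negative residuals collapse to 0 and that part of the signal is lost.
clamped_signed = np.clip(signed, 0.0, 1.0)
print(clamped_signed)  # [0.  0.5 0. ]
print(absolute)        # [0.5 0.5 0. ]
```

The absolute difference keeps the deviation in both directions, which is why it reads as a cleaner anomaly map.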
Thanks for your excellent work! Recently I used the code to train on my custom dataset, and I tried to calculate the AUROC of the localization performance. On line 25 of `fanogan/save_compared_images.py`, there is code for image subtraction:
```python
compared_images[2::3] = real_img - fake_img
```
But there are many white pixels in the resulting `compared_images`. I find that the output images become more reasonable by using the absolute value of the difference instead:

```python
compared_images[2::3] = torch.abs(real_img - fake_img)
```
Thanks a lot!
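Since the comment above mentions computing a localization AUROC, one way to sketch a pixel-wise AUROC from an absolute-difference anomaly map is via the rank-based Mann-Whitney U statistic. The function name, the toy arrays, and the NumPy-only implementation below are all assumptions for illustration, not part of this repository:

```python
import numpy as np

def pixel_auroc(anomaly_map, gt_mask):
    """Pixel-wise AUROC via the Mann-Whitney U statistic.

    anomaly_map: float array of per-pixel anomaly scores, e.g.
                 torch.abs(real_img - fake_img) converted to NumPy.
    gt_mask:     binary array of the same shape (1 = anomalous pixel).
    """
    scores = np.asarray(anomaly_map, dtype=float).ravel()
    labels = np.asarray(gt_mask).ravel().astype(bool)
    n_pos = labels.sum()
    n_neg = labels.size - n_pos

    # Assign 1-based ranks, averaging ranks over tied scores.
    order = np.argsort(scores, kind="mergesort")
    ranks = np.empty(scores.size, dtype=float)
    ranks[order] = np.arange(1, scores.size + 1)
    sorted_scores = scores[order]
    i = 0
    while i < scores.size:
        j = i
        while j + 1 < scores.size and sorted_scores[j + 1] == sorted_scores[i]:
            j += 1
        if j > i:  # tie group: replace with the average rank
            ranks[order[i:j + 1]] = (i + j) / 2.0 + 1.0
        i = j + 1

    # AUROC = (R_pos - n_pos(n_pos + 1)/2) / (n_pos * n_neg)
    r_pos = ranks[labels].sum()
    return (r_pos - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

# Toy example: the two anomalous pixels score higher on average.
scores = np.array([0.1, 0.4, 0.35, 0.8])
mask = np.array([0, 0, 1, 1])
print(pixel_auroc(scores, mask))  # 0.75
```

`sklearn.metrics.roc_auc_score` on the flattened map and mask would give the same number; the NumPy version is shown only to keep the sketch dependency-free.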