wanghuanphd / MDvsFA_cGAN

The TensorFlow and PyTorch implementations of the MDvsFA_cGAN model proposed in the ICCV 2019 paper: Huan Wang, Luping Zhou and Lei Wang, "Miss Detection vs. False Alarm: Adversarial Learning for Small Object Segmentation in Infrared Images," International Conference on Computer Vision, Oct. 27-Nov. 2, 2019, Seoul, Republic of Korea.

MD FA Calculation Problem #6

Open SherryZhaoXY opened 3 years ago

SherryZhaoXY commented 3 years ago

Hi there, thanks for your contribution. I notice that you calculate MD and FA this way:

MD1 = torch.mean(torch.mul(torch.pow(g1_out - output_images, 2), output_images))
FA1 = torch.mean(torch.mul(torch.pow(g1_out - output_images, 2), 1 - output_images))

But in the paper it looks like MD = ||(S - S_0) × S_0||_2^2 and FA = ||(S - S_0) × (1 - S_0)||_2^2, where '×' denotes element-wise multiplication, '_2' the L2 norm, and '^2' the square. So there is no torch.pow(g1_out - output_images, 2) term there, and since the L2 norm is the square root of a sum of squares, torch.mean() does not reproduce it. Why do you calculate it like this? Looking forward to your reply.

wcyjerry commented 2 years ago

torch.mul(torch.pow(g1_out - output_images, 2), output_images) already does the work of the squared L2 norm. The L2 norm means square root of a sum of squares, so when it is followed by a square, the root is cancelled and only the sum of squares remains. As for torch.mean, I suppose it tries to match the definition: MD actually means the MD rate, so it should be divided by the total number of elements in the image. And for training, as you can see there are hyper-parameters weighting MD and FA, so whether it is summed or averaged is not that important.
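A minimal NumPy sketch of the point above (mirroring the repo's PyTorch expressions; the toy array shapes and values here are made up, not from the repo). For a binary ground-truth mask S_0, S_0^2 == S_0 element-wise, so weighting the squared difference by the mask equals the squared L2 norm of (S - S_0) × S_0, up to the 1/N factor that the mean introduces:

```python
import numpy as np

# Hypothetical toy tensors: g1_out plays the predicted map S,
# output_images plays the binary ground-truth mask S_0 (values in {0, 1}).
rng = np.random.default_rng(0)
g1_out = rng.random((1, 1, 4, 4))
output_images = (rng.random((1, 1, 4, 4)) > 0.5).astype(np.float64)

d = g1_out - output_images  # S - S_0

# Implementation form: mean of d^2 weighted by the mask (resp. its complement).
md_impl = np.mean(d**2 * output_images)
fa_impl = np.mean(d**2 * (1 - output_images))

# Paper form: squared L2 norm of the element-wise product, divided by the
# number of elements to turn the sum into a rate (the torch.mean step).
md_paper = np.sum((d * output_images)**2) / d.size
fa_paper = np.sum((d * (1 - output_images))**2) / d.size

# Because the mask is binary, mask**2 == mask, so both forms agree exactly.
assert np.allclose(md_impl, md_paper)
assert np.allclose(fa_impl, fa_paper)
```

The only remaining difference from the paper is the constant 1/N from the mean, which, as noted above, is absorbed by the MD/FA weighting hyper-parameters during training.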

SherryZhaoXY commented 2 years ago

Thanks for your explanation!

wcyjerry commented 2 years ago

You're welcome, glad it helped. Actually, I'm a student of this paper's author now, xdddd