Open SherryZhaoXY opened 3 years ago
Hi there, thanks for your contribution. I notice that you calculate 'MD' and 'FA' this way:
MD1 = torch.mean(torch.mul(torch.pow(g1_out - output_images, 2), output_images))
FA1 = torch.mean(torch.mul(torch.pow(g1_out - output_images, 2), 1 - output_images))
but in the paper it looks like MD = ||(S - S_0) × S_0||_2^2 and FA = ||(S - S_0) × (1 - S_0)||_2^2, where '×' denotes element-wise multiplication, '_2' denotes the L2 norm, and '^2' denotes the square. So there is no torch.pow(g1_out - output_images, 2) term anywhere in those definitions, and since the L2 norm is the square root of a sum of squares, torch.mean() does not compute it. Why do you calculate it like this? Looking forward to your reply.
torch.mul(torch.pow(g1_out - output_images, 2), output_images) does exactly the work of the L2 norm followed by the square. The L2 norm means squaring and then taking the square root; when it is followed by an outer square, the root is cancelled out, leaving just the sum of squared terms. As for torch.mean, I suppose it tries to match the definition: MD actually means MD-rate, so it should be divided by the total number of elements in the image. As for training, as you can see there are hyper-parameters weighting MD and FA, so whether it is a mean or a sum is not that important.
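To make the equivalence concrete, here is a minimal sketch. It assumes `output_images` (S_0) is a binary 0/1 ground-truth mask, so that S_0^2 == S_0 elementwise; under that assumption the repo's mean-based expression equals the paper's squared L2 norm divided by the number of elements. The tensor shapes and names here are illustrative, not taken from the repo.

```python
import torch

torch.manual_seed(0)
g1_out = torch.rand(1, 1, 4, 4)                           # generated map S, values in [0, 1]
output_images = (torch.rand(1, 1, 4, 4) > 0.5).float()    # binary ground truth S_0
n = output_images.numel()                                 # total number of elements

# Repo-style formulation: mean of (S - S_0)^2 weighted by the mask (or its complement)
md_code = torch.mean(torch.mul(torch.pow(g1_out - output_images, 2), output_images))
fa_code = torch.mean(torch.mul(torch.pow(g1_out - output_images, 2), 1 - output_images))

# Paper-style formulation: squared L2 norm of the masked difference,
# divided by n to turn the sum into a mean.
md_paper = torch.norm((g1_out - output_images) * output_images, p=2) ** 2 / n
fa_paper = torch.norm((g1_out - output_images) * (1 - output_images), p=2) ** 2 / n

# Squaring the norm cancels the square root, and since S_0 is binary,
# S_0^2 == S_0, so the two formulations agree elementwise.
assert torch.allclose(md_code, md_paper)
assert torch.allclose(fa_code, fa_paper)
```

The only remaining difference from the paper is the 1/n factor from `torch.mean`, which, as noted above, is absorbed by the loss-weighting hyper-parameters during training.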
Thanks for your explanation!
You're welcome, if it helped at all. Actually, I'm a student of this paper's author now, xdddd