Open · bsun0802 opened this issue 4 years ago
Hi,
There is no essential difference. You can just use ^2, but you would then have to change the weight of the color loss in the training phase.
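A minimal sketch of such a ^2-only variant (the class name `L_color_sq` is hypothetical, and the docstring about reweighting reflects the point above, not code from the repo):

```python
import torch
import torch.nn as nn

class L_color_sq(nn.Module):
    """Hypothetical ^2-only variant of L_color. If used, the weight of this
    term in the total loss must be retuned, since its magnitude differs
    from the ^4 version."""
    def forward(self, x):
        # per-channel mean over the spatial dimensions
        mean_rgb = torch.mean(x, [2, 3], keepdim=True)
        mr, mg, mb = torch.split(mean_rgb, 1, dim=1)
        # squared pairwise differences of the channel means (no second squaring)
        Drg = torch.pow(mr - mg, 2)
        Drb = torch.pow(mr - mb, 2)
        Dgb = torch.pow(mb - mg, 2)
        return torch.pow(Drg + Drb + Dgb, 0.5)
```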
Best,

Bo Sun <notifications@github.com> wrote on Saturday, July 25, 2020 at 10:09 PM:
Hi Author,
In the MyLoss.py you uploaded:
```python
import torch
import torch.nn as nn

class L_color(nn.Module):
    def __init__(self):
        super(L_color, self).__init__()

    def forward(self, x):
        b, c, h, w = x.shape
        mean_rgb = torch.mean(x, [2, 3], keepdim=True)
        mr, mg, mb = torch.split(mean_rgb, 1, dim=1)
        Drg = torch.pow(mr - mg, 2)
        Drb = torch.pow(mr - mb, 2)
        Dgb = torch.pow(mb - mg, 2)
        k = torch.pow(torch.pow(Drg, 2) + torch.pow(Drb, 2) + torch.pow(Dgb, 2), 0.5)
        return k
```
Drg is squared when it is defined, and squared again when k is computed, so each difference is effectively raised to the fourth power. Is this an intentional design, and if so, what is the reason for it?
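As a quick sanity check, the double squaring can be verified to be equivalent to a fourth power (a standalone sketch with a random input batch, not code from the repo):

```python
import torch

x = torch.rand(2, 3, 8, 8)  # random RGB batch
mean_rgb = torch.mean(x, [2, 3], keepdim=True)
mr, mg, mb = torch.split(mean_rgb, 1, dim=1)

# k as implemented: each difference is squared at definition and squared again in the sum
k = torch.pow(
    torch.pow(torch.pow(mr - mg, 2), 2)
    + torch.pow(torch.pow(mr - mb, 2), 2)
    + torch.pow(torch.pow(mb - mg, 2), 2),
    0.5,
)

# closed form: square root of the sum of fourth powers of the channel-mean differences
k_direct = ((mr - mg) ** 4 + (mr - mb) ** 4 + (mb - mg) ** 4) ** 0.5
print(torch.allclose(k, k_direct))  # True
```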