Closed leinxx closed 5 years ago
The gradient computation is incorrect. Test code:
```python
import numpy as np
import torch
import time
from emd import EMDLoss

dist = EMDLoss()

# sample points on the boundary of a 40x40 square
xs = np.linspace(10, 50, 41)
ys = np.linspace(10, 50, 41)
l1 = np.stack((xs, np.ones(41) * 10)).transpose()
l2 = np.stack((xs, np.ones(41) * 50)).transpose()
l3 = np.stack((np.ones(41) * 10, ys)).transpose()
l4 = np.stack((np.ones(41) * 50, ys)).transpose()
p1 = np.concatenate((l1, l2, l3, l4))
npts = len(p1)
p1 = torch.from_numpy(p1).cuda().unsqueeze_(0)

for i in range(-3, 3):
    print('offset: ', i)
    p2 = p1 + i
    p1.requires_grad = True
    cost1 = dist(p1, p2)
    print('\tdistance: ', cost1.data.cpu().numpy()[0])
    loss = torch.sum(cost1) / npts
    loss.backward()
    print('\tgradient: ', p1.grad.data.cpu().numpy()[0][0])
```
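One caveat about the script itself: PyTorch accumulates into `.grad` across `backward()` calls unless the gradient is cleared, so the values printed in later iterations include contributions from earlier offsets. A minimal CPU-only illustration (no `emd` dependency, trivial stand-in loss):

```python
import torch

# PyTorch accumulates gradients into .grad across backward() calls
x = torch.ones(2, requires_grad=True)
for step in range(3):
    loss = (2 * x).sum()
    loss.backward()
    print(step, x.grad.tolist())  # grows: [2, 2], [4, 4], [6, 6]

# clearing .grad between iterations isolates each call's gradient
x.grad = None
(2 * x).sum().backward()
print(x.grad.tolist())  # [2.0, 2.0]
```

Adding `p1.grad = None` (or `p1.grad.zero_()`) at the top of each loop iteration would make each printed gradient correspond to a single offset.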
Running the script produces output like this:
```
offset:  -3
	distance:  489.7591609045162
	gradient:  [0.48759389 0.48759389]
offset:  -2
	distance:  323.4196924319101
	gradient:  [8124.4279909  6355.79031227]
offset:  -1
	distance:  162.74898605187073
	gradient:  [14667.35957541 11995.79261712]
offset:  0
	distance:  0.0
	gradient:  [6.25019588e+08 6.25016916e+08]
offset:  1
	distance:  162.7489860518707
	gradient:  [6.25026458e+08 6.25022920e+08]
offset:  2
	distance:  323.41969243191016
	gradient:  [6.25035163e+08 6.25029917e+08]
```
The gradient is positive for both positive and negative offsets, when it should flip sign, and it is nonzero even when offset = 0. (Note that `p1.grad` is never cleared between iterations, so the printed values also accumulate across `backward()` calls, but that alone does not explain the ~6.25e8 gradients from offset 0 onward.)
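For reference, the expected sign behavior can be checked with a numpy-only finite-difference sketch. Since `p2 = p1 + offset`, the identity matching is optimal, so the true EMD reduces to a sum of pointwise Euclidean distances (`matched_cost` below is a hypothetical stand-in, not part of the `emd` package); its gradient flips sign with the offset:

```python
import numpy as np

def matched_cost(p1, p2):
    # EMD under the identity matching: sum of row-wise Euclidean distances
    return np.sqrt(((p1 - p2) ** 2).sum(axis=1)).sum()

def numerical_grad(f, x, eps=1e-6):
    # central-difference gradient, one coordinate at a time
    g = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        xp = x.copy(); xp[idx] += eps
        xm = x.copy(); xm[idx] -= eps
        g[idx] = (f(xp) - f(xm)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
p1 = rng.uniform(10, 50, size=(8, 2))
for offset in (-3, -1, 1, 3):
    p2 = p1 + offset
    g = numerical_grad(lambda x: matched_cost(x, p2), p1)
    # each coordinate should be -sign(offset)/sqrt(2)
    print(offset, g[0])
```

For negative offsets every gradient entry is +1/√2 ≈ 0.707 and for positive offsets it is -1/√2, unlike the always-positive (and enormous) values returned by `EMDLoss`.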