Indeed, in order to visualize the raw output of the network, we set the number of dimensions to 2. The raw output is then a 2-dimensional image, or an RG image, which is the bottom-left image. The top-right image shows the same representation, but displayed as an xy plot.
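(For anyone trying to reproduce this: a minimal sketch of such a visualization, assuming Torch7 with the `image` package. The variable names and the min-max normalization are mine, not from the repo; the two output dimensions go into the red and green channels, with blue left at zero.)

```lua
require 'torch'
require 'image'

-- Hypothetical 2 x H x W network output (nDim = 2)
local output = torch.rand(2, 50, 50)

-- Normalize a channel to [0, 1] so it is displayable
local function normalize(t)
  local mn, mx = t:min(), t:max()
  return (t - mn) / (mx - mn + 1e-8)
end

-- Build a 3 x H x W RGB image: R = dim 1, G = dim 2, B = 0
local rg = torch.zeros(3, output:size(2), output:size(3))
rg[1] = normalize(output[1])
rg[2] = normalize(output[2])

image.save('rg_output.png', rg)
```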
@DavyNeven two more questions:
```lua
in_margin = 0.5
out_margin = 1.5
Lnorm = 2

-- L1 or L2 norm along the first (channel) dimension
function norm(inp, L)
  local n
  if (L == 1) then
    n = torch.sum(torch.abs(inp), 1)
  else
    n = torch.sqrt(torch.sum(torch.pow(inp, 2), 1) + 1e-8)
  end
  return n
end

function instance_loss(prediction, labels)
  local batchsize = prediction:size(1)
  local c = prediction:size(2)
  local height = prediction:size(3)
  local width = prediction:size(4)
  local nInstanceMaps = labels:size(2)
  local loss = 0
  for b = 1, batchsize do
    local pred = prediction[b] -- c x h x w
    local loss_var = 0
    local loss_dist = 0
    for h = 1, nInstanceMaps do
      local label = labels[b][h]:view(1, height, width) -- 1 x h x w
      local means = {}
      local loss_v = 0
      local loss_d = 0
      -- center pull force: pull each embedding towards its instance mean
      for j = 1, label:max() do
        local mask = label:eq(j)
        local mask_sum = mask:sum()
        if (mask_sum > 1) then
          local inst = pred[mask:expandAs(pred)]:view(c, -1, 1) -- c x n x 1
          -- mean embedding of this instance
          local mean = torch.mean(inst, 2) -- c x 1 x 1
          table.insert(means, mean)
          -- hinged distance of every pixel embedding to the mean
          local var = norm(inst - mean:expandAs(inst), 2) -- 1 x n x 1
          var = torch.cmax(var - in_margin, 0)
          local not_hinged = torch.sum(torch.gt(var, 0)) -- count of non-hinged pixels (unused)
          var = torch.pow(var, 2)
          var = var:view(-1)
          var = torch.mean(var)
          loss_v = loss_v + var
        end
      end
      loss_var = loss_var + loss_v
      -- center push force: push different instance means apart
      if (#means > 1) then
        for j = 1, #means do
          local mean_A = means[j] -- c x 1 x 1
          for k = j + 1, #means do
            local mean_B = means[k] -- c x 1 x 1
            local d = norm(mean_A - mean_B, Lnorm) -- 1 x 1 x 1
            d = torch.pow(torch.cmax(-(d - 2 * out_margin), 0), 2)
            loss_d = loss_d + d[1][1][1]
          end
        end
        loss_dist = loss_dist + loss_d / ((#means - 1) + 1e-8)
      end
    end
    loss = loss + (loss_dist + loss_var)
    print(loss_d)
  end
  -- average over the batch; the zero-valued term touches every prediction element
  loss = loss / batchsize + torch.sum(prediction) * 0
  return loss
end

prediction = torch.ones(2, 8, 50, 50)
labels = torch.ones(2, 1, 50, 50)
labels:sub(-1, -1, -1, -1, 29, 34, 39, 49):fill(2)
loss = instance_loss(prediction, labels)
```
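(For reference, my reading of what the two hinged terms above compute per instance map, with $\delta_v$ = `in_margin`, $\delta_d$ = `out_margin`, $\mu_c$ the mean embedding of instance $c$, and $[\,\cdot\,]_+ = \max(\cdot, 0)$; the notation is mine:)

$$L_{\text{pull}} = \sum_{c=1}^{C} \frac{1}{N_c} \sum_{i=1}^{N_c} \big[\, \lVert x_i - \mu_c \rVert_2 - \delta_v \,\big]_+^2 \qquad L_{\text{push}} = \frac{1}{C-1} \sum_{j<k} \big[\, 2\delta_d - \lVert \mu_j - \mu_k \rVert \,\big]_+^2$$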
But why does `print(loss_d)` output two **nil** values? Are you sure the `instance_loss` implementation has no bugs?
![image](https://user-images.githubusercontent.com/13804492/32220329-44ff9170-be6c-11e7-9d43-050b169eaadc.png)
Besides, why is the loss different when `batch_size` is different? @DavyNeven
`loss_d` is nil because it is a local variable declared inside the inner for loop; you are printing its value outside that loop, where it is no longer in scope.
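(For readers new to Lua, a minimal standalone example of the scoping rule at play here; the names are illustrative only:)

```lua
for i = 1, 3 do
  local x = i * 10 -- x only exists inside this loop body
end
print(x) -- prints nil: the local x is out of scope here
```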
@DavyNeven what about the different loss values for different batch sizes? I think they should be the same.
They will be the same if the batch samples are the same. Your `labels:sub(-1, -1, ...)` call only adds the second instance to the last sample in the batch, so the samples differ. If you want them to be the same, change your sub call to `labels:sub(1, -1, 1, -1, 29, 34, 39, 49):fill(2)`.
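(A quick sanity check, reusing the `instance_loss` function defined earlier in this thread; a sketch only:)

```lua
-- With identical samples, the per-sample losses are equal,
-- so the batch average no longer depends on batchsize.
local labels1 = torch.ones(1, 1, 50, 50)
labels1:sub(1, -1, 1, -1, 29, 34, 39, 49):fill(2)
local loss1 = instance_loss(torch.ones(1, 8, 50, 50), labels1)

local labels2 = torch.ones(2, 1, 50, 50)
labels2:sub(1, -1, 1, -1, 29, 34, 39, 49):fill(2)
local loss2 = instance_loss(torch.ones(2, 8, 50, 50), labels2)

print(loss1, loss2) -- should print the same value twice
```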
@DavyNeven
Sorry, I'm new to Torch. The raw output of the network should be batch_size x nDim x H x W, and in your code it seems that nDim is 8. How did you get the figure in the bottom-left corner of the image above, since it looks like a 3 x H x W RGB image? (You refer to that figure as "the raw output of the network".)