When working with fake labels for the bounding boxes, e.g. negative values or values greater than 1, all predicted bounding box coordinates still come out in (0, 1).
Why is that?
Here is the code for the FFN that predicts the bounding boxes:
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """ Very simple multi-layer perceptron (also called FFN)"""

    def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
        super().__init__()
        self.num_layers = num_layers
        h = [hidden_dim] * (num_layers - 1)
        self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            # ReLU on every hidden layer; the last layer is a plain linear projection
            x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
        return x
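If I instantiate this head on its own, the outputs are not bounded at all (a quick sketch; the 256-dim sizes are placeholders I picked, with 3 layers and 4 outputs as for the box head):

    import torch

    mlp = MLP(input_dim=256, hidden_dim=256, output_dim=4, num_layers=3)
    out = mlp(torch.randn(8, 256))
    # unbounded: negative values appear, nothing constrains the output to (0, 1)
    print(out.min().item(), out.max().item())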
As far as I can tell, the last layer is just a plain linear layer, isn't it?
Is there a sigmoid missing?
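To make the question concrete, this hypothetical variant (my own naming, not code from the repo) shows the kind of final step I expected to see:

    class MLPWithSigmoid(MLP):
        # hypothetical variant, just to illustrate the question: same MLP,
        # but with a sigmoid on the final layer so outputs land in (0, 1)
        def forward(self, x):
            for i, layer in enumerate(self.layers):
                if i < self.num_layers - 1:
                    x = F.relu(layer(x))
                else:
                    x = torch.sigmoid(layer(x))  # bound coordinates to (0, 1)
            return x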
Thx!