Open tienthegainz opened 4 years ago
In my environment, cls_loss does not decrease no matter how many epochs I run; it stays at 2.302124500274658.
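A side note on that exact value: 2.3021 is suspiciously close to -ln(1/10), the cross-entropy of a classifier that outputs a uniform distribution over 10 classes. Assuming the dataset here has 10 classes (my assumption, not stated in the issue), a stuck loss at this value suggests the classification head never moves away from its initialization:

```python
import math

# -ln(1/num_classes) is the cross-entropy loss of a classifier that
# predicts a uniform distribution and never learns anything.
num_classes = 10  # assumption: a 10-class dataset
uniform_loss = -math.log(1.0 / num_classes)
print(round(uniform_loss, 4))  # ~2.3026, essentially the reported plateau
```

If your plateau matches -ln(1/num_classes) for your own class count, the gradient into the classification head is almost certainly being blocked somewhere.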
Same issue.

> In my environment, cls_loss does not decrease no matter how many epochs I run; it stays at 2.302124500274658.
Yes, the model trained for 300 epochs does.
@gmvidooly here's the code:
def extract_features(self, inputs):
    # Stem
    x = self._swish(self._bn0(self._conv_stem(inputs)))

    P = []
    index = 0
    num_repeat = 0
    # Blocks
    for idx, block in enumerate(self._blocks):
        drop_connect_rate = self._global_params.drop_connect_rate
        if drop_connect_rate:
            drop_connect_rate *= float(idx) / len(self._blocks)
        x = block(x, drop_connect_rate=drop_connect_rate, idx=idx)
        num_repeat = num_repeat + 1
        if num_repeat == self._blocks_args[index].num_repeat:
            if index in {0, 1, 2, 4, 6}:
                P.append(x)
            num_repeat = 0
            index = index + 1
    return P
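To make the collection logic above easier to follow, here is a stand-alone sketch of just the counting part, using EfficientNet-B0's per-stage repeat counts [1, 2, 2, 3, 3, 4, 1] (my assumption; other compound-scaled variants use different counts). It shows that P receives the output of the final block of stages 0, 1, 2, 4 and 6:

```python
# Stand-in for the loop in extract_features: track which stage outputs
# would be appended to P, without needing the actual model.
num_repeats = [1, 2, 2, 3, 3, 4, 1]  # blocks per stage (B0 assumption)
collected = []
index = 0
seen = 0
for idx in range(sum(num_repeats)):
    seen += 1
    # At the last block of each stage, decide whether to keep the output.
    if seen == num_repeats[index]:
        if index in {0, 1, 2, 4, 6}:
            collected.append(index)
        seen = 0
        index += 1
print(collected)  # stages whose final feature map lands in P
```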
Deleting the line 'classification = torch.clamp(classification, 1e-4, 1.0 - 1e-4)' in the focal loss seems to solve the problem.
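A likely reason that removing the clamp helps (my reading, not confirmed in the thread): torch.clamp has zero gradient whenever its input lies outside the clamp range, so any prediction that saturates below 1e-4 or above 1 - 1e-4 stops receiving gradient entirely. A framework-free sketch using finite differences illustrates the effect:

```python
# Hand-rolled clamp mirroring torch.clamp(p, 1e-4, 1 - 1e-4).
def clamp(p, lo=1e-4, hi=1.0 - 1e-4):
    return min(max(p, lo), hi)

# Central finite difference as a stand-in for autograd.
def finite_diff(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Inside the clamp range the gradient passes through unchanged...
g_inside = finite_diff(clamp, 0.5)
# ...but for a saturated prediction it is exactly zero, so the
# classification head can get stuck at its (bad) initialization.
g_saturated = finite_diff(clamp, 1e-6)
print(g_inside, g_saturated)
```

A safer alternative is to clamp the loss inputs only where needed (e.g. inside the log terms) rather than the predictions themselves, so gradients keep flowing.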
I used EfficientDet-D0 to train on my own dataset and got poor results, much worse than even YOLOv3. So I ran a simple test to see whether the model can overfit a single data point, but it cannot.
I tested on only one data point containing a 'ghe_an' object. After 30 epochs the loss is still 2.6. What could the problem be?
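The single-data-point overfit test is a good sanity check: a model with enough capacity should drive the loss on one example toward zero, so a plateau points at the loss function or blocked gradients rather than the data. A toy illustration of the expected behavior (all names here are mine, not from the repo), using a one-parameter logistic model in plain Python:

```python
import math

# Overfit one data point with a tiny logistic "model". A healthy
# training setup should push this loss close to zero quickly; a loss
# that plateaus (like the reported 2.6 after 30 epochs) signals that
# gradients are not reaching the parameters.
x, y = 1.0, 1.0            # a single positive example
w, b = 0.0, 0.0            # model parameters at init
lr = 1.0
for _ in range(300):       # "epochs" over the single point
    p = 1.0 / (1.0 + math.exp(-(w * x + b)))
    loss = -math.log(p)    # cross-entropy for the positive class
    grad = p - 1.0         # d(loss) / d(logit)
    w -= lr * grad * x
    b -= lr * grad
print(round(loss, 4))      # near zero: the model memorized the point
```

If your real model cannot reproduce this behavior on one image, try the clamp removal suggested above before blaming the architecture.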