zylo117 / Yet-Another-EfficientDet-Pytorch

A PyTorch re-implementation of the official EfficientDet with SOTA real-time performance and pretrained weights.

Fixed loss parameters #236

Closed · nnt296 closed this issue 4 years ago

nnt296 commented 4 years ago

Hi @zylo117, thanks for the great work. However, I have a small question regarding your FocalLoss implementation.

Looking at your loss.py, I see that you are training with gamma = 2.0, yet the default gamma in train.py is set to 1.5 (the same value as in the paper). So I am wondering which value you actually used when training your models.

    # loss.py: FocalLoss.forward, where gamma is hardcoded
    def forward(self, classifications, regressions, anchors, annotations, **kwargs):
        alpha = 0.25
        gamma = 2.0
        batch_size = classifications.shape[0]
        classification_losses = []
        regression_losses = []

    # train.py: CLI defaults
    parser.add_argument('--alpha', type=float, default=0.25)
    parser.add_argument('--gamma', type=float, default=1.5)
    parser.add_argument('--num_epochs', type=int, default=500)
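
For reference, alpha and gamma enter the per-anchor classification term roughly as follows; the snippet below is a simplified, self-contained sketch (the function name, shapes, and clamping constant are illustrative assumptions), not the repo's actual FocalLoss code:

    import torch

    def focal_loss(cls_prob, cls_target, alpha=0.25, gamma=2.0):
        # cls_prob:   sigmoid probabilities, shape (num_anchors, num_classes)
        # cls_target: one-hot targets,       shape (num_anchors, num_classes)
        # FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t)
        cls_prob = torch.clamp(cls_prob, 1e-4, 1.0 - 1e-4)
        alpha_factor = torch.ones_like(cls_target) * alpha
        alpha_factor = torch.where(cls_target == 1, alpha_factor, 1.0 - alpha_factor)
        p_t = torch.where(cls_target == 1, cls_prob, 1.0 - cls_prob)
        focal_weight = alpha_factor * (1.0 - p_t) ** gamma
        return (-focal_weight * torch.log(p_t)).sum()

    # quick check: the same predictions scored with both gamma values
    probs = torch.rand(9, 90)
    targets = torch.zeros(9, 90)
    targets[0, 3] = 1.0
    print(focal_loss(probs, targets, gamma=2.0), focal_loss(probs, targets, gamma=1.5))

A larger gamma down-weights easy, well-classified anchors more aggressively, so whether training ran with 2.0 or 1.5 is a meaningful difference.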
zylo117 commented 4 years ago

I should remove this now.

    parser.add_argument('--alpha', type=float, default=0.25)
    parser.add_argument('--gamma', type=float, default=1.5)
zylo117 commented 4 years ago

Done: https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch/commit/2817a547cec0a97e849db79a24317eae64f31519