e-lab / pytorch-demos


Is this a typo in face-keypoint/train.py? #3

Open · pharrellyhy opened this issue 5 years ago

pharrellyhy commented 5 years ago

Hi,

Thanks for sharing the code! While going through the training code, I got confused at https://github.com/e-lab/pytorch-demos/blob/master/face-keypoint/train.py#L133 and https://github.com/e-lab/pytorch-demos/blob/master/face-keypoint/train.py#L186. Should the total_loss assignment be moved inside the preceding for loop? Otherwise the loss is not accumulated across loop iterations. Thanks!
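
To illustrate what I mean, here is a rough sketch of the two patterns (illustrative names and dummy tensors, not the actual train.py code):

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()
target = torch.zeros(1, 4)
outputs = [torch.randn(1, 4) for _ in range(3)]  # stand-in per-stage outputs

# Pattern I expected: accumulate the loss inside the loop
total_loss = 0.0
for out in outputs:
    loss = criterion(out, target)
    total_loss += loss.item()        # every stage contributes

# Pattern as written: the assignment sits after the loop,
# so only the final iteration's loss survives
for out in outputs:
    loss = criterion(out, target)
total_loss = loss.item()             # last stage only
```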

dawood95 commented 5 years ago

If I remember correctly, the goal was just to monitor the loss at the last layer (highest resolution).

pharrellyhy commented 5 years ago

Thanks. Is there any reason multi-stage loss is not being used here? As I recall, both OpenPose and CPM use a multi-stage loss for backprop.
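
For reference, the intermediate-supervision scheme I have in mind from CPM/OpenPose sums a per-stage loss so every stage gets a direct gradient signal; a minimal sketch (function name and shapes are hypothetical):

```python
import torch
import torch.nn.functional as F

def multi_stage_loss(stage_heatmaps, target):
    # CPM/OpenPose-style intermediate supervision: each stage's
    # prediction is penalized against the same ground-truth heatmaps,
    # and the per-stage losses are summed for backprop.
    return sum(F.mse_loss(hm, target) for hm in stage_heatmaps)

# usage with dummy tensors (e.g. 68 keypoint heatmaps, 4 stages)
target = torch.zeros(2, 68, 64, 64)
stages = [torch.randn(2, 68, 64, 64, requires_grad=True) for _ in range(4)]
loss = multi_stage_loss(stages, target)
loss.backward()
```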

pharrellyhy commented 5 years ago

Hi,

I trained the facenet using the script you provided on the LS3D-W dataset. The mse and neg training losses are quite low, around 4e-4 within the first few iterations, and the validation loss is almost the same. Is anything going wrong? Thanks.

dawood95 commented 5 years ago

I haven't looked at face keypoints in a while, so I am not entirely sure whether that matches the loss values I used to get. However, I don't think it is a problem. As for multi-stage loss: I am using the multi-stage loss for learning, just not for printing. I'm not entirely sure why I did that (it's been a year). I will update the code soon for PyTorch 1.0 and look for possible bugs.
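
Concretely, the intent was something like this sketch (illustrative names and a stand-in model, not the exact train.py code):

```python
import torch
import torch.nn as nn

net = nn.Linear(8, 8)                        # stand-in for the real model
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
criterion = nn.MSELoss()
x, target = torch.randn(4, 8), torch.zeros(4, 8)
outputs = [net(x) for _ in range(3)]         # stand-in per-stage outputs

losses = [criterion(out, target) for out in outputs]
total = sum(losses)                          # every stage contributes to the gradient
optimizer.zero_grad()
total.backward()
optimizer.step()
print('monitored loss:', losses[-1].item())  # only the last (highest-resolution) stage is printed
```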