FunkyKoki / Look_At_Boundary_PyTorch

A PyTorch re-implementation of CVPR 2018 LAB (Look At Boundary) --Champagne Jin

loss_regressor increase #5

Closed jackweiwang closed 5 years ago

jackweiwang commented 5 years ago

Training parameters:
Dataset: WFLW
Dataset split: train
Batchsize: 8
Num workers: 0
PDB: False
Use GPU: True
Start lr: 2e-05
Max epoch: 2000
Loss type: smoothL1
Resumed model: False
Creating networks ...
Creating networks done!
Loading dataset ...

<_io.TextIOWrapper name='/home/ww/Look_At_Boundary_PyTorch/dataset/WFLW/WFLW_train_annos.txt' mode='r' encoding='UTF-8'>
Loading dataset done!
Start training ...
100%|██████████████████████████████████████████████████████████| 938/938 [12:33<00:00, 1.25s/it]
epoch: 0000 | loss_estimator: 185.95 | loss_regressor: 942.27
100%|██████████████████████████████████████████████████████████| 938/938 [12:39<00:00, 1.24it/s]
epoch: 0001 | loss_estimator: 13.51 | loss_regressor: 932.60
100%|██████████████████████████████████████████████████████████| 938/938 [12:39<00:00, 1.43it/s]
epoch: 0002 | loss_estimator: 13.45 | loss_regressor: 960.23

Why is loss_regressor so big? Please tell me if I made any mistakes.
FunkyKoki commented 5 years ago

Firstly, make sure you are using the correct annotation files; I will release the annotation files and the file structure soon. Secondly, loss_regressor really is big for roughly the first 20 epochs, so if you can make sure the annotation files and file structure are correct, don't worry, just keep training.
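For intuition on why loss_regressor can sit in the hundreds early on: WFLW has 98 landmarks (196 coordinates), so if the regressor loss sums a smooth L1 term over every coordinate rather than averaging, even a modest per-coordinate error produces a large total. A rough back-of-the-envelope sketch (the 5-pixel error and the sum reduction are assumptions for illustration, not taken from this repo's code):

```python
def smooth_l1(x, beta=1.0):
    # Standard smooth L1: quadratic near zero, linear beyond beta.
    ax = abs(x)
    return 0.5 * ax * ax / beta if ax < beta else ax - 0.5 * beta

# 98 WFLW landmarks, 2 coordinates each; assume a hypothetical
# average prediction error of ~5 pixels early in training.
errors = [5.0] * (98 * 2)
total = sum(smooth_l1(e) for e in errors)
print(total)  # 882.0 — same order of magnitude as the logged loss
```

So values near 900 at epoch 0 are consistent with an unconverged regressor under a summed loss; they should fall as the per-landmark error shrinks.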

jingchunhui commented 2 years ago

Excuse me, I actually have the opposite problem: why is my loss_estimator so big while my loss_regressor is small? Am I making any mistakes? Is it possible that the WFLW dataset I downloaded does not match your annotation files?