fangchangma / self-supervised-depth-completion

ICRA 2019 "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera"
MIT License

Pretrained model got poor result (RMSE=1343.609) #22

Closed icemiliang closed 5 years ago

icemiliang commented 5 years ago

Hi @fangchangma, thanks for sharing the code. I evaluated the pretrained model provided in the README, but the result is not as good as reported in the paper (RMSE 1343 vs. 814). It was a clean clone and I followed the data folder structure. I have attached the command and a screenshot of the results. Please let me know if there is an error or if I missed something. Thank you.

python main.py --evaluate pretrain/mode=sparse+photo.w1=0.1.w2=0.1.input=gd.resnet34.criterion=l2.lr=1e-05.bs=16.wd=0.pretrained=False.jitter=0.1.time=2019-02-26@07-50/model_best.pth.tar

=> output: ../results/mode=sparse+photo.w1=0.1.w2=0.1.input=gd.resnet34.criterion=l2.lr=1e-05.bs=16.wd=0.pretrained=False.jitter=0.1.time=2019-05-08@10-21
Val Epoch: 8 [990/1000] lr=0 t_Data=0.001(0.001) t_GPU=0.014(0.023)
    RMSE=1086.59(1347.03) MAE=308.10(359.76) iRMSE=4.29(4.27) iMAE=1.50(1.64)
    silog=4.67(5.24) squared_rel=0.00(0.01) Delta1=0.994(0.992) REL=0.018(0.020)
    Lg10=0.007(0.008) Photometric=0.000(0.000) 
=> output: ../results/mode=sparse+photo.w1=0.1.w2=0.1.input=gd.resnet34.criterion=l2.lr=1e-05.bs=16.wd=0.pretrained=False.jitter=0.1.time=2019-05-08@10-21
Val Epoch: 8 [1000/1000]    lr=0 t_Data=0.001(0.001) t_GPU=0.014(0.023)
    RMSE=1005.17(1343.61) MAE=262.70(358.79) iRMSE=4.57(4.28) iMAE=1.57(1.64)
    silog=4.98(5.24) squared_rel=0.00(0.01) Delta1=0.993(0.992) REL=0.018(0.020)
    Lg10=0.007(0.008) Photometric=0.000(0.000) 
Summary of  val round
RMSE=1343.609
MAE=358.790
Photo=0.000
iRMSE=4.277
iMAE=1.642
squared_rel=0.006554501281207195
silog=5.2404233943858145
Delta1=0.992
REL=0.020
Lg10=0.008
t_GPU=0.023
(best rmse is 1343.609)
versatran01 commented 5 years ago

I trained a model myself on half of the KITTI data and got MAE=0.36 and RMSE=1.08. I used exactly the same parameters as the repo, but my own training script with only depth supervision (no photometric loss).

icemiliang commented 5 years ago

> I trained a model myself on half of the KITTI data and got MAE=0.36 and RMSE=1.08. I used exactly the same parameters as the repo, but my own training script with only depth supervision (no photometric loss).

Thanks for the reply. I just found that it may be because I mixed up the self-supervised checkpoint with the supervised one. I'll run some more experiments. Closing the issue for now.
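For anyone hitting the same mix-up: the training configuration is encoded directly in the checkpoint directory name (e.g. `mode=sparse+photo` for the self-supervised run), so it can be inspected before evaluating. Below is a minimal sketch of such a check; `parse_run_name` is a hypothetical helper written for this comment, not part of the repo:

```python
def parse_run_name(name):
    """Split a run-directory name like
    'mode=sparse+photo.w1=0.1. ... .time=2019-02-26@07-50'
    into a dict. Values may themselves contain dots (e.g. '0.1'),
    so tokens without '=' are re-joined onto the previous value.
    """
    fields = {}
    key = None
    for token in name.split("."):
        if "=" in token:
            key, value = token.split("=", 1)
            fields[key] = value
        elif key is not None:
            # continuation of the previous value, split on its dot
            fields[key] += "." + token
    return fields

# Directory name of the pretrained model from the command above
run = ("mode=sparse+photo.w1=0.1.w2=0.1.input=gd.resnet34."
       "criterion=l2.lr=1e-05.bs=16.wd=0.pretrained=False."
       "jitter=0.1.time=2019-02-26@07-50")
fields = parse_run_name(run)
# fields["mode"] == "sparse+photo" identifies a self-supervised run,
# so its RMSE should be compared against the self-supervised numbers
# in the paper, not the supervised ones.
```

This only reads the directory name; it does not load or validate the checkpoint itself.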