haofengac / MonoDepth-FPN-PyTorch

Single Image Depth Estimation with Feature Pyramid Network

Why must fake and real be multiplied by 10 when computing the RMSE loss? #3

Open jundanl opened 6 years ago

jundanl commented 6 years ago

You apply transforms.ToTensor() to the depth map after loading it. I found that PyTorch actually does depth = depth / 255 there. I guess that, when computing the loss, it would be better to multiply by 255 in order to get a correct loss value. I haven't read your code thoroughly, so I don't know what preprocessing you did on the data, but I'm confused about why fake and real must be multiplied by 10 when computing the loss (I've put small sketches of both points below).
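
To show what I mean about ToTensor() (just a minimal check, not your code; the shapes and values are made up): torchvision divides by 255 only when the input is 8-bit, while float inputs pass through unchanged.

```python
import numpy as np
from torchvision import transforms

to_tensor = transforms.ToTensor()

# 8-bit input: ToTensor() converts to float and divides by 255.
depth_u8 = np.full((4, 4, 1), 200, dtype=np.uint8)
print(to_tensor(depth_u8).max())   # tensor(0.7843), i.e. 200 / 255

# float input: values pass through unchanged (no division by 255).
depth_f32 = np.full((4, 4, 1), 2.5, dtype=np.float32)
print(to_tensor(depth_f32).max())  # tensor(2.5000)
```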

I'm a beginner in this field. Thanks for your great work.
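
And this is roughly the scaling I'm asking about, written as a standalone sketch (the scaled_rmse helper and its scale argument are hypothetical names, not your code). Multiplying both tensors by the same constant just multiplies the RMSE, and hence the gradient magnitude, by that constant:

```python
import torch

def scaled_rmse(pred, target, scale=10.0):
    # Hypothetical helper: scaling both the prediction and the ground truth
    # by the same constant multiplies the loss by that constant, since
    # RMSE(s * a, s * b) = s * RMSE(a, b).
    return torch.sqrt(torch.mean((pred * scale - target * scale) ** 2))

# The two values below differ by exactly a factor of 10.
pred, target = torch.rand(2, 1, 8, 8), torch.rand(2, 1, 8, 8)
print(scaled_rmse(pred, target, scale=1.0), scaled_rmse(pred, target, scale=10.0))
```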

haofengac commented 6 years ago

Hello,

Thanks for pointing this out. The script is actually for the NYUv2 dataset. I'll try to release the script for KITTI as soon as possible.