dwofk / fast-depth

ICRA 2019 "FastDepth: Fast Monocular Depth Estimation on Embedded Systems"
MIT License

Loss function #37

Open EryiXie opened 3 years ago

EryiXie commented 3 years ago

Thanks for this great work. I am currently trying to train fast-depth on my own dataset. I have noticed that there are no training scripts, so I would like to ask: which depth losses are used in training?

It would be very nice if anyone could suggest which losses I should pick.

JVGD commented 3 years ago

I was reading the paper and wondering the same thing; I came here and didn't find it either.

EryiXie commented 3 years ago

I was reading the paper and wondering the same thing; I came here and didn't find it either.

Hi, I will begin trying some loss function designs next week. Once I have some useful results, I will report them here.

YiLiM1 commented 3 years ago

I read the paper and found that, in the experimental section, the authors mention following the training method of "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image", which uses an L1 loss. I tried to train the network with an L1 loss, and the result was very bad.
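For reference, the sparse-to-dense training setup computes an L1 criterion only over pixels with valid ground-truth depth. Below is a minimal sketch of such a masked L1 loss in PyTorch; the class name and the valid-pixel convention (target > 0, as in NYU Depth v2 style data) are assumptions for illustration, not code taken from the fast-depth repository.

```python
# Minimal sketch of a masked L1 depth loss in PyTorch, in the spirit of
# sparse-to-dense style training. Assumes missing depth is encoded as 0.
import torch
import torch.nn as nn


class MaskedL1Loss(nn.Module):
    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Only penalize pixels where ground-truth depth is available.
        # Assumes at least one valid pixel per batch.
        valid = target > 0
        diff = (pred - target)[valid]
        return diff.abs().mean()


# Usage: criterion = MaskedL1Loss(); loss = criterion(model(rgb), depth_gt)
```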

JVGD commented 3 years ago

I used the "depth loss" from this paper, and it seems that training is starting to converge (see attached image).

I also suspect that I have a shitty dataset, and that is why I am getting so much noise (although you can more or less see the depth at a high level, there is a lot of pixel-wise noise).
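The "depth loss" paper referenced above is not linked in the thread, so the exact formulation is unclear. A common pattern in monocular depth estimation combines a masked L1 term with a depth-gradient matching term; the sketch below is an assumption along those lines, not the specific loss JVGD used.

```python
# Hypothetical sketch of a combined depth loss (masked L1 + depth-gradient
# matching), a common pattern in monocular depth work. This is an assumed
# formulation, not the loss from the unnamed paper in this thread.
import torch
import torch.nn as nn
import torch.nn.functional as F


def image_gradients(d: torch.Tensor):
    # Finite differences along height and width; d has shape (N, 1, H, W).
    dy = d[:, :, 1:, :] - d[:, :, :-1, :]
    dx = d[:, :, :, 1:] - d[:, :, :, :-1]
    return dx, dy


class DepthLoss(nn.Module):
    def __init__(self, grad_weight: float = 1.0):
        super().__init__()
        self.grad_weight = grad_weight

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # L1 term over valid pixels only (missing depth assumed to be 0).
        valid = (target > 0).float()
        l1 = (F.l1_loss(pred * valid, target * valid, reduction="sum")
              / valid.sum().clamp(min=1.0))
        # Gradient term computed on all pixels for simplicity; it encourages
        # the prediction to match depth edges in the ground truth.
        pdx, pdy = image_gradients(pred)
        tdx, tdy = image_gradients(target)
        grad = (pdx - tdx).abs().mean() + (pdy - tdy).abs().mean()
        return l1 + self.grad_weight * grad
```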

sunmengnan commented 3 years ago

I used the "depth loss" from this paper, and it seems that training is starting to converge (see attached image).

I also suspect that I have a shitty dataset, and that is why I am getting so much noise (although you can more or less see the depth at a high level, there is a lot of pixel-wise noise).

Which dataset are you using?