zouchuhang / LayoutNet

Torch implementation of our CVPR 18 paper: "LayoutNet: Reconstructing the 3D Room Layout from a Single RGB Image"
http://openaccess.thecvf.com/content_cvpr_2018/papers/Zou_LayoutNet_Reconstructing_the_CVPR_2018_paper.pdf
MIT License

Could you share the expected validation loss during training? #10

Closed yakirsudry closed 6 years ago

yakirsudry commented 6 years ago

So far I have trained only the first two steps (I don't really care about the box prediction, just the edge and corner detection).

First I trained using driver_pano_edg.lua and reached a validation loss of 0.12333107 after 3,260 iterations; it stopped improving over the remaining ~4,700 iterations.

Then, starting from that model, I trained with driver_pano_joint.lua and reached a validation loss of 0.20790252 after 1,480 iterations; it stopped improving over the remaining ~6,500 iterations.
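The "stopped improving" check described above can be sketched as a simple early-stopping monitor. This is a generic Python sketch, not code from the LayoutNet repo; the class name and the patience value are illustrative:

```python
# Generic early-stopping monitor (illustrative; not part of LayoutNet).
class EarlyStopping:
    def __init__(self, patience=500):
        self.patience = patience      # validation checks to wait without improvement
        self.best = float("inf")      # best validation loss seen so far
        self.since_best = 0           # checks since the best loss was recorded

    def step(self, val_loss):
        """Record one validation loss; return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.since_best = 0
        else:
            self.since_best += 1
        return self.since_best >= self.patience
```

With `patience=500`, training would halt once 500 consecutive validation checks fail to beat the best loss, which matches the plateau behavior described above.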

It seems to produce results that are not as good as the supplied pretrained model.

What is the expected validation loss in each step?

zouchuhang commented 6 years ago

@yakirsudry ~0.20 is the expected validation loss, which is the sum of the layout edge loss (~0.11) and the corner loss (~0.08). Could you share details about the results that are "not as good as the supplied pretrained model"?

yakirsudry commented 6 years ago

Thanks. I figured out my mistake.

I still suggest documenting the expected validation loss (and maybe even the training loss) at each step of the way (after edge training, joint training, and full training).

I'm closing this issue, but I think you should delete it