lyndonzheng / Synthetic2Realistic

[ECCV 2018]: T2Net: Synthetic-to-Realistic Translation for Depth Estimation Tasks

Result question #18

Closed name333 closed 5 years ago

name333 commented 5 years ago

Hello, and thank you for sharing your code. I ran the training and test code following the documented process. During training, lab_s stays around 0.5; I do not use the KITTI ground truth during training, so there is no lab_t, and the other losses converge well. However, when I use the evaluation code you provided, the metric values differ greatly from those reported in the paper (perhaps my results are simply poor). To isolate the issue, I also evaluated depth predictions produced by other methods, together with the ground-truth maps converted and saved from KITTI, using your evaluation code, and the resulting metrics were again far off. So I would like to ask: do the predicted depth maps need any preprocessing before running the evaluation code (evaluation.py), or do any parameters in the code you provide need to be changed?

lyndonzheng commented 5 years ago

@name333 Could you provide the test command you used on the outdoor KITTI dataset? If you use the pre-trained model and results I shared on Google Drive, the estimation accuracy should be close to the results in the original paper. Also, please send the accuracy you get when evaluating the results we shared on Google Drive, so that I can see where the problem comes from.

name333 commented 5 years ago

Yes, thank you for your reply.

1. Test command:
   `python test.py --name Outdoor_nyu_wsupervised --model test --img_source_file /dataset/Image2Depth31_KITTI/testA_SYN80.txt --img_target_file /dataset/Image2Depth31_KITTI/testA.txt`
   (the first .txt lists the vKITTI test image paths, the second the KITTI test image paths)
   Evaluation command:
   `python evaluation.py --split eigen --file_path /datasplit path/ --gt_path /KITTI original file path/ --garg_crop`
2. Since the pre-trained model you provide is the multi-GPU version, I have not tried it yet; I only have a single GPU. Can it run on a single GPU? I am ready to try now.
3. I am not sure I understand "estimation accuracy" correctly. Does it correspond to a1, a2, a3 in the evaluation code? If so, this is the result of my current run: a1, a2, a3 = 0.1609, 0.3350, 0.5009.
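Regarding point 2: checkpoints saved from a multi-GPU `torch.nn.DataParallel` model prefix every parameter key with `module.`, which makes `load_state_dict` fail on a plain single-GPU model. A common workaround (sketched below; the checkpoint filename and model are hypothetical, not confirmed by this thread) is to strip that prefix before loading:

```python
# Hypothetical sketch: adapt a multi-GPU (nn.DataParallel) checkpoint
# for a single-GPU model by removing the "module." key prefix.
#
# Typical usage with PyTorch (filename is an assumption):
#   state = torch.load("Outdoor_nyu_wsupervised.pth", map_location="cpu")
#   model.load_state_dict(strip_module_prefix(state))

def strip_module_prefix(state_dict):
    """Remove a leading 'module.' from each key, if present."""
    return {
        (k[len("module."):] if k.startswith("module.") else k): v
        for k, v in state_dict.items()
    }

# Demo with a plain dict standing in for a state_dict.
multi_gpu_keys = {"module.encoder.weight": 1, "module.decoder.bias": 2}
print(strip_module_prefix(multi_gpu_keys))
# -> {'encoder.weight': 1, 'decoder.bias': 2}
```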

lyndonzheng commented 5 years ago

@name333 Your test command is correct, but the evaluation.py command should also include `--predicted_depth_path`. Besides, the a1, a2, a3 values should not be that low, even if you used the mean depth value for the evaluation. Check the depth range in the evaluation.py code to see whether the ground truth and the predicted depth have a similar range, and please compare the predicted depth values we shared on Google Drive with your results.
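For reference, a1, a2, a3 in depth-estimation benchmarks conventionally denote the fractions of valid pixels whose ratio max(gt/pred, pred/gt) falls below 1.25, 1.25², and 1.25³. A minimal sketch of that computation, with the depth range clamped as the reply above suggests checking (the 1 m to 80 m bounds here are a common KITTI convention and an assumption; consult the actual evaluation.py for the exact masking), is:

```python
import numpy as np

def threshold_metrics(gt, pred, min_depth=1.0, max_depth=80.0):
    """Compute the a1/a2/a3 threshold accuracies over valid pixels.

    gt, pred: arrays of depths in meters. Pixels whose ground truth lies
    outside (min_depth, max_depth) are ignored; predictions are clipped
    to that range, mirroring the usual KITTI evaluation convention.
    """
    gt = np.asarray(gt, dtype=np.float64)
    pred = np.asarray(pred, dtype=np.float64)
    mask = (gt > min_depth) & (gt < max_depth)
    gt, pred = gt[mask], np.clip(pred[mask], min_depth, max_depth)
    ratio = np.maximum(gt / pred, pred / gt)
    a1 = float((ratio < 1.25).mean())
    a2 = float((ratio < 1.25 ** 2).mean())
    a3 = float((ratio < 1.25 ** 3).mean())
    return a1, a2, a3

# Sanity check: identical depth maps score 1.0 on all three thresholds.
gt = np.array([5.0, 10.0, 20.0, 40.0])
print(threshold_metrics(gt, gt))  # -> (1.0, 1.0, 1.0)
```

If a1 is as low as 0.16 even against your own ground truth, the two maps are likely in different units or ranges (e.g. one in meters, the other a normalized or inverse-depth map), which is exactly what checking the depth range would reveal.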