Hub-Tian opened this issue 5 years ago
I think it is because the predicted depth in the sky region is wrong; you could crop some top rows of the output dense depth.
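For what it's worth, the cropping suggestion can be sketched like this (the crop height and the array shape are assumptions for illustration, not values from the repo):

```python
import numpy as np

def crop_sky(dense_depth: np.ndarray, top_rows: int = 96) -> np.ndarray:
    """Drop the top rows of a dense depth map, where sky predictions
    are unreliable. The crop height is a hypothetical choice."""
    return dense_depth[top_rows:, :]

# Example on a KITTI-sized depth map (352 x 1216 is a common crop size)
pred = np.zeros((352, 1216), dtype=np.float32)
cropped = crop_sky(pred, top_rows=96)
print(cropped.shape)  # (256, 1216)
```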
Thanks for your reply! Cropping the top region alleviates this problem; however, this "ray-like" artifact also appears near the edges of objects. It seems to be caused by the continuous depth prediction at object boundaries, where the depth should "jump" in reality. Should some post-processing be applied to the "pred" returned by the "test" function (pred, time_temp = test(imgL, sparse, mask))? I also wonder how you plotted Figure 1 in your paper; I am trying to reproduce the same results as yours.
Because our result is not smooth, you can filter the dense depth with a traditional filter such as a median filter. To get the map in our paper, you can ask Yinda Zhang for help. Thanks for your attention!
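As a sketch of that suggestion, a naive median filter over the dense depth could look like the following (in practice `scipy.ndimage.median_filter` or `cv2.medianBlur` would be used instead of an explicit loop; the kernel size here is a hypothetical choice):

```python
import numpy as np

def median_filter(depth: np.ndarray, k: int = 3) -> np.ndarray:
    """Naive k x k median filter; border pixels use a clamped window.
    Purely illustrative: a real pipeline would call
    scipy.ndimage.median_filter or cv2.medianBlur instead."""
    h, w = depth.shape
    r = k // 2
    out = np.empty_like(depth)
    for i in range(h):
        for j in range(w):
            window = depth[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            out[i, j] = np.median(window)
    return out
```

A median filter suppresses isolated "ray-like" outliers while preserving depth discontinuities better than a mean filter would.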
Hi, this is an impressive work; however, some questions came to my mind.

- I ran the code on KITTI (train or val) with your trained model as in the description below. [image] https://user-images.githubusercontent.com/19162375/65016427-da09c900-d956-11e9-9a21-0fe8ab5479cc.png
- The result is below. INPUT: [lidar_raw] [image: 0000000005] https://user-images.githubusercontent.com/19162375/65016877-eb070a00-d957-11e9-838c-a4f9eefffec2.png [gt] [image: 0000000005] https://user-images.githubusercontent.com/19162375/65016966-230e4d00-d958-11e9-8d4c-a42dd59034f4.png [rgb] [image: 0000000000] https://user-images.githubusercontent.com/19162375/65016826-cad74b00-d957-11e9-940c-05cf0aceee12.png

OUTPUT [image: 0000000005] https://user-images.githubusercontent.com/19162375/65016570-32d96180-d957-11e9-816f-05c70bfcf770.png

I want to know what this output is: is it the depth map? And how can I get the correct depth map? Thanks!
the torchvision version must be 0.2.0
BTW, how long does it take to train the model from scratch? And what kind of GPUs, and how many of them, did you use?
We used 3 GeForce GTX 1080 Ti GPUs and it took about 3 days.
@nowburn I have the same problem as you. When I run test.py with the pretrained model, the evaluation results are abnormal: rmse: 7998.173, irmse: 2.1443906, mae: 4290.926, imae: 0.2070867. The first dense map shows:
I wonder if it's related to the PyTorch version. My PyTorch version is 1.0.1.
You'd better use the environment that our requirements describe.
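A small runtime guard can catch a mismatched environment early. Only torchvision 0.2.0 is confirmed in this thread; the helper below is a sketch, not code from the repo:

```python
def torchvision_ok(installed: str, required: str = "0.2.0") -> bool:
    """Return True if the installed torchvision release matches the
    version pinned by the authors' requirements. Local build tags
    such as '+cu92' are ignored in the comparison."""
    return installed.split("+")[0] == required

# Check the live environment if torchvision is importable.
try:
    import torchvision
    if not torchvision_ok(torchvision.__version__):
        print(f"torchvision {torchvision.__version__} found; 0.2.0 is required")
except ImportError:
    pass  # torchvision not installed in this environment
```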
@junweifu Thanks for the reply. It works with the environment described in the author's requirements.
@JiaxiongQ Thanks for your earlier reply. I want to know how to evaluate metrics like 'rmse', since the KITTI website says they don't accept informal evaluations. I used your code to compute the rmse. [input] prediction, gt: depth_annotated
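For local evaluation, the usual KITTI depth-completion error formulas can be sketched as below. This is not the official devkit code, and whether the numbers come out in metres or millimetres depends on how the depth maps were decoded (the KITTI PNGs store depth as uint16 with depth = value / 256):

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """RMSE / MAE / iRMSE / iMAE over valid ground-truth pixels
    (gt > 0), following the usual KITTI depth-completion formulas.
    Depths are assumed to be in metres here."""
    valid = gt > 0
    p, g = pred[valid], gt[valid]
    err = p - g
    inv_err = 1.0 / p - 1.0 / g  # inverse-depth error, used by iRMSE/iMAE
    return {
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "mae": float(np.mean(np.abs(err))),
        "irmse": float(np.sqrt(np.mean(inv_err ** 2))),
        "imae": float(np.mean(np.abs(inv_err))),
    }
```

Note that pixels with no ground-truth depth are excluded from every metric, which matches how sparse LiDAR ground truth is handled on the benchmark.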
Thanks!
Thank you for your advice. I found that the torchvision version causes this kind of problem.
@JiaxiongQ
Thank you for your help. I used the official devkit tools to evaluate the results of the pretrained model. The results are as follows:
mean mae: 0.215136
mean rmse: 0.687001
mean inverse mae: 0.00109365
mean inverse rmse: 0.00250434
mean log mae: 0.0123438
mean log rmse: 0.0269894
mean scale invariant log: 0.0267794
mean abs relative: 0.0124689
mean squared relative: 0.0011126
Is the evaluation method of the official devkit tools the same as the one you use?
Do those results seem normal?
One depth completion result is shown as follows:
I think they seem normal.
@JiaxiongQ OK, thanks~~~
Thanks for your wonderful work! I encountered some problems while visualizing the results on KITTI using your pretrained model. I plotted the results ('pred') from test.py (pred, time_temp = test(imgL, sparse, mask)), and the visualization is abnormal. Any advice on this? I used images from the training set of the 2D detection benchmark and obtained the sparse LiDAR depth map by projecting the LiDAR point cloud onto the image plane.
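The projection step described above can be sketched as follows. The matrix names are placeholders for the KITTI calibration entries (e.g. P2 and Tr_velo_to_cam); this is not the thread author's actual script:

```python
import numpy as np

def project_lidar_to_depth(points: np.ndarray, P: np.ndarray,
                           Tr: np.ndarray, h: int, w: int) -> np.ndarray:
    """Project LiDAR points (N, 3) into the image plane to build a
    sparse depth map. P is a 3x4 camera projection matrix and Tr a
    4x4 LiDAR-to-camera transform (placeholders for the KITTI
    calibration entries P2 and Tr_velo_to_cam)."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    cam = Tr @ pts_h.T                                          # (4, N) in camera frame
    uvw = P @ cam                                               # (3, N) homogeneous pixels
    z = uvw[2]
    keep = z > 0                                                # keep points in front of the camera
    u = np.round(uvw[0, keep] / z[keep]).astype(int)
    v = np.round(uvw[1, keep] / z[keep]).astype(int)
    z = z[keep]
    depth = np.zeros((h, w), dtype=np.float32)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[inside], u[inside]] = z[inside]
    return depth
```

A real script would also resolve collisions where several points land on the same pixel (typically keeping the nearest depth).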