nakul3112 opened 3 years ago
Hi @nakul3112: 1. The proposed model runs at 10-35 FPS on the Nvidia TX2, which is an embedded system (https://www.seeedstudio.com/NVIDIA-Jetson-TX2-4GB-Module-p-4414.html).
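For context, FPS figures like this are usually obtained by timing repeated forward passes after a warm-up. A minimal sketch of that measurement (the `run_inference` callable is a stand-in for one forward pass of the actual model, not part of this repo; on a GPU you would additionally synchronize before reading the clock):

```python
import time

def measure_fps(run_inference, n_warmup=10, n_iters=100):
    # Warm up first so caching / lazy initialization does not skew the timing.
    for _ in range(n_warmup):
        run_inference()
    start = time.perf_counter()
    for _ in range(n_iters):
        run_inference()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

# Stand-in workload so the sketch runs without the actual network; replace
# the lambda with one model forward pass, and on a GPU call
# torch.cuda.synchronize() before each perf_counter() read.
fps = measure_fps(lambda: sum(i * i for i in range(10_000)))
print(f"{fps:.1f} FPS")
```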
Hi @JiaRenChang,
Thanks for your quick response. Also: 1) What strategy would you suggest for getting the colored output disparity? Do we change the datatype of the image, or something else?
2) Also, the conclusion and benchmark results in the paper show that the output is even sharper than AnyNet's, even though both models run at 10-35 FPS. Am I correct?
Regards, Nakul
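On question 1 above: the network's output disparity is typically a single-channel float array, so colorizing is a post-processing step rather than a change to the input image's datatype. A numpy-only sketch of the idea (the helper name and the blue-to-red ramp are illustrative; for the exact paper-style colors you would instead map a normalized uint8 disparity through `cv2.applyColorMap(disp_u8, cv2.COLORMAP_JET)` or matplotlib's `jet` colormap):

```python
import numpy as np

def colorize_disparity(disp, max_disp=None):
    """Map a float disparity map (H, W) to an RGB uint8 image (H, W, 3).

    Hypothetical helper: normalizes disparity to [0, 1] and applies a
    simple blue -> green -> red ramp (near = blue, far = red).
    """
    if max_disp is None:
        max_disp = disp.max() if disp.max() > 0 else 1.0
    t = np.clip(disp / max_disp, 0.0, 1.0)   # normalize to [0, 1]
    r = np.clip(2.0 * t - 1.0, 0.0, 1.0)     # ramps up in the upper half
    g = 1.0 - np.abs(2.0 * t - 1.0)          # peaks in the middle
    b = np.clip(1.0 - 2.0 * t, 0.0, 1.0)     # ramps down in the lower half
    return (np.stack([r, g, b], axis=-1) * 255).astype(np.uint8)

disp = np.random.rand(4, 6).astype(np.float32) * 192.0  # dummy disparity map
rgb = colorize_disparity(disp)
print(rgb.shape, rgb.dtype)  # (4, 6, 3) uint8
```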
Hi, @nakul3112:
Hi @JiaRenChang ,
Thanks. 1) What is the difference between Test_img.py and submission.py? 2) While training, the default model is RTStereoNet, but I see that in finetune.py and test_img.py the default model is StackHourGlass. Could you explain whether this is intentional? I started training the model with RTStereoNet as the default, and wanted to clarify these doubts before I start finetuning.
Thanks again for helping me out, Regards, Nakul
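On the Test_img.py vs. submission.py question: in PSMNet-family repos, a submission.py typically writes disparity in the KITTI benchmark format (a 16-bit PNG storing `round(disp * 256)` as uint16, with 0 marking invalid pixels), while a Test_img.py saves a human-viewable image. Whether this repo follows that split is an assumption; a numpy sketch of the encode/decode convention itself:

```python
import numpy as np

# KITTI submission convention: disparity stored as uint16 with value = disp * 256.
# Assumed to be what submission.py produces; Test_img.py would instead save a
# visualization. Max representable disparity is 65535 / 256 ≈ 255.996.
def encode_kitti_disp(disp):
    return (np.clip(disp, 0.0, 255.996) * 256.0).astype(np.uint16)

def decode_kitti_disp(encoded):
    return encoded.astype(np.float32) / 256.0

disp = np.array([[1.5, 64.25, 191.0]], dtype=np.float32)
round_trip = decode_kitti_disp(encode_kitti_disp(disp))
print(np.abs(round_trip - disp).max())  # quantization error below 1/256
```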
Hi @JiaRenChang ,
Would appreciate your help in following queries.
1) What is the difference between Test_img.py and submission.py? 2) While training, the default model is RTStereoNet, but I see that in finetune.py and test_img.py the default model is StackHourGlass. Could you explain whether this is intentional? I started training the model with RTStereoNet as the default, and wanted to clarify these doubts before I start finetuning with KITTI.
Thanks again for your time Regards, Nakul
Hi @JiaRenChang,
1) As I read in your paper, this model runs at 10-35 FPS on an Nvidia GPU, correct? 2) Does this model need SPN? 3) Is it recommended to finetune the model previously trained on Sceneflow with one of the KITTI datasets (KITTI 2015 or KITTI 2012)? 4) Also, does Test_img.py output the colored disparity map shown in your paper?
Regards, Nakul