JiaRenChang / RealtimeStereo

Attention-Aware Feature Aggregation for Real-time Stereo Matching on Edge Devices (ACCV, 2020)
GNU General Public License v3.0

Implementation details for RT Stereo #10

Open · nakul3112 opened this issue 3 years ago

nakul3112 commented 3 years ago

Hi @JiaRenChang,

I was working on the AnyNet project (https://github.com/mileyan/AnyNet), but I haven't gotten appropriate results because I was not able to compile the SPN module successfully (error while running make.sh: unable to find the setup.py file). Meanwhile, I read your paper on Real-Time Stereo and was curious to implement it, since it is trained on datasets like SceneFlow and KITTI, just like AnyNet.

I would appreciate your time and effort if you could clarify a few queries I have:

1) As I read in your paper, this model runs at 10-35 FPS on an NVIDIA GPU, correct?
2) Does this model need SPN?
3) Is it recommended to finetune the model previously trained on SceneFlow with one of the KITTI datasets (KITTI 2015 or KITTI 2012)?
4) Also, does Test_img.py output the colored disparity map shown in your paper?

Regards, Nakul

JiaRenChang commented 3 years ago

Hi @nakul3112:

  1. The proposed model runs at 10-35 FPS on an NVIDIA TX2, which is an embedded system (https://www.seeedstudio.com/NVIDIA-Jetson-TX2-4GB-Module-p-4414.html).
  2. This model does NOT need SPN.
  3. Yes.
  4. The output disparity is a single-channel (gray) image; see the loading sketch below.
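A minimal sketch of loading that single-channel output for inspection, assuming the script writes KITTI-style 16-bit PNGs that store disparity multiplied by 256 (an assumption about this repo's output format); `disparity.png` is a placeholder filename:

```python
# Load a single-channel disparity map; KITTI-style 16-bit PNGs encode disp * 256.
import numpy as np
from PIL import Image

disp = np.array(Image.open('disparity.png')).astype(np.float32)
if disp.max() > 256:        # undo the KITTI-style disp * 256 encoding
    disp /= 256.0
print(disp.shape, disp.min(), disp.max())  # sanity check: values are pixel disparities
```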
nakul3112 commented 3 years ago

Hi @JiaRenChang,

Thanks for your quick response. Also:

1) What strategy would you suggest for getting a colored output disparity? Do we change the image's datatype, or something else?

2) Also, the conclusion and benchmark results in the paper show that the output is even sharper than AnyNet's, even though both models run at 12-35 FPS. Am I correct?

Regards, Nakul

JiaRenChang commented 3 years ago

Hi, @nakul3112:

  1. The KITTI website offers code for colorizing disparity; you can check it (a colorizing sketch follows below).
  2. Yes, the proposed method gives superior results to AnyNet in my experiments.

Jia-Ren
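A minimal colorizing sketch along those lines. The official KITTI devkit ships its own disparity colormap; matplotlib's `jet` is used here only as a stand-in, and `max_disp=192` is an assumed normalization bound (it matches the usual `--maxdisp` default in PSMNet-style code, but is not taken from this repo):

```python
# Colorize a single-channel disparity map for visualization (jet as a stand-in
# for the official KITTI devkit colormap).
import numpy as np
from PIL import Image
from matplotlib import cm

def colorize_disparity(in_path, out_path, max_disp=192.0):
    disp = np.array(Image.open(in_path)).astype(np.float32)
    if disp.max() > 256:                    # undo KITTI-style disp * 256 encoding
        disp /= 256.0
    norm = np.clip(disp / max_disp, 0.0, 1.0)
    rgb = (cm.jet(norm)[..., :3] * 255).astype(np.uint8)  # drop the alpha channel
    Image.fromarray(rgb).save(out_path)

colorize_disparity('disparity.png', 'disparity_color.png')
```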
nakul3112 commented 3 years ago

Hi @JiaRenChang ,

Thanks.

1) What is the difference between Test_img.py and submission.py?
2) While training, the default model is RTStereoNet, but I see that in finetune.py and test_img.py the default model is StackHourGlass. Could you explain whether this is on purpose? I started training the model with RTStereoNet as the default, and wanted to clarify these doubts before I start finetuning.

Thanks again for helping me out.

Regards, Nakul

nakul3112 commented 3 years ago

Hi @JiaRenChang ,

I would appreciate your help with the following queries.

1) What is the difference between Test_img.py and submission.py?
2) While training, the default model is RTStereoNet, but I see that in finetune.py and test_img.py the default model is StackHourGlass. Could you explain whether this is on purpose? I started training the model with RTStereoNet as the default, and wanted to clarify these doubts before I start finetuning with KITTI. (A sketch of the typical flag wiring follows below.)

Thanks again for your time.

Regards, Nakul
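For readers with the same question: in PSMNet-style repos the network is selected via an argparse `--model` flag, so a script's shipped default only matters when the flag is omitted and can be overridden per run. A minimal sketch of that typical wiring; the import path and constructor signatures below are assumptions, not verified against this repo's finetune.py or Test_img.py:

```python
# Typical PSMNet-style model selection via argparse (constructors assumed).
import argparse
from models import stackhourglass, RTStereoNet  # import path is an assumption

parser = argparse.ArgumentParser()
parser.add_argument('--model', default='RTStereoNet',
                    help='stackhourglass | RTStereoNet')
parser.add_argument('--maxdisp', type=int, default=192,
                    help='maximum disparity the cost volume covers')
args = parser.parse_args()

# The default only applies when --model is omitted on the command line;
# passing --model RTStereoNet overrides whatever default a script ships with.
if args.model == 'RTStereoNet':
    model = RTStereoNet(args.maxdisp)
elif args.model == 'stackhourglass':
    model = stackhourglass(args.maxdisp)
else:
    raise ValueError(f'unknown model: {args.model}')
```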