Closed cakirogluozan closed 2 years ago
Hello,
Thanks for your great work! I have a couple of questions about running single-image inference on the pre-trained model.
1- Is it possible to extract the detected planar regions and their aligned normals in inference_single_image.py?
2- Is it possible to extract the aforementioned planar regions and aligned normals from the pre-trained model you shared?
Both are possible, but the process requires solving a few problems, and several factors along the way affect the final performance.
Suppose you are operating on an image with no ground-truth depth. Because of the scale ambiguity inherent in monocular depth estimation, the depth the network predicts is not metric depth, so you need some tricks to recover depth that is as close to metric as possible, for example the maximum/minimum limits and the median ratio used in our method. Second, you need the camera intrinsics to recover the point cloud. An approximate intrinsics matrix also works, but it will degrade the final result.
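The two steps above (scale recovery and unprojection with intrinsics) can be sketched roughly as follows. This is a minimal illustration, not the repository's code: the clip range `(0.1, 10.0)` and the use of a reference depth for the median ratio are assumptions you would replace with your own values.

```python
import numpy as np

def align_depth_scale(pred_depth, ref_depth):
    """Median-ratio scale alignment, then clip to an assumed plausible range.

    ref_depth is any depth signal with roughly correct scale (e.g. a sensor
    measurement or a scene prior); without one, the scale stays ambiguous.
    """
    scale = np.median(ref_depth) / np.median(pred_depth)
    return np.clip(pred_depth * scale, 0.1, 10.0)

def unproject(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map to an (H, W, 3) point map
    using the standard pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)
```

If you only have approximate intrinsics, a common fallback is `fx = fy = max(H, W)` with the principal point at the image center, which is exactly the kind of approximation that will cost some accuracy.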
For an implementation of this process, you can refer to the code in trainer.py that obtains the planar result from the network prediction; it also involves some other non-critical parameter adjustments. You can adapt it to your own data.
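As a generic illustration of turning the prediction into planar results (not the trainer.py logic itself): given the point map from unprojection and per-instance plane masks from the network, you can fit a plane to each masked point set by least squares via SVD. The function names and the mask format here are assumptions for the sketch.

```python
import numpy as np

def fit_plane(points):
    """Fit n . x + d = 0 to an (N, 3) point set: the plane normal is the
    direction of least variance of the centered points (last SVD vector)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d

def planes_from_masks(point_map, instance_masks):
    """point_map: (H, W, 3) back-projected points;
    instance_masks: list of (H, W) boolean masks, one per plane instance."""
    return [fit_plane(point_map[m]) for m in instance_masks]
```

Whether the recovered normals are usable depends directly on how well the depth scale and intrinsics were handled in the previous step.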