-
Greetings,
Hope all is well!
We are using ScanNet++ for a research project (access granted), but we are encountering an OpenGL issue on our lab's cluster that we have been unable to resolve…
-
----------------------------------------------------------------------------------------------------
| Required Info | …
-
Currently, we visualize depth maps as point clouds, with each point representing a sample on the depth image.
For 4K images (3840x2160) that is about 8.3 million points, which means we have to do a lot of vertex & …
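The per-pixel back-projection described above can be sketched as follows. This is a generic pinhole-camera sketch, not the poster's actual code; the function name and intrinsics parameters (`fx`, `fy`, `cx`, `cy`) are assumptions for illustration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H, W) in meters to an (N, 3) point cloud.

    fx, fy, cx, cy are pinhole intrinsics; pixels with zero depth are dropped.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u along width, v along height.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# A full 4K depth map yields up to 3840 * 2160 = 8,294,400 points,
# which is why the vertex workload grows so quickly.
```

For real-time viewing, such clouds are usually subsampled (e.g. taking every k-th pixel) before being uploaded as vertices.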
-
Can we get the depth images per scene for this?
-
Hi,
Thank you for your hard work on this project. I have a question regarding metric depth estimation: how does your model learn the depth of the sky in the images? Since there is no ground truth d…
-
Hi
I am using the Gen 3 Lite Robot 7DoF with the Realsense D410 camera. For my project, I need RGBD images from the camera, which I access using the ros2_kortex_vision package. Based on the GitHub d…
-
The depth maps exported by Nerfstudio are usually false color images. Is it possible to export the raw depth images?
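Independent of any particular exporter's internals, raw depth can be stored losslessly instead of as a false-color visualization: either as a float `.npy` array, or as a 16-bit image encoding millimeters. A minimal sketch (the function names `save_depth_raw` and `to_uint16_mm` are illustrative, not part of Nerfstudio's API):

```python
import numpy as np

def save_depth_raw(depth_m, npy_path):
    """Save a depth map (meters) with full float32 precision as .npy."""
    np.save(npy_path, depth_m.astype(np.float32))

def to_uint16_mm(depth_m):
    """Encode meters as uint16 millimeters, a common raw-depth payload.

    Values beyond ~65.5 m are clipped to the uint16 range; the result can
    be written with any 16-bit-capable PNG writer (e.g. imageio or OpenCV).
    """
    return np.clip(depth_m * 1000.0, 0.0, 65535.0).astype(np.uint16)
```

A false-color PNG cannot be inverted back to metric depth, so keeping one of these raw formats alongside the visualization is the usual workaround.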
-
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
hello :wave: @Skalski…
-
I have some questions regarding the stereovision sample in `samples/sample_c++/module_sample/perception/test_perception_entry.cpp`
1. The camera directions, e.g. `DJI_PERCEPTION_RECTIFY_FRONT` are …
-
Hi @georgegu1997!
I am currently working on my new project based on your code base.
Thanks to your code, I was able to save a lot of time implementing the preprocessing stage for ego-videos.
Never…