-
Hi,
I exported the RGB and depth camera captures as JPG files using UE's Sequencer at 30 fps. When I overlay frames from the two streams, I can see a strong spatial misalignment bet…
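A quick way to eyeball that misalignment is to alpha-blend a depth frame over the matching RGB frame and look for offset edges. A minimal sketch with synthetic stand-in arrays (the actual filenames and resolution from the post are unknown; in practice you would load the exported jpgs instead):

```python
import numpy as np

# Stand-ins for one exported RGB frame and the matching depth frame
# (load the jpgs exported by Sequencer here in practice).
rgb = np.zeros((720, 1280, 3), dtype=np.uint8)
depth = np.full((720, 1280), 128, dtype=np.uint8)

# Replicate the single-channel depth to 3 channels, then 50/50 blend;
# edges that don't line up in the blend reveal the spatial offset.
depth_rgb = np.repeat(depth[:, :, None], 3, axis=2)
overlay = (0.5 * rgb.astype(np.float32)
           + 0.5 * depth_rgb.astype(np.float32)).astype(np.uint8)
```

If the offset is constant across the frame it points at a principal-point or FOV mismatch between the two cameras rather than a timing issue.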
-
Hi! Is it possible to get the intrinsic properties of the depth camera and if so, how? Thanks!
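If the depth camera is an ideal pinhole (as simulated cameras usually are), the intrinsics can be derived from the image size and horizontal FOV. A sketch, assuming square pixels and a centred principal point (the function name is mine):

```python
import math

def intrinsics_from_fov(width, height, fov_x_deg):
    """Pinhole intrinsics from image size and horizontal FOV.

    fx = W / (2 * tan(FOVx / 2)); fy = fx under the square-pixel
    assumption; principal point assumed at the image centre.
    """
    fx = width / (2.0 * math.tan(math.radians(fov_x_deg) / 2.0))
    fy = fx
    cx, cy = width / 2.0, height / 2.0
    return fx, fy, cx, cy
```

For example, a 1280-wide image with a 90° horizontal FOV gives fx = 640.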
-
Thanks again.
I have a question about using multiple cameras at the same time.
As per readme.md
When I run multi_camera.launch.py, I get the following log
To confirm that each node is working pr…
-
[INFO] [1719903957.734596519] [elevation_mapping]: Waiting for tf transformation to be available. (Message is throttled, 10s.)
[elevation_mapping-1] [INFO] [1719903957.826472056] [elevation_mapping]:…
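That message usually means the transform between elevation_mapping's configured map frame and the robot frame is not yet available on /tf. As a quick diagnostic (the frame names below are placeholders; substitute the ones from your config), you can publish a static identity transform and then confirm it is visible:

```shell
# Publish a static map -> base_link transform (placeholder frame names)
ros2 run tf2_ros static_transform_publisher \
  --x 0 --y 0 --z 0 --roll 0 --pitch 0 --yaw 0 \
  --frame-id map --child-frame-id base_link

# In another terminal, verify the transform is actually resolvable
ros2 run tf2_ros tf2_echo map base_link
```

If `tf2_echo` reports the transform but the warning persists, check that elevation_mapping's frame parameters match the names actually published on /tf.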
-
Thanks for your awesome work! @yocabon @beanmilk @seyoung-hyun
![image](https://github.com/naver/dust3r/assets/43490149/b200a028-fd3d-4b6d-ad92-e37e2c8e3c8a)
-
**Outstanding Work! Thank you!!**
However, I didn't quite understand the CUDA code, so I have two questions:
1. **About `middepth` in the Code**
What exactly does it mean? Does the au…
-
Is it possible to make points from a backprojected point cloud (from a depth map) individually selectable? It would also be nice to be able to project selections of those points to (other) cam…
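The geometry behind this is straightforward either way: backproject each depth pixel through the intrinsics to a 3D point, then reproject any selected subset through another camera's pose and intrinsics. A minimal NumPy sketch under the pinhole model (`R`, `t` map points into the target camera's frame; all names here are mine, not from any particular library):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Depth map (H, W) in metres -> (H*W, 3) camera-space points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def project(points, fx, fy, cx, cy, R, t):
    """(N, 3) points -> (N, 2) pixel coords in another camera.

    R (3x3) and t (3,) transform points into that camera's frame.
    """
    p = points @ R.T + t
    u = fx * p[:, 0] / p[:, 2] + cx
    v = fy * p[:, 1] / p[:, 2] + cy
    return np.stack([u, v], axis=-1)
```

Selecting points is then just indexing into the (N, 3) array before calling `project`.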
-
In the Classification Data example:
"// Here you would grab some data from your sensor and label it with the corresponding gesture it belongs to"
If I would like to sample a left swipe ges…
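For collecting one labelled example, the usual pattern is to read a fixed window of sensor values while performing the gesture, then attach the label to the whole window. A hypothetical sketch (`read_sensor` and the window length are placeholders, not part of the library's API):

```python
# Accumulators for training examples and their gesture labels
samples = []
labels = []

def record_gesture(read_sensor, label, n_readings=50):
    """Capture n_readings sensor values as one labelled example.

    read_sensor is a stand-in for whatever call returns one reading
    from your actual sensor.
    """
    window = [read_sensor() for _ in range(n_readings)]
    samples.append(window)
    labels.append(label)

# Record one (fake) left-swipe example
record_gesture(lambda: (0.0, 0.0, 0.0), "left_swipe")
```

Each entry in `samples` then lines up with the label at the same index in `labels`.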
-
Hi there!
**Goal:**
I am attempting to use PyTorch3d with Pulsar to render images from point clouds of an outdoor scene.
**Setup:**
The ground truth image from the camera looks like this (720…
-
I am trying to interpret images output from a depth camera. A previous answer says:
> Each pixel in the result records the Euclidean distance between the 3D point corresponding to that pixel and the …
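If each pixel stores the Euclidean distance to the 3D point (a range image) rather than the Z-coordinate along the optical axis, the two are related by the length of the pixel's normalized ray: z = d / sqrt(((u-cx)/fx)² + ((v-cy)/fy)² + 1). A sketch under the pinhole model (intrinsics `fx, fy, cx, cy` assumed known; function name is mine):

```python
import numpy as np

def range_to_planar_depth(dist, fx, fy, cx, cy):
    """Convert per-pixel Euclidean range to planar depth (Z along axis).

    dist: (H, W) array of Euclidean distances to the 3D points.
    """
    h, w = dist.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # length of the normalized ray (x/z, y/z, 1) for each pixel
    scale = np.sqrt(((u - cx) / fx) ** 2 + ((v - cy) / fy) ** 2 + 1.0)
    return dist / scale
```

At the principal point the two values coincide; towards the image corners the Euclidean range grows larger than the planar depth.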