-
Hi,
I really admire your work. I am currently working on a multi-camera setup for SLAM and localization.
The difficult part is that the two cameras detect the lines and the masked im…
-
## Start with the `why`:
The `why` of this effort (and initial research) is that in many applications depth cameras (and sometimes even LIDAR) are not sufficient to successfully detect objects in …
-
The Record3D app you developed is very useful and I appreciate it.
I have two questions while using your Record3D app.
**First**, I wonder if the USB streaming function can fix the camera's in…
-
I have a few questions regarding the depth prediction output in `run_infer.py`:
1. In `run_infer.py`, the `pipe_out` seems to only include predictions for `'disparity'`, `'disparity_colored'`, and `i…
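For reference, my current understanding is that depth relates inversely to disparity, so this is the conversion I am attempting on the `'disparity'` output. A minimal sketch only; the function name and the assumption that the output is a plain NumPy array are mine, not from `run_infer.py`:

```python
import numpy as np

def disparity_to_depth(disparity: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Convert (relative) disparity to (relative) depth via the inverse relation.

    For affine-invariant monocular predictions the result is only defined
    up to an unknown scale and shift.
    """
    d = np.asarray(disparity, dtype=np.float32)
    return 1.0 / np.maximum(d, eps)  # clamp to avoid division by zero

# Example: larger disparity -> smaller depth.
depth = disparity_to_depth(np.array([0.5, 1.0, 2.0]))
```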
-
Branch: [automation/object-detection-and-bounding-boxes](https://github.com/mcgill-robotics/rover/tree/automation/object-detection-and-bounding-boxes)
Need to interface with Intel RealSense Depth Ca…
-
### Start with the `why`:
`VideoEncoder` only supports the NV12 and GRAY8 formats. It cannot be used to encode depth images, only disparity images with subpixel mode disabled. This creates a big headache for us w…
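One workaround we are considering (a sketch only; the `depth_to_gray8` helper and the 10 m clip range are our own assumptions, not part of `VideoEncoder`) is quantizing the 16-bit depth frame to an 8-bit GRAY8 buffer before encoding, accepting the precision loss:

```python
import numpy as np

def depth_to_gray8(depth_mm: np.ndarray, max_depth_mm: float = 10000.0) -> np.ndarray:
    """Quantize a 16-bit depth map (millimeters) to 8-bit grayscale.

    Only 256 levels survive over the clip range, so this is suitable for
    visualization-grade encoding, not for recovering metric depth.
    """
    clipped = np.clip(depth_mm.astype(np.float32), 0.0, max_depth_mm)
    return (clipped / max_depth_mm * 255.0).astype(np.uint8)

# Example: a synthetic 4x4 depth frame in millimeters.
depth = np.array([[0, 2500, 5000, 10000]] * 4, dtype=np.uint16)
gray = depth_to_gray8(depth)
```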
-
It looks like the `MultiCameraSensor` does not support wide-angle cameras. Currently, a single wide-angle camera can be created using SDF `
-
I recently procured a LIPSedge 3D camera (LIPSedge L215u/L210u) for developing custom computer vision applications such as gesture recognition and face recognition. However, I am facing difficulty in ge…
-
Hi! Thanks for the wonderful work. Regarding mesh extraction, after training your demo scene (m360 garden), I ran the demo code ```python render.py -s ../3dgs/dataset/garden -m output/m360/garden/ --s…
-
Hi,
I would like to use a monocular camera with IMU data. I do not want to use stereo cameras. Is there a way to use them without relying on third-party SLAM algorithms?
Also, when using IMU data, what is t…