-
hi,
I have a small doubt: once we obtain the depth map, how can we convert the depth value at a particular pixel into a distance in meters?
thanks
Gagan
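A minimal sketch of the usual conversion, assuming the sensor stores raw depth as 16-bit integers in millimeters (true for many depth cameras, e.g. Kinect, but the scale factor is an assumption — check your camera's documentation):

```python
import numpy as np

# Assumed: raw depth values are uint16 millimeters, so metres = raw * 0.001.
# This scale factor depends on your sensor/dataset -- verify it before use.
DEPTH_SCALE = 0.001  # meters per raw depth unit

def depth_at_pixel_m(depth_map, u, v):
    """Return the depth at pixel (u, v) in meters.

    depth_map: 2D array of raw depth values (assumed uint16 millimeters);
    a raw value of 0 usually means 'no measurement'.
    """
    raw = depth_map[v, u]  # note: row index is v (y), column index is u (x)
    if raw == 0:
        return None  # invalid / missing measurement
    return raw * DEPTH_SCALE

# Tiny usage example with a fake 2x2 depth map (values in mm):
depth_map = np.array([[0, 1500],
                      [2000, 2500]], dtype=np.uint16)
print(depth_at_pixel_m(depth_map, u=1, v=0))  # -> 1.5 (meters)
```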
-
We are currently training a model that controls the light direction via adaptive group norm, built on SD1.5.
- **Input**: environment map and text prompt
- **Output**: the generated image follows t…
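For readers unfamiliar with the conditioning mechanism, here is a minimal NumPy sketch of adaptive group normalization, assuming the conditioning embedding (e.g. pooled from the environment map) is linearly projected to a per-channel scale and shift. `W_scale` and `W_shift` are hypothetical learned matrices; this is not the actual SD1.5 implementation.

```python
import numpy as np

def adaptive_group_norm(x, cond, W_scale, W_shift, num_groups=4, eps=1e-5):
    """x: (C, H, W) feature map, cond: (D,) conditioning embedding.

    Normalizes per group, then applies a per-channel scale/shift predicted
    from the conditioning vector (the 'adaptive' part).
    """
    C, H, W = x.shape
    g = x.reshape(num_groups, C // num_groups, H, W)
    mean = g.mean(axis=(1, 2, 3), keepdims=True)
    var = g.var(axis=(1, 2, 3), keepdims=True)
    x_norm = ((g - mean) / np.sqrt(var + eps)).reshape(C, H, W)
    scale = W_scale @ cond  # (C,) per-channel scale from the condition
    shift = W_shift @ cond  # (C,) per-channel shift from the condition
    return (1.0 + scale)[:, None, None] * x_norm + shift[:, None, None]

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4, 4))
cond = rng.normal(size=(16,))
out = adaptive_group_norm(x, cond, rng.normal(size=(8, 16)) * 0.1,
                          rng.normal(size=(8, 16)) * 0.1)
print(out.shape)  # (8, 4, 4)
```

With zero projection weights this reduces to plain group norm, which is why the `1 + scale` parameterization is a common choice: the conditioning starts as an identity modulation.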
-
Has anyone tried integrating OpenGL with PyKinect2 to get a 3D view? Is it possible, given that PyKinect2 doesn't provide point cloud data?
avpai updated 3 months ago
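Even without a point-cloud API, you can back-project the depth frame into a point cloud yourself using the pinhole camera model. The intrinsics below are placeholders — substitute your sensor's calibrated values (for the Kinect v2 depth camera they are roughly fx = fy = 365, cx = 256, cy = 212):

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """depth_m: (H, W) depth in meters. Returns (N, 3) XYZ points for all
    valid (non-zero-depth) pixels, using the pinhole back-projection
    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    H, W = depth_m.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Usage sketch with a tiny fake depth frame:
depth = np.zeros((4, 4))
depth[2, 3] = 2.0  # one valid pixel, 2 m away
cloud = depth_to_point_cloud(depth, fx=365.0, fy=365.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (1, 3)
```

The resulting (N, 3) array can then be uploaded as an OpenGL vertex buffer (e.g. via PyOpenGL) and rendered as points for the 3D view.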
-
Hi Amini, thanks for sharing this surprising and very useful project. I am very interested in the data-driven simulation of the RGB camera, so I read your paper 'Learning Robust Control Policies for End-to…
-
Hi,
I am trying to run this code on the underwater images released in your dataset to obtain the depth map. For many images, I am getting patch-like artefacts in the transmission map. I am attachi…
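One common cause of such blockiness — assuming the transmission map here is derived from a dark-channel-style patch minimum, which is an assumption about this particular code — is that the minimum over each k×k patch is piecewise constant, so the raw transmission t = 1 − ω·dark_channel looks blocky until it is refined with an edge-preserving filter (guided filter / soft matting). A sketch of the patch-minimum step that produces the artefact:

```python
import numpy as np

def dark_channel(img, patch=15):
    """img: (H, W, 3) float image in [0, 1]. Returns the (H, W) dark channel:
    the minimum over colour channels and over each patch x patch window.
    The patch-wise min is what makes the raw map piecewise constant (blocky)."""
    H, W, _ = img.shape
    mins = img.min(axis=2)  # per-pixel minimum over colour channels
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# One dark pixel darkens every patch that contains it -> a constant block:
img = np.full((8, 8, 3), 0.5)
img[4, 4] = 0.1
dc = dark_channel(img, patch=5)
```

If the code you are running skips the refinement step (or its guided-filter radius is too small), re-enabling or strengthening that step usually removes the patches.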
-
Hi,
I am trying to calibrate the color and depth streams so I can use depth thresholding on the color data. When I call `device_->setDepthColorSyncEnabled(true)`, it doesn't seem to affect the depth or c…
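Worth noting: depth/color *sync* typically only aligns frame timestamps; spatial registration between the two streams is a separate step. A sketch of registering one depth pixel into the color image via the pinhole model — all intrinsics and the depth-to-color extrinsics (R, t) below are placeholders for your device's calibration values:

```python
import numpy as np

def register_depth_pixel(u_d, v_d, z, K_depth, K_color, R, t):
    """Map depth pixel (u_d, v_d) with depth z (meters) to color-image
    coordinates: back-project, transform by the extrinsics, re-project."""
    fx, fy, cx, cy = K_depth
    # Back-project to a 3D point in the depth camera frame
    p = np.array([(u_d - cx) * z / fx, (v_d - cy) * z / fy, z])
    # Transform into the color camera frame
    q = R @ p + t
    fx_c, fy_c, cx_c, cy_c = K_color
    # Project into the color image
    return fx_c * q[0] / q[2] + cx_c, fy_c * q[1] / q[2] + cy_c

# Sanity check: identity extrinsics + equal intrinsics -> pixel maps to itself
u_c, v_c = register_depth_pixel(100, 80, 1.5,
                                K_depth=(365.0, 365.0, 256.0, 212.0),
                                K_color=(365.0, 365.0, 256.0, 212.0),
                                R=np.eye(3), t=np.zeros(3))
print(u_c, v_c)  # -> 100.0 80.0
```

Once depth is resampled into the color frame this way, the thresholding itself is a simple mask, e.g. `mask = (depth_reg > near_m) & (depth_reg < far_m)`.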
-
**Describe the bug**
I was trying to convert data captured with Polycam, using InterfacePolycam pointed at the keyframes folder and expecting a .mvs file as output (i.e. `InterfacePolycam…
-
The following:
```python
from guidance import substring
b = ("Hello " + substring("foobar baz " * 100)).serialize()
print(len(b))
```
throws a stack overflow exception:
```
Traceback (most …
-
On the KITTI depth dataset site, should we also download the projected raw LiDAR scans, or will the annotated depth map dataset link be enough for evaluating this project?
-
Hi,
1. I am mapping with a camera and a 3D LiDAR. I want to generate the depth image from my 3D LiDAR, since the depth image from my camera is crooked and isn't reliable.
I have used the argument …
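For the LiDAR-to-depth-image step, a minimal sketch, assuming you have the LiDAR-to-camera extrinsic transform T (4×4) and the camera intrinsic matrix K (3×3) — both matrices below are placeholders for your own calibration:

```python
import numpy as np

def lidar_to_depth_image(points, T, K, H, W):
    """points: (N, 3) LiDAR points. Returns an (H, W) depth image in meters
    (0 where no point projects); the nearest point wins on collisions."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T @ pts_h.T).T[:, :3]          # points in the camera frame
    cam = cam[cam[:, 2] > 0]              # keep points in front of the camera
    uvw = (K @ cam.T).T                   # perspective projection
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    z = cam[:, 2]
    depth = np.zeros((H, W))
    # iterate far-to-near so the nearest point overwrites farther ones
    for i in np.argsort(-z):
        if 0 <= u[i] < W and 0 <= v[i] < H:
            depth[v[i], u[i]] = z[i]
    return depth

# Usage sketch: one point 5 m ahead on the optical axis lands at (cx, cy)
K = np.array([[100.0, 0, 32.0], [0, 100.0, 24.0], [0, 0, 1.0]])
depth = lidar_to_depth_image(np.array([[0.0, 0.0, 5.0]]), np.eye(4), K, H=48, W=64)
print(depth[24, 32])  # -> 5.0
```

Because a spinning LiDAR is much sparser than a camera, the resulting image will have many zero pixels; depth-completion or simple dilation is often applied afterwards.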