-
Thank you so much for your awesome work!
The paper mentions being able to generate scale-consistent depth estimates for video, but the depths I generate for a single frame using the code you provided…
-
Hi,
Nice work! Most of the datasets worked fine. Following your advice, I tried testing the NeRF dataset with initial poses, setting TSDF_scale to 0.1. Unfortunately, I ran in…
-
### 🔘 Request type
General feedback
### 🔘 Description
Hello,
I am playing with the cli of the tool using command
`WINDOW_BACKEND=headless depthflow dolly input -i 1.jpg main --quality 8…
-
Hi nagadomi! Any chance we can get this option for those who use other methods for the final conversion? Not sure if it will give even a slight boost to speed, but the storage savings and less drive u…
-
A question: under NTU60's cross-subject (CS) and cross-view (CV) splits, and NTU120's CS split, I can reproduce results close to those in the paper, but not for NTU120's cross-setup split. Could you share the hyperparameters used for the cross-setup setting?
We ran NTU120 with the CV configuration and got results close to the reported cross-setup numbers; was the NTU cross-setup result in the paper actually obtained with the CV configuration?
Alternatively, could you provide the code for the NTU120 cross-setup experimental setting?
Many thanks!
-
Basically, what I've been trying to do is convert a 2D video into a 3D anaglyph video.
1. I render a video clip to a PNG sequence using Adobe Media Encoder
2. I run the PNG sequence using Mi…
-
I'm trying to train a NeRF on the lego bulldozer scene, and although the RGB video comes out as expected, the depth video is just black. I'm using torch==1.8.1 and torchvision==0.9.1 with cuda-10.2. Any i…
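An all-black depth video is often just a rendering-range issue rather than a training failure: raw NeRF depth is in scene units (e.g. 2–6 for the synthetic lego scene), so writing it directly as 8-bit frames clips everything to black. A minimal sketch of per-sequence normalization before encoding (the function name and clipping choices here are illustrative assumptions, not the repository's actual code):

```python
import numpy as np

def depth_to_frame(depth: np.ndarray) -> np.ndarray:
    """Normalize a raw depth map (scene units) to an 8-bit grayscale frame.

    Values outside [0, 1] render as black/white when written directly,
    so we rescale to the finite min/max of the map first.
    """
    finite = depth[np.isfinite(depth)]
    if finite.size == 0:
        return np.zeros(depth.shape, dtype=np.uint8)
    d_min, d_max = float(finite.min()), float(finite.max())
    if d_max - d_min < 1e-8:
        # Constant depth: nothing to normalize, return black.
        return np.zeros(depth.shape, dtype=np.uint8)
    norm = (depth - d_min) / (d_max - d_min)
    return (np.clip(norm, 0.0, 1.0) * 255.0).astype(np.uint8)
```

For a consistent-looking video, one would normalize with a single (d_min, d_max) computed over the whole sequence rather than per frame.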
-
Hi, thanks for the great work. I'm trying to recover the point cloud sequences of the original videos. However, I find that I cannot align the point clouds using the annotation (*.pkl) files a…
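Misaligned point clouds usually come down to conventions: whether the stored pose is camera-to-world or world-to-camera, and how the intrinsics are laid out. A minimal back-projection sketch under the common pinhole assumptions (the `T_cam_to_world` naming and the convention choices are assumptions here, not this dataset's documented format):

```python
import numpy as np

def depth_to_world_points(depth: np.ndarray, K: np.ndarray,
                          T_cam_to_world: np.ndarray) -> np.ndarray:
    """Back-project a metric depth map to 3D points in world coordinates.

    depth:          (H, W) depth in meters
    K:              (3, 3) pinhole intrinsics
    T_cam_to_world: (4, 4) camera-to-world pose (invert it first if the
                    annotation actually stores world-to-camera)
    Returns an (N, 3) array of world-space points.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.reshape(-1)
    # Homogeneous pixel coords scaled by depth: (3, N)
    pix = np.stack([u.reshape(-1) * z, v.reshape(-1) * z, z], axis=0)
    cam = np.linalg.inv(K) @ pix                       # camera-frame points
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    return (T_cam_to_world @ cam_h)[:3].T              # world-frame points
```

If frames still drift apart, the first thing to try is swapping `T_cam_to_world` for its inverse, since both directions appear in pose annotations in the wild.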
-
When I try to use --frame-dup, I get:
```
Error Video encoding (v2.42.2)
D:\Eseguibili\Media\StaxRip\Apps\Encoders\x265\x265.exe --crf 16 --preset slow --output-depth 10 --level-idc 4.1 --no-hi…
-
Hello ARKit team,
I'm using an iOS app that uses ARKit as a backend for scanning scenes, returning RGB video and depth maps along with camera poses and intrinsics. As you know, the depth generate…