facebookresearch / vggsfm

VGGSfM: Visual Geometry Grounded Deep Structure From Motion

How to get dense point cloud #51

Closed LaFeuilleMorte closed 3 days ago

LaFeuilleMorte commented 3 weeks ago

Hi, I've reconstructed a point cloud using your method. My input was 188 images extracted from a video, but I got an extremely sparse point cloud: only 6,226 points, versus around 80,000 with COLMAP, especially in textureless areas (walls, floors, and ceilings). My command was:

python demo.py SCENE_DIR=examples/living_room_sparse shared_camera=True camera_type=SIMPLE_PINHOLE query_method=aliked query_frame_num=2 max_query_pts=512

Is there anything I can do to make the point cloud denser? Looking forward to your suggestions!

jytime commented 3 weeks ago

Hi,

The simplest way is to increase query_frame_num and max_query_pts. You can first try query_frame_num=10 or so.
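
For example, starting from the command above, a bumped-up run might look like the following (the max_query_pts value here is an illustrative guess, not a tested recommendation):

python demo.py SCENE_DIR=examples/living_room_sparse shared_camera=True camera_type=SIMPLE_PINHOLE query_method=aliked query_frame_num=10 max_query_pts=2048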

LaFeuilleMorte commented 3 weeks ago

OK, I'll try. Thank you very much!

LaFeuilleMorte commented 3 weeks ago

I increased query_frame_num from 2 to 10, but an OOM error happened. In your README, I see that query_frame_num shouldn't affect memory consumption, right? BTW, what is the dense depth option for? Can I make my point cloud denser with that?

jytime commented 3 weeks ago

The dense depth option predicts monocular depths and aligns them with our sparse points, resulting in a dense point cloud (with a resolution of NxHxW, i.e., one point per pixel per frame). However, this point cloud may be noisy. You might want to give it a try.
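
For intuition, that alignment step can be thought of as fitting a scale and shift so the monocular depth agrees with the sparse SfM depths where they overlap. A minimal NumPy sketch of the idea follows; this is not VGGSfM's actual implementation, and the function and variable names are illustrative:

import numpy as np

def align_mono_depth(mono_depth, sparse_depth, valid_mask):
    # Fit scale s and shift t so that s * mono_depth + t ~= sparse_depth
    # at the pixels where a triangulated sparse point exists.
    d = mono_depth[valid_mask]
    g = sparse_depth[valid_mask]
    A = np.stack([d, np.ones_like(d)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    # Apply the fit everywhere to get one depth value per pixel.
    return s * mono_depth + t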

I see. Regarding the OOM problem, query_frame_num in the current version should not lead to higher memory usage. I will check whether something is wrong there.

jytime commented 2 weeks ago

Hi @LaFeuilleMorte ,

By the way, are your frames ordered? If so, you can now try our video runner:

python video_demo.py SCENE_DIR=/PATH/TO/YOUR/VIDEO/FOLDER

This should give you a considerably denser point cloud without huge GPU memory consumption.

jytime commented 2 weeks ago

Hi @LaFeuilleMorte ,

I have solved the problem: when query_frame_num is too high, we now split the tracks into chunks so there is no OOM any more. You need to pull the latest commit.

More specifically, two parts have been changed:

https://github.com/facebookresearch/vggsfm/blob/00ddb1aac78dea124d7d4994a14b1278f3e3eb08/vggsfm/utils/triangulation.py#L703-L715

and

https://github.com/facebookresearch/vggsfm/blob/00ddb1aac78dea124d7d4994a14b1278f3e3eb08/vggsfm/runners/runner.py#L966-L972

The related hyperparameters are https://github.com/facebookresearch/vggsfm/blob/00ddb1aac78dea124d7d4994a14b1278f3e3eb08/vggsfm/runners/runner.py#L894 and https://github.com/facebookresearch/vggsfm/blob/00ddb1aac78dea124d7d4994a14b1278f3e3eb08/vggsfm/utils/triangulation.py#L712

Their defaults are tuned for a 40 GB GPU. If you are using a GPU with less memory, please decrease these two numbers accordingly.
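
The underlying pattern is simple: instead of processing all tracks at once, handle them in bounded-size slices and concatenate the results. A rough, generic sketch of that pattern (the helper name is illustrative; the real logic lives in the linked triangulation.py and runner.py code):

import torch

def run_in_chunks(fn, tracks, max_points_num):
    # tracks: tensor of shape (num_frames, num_tracks, ...); fn maps a chunk
    # to a result with the same track dimension. Capping the chunk size caps
    # the peak GPU memory of each call.
    results = []
    for start in range(0, tracks.shape[1], max_points_num):
        results.append(fn(tracks[:, start:start + max_points_num]))
    return torch.cat(results, dim=1)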

Idea-in-Dream commented 2 weeks ago

Is there a limit on the number of images? What is the maximum number of images that can be used with 32 GB of GPU memory? Thank you.

I used 500 pictures and hit an OOM with:

python video_demo.py SCENE_DIR=/PATH/TO/YOUR/VIDEO/FOLDER

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 31.74 GiB total capacity; 15.53 GiB already allocated; 6.60 GiB free; 24.51 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

jytime commented 2 weeks ago

For your case, please set max_points_num=100000 and max_tri_points_num=400000. If this still does not work, reduce the numbers further.
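
Assuming these are passed as overrides in the same style as the other flags in this thread, the invocation would look something like:

python video_demo.py SCENE_DIR=/PATH/TO/YOUR/VIDEO/FOLDER max_points_num=100000 max_tri_points_num=400000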

jytime commented 3 days ago

Hi @LaFeuilleMorte ,

For your case, I have further provided a flag called extra_pt_pixel_interval, which can generate a much denser point cloud without any extra computation burden. More details can be found in the new README.
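
Presumably the flag is passed like the other overrides in this thread; the value below is purely illustrative, so check the README for sensible settings:

python demo.py SCENE_DIR=examples/living_room_sparse extra_pt_pixel_interval=4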

Closing this issue now.