graphdeco-inria / hierarchical-3d-gaussians

Official implementation of the SIGGRAPH 2024 paper "A Hierarchical 3D Gaussian Representation for Real-Time Rendering of Very Large Datasets"

Weird result #61

Open HungNgoCT opened 2 weeks ago

HungNgoCT commented 2 weeks ago

Hi,

Thank you for the method and for open-sourcing it.

I tried to repeat your steps on my own data; the results are shown below.

Can anyone help me understand this? I am also sharing a link to my data here: https://drive.google.com/file/d/14UgqhZp6FQFLKmOZVv-DV98Q79CoeQNv/view?usp=sharing

Thank you


https://github.com/user-attachments/assets/c5ebf1ca-463e-4fdb-994f-903450c4fb1e

ameuleman commented 2 weeks ago

Hi, did you use the same calibration to optimize 3DGS and H3DGS? Could you please visualize the COLMAP results? Can you snap to the closest camera in the viewer? Also, if the images are ordered and captured by the same camera, they should all be in the same subfolder.
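The subfolder advice can be sketched as a small helper that moves all top-level images into a single per-camera subfolder. The function name, the `cam0` folder name, and the layout are assumptions for illustration, not the repository's actual preprocessing API:

```python
import os
import shutil
import tempfile

def group_into_camera_subfolder(images_dir, camera_name="cam0"):
    """Move all files at the top level of images_dir into one per-camera
    subfolder (names and layout are assumptions, not the repo's API)."""
    target = os.path.join(images_dir, camera_name)
    os.makedirs(target, exist_ok=True)
    for name in sorted(os.listdir(images_dir)):
        src = os.path.join(images_dir, name)
        if os.path.isfile(src):  # skip the subfolder itself
            shutil.move(src, os.path.join(target, name))
    return target

# Demo on a throwaway directory with empty stand-in image files.
root = tempfile.mkdtemp()
for i in range(3):
    open(os.path.join(root, f"{i:04d}.jpg"), "w").close()
target = group_into_camera_subfolder(root)
print(sorted(os.listdir(target)))  # ['0000.jpg', '0001.jpg', '0002.jpg']
```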

HungNgoCT commented 2 weeks ago

Hi @ameuleman, thanks for the reply. I checked, and the COLMAP results seem correct (I changed the code to use exhaustive matching and the standard mapper). COLMAP failed when using the hierarchical mapper. The image below shows the COLMAP result in the aligned folder.

HungNgoCT commented 2 weeks ago

Thank you @ameuleman,

The problem was that my dataset has fewer than 100 images, while the default minimum number of images per chunk is 100, so no chunks could be created.

Even so, the result is not as good as with conventional 3DGS. Do you have any advice?

https://github.com/user-attachments/assets/3183fc5e-8fcb-4505-9df3-df642e719bf4
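The chunking failure described above can be illustrated with a minimal sketch: when the image count is below the minimum chunk size, no chunks are produced and the pipeline has nothing to train. The function name and defaults here are assumptions for illustration, not the repository's actual chunking code:

```python
def make_chunks(n_images, min_chunk=100, chunk_size=100):
    """Return (start, end) index ranges per chunk, or an empty list when
    there are too few images (illustrative only; not the repo's logic)."""
    if n_images < min_chunk:
        return []
    return [(s, min(s + chunk_size, n_images))
            for s in range(0, n_images, chunk_size)]

print(make_chunks(80))                              # [] -> no chunks created
print(make_chunks(80, min_chunk=50, chunk_size=50)) # [(0, 50), (50, 80)]
```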

ameuleman commented 2 weeks ago

Our method is tuned (learning rates and parameters) for larger scenes with more supervising images, so some degradation can be expected on the small scenes that 3DGS is tuned for. Running only the single-chunk optimization should work better (the images/ and depths/ paths should be set appropriately):

python train_single.py -s ${CHUNK_DIR} --model_path ${OUTPUT_DIR} -d depths --skip_scale_big_gauss
submodules/gaussianhierarchy/build/GaussianHierarchyCreator ${OUTPUT_DIR}/point_cloud/iteration_30000/point_cloud.ply ${CHUNK_DIR}  ${OUTPUT_DIR} 
python train_post.py -s ${CHUNK_DIR} --model_path ${OUTPUT_DIR} --hierarchy ${OUTPUT_DIR}/hierarchy.hier --iterations 15000 --feature_lr 0.0005 --opacity_lr 0.01 --scaling_lr 0.001
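The three steps above can be chained from a small Python driver that stops at the first failure. This is a convenience sketch, not part of the repository; the argument lists mirror the commands in the comment, and the relative paths assume you run from the repository root:

```python
import os
import subprocess

def build_single_chunk_commands(chunk_dir, output_dir):
    """Return the three commands above as argument lists for subprocess."""
    ply = os.path.join(output_dir, "point_cloud", "iteration_30000",
                       "point_cloud.ply")
    return [
        ["python", "train_single.py", "-s", chunk_dir,
         "--model_path", output_dir, "-d", "depths", "--skip_scale_big_gauss"],
        ["submodules/gaussianhierarchy/build/GaussianHierarchyCreator",
         ply, chunk_dir, output_dir],
        ["python", "train_post.py", "-s", chunk_dir,
         "--model_path", output_dir,
         "--hierarchy", os.path.join(output_dir, "hierarchy.hier"),
         "--iterations", "15000", "--feature_lr", "0.0005",
         "--opacity_lr", "0.01", "--scaling_lr", "0.001"],
    ]

cmds = build_single_chunk_commands("data/chunk", "output/scene")
print(len(cmds))  # 3
# To execute for real:
#   for cmd in cmds:
#       subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
```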

HungNgoCT commented 3 days ago

Hi @ameuleman ,

Thank you for your further guidance.

I tried following your guidance on single-chunk training. However, it often produces "long Gaussians", as shown in the image below, while these do not appear with standard 3DGS. Could you suggest possible reasons, or which parameters I should adjust to reduce this? Thanks.

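One way to check for such elongated Gaussians is to inspect the per-axis scales stored in the trained point cloud (`scale_0`..`scale_2`, kept in log space in 3DGS-style .ply files) and flag splats whose longest axis dwarfs the shortest. A minimal NumPy sketch with synthetic values standing in for real data; the 10x threshold is an arbitrary choice, not a recommendation from the authors:

```python
import numpy as np

# Synthetic log-scales standing in for scale_0..scale_2 read from a .ply.
log_scales = np.array([
    [-4.0, -4.1, -3.9],   # roughly isotropic splat
    [-1.0, -5.0, -5.0],   # highly elongated ("long Gaussian")
])

scales = np.exp(log_scales)                      # back to linear scale
anisotropy = scales.max(axis=1) / scales.min(axis=1)
long_mask = anisotropy > 10.0                    # arbitrary cutoff
print(long_mask)  # [False  True]
```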

Linkersem commented 3 days ago

Hi, I have run into the same problem. The quality of H3DGS is relatively poor compared to 3DGS, with more spiky artifacts, the "long Gaussians" you mentioned.

ameuleman commented 3 days ago

Hi, were they run with the same calibration and the same SfM points? Would you mind sharing the calibrated data for this scene?

HungNgoCT commented 2 days ago


Hi @ameuleman ,

I used the same steps as in the guidance, only revising the source code to use exhaustive matching and setting the minimum number of chunk images to 50 instead of 100:

1) Run generate_colmap
2) Run generate_chunks
3) Run generate_depth
4) Run full_train

All steps completed without any errors.
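The four steps above can be sketched as a small driver that runs them in order and stops at the first failure. The script paths are placeholders guessed from the step names, and real invocations need dataset arguments, so treat this purely as an assumption-laden sketch:

```python
import subprocess
import sys

# Placeholder script paths inferred from the step names above;
# verify against your checkout before running for real.
STEPS = [
    "generate_colmap.py",
    "generate_chunks.py",
    "generate_depth.py",
    "full_train.py",
]

def run_pipeline(steps, dry_run=True):
    """Run each step in order, stopping at the first failure.
    With dry_run=True, only report what would run."""
    for script in steps:
        if dry_run:
            print(f"would run: python {script}")
            continue
        result = subprocess.run([sys.executable, script])
        if result.returncode != 0:
            print(f"step failed: {script}")
            return False
    return True

run_pipeline(STEPS)  # dry run: lists the four steps without executing them
```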

The data, camera_calibration, and output are here: https://drive.google.com/file/d/1oUrUfDkRKBm67ty1HMUj066yP_2EfA1U/view?usp=sharing