graphdeco-inria / hierarchical-3d-gaussians

Official implementation of the SIGGRAPH 2024 paper "A Hierarchical 3D Gaussian Representation for Real-Time Rendering of Very Large Datasets"

Difference pose between gt and render images #43

Closed Gaaaavin closed 2 months ago

Gaaaavin commented 2 months ago

I was able to train the model smoothly with my own data, but I ran into a problem when I tried to render images from the trained model. I ran the following command:

python render_hierarchy.py -s ${DATASET_DIR} --model_path ${DATASET_DIR}/output --hierarchy ${DATASET_DIR}/output/merged.hier --out_dir ${DATASET_DIR}/output/renders --scaffold_file ${DATASET_DIR}/output/scaffold/point_cloud/iteration_30000

This is the same as the command at the bottom of the README, except without the --eval flag. I omitted it because I didn't use --eval during training and don't have a test.txt file.

However, I found that the rendered images and the ground-truth images have different poses. For example, the two images below are both the first image in my data. [rendered image] [ground-truth image]

Please let me know what code I should modify.

Gaaaavin commented 2 months ago

It turns out the problem was caused by using the poses from before reorientation (i.e. from the rectified folder instead of the aligned folder). To run hierarchical rendering for the whole scene, you need images and sparse folders directly under DATASET_DIR, which don't exist if you follow the data preparation procedure. My solution is to create symbolic links to those folders. From your DATASET_DIR, run:

ln -s camera_calibration/aligned/sparse/ sparse
ln -s camera_calibration/rectified/images/ images
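For anyone scripting this, here is a small sketch of the same fix as a reusable function that skips links that already exist. It assumes the default layout produced by the preprocessing scripts (camera_calibration/aligned and camera_calibration/rectified); adjust the paths if your layout differs.

```shell
#!/bin/sh
# Sketch: create the symlinks the whole-scene renderer expects inside a
# dataset directory, skipping any that already exist. The layout below
# (camera_calibration/{aligned,rectified}) is an assumption based on the
# default preprocessing output.
link_dataset() {
  dir="$1"
  # Link targets are relative to the link's own directory (DATASET_DIR).
  [ -e "$dir/sparse" ] || ln -s camera_calibration/aligned/sparse "$dir/sparse"
  [ -e "$dir/images" ] || ln -s camera_calibration/rectified/images "$dir/images"
}
```

Rerunning the function is safe: the `[ -e ... ]` guards make it a no-op once the links (or real folders) are in place.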

This should solve the problem.