merriaux opened 5 months ago
Thanks for your issue! Your modifications are all correct, but unfortunately we made additional changes to our COLMAP, and that part is not currently open source, which may be the reason for the failures.
It is possible to use only Waymo's original camera poses and LiDAR point clouds, at the cost of some reconstruction quality.
For a better user experience, we will upload some of the processed data later!
best
Hi @merriaux, we are excited to share that we have released our preprocessed dataset, feel free to try it!
best
Hi @LightwheelAI,
thanks for sharing.
In the colmap/sparse/0 folder there are several point cloud versions (they are not the same for every preprocessed sequence you shared). I suppose I have to use the latest version (the _clean or highest-id file)?
I started by retraining 2094681306939952000 using points3D2.bin, on all frames [0, 197]. In the final RGB output, the static background seems to be well reconstructed, but the dynamic objects don't look nearly as good as in your results.
And if we look closer, the object/background separation seems to fail too:
Objects: https://github.com/LightwheelAI/street-gaussians-ns/assets/16850602/1e3d22cc-f939-4f2f-b9df-6f692da7b783
Background: https://github.com/LightwheelAI/street-gaussians-ns/assets/16850602/292afcb0-9a55-4b05-bb83-4275c9710064
Outputs: https://github.com/LightwheelAI/street-gaussians-ns/assets/16850602/116a3c85-27e0-4d41-b572-9d32562433cd
Metrics are here:

```
'results': {'psnr': 32.550968170166016, 'psnr_std': 3.979510545730591, 'ssim': 0.9497784972190857,
'ssim_std': 0.01306102890521288, 'lpips': 0.1528734266757965, 'lpips_std': 0.024015061557292938,
'num_rays_per_sec': 14777026.0, 'num_rays_per_sec_std': 4633082.0, 'fps': 6.012787342071533,
'fps_std': 1.8852059841156006}}
```
From the README, I should reach PSNR > 35. I also tried sequence 10588771936253546636_2300_000_2320_000, with similar results.
Is there any training config in the repo that is not quite right? What could cause the poor reconstruction and separation of dynamic objects? Thanks!
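As a quick sanity check on the numbers above, here is a minimal sketch comparing the reported metrics against the README's PSNR target (the dict below is transcribed by hand from the eval output above, with values rounded):

```python
# Eval metrics transcribed from the run above (rounded); not produced by this script.
results = {"psnr": 32.551, "ssim": 0.9498, "lpips": 0.1529, "fps": 6.013}

# PSNR target from the repo README.
target_psnr = 35.0
shortfall = target_psnr - results["psnr"]
print(f"PSNR is {shortfall:.3f} dB below the README target")
```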
Hi, thanks for trying.
points3D_clean.bin is the point cloud after we cleaned the floaters from the SfM result. points3D.bin has also been cleaned, with different parameters, so it is the same as in our experiments.
We believe this may be an effect of the frames chosen; in our experiments we used the following frame ranges in each scene, the same as in the paper. You can set them via frame_select in sgn_dataparser.
10448102132863604198 [0,85]
2094681306939952000 [20,115]
8398516118967750070 [0,160]
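As a rough illustration of what the ranges above mean, here is a minimal sketch of inclusive [start, end] frame selection (this assumes frame_select is an inclusive index pair, which may not match the dataparser's exact semantics):

```python
# Minimal sketch: keep only frames inside an inclusive [start, end] range,
# analogous to what a frame_select=[20, 115] setting would do.
def select_frames(frame_indices, frame_select):
    start, end = frame_select
    return [i for i in frame_indices if start <= i <= end]

all_frames = list(range(198))               # sequence with frames 0..197
kept = select_frames(all_frames, (20, 115))
print(len(kept))                            # 96 frames kept
```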
Hello, excellent work. When testing the code, the initialized point cloud does have multiple versions as well, and I don't seem to find points3D_withlidar.txt in the data you processed. BTW, there is a small error in the export step: I think it should be bash scripts/shells/export.sh path/to/your/output/, i.e. the parent directory of config.yaml.
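In other words, the script wants the run directory that contains config.yaml, not the file itself. A hedged sketch (the outputs path below is made up for illustration; your actual layout depends on the training run):

```shell
# Illustrative path only -- substitute your own training output directory.
CONFIG=outputs/2094681306939952000/sgn/2024-06-01_120000/config.yaml
RUN_DIR=$(dirname "$CONFIG")   # strip config.yaml to get its parent directory
echo "$RUN_DIR"
# then: bash scripts/shells/export.sh "$RUN_DIR"
```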
thanks @LightwheelAI, I retrained these 3 sequences; the metrics are:
10448102132863604198 [0,85]
2094681306939952000 [20,115]
8398516118967750070 [0,160]
I don't manage to reproduce your results, could you confirm that? Any idea of what is going wrong? Is anyone else seeing the same issue?
thanks
Thanks for trying. We downloaded the code from this repo directly and ran the 2094681306939952000 [20,115] scenario, but we could not reproduce your problem; here is the result of our experiment:
```
{
  "psnr": 35.27250289916992,
  "psnr_std": 1.7390128374099731,
  "ssim": 0.9604260325431824,
  "ssim_std": 0.005675625056028366,
  "lpips": 0.1401970386505127,
  "lpips_std": 0.0072755273431539536,
  "num_rays_per_sec": 31165180.0,
  "num_rays_per_sec_std": 10825163.0,
  "fps": 12.681144714355469,
  "fps_std": 4.404770374298096
}
```
We are seeing a similar issue with the dynamic-object reconstruction quality on our side, which we will try to optimise in a subsequent update!
Hi @LightwheelAI,
Many thanks for your answer and new tests. My bad, I don't understand what happened: I just retrained sequence 2094681306939952000 [20,115] again, and this time I got similar metrics for both initialisation point clouds:
With points3D2 init:
With points3D1 init:
@merriaux Hi, may I ask how you separate and visualize those dynamic objects in each image? Is there a related script?
Hi, thanks for your work. In order to understand the data preprocessing, I inspected the point cloud at several steps, and it looks like the COLMAP mapper does not produce a good reconstruction for every sequence. For example, on 10588771936253546636_2300_000_2320_000:
the point cloud from the Waymo LiDAR looks OK:
but in the COLMAP mapper output, there appears to be a kink (a right turn about the yaw axis) in the middle of the reconstruction:
and after colmap_comparer, the fit between the two merged point clouds is not good at all. It is difficult to see without 3D visualization, but the mapper point cloud is rolled approximately 45° to the right. Maybe I made a mistake in my COLMAP modifications: https://github.com/LightwheelAI/street-gaussians-ns/issues/1#issuecomment-2133347625
Of course, with an initialisation like this, training does not go well. I have also tried 2094681306939952000_2972_300_2992_300 (one of the sequences listed in the README), and the mapper reconstruction simply failed.
Do you have a list of Waymo sequences that you have already validated with your data processing pipeline? Did you ever hit fitting issues with model_comparer? I am having difficulty reproducing.
thanks for your help
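For debugging this kind of mismatch, one way to quantify the rotation between the LiDAR and mapper point clouds is a rigid Kabsch alignment over matched points. A minimal sketch (this is not the repo's colmap_comparer, which presumably also estimates scale, and it assumes you already have point correspondences):

```python
import numpy as np

def kabsch(P, Q):
    """Rigid (R, t) minimizing ||(R @ p + t) - q|| over matched Nx3 points P, Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: recover a known 45-degree yaw rotation plus translation.
theta = np.deg2rad(45.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
P = np.random.default_rng(0).normal(size=(100, 3))
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])

R, t = kabsch(P, Q)
angle = np.degrees(np.arccos((np.trace(R) - 1.0) / 2.0))  # rotation magnitude
print(round(angle, 3))  # ~45.0 for this synthetic example
```

A residual rotation near 45° between the two aligned clouds would confirm the roll seen in the visualization.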