DekuLiuTesla / CityGaussian

CityGaussian Series for High-quality Large-Scale Scene Reconstruction with Gaussians
https://dekuliutesla.github.io/CityGaussianV2

Failed to save 30000 Gaussians point_cloud.ply #56

Open xuncpp opened 1 week ago

xuncpp commented 1 week ago

Thank you for your excellent work. I followed your detailed instructions, but training failed at the "train coarse global gaussian model" step. The dataset I used is residence. The failure happens because the point cloud at iteration 30,000 cannot be saved. The specific error output is as follows:

```
[ITER 30000] Evaluating train: L1 0.05433260723948479 PSNR 22.245019912719727 [11/11 19:52:55]

[ITER 30000] Saving Gaussians [11/11 19:52:56]
Traceback (most recent call last):
  File "D:\project\CityGaussian\train_large.py", line 311, in <module>
    training(lp, op, pp, args.test_iterations, args.save_iterations, args.refilter_iterations, args.checkpoint_iterations, args.start_checkpoint, args.max_cache_num, args.debug_from)
  File "D:\project\CityGaussian\train_large.py", line 146, in training
    scene.save(iteration, dataset)
  File "D:\project\CityGaussian\scene\__init__.py", line 248, in save
    self.gaussians.save_ply(os.path.join(point_cloud_path, "point_cloud.ply"))
  File "D:\project\CityGaussian\scene\gaussian_model.py", line 219, in save_ply
    elements[:] = list(map(tuple, attributes))
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
MemoryError
```

The error message points to the `save_ply` function, at the statement `elements[:] = list(map(tuple, attributes))`. I tried different `max_cache_num` values (256 and 32) as suggested, but both failed. Saving at iteration 7,000 succeeded, though. I hope you can answer my question, thank you!
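
If it helps, one possible workaround is to avoid materializing the intermediate list of Python tuples that the failing line builds. Below is a minimal sketch, not the repository's actual fix: it assumes (as the traceback suggests) that `attributes` is an (N, K) float array and `elements` is a plyfile-style structured array with K named fields; the helper name is hypothetical.

```python
import numpy as np

def fill_elements_columnwise(elements: np.ndarray, attributes: np.ndarray) -> None:
    """Copy an (N, K) float array into an N-row structured array with K
    named fields, column by column, instead of list(map(tuple, ...)).
    This avoids allocating N Python tuples of numpy scalars at once."""
    assert attributes.shape == (len(elements), len(elements.dtype.names))
    for i, name in enumerate(elements.dtype.names):
        elements[name] = attributes[:, i]
```

The column-wise copy stays inside numpy, so peak memory is roughly the structured array plus the source array, rather than an extra Python object per value.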

xuncpp commented 1 week ago

Additional hardware information: the GPU is an RTX 4090 with 24 GB of VRAM, and the machine has 32 GB of system RAM.
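
For what it's worth, a back-of-envelope estimate suggests why 32 GB of RAM might fail at iteration 30,000 yet suffice at 7,000. The sketch below uses hypothetical numbers: the Gaussian count N and attribute count K are assumptions, not values measured from the residence run.

```python
import sys
import numpy as np

N = 20_000_000  # hypothetical Gaussian count at iteration 30,000
K = 62          # hypothetical attributes per Gaussian (xyz, normals, SH, opacity, scale, rotation)

raw_bytes = N * K * 4                       # the float32 attribute array itself
row = tuple(np.zeros(K, dtype=np.float32))  # one row converted to a tuple of numpy scalars
per_row = sys.getsizeof(row) + sum(sys.getsizeof(v) for v in row)

print(f"float32 array:         {raw_bytes / 2**30:5.1f} GiB")
print(f"list(map(tuple, ...)): {N * per_row / 2**30:5.1f} GiB extra, transiently")
```

On this rough model, the temporary tuple list alone can run to tens of GiB, which would explain a MemoryError on a 32 GB machine while the same save succeeds at iteration 7,000 with far fewer Gaussians.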