deguchihiroyuki / E2GS

Problem with renderer while training the framework #1

Open trishagithubreddypalli opened 2 weeks ago

trishagithubreddypalli commented 2 weeks ago

Hello @deguchihiroyuki!

Thank you for open sourcing this nice work.

When I train, I am getting the error below.

Number of points at initialisation : 100000 [23/09 13:28:51]
Training progress:   0%|          | 0/30000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "train_BlurAndEvent.py", line 273, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "train_BlurAndEvent.py", line 111, in training
    render_pkg = Nrender(viewpoint_cam, gaussians, pipe, background, viewpoint_cam.image_name)
  File "/mnt/data/trisha/3dgs/E2GS/gaussian_renderer/__init__.py", line 159, in Nrender
    debug=pipe.debug
TypeError: __new__() got an unexpected keyword argument 'viewmatrix'

As far as I understand, the renderer is slightly modified from the standard 3DGS renderer.

Could you please update the renderer installation details, or help me resolve this error?

Thank you

deguchihiroyuki commented 2 weeks ago

Hi @trishagithubreddypalli, thank you for your interest in my work!

I'm sorry, but I can't tell why the error happens, because I confirmed that train_BlurAndEvent.py works with the code I uploaded on GitHub.

I think 'viewmatrix' is an argument of "class GaussianRasterizationSettings(NamedTuple):" in G:\ssd1\anaconda3\envs\gaussian_splatting\Lib\site-packages\diff_gaussian_rasterization.

So please check that. Also, I use diff_gaussian_rasterization 0.0.0.
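
For reference, a quick way to check which fields the installed rasterizer actually accepts is to print the NamedTuple's fields (a minimal sketch, assuming the diff_gaussian_rasterization package is importable in the training environment):

    from diff_gaussian_rasterization import GaussianRasterizationSettings

    # GaussianRasterizationSettings is a NamedTuple, so _fields lists every
    # keyword its constructor accepts; 'viewmatrix' should appear here if the
    # installed rasterizer matches the one the repository's Nrender() expects.
    print(GaussianRasterizationSettings._fields)

If 'viewmatrix' is missing from the printed fields, reinstalling the rasterizer this repository expects (the author notes diff_gaussian_rasterization 0.0.0 above) should resolve the TypeError.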

trishagithubreddypalli commented 2 weeks ago

Thank you so much for the help. The error is resolved.

trishagithubreddypalli commented 2 weeks ago

When I work with the synthetic dataset, I am able to run render.py after training completes. But with the real-world dataset, training finishes without problems, yet when I run render.py I get the error below.

Looking for config file in /mnt/data/trisha/3dgs/e2gs/E2GS/output/7009da11-3/cfg_args
Config file found: /mnt/data/trisha/3dgs/e2gs/E2GS/output/7009da11-3/cfg_args
Rendering /mnt/data/trisha/3dgs/e2gs/E2GS/output/7009da11-3
Loading trained model at iteration 30000 [26/09 12:20:18]
Traceback (most recent call last):
  File "/mnt/data/trisha/3dgs/e2gs/E2GS/render.py", line 123, in <module>
    render_sets(model.extract(args), args.iteration, pipeline.extract(args), args.skip_train, args.skip_test)
  File "/mnt/data/trisha/3dgs/e2gs/E2GS/render.py", line 87, in render_sets
    scene = Scene(dataset, gaussians, "no_blur_loss", load_iteration=iteration, shuffle=False)
  File "/mnt/data/trisha/3dgs/e2gs/E2GS/scene/__init__.py", line 88, in __init__
    self.cameras_extent = scene_info.nerf_normalization["radius"]
UnboundLocalError: local variable 'scene_info' referenced before assignment
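
For context, this UnboundLocalError usually means the dataset folder was not recognized as either a COLMAP-style or a Blender-style dataset, so no loader branch ever assigned scene_info. Below is a minimal sketch of that pattern with hypothetical loader names; it is not the actual E2GS code, just an illustration of how the variable can end up unbound:

    import os

    def build_scene_info(source_path):
        # Hypothetical loaders standing in for the real COLMAP/Blender readers.
        def load_colmap(path): return {"kind": "colmap", "path": path}
        def load_blender(path): return {"kind": "blender", "path": path}

        if os.path.exists(os.path.join(source_path, "sparse")):
            scene_info = load_colmap(source_path)      # COLMAP-style dataset
        elif os.path.exists(os.path.join(source_path, "transforms_train.json")):
            scene_info = load_blender(source_path)     # Blender/NeRF-synthetic dataset
        # No else branch: if neither layout is found, scene_info is never bound,
        # and the return below raises
        # "UnboundLocalError: local variable 'scene_info' referenced before assignment".
        return scene_info

So one thing worth checking is whether the real-world dataset folder passed as the source path actually contains the layout (for example a sparse/ COLMAP model) that the Scene loader looks for.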

Could you please help me out with this? Where could the mistake be?

Thank you for your time.

trishagithubreddypalli commented 2 weeks ago

Also, when I reconstruct the scene using COLMAP, I get some NaNs in the pointcloud.ply file. Where could the issue be? I would be grateful if you could help me with this. Thank you.
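
Until the root cause in the COLMAP reconstruction is found, one possible workaround is to drop the invalid points before using the file as the initial point cloud. A minimal sketch, assuming the plyfile package and a vertex element with x/y/z fields (file names are placeholders):

    import numpy as np
    from plyfile import PlyData, PlyElement

    ply = PlyData.read("pointcloud.ply")
    verts = ply["vertex"].data  # structured numpy array of vertex attributes

    # Keep only points whose coordinates are finite (no NaN/inf).
    xyz = np.stack([verts["x"], verts["y"], verts["z"]], axis=1)
    mask = np.isfinite(xyz).all(axis=1)
    print(f"dropping {np.count_nonzero(~mask)} invalid points out of {len(verts)}")

    clean = verts[mask]
    PlyData([PlyElement.describe(clean, "vertex")]).write("pointcloud_clean.ply")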

trishagithubreddypalli commented 1 week ago

If possible, could you please share the dataset after applying the EDI model for deblurring?

Thank you!

GopiRajuMatta commented 1 week ago

Yes, I also tried, but the images are not deblurred properly, so COLMAP is not working on them.

Please provide the dataset. It would be much appreciated if you could share a dataset that directly supports Gaussian splatting (including pose information as well as a sparse point cloud).

Thank you so much

deguchihiroyuki commented 1 week ago

@trishagithubreddypalli @GopiRajuMatta, thanks for raising the discussion. I will upload the code to apply the EDI model later.

And I create the initial point cloud in the way described below (a command-line sketch follows the list).

  1. Open the "colmap gui".
  2. Run "automatic reconstruction".
  3. After it finishes, "\dense\0\sparse\points3D.bin" should be created; use it as the initial point cloud.
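
For anyone who prefers to script steps 1-2 instead of using the GUI, COLMAP's automatic reconstructor can be driven from the command line; a minimal sketch (paths are placeholders, and the colmap binary must be on PATH):

    import subprocess

    # Runs COLMAP's automatic reconstruction pipeline; the sparse model mentioned
    # in step 3 should appear under the chosen workspace folder when it finishes.
    subprocess.run(
        [
            "colmap", "automatic_reconstructor",
            "--workspace_path", "workspace",  # output folder (placeholder)
            "--image_path", "images",         # folder with the input images (placeholder)
        ],
        check=True,
    )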

Even if the inputs are E2NeRF's blurry images, I think it would work, so please try it!