SpectacularAI / 3dgs-deblur

[ECCV2024] Gaussian Splatting on the Move: Blur and Rolling Shutter Compensation for Natural Camera Motion
https://spectacularai.github.io/3dgs-deblur/
Apache License 2.0

Custom dataset #4

Closed · MrNeRF closed this 6 months ago

MrNeRF commented 6 months ago

Hey, super exciting work. It is not completely evident to me from the README whether I need to do anything specific with a custom dataset. Without reading the source code, is there a simple way to process some of my own data? Is it possible to use a video, or should I use images?

Thank you,
Janusch

oseiskar commented 6 months ago

Hi! This aspect is still a work in progress in the repo. To test the method on your own data, you need to record it with the Spectacular Rec application (see here), which also records the IMU data, exposure times, and rolling-shutter readout times (Android), as this information is not currently learned automatically from the data.

Unfortunately, the version of the app that records all the necessary data is still in Play/App Store review, so it will probably be available for download by early next week. We'll let you know in this issue and update the instructions once that happens.

In the meantime, the method can be tested with the preprocessed dataset in Zenodo, e.g.,

```bash
# (install as instructed in the README)

# download the preprocessed data
python download_data.py --dataset sai

# list cases
python train.py

# train: choose between baseline and motion_blur (ours), e.g.,
python train.py --preview --case=12
```

MrNeRF commented 6 months ago

Thanks for your answer. I really appreciate it.

I will play with your data. The results look very good, but I am more curious what my own data will look like. That's the only test that counts for me in the end :)

hardikdava commented 6 months ago

@oseiskar Just a curious question: does it work without IMU data, e.g., purely on COLMAP poses?

oseiskar commented 6 months ago

> @oseiskar Just a curious question: does it work without IMU data, e.g., purely on COLMAP poses?

The approach presented in the paper does not work without IMU data or other external linear and angular velocity information. The codebase itself might be extendable to work without it, and we are considering this option for the future.
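For intuition, here is a minimal, hypothetical sketch (not code from this repo) of the kind of per-frame linear and angular velocity information the method consumes, finite-differenced from a timestamped pose trajectory. Estimates like this are noisy at frame rate, which is part of why the paper relies on IMU measurements instead; all names below are illustrative.

```python
# Hypothetical illustration, not part of 3dgs-deblur: estimate per-frame
# linear and angular velocities by finite-differencing a timestamped
# camera pose trajectory (e.g., COLMAP poses with known frame times).
import numpy as np

def finite_difference_velocities(timestamps, positions, rotations):
    """timestamps: (N,), positions: (N, 3), rotations: (N, 3, 3) camera-to-world."""
    lin_vel, ang_vel = [], []
    for i in range(len(timestamps) - 1):
        dt = timestamps[i + 1] - timestamps[i]
        # Linear velocity: first-order difference of the camera centers.
        lin_vel.append((positions[i + 1] - positions[i]) / dt)
        # Angular velocity: axis-angle of the relative rotation divided by dt
        # (assumes the inter-frame rotation stays well below 180 degrees).
        dR = rotations[i].T @ rotations[i + 1]
        cos_angle = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
        angle = np.arccos(cos_angle)
        if angle < 1e-8:
            ang_vel.append(np.zeros(3))
        else:
            axis = np.array([dR[2, 1] - dR[1, 2],
                             dR[0, 2] - dR[2, 0],
                             dR[1, 0] - dR[0, 1]]) / (2.0 * np.sin(angle))
            ang_vel.append(axis * angle / dt)
    return np.array(lin_vel), np.array(ang_vel)
```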

oseiskar commented 6 months ago

@MrNeRF : The Spectacular Rec app versions that support recording the necessary data are now public. See https://github.com/SpectacularAI/3dgs-deblur?tab=readme-ov-file#training-with-custom-data for instructions on how to train with recordings created with that app.

The easiest way to start comparing is running (after installation):

```bash
./scripts/render_and_train_comparison_sai_custom_mb.sh /PATH/TO/spectacular-rec-MY_RECORDING.zip
```

This will train on the dataset with and without motion blur compensation and render a video showing the differences. If the input data is not particularly blurry, the expected level of improvement could be summarized as "subtle but clearly noticeable".

xiyufeng2 commented 6 months ago

I have a similar question: with data processed into the COLMAP format, how do I train a scene model?

MrNeRF commented 6 months ago

@oseiskar Thanks. I will try it out for sure. I also plan to read the paper during this upcoming week, as I have started doing some paper threads. Anyway, I'm excited to try it with my own data.

MrNeRF commented 6 months ago

```
🎥 Rendering 🎥 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/6431 (0.0%) ? -:--:-- 0:00:00
Traceback (most recent call last):
  File "/home/paja/projects/3dgs-deblur/nerfstudio/nerfstudio/scripts/render.py", line 276, in _render_trajectory_video
    writer.add_image(render_image)
  File "/home/paja/micromamba/envs/nerfstudio/lib/python3.10/site-packages/mediapy/__init__.py", line 1653, in add_image
    if stdin.write(data) != len(data):
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/paja/micromamba/envs/nerfstudio/bin/ns-render", line 8, in <module>
    sys.exit(entrypoint())
  File "/home/paja/projects/3dgs-deblur/nerfstudio/nerfstudio/scripts/render.py", line 908, in entrypoint
    tyro.cli(Commands).main()
  File "/home/paja/projects/3dgs-deblur/nerfstudio/nerfstudio/scripts/render.py", line 492, in main
    _render_trajectory_video(
  File "/home/paja/projects/3dgs-deblur/nerfstudio/nerfstudio/scripts/render.py", line 127, in _render_trajectory_video
    with ExitStack() as stack:
  File "/home/paja/micromamba/envs/nerfstudio/lib/python3.10/contextlib.py", line 576, in __exit__
    raise exc_details[1]
  File "/home/paja/micromamba/envs/nerfstudio/lib/python3.10/contextlib.py", line 561, in __exit__
    if cb(*exc_details):
  File "/home/paja/micromamba/envs/nerfstudio/lib/python3.10/site-packages/mediapy/__init__.py", line 1614, in __exit__
    self.close()
  File "/home/paja/micromamba/envs/nerfstudio/lib/python3.10/site-packages/mediapy/__init__.py", line 1671, in close
    raise RuntimeError(f"Error writing '{self.path}': {s}")
RuntimeError: Error writing 'data/renders/cargo-baseline.mp4': Unrecognized option 'crf'. Error splitting the argument list: Option not found

Traceback (most recent call last):
  File "/home/paja/projects/3dgs-deblur/render_video.py", line 326, in <module>
    process(case, args)
  File "/home/paja/projects/3dgs-deblur/render_video.py", line 276, in process
    subprocess.check_call(render_cmd)
  File "/home/paja/micromamba/envs/nerfstudio/lib/python3.10/subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['ns-render', 'camera-path', '--load-config', 'data/outputs/custom/train_all/cargo/splatfacto/2024-03-25_213634/config.yml', '--camera-path-filename', 'data/outputs/custom/train_all/cargo/splatfacto/2024-03-25_213634/demo_video_camera_path.json', '--video-crf', '21', '--output-path', 'data/renders/cargo-baseline.mp4']' returned non-zero exit status 1.
```

I got this error during the final rendering. I ran this command: `./scripts/render_and_train_comparison_sai_custom_mb.sh cargo`, where `cargo` is my dataset.

MrNeRF commented 6 months ago

[image: frame_00072]

Btw, it seems that I have to rotate the phone by 90 degrees to get a correctly oriented image.

MrNeRF commented 6 months ago

Seems to be an ffmpeg issue on my side.

oseiskar commented 6 months ago

Hmm, good to know. It's not the first time different ffmpeg installations have caused issues; I didn't know that some versions don't support the -crf (video quality) parameter. It could also be that something in your setup causes the video to be rendered in a format that does not use/support it (something other than mp4/H.264).
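As a quick sanity check (my suggestion, not a script in this repo), you can ask the ffmpeg build on your PATH whether its libx264 encoder exposes the crf option; the helper below is hypothetical.

```python
# Hypothetical diagnostic, not part of this repo: check whether the ffmpeg
# binary on PATH was built with libx264, whose private options include -crf
# (the flag the rendering script passes through via --video-crf).
import subprocess

def ffmpeg_supports_crf() -> bool:
    try:
        result = subprocess.run(
            ["ffmpeg", "-hide_banner", "-h", "encoder=libx264"],
            capture_output=True, text=True, check=False,
        )
    except FileNotFoundError:
        return False  # no ffmpeg on PATH at all
    # A libx264-enabled build prints the encoder's option list (including crf);
    # a build without it prints a "not recognized" error instead.
    return "crf" in (result.stdout + result.stderr)

if __name__ == "__main__":
    print("ffmpeg supports -crf:", ffmpeg_supports_crf())
```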

In any case, if you manage to fix it at your end, you should be able to just rerun this last part of the script without re-training: https://github.com/SpectacularAI/3dgs-deblur/blob/fd6579ff38645e7b9af67346aeebd799841ed584/scripts/render_and_train_comparison_sai_custom_mb.sh#L21-L23

... if not, let me know and we'll fix the rendering scripts to work around this problem.

oseiskar commented 6 months ago

> Btw, it seems that I have to rotate the phone by 90 degrees to get a correctly oriented image.

This is also true; there is currently no auto-rotation in these video rendering scripts. Alternatively, you can rotate the final video by 90 degrees, but the comparison effect may look a bit funny in that case.
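If you go the post-rotation route, something like the following should work; this is a sketch using ffmpeg's standard transpose filter, and the file paths are only examples.

```python
# Hypothetical post-processing step, not part of this repo: rotate a
# finished render by 90 degrees with ffmpeg's transpose filter.
import subprocess

subprocess.check_call([
    "ffmpeg",
    "-i", "data/renders/cargo-baseline.mp4",  # example input path
    "-vf", "transpose=1",  # 1 = rotate 90° clockwise, 2 = 90° counter-clockwise
    "-c:a", "copy",        # pass any audio stream through unchanged
    "data/renders/cargo-baseline-rotated.mp4",  # example output path
])
```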

MrNeRF commented 6 months ago

I fixed it with `micromamba install conda-forge::ffmpeg` (you can replace micromamba with conda). But the orientation is also very different :)

https://github.com/SpectacularAI/3dgs-deblur/assets/33876434/5c7b14b8-113c-44be-8534-f8ac2c211fce

oseiskar commented 6 months ago

@MrNeRF Did you modify the rendering script somehow? It originally has a flag called --original_trajectory, which should cause the camera to follow a smoothed version of the recording trajectory. If it's off, the video looks like what you posted above.

https://github.com/SpectacularAI/3dgs-deblur/blob/fd6579ff38645e7b9af67346aeebd799841ed584/scripts/render_and_compile_comparison_video.sh#L8

MrNeRF commented 6 months ago

I got impatient, so I ran the commands by hand and likely missed that. Anyway, I finally got a nice rendering. The quality is top notch 💪. I can upload it later for reference.

MrNeRF commented 6 months ago

For reference: I had to downsample and trim the video, so the compression looks worse than what I actually got out of it. But the deblurring effect is still visible.

https://github.com/SpectacularAI/3dgs-deblur/assets/33876434/fbf88e50-8a55-4d99-8426-deecb8016651

oseiskar commented 4 months ago

https://github.com/SpectacularAI/3dgs-deblur/issues/4#issuecomment-2012428807

Update! The method now also works with pure COLMAP poses, without IMU (see README)