MrNeRF closed this issue 6 months ago.
Hi! This aspect is still a work-in-progress in the repo. To test the method on your own data, you need to record it with the Spectacular Rec application (see here), which also records the IMU data, exposure times and, on Android, rolling-shutter readout times, since this information is not currently learned from the data automatically.
Unfortunately, the version of the app that records all the necessary data is still in Play/App store review so it will probably be available for download by early next week. We'll let you know in this PR and update the instructions once this happens.
In the meantime, the method can be tested with the preprocessed datasets on Zenodo, e.g.,

```shell
# (install as instructed in the README)
# download the preprocessed data
python download_data.py --dataset sai
# list cases
python train.py
# train: choose between baseline and motion_blur (ours), e.g.,
python train.py --preview --case=12
```
Thanks for your answer. I really appreciate it.
I will play with your data. The results are looking very good. But I am more curious about how my own data will turn out. That's the only test that counts for me in the end :)
@oseiskar Just a curious question: does it work without IMU data, e.g., purely on COLMAP poses?
> @oseiskar Just a curious question: does it work without IMU data, e.g., purely on COLMAP poses?
The approach presented in the paper does not work without IMU data or other external linear and angular velocity information. The codebase itself might be extendable to work without it, and we are considering this option in the future.
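For intuition on why the velocity information matters, here is a back-of-the-envelope sketch (not the repo's implementation; the function name and numbers are made up): the IMU's angular velocity combined with the exposure time directly predicts how long a blur streak each frame contains.

```python
import numpy as np

def blur_streak_px(angular_velocity_rad_s, exposure_time_s, focal_length_px):
    """Small-angle approximation of the image-space blur streak (in pixels)
    that camera rotation produces for a distant point:
    streak ~= focal_length * |omega| * t_exposure."""
    rotation_rad = np.linalg.norm(angular_velocity_rad_s) * exposure_time_s
    return focal_length_px * rotation_rad

# A 0.5 rad/s handheld pan with a 10 ms exposure and a 1000 px focal length:
print(blur_streak_px(np.array([0.0, 0.5, 0.0]), 0.010, 1000.0))  # 5.0
```

A 5-pixel streak is clearly visible blur, which is why knowing the per-frame velocities (rather than trying to estimate them from the images) helps the deblurring.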
@MrNeRF : The Spectacular Rec app versions that support recording the necessary data are now public. See https://github.com/SpectacularAI/3dgs-deblur?tab=readme-ov-file#training-with-custom-data for instructions on how to train with recordings created with that app.
The easiest way to start comparing is to run (after installation):

```shell
./scripts/render_and_train_comparison_sai_custom_mb.sh /PATH/TO/spectacular-rec-MY_RECORDING.zip
```
This will train on the dataset both with and without motion blur compensation and render a video showing the differences. If the input data is not particularly blurry, the expected level of improvement could be summarized as "subtle but clearly noticeable".
I have a similar question: with data processed in the COLMAP format, how can I train a scene model?
@oseiskar Thanks. I will try it out for sure. I also plan to read the paper during this upcoming week as I started to do some paper threads. Anyway, excited to try it with my own data.
```
🎥 Rendering 🎥 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/6431 (0.0%) ? -:--:-- 0:00:00
Traceback (most recent call last):
  File "/home/paja/projects/3dgs-deblur/nerfstudio/nerfstudio/scripts/render.py", line 276, in _render_trajectory_video
    writer.add_image(render_image)
  File "/home/paja/micromamba/envs/nerfstudio/lib/python3.10/site-packages/mediapy/__init__.py", line 1653, in add_image
    if stdin.write(data) != len(data):
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/paja/micromamba/envs/nerfstudio/bin/ns-render", line 8, in <module>
Traceback (most recent call last):
  File "/home/paja/projects/3dgs-deblur/render_video.py", line 326, in <module>
```
I got this error during the final rendering. I ran this command: `./scripts/render_and_train_comparison_sai_custom_mb.sh cargo`, where cargo is my dataset.
Btw, it seems that I have to rotate the phone by 90 degrees to get a correctly oriented image.
Seems to be an ffmpeg issue on my side.
Hmm, good to know. Not the first time different ffmpeg installations have caused issues. I didn't know that some versions don't support the `-crf` (video quality) parameter. It could also be that something in your setup causes the video to be rendered in a format that does not use/support it (something other than mp4/H.264).
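If anyone else hits this, here is a quick diagnostic (a hypothetical helper, not part of the repo) to check whether the local ffmpeg build's libx264 encoder knows the `crf` option at all:

```python
import shutil
import subprocess

def ffmpeg_supports_crf():
    """Return True/False depending on whether ffmpeg's libx264 encoder help
    mentions crf, or None when ffmpeg is not on PATH at all."""
    if shutil.which("ffmpeg") is None:
        return None
    result = subprocess.run(
        ["ffmpeg", "-hide_banner", "-h", "encoder=libx264"],
        capture_output=True, text=True,
    )
    # Some minimal builds ship without libx264; the help text then lacks crf.
    return "crf" in (result.stdout + result.stderr)

print(ffmpeg_supports_crf())
```

If this prints `False` or `None`, installing a full ffmpeg build (e.g. the conda-forge one) should fix the rendering step.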
In any case, if you manage to fix it at your end, you should be able to just rerun this last part of the script without re-training: https://github.com/SpectacularAI/3dgs-deblur/blob/fd6579ff38645e7b9af67346aeebd799841ed584/scripts/render_and_train_comparison_sai_custom_mb.sh#L21-L23
... if not, let me know and we'll fix the rendering scripts to work around this problem
> Btw, it seems that I have to rotate the phone by 90 degrees to get a correctly oriented image.
This is also true. There's currently no auto-rotation in these video rendering scripts. Alternatively, you can rotate the final video by 90 degrees, but the comparison effect may look a bit funny in that case.
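For the rotation workaround, something along these lines should work (the filenames are hypothetical; `transpose=1` is ffmpeg's 90-degree clockwise rotation filter):

```python
import shutil
import subprocess

def rotate_90cw_cmd(src, dst):
    """Build an ffmpeg command that rotates a video 90 degrees clockwise
    (transpose=1) and copies the audio stream unchanged."""
    return ["ffmpeg", "-i", src, "-vf", "transpose=1", "-c:a", "copy", dst]

cmd = rotate_90cw_cmd("renders/comparison.mp4", "renders/comparison_rot.mp4")
if shutil.which("ffmpeg"):  # only attempt the conversion if ffmpeg is installed
    subprocess.run(cmd, check=False)
```

Use `transpose=2` instead if the video needs to go counterclockwise.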
I fixed it with `micromamba install conda-forge::ffmpeg` (you can replace micromamba with conda). But the orientation is also very different :)
https://github.com/SpectacularAI/3dgs-deblur/assets/33876434/5c7b14b8-113c-44be-8534-f8ac2c211fce
@MrNeRF Did you modify the rendering script somehow? It originally has a flag called `--original_trajectory`, which should cause the camera to follow a smoothed version of the recording trajectory. If it's off, the video looks like what you posted above.
I got impatient, so I ran the commands by hand. I likely missed that flag. Anyway, I finally got a nice rendering. The quality is top notch 💪. I can upload it later for reference.
For reference: I had to downsample and trim the video, so the compressed version looks worse than the actual output. But the deblurring effect is still visible.
https://github.com/SpectacularAI/3dgs-deblur/assets/33876434/fbf88e50-8a55-4d99-8426-deecb8016651
https://github.com/SpectacularAI/3dgs-deblur/issues/4#issuecomment-2012428807
Update! The method now also works with pure COLMAP poses, without IMU (see README)
Hey, super exciting work! It is not completely evident to me from the README whether I need to do something specific with a custom dataset. Without reading the source code, is there a simple way to process my own data? Can it work with a video, or should I use images?
Thank you Janusch