nerfstudio-project / nerfstudio

A collaboration friendly studio for NeRFs
https://docs.nerf.studio
Apache License 2.0

Render training path #3183

Open MatteoFusconi opened 1 month ago

MatteoFusconi commented 1 month ago

I would like the ability to render a video using the camera poses of the training set. It is probably already possible, but I don't see how to do it without a lot of manual work. I am using splatfacto.

I looked at the documentation and in the issues section but I couldn't find anything helpful. Can someone help me with this?

stanathong commented 1 month ago

Does the video you would like to render consist of the RGB images rendered by your NeRF/splat model? If so, you should be able to do this manually by running two commands:

  1. Render images using the training poses. The command below will give you rendered images (in the folder 'rgb').

ns-render dataset --load-config [path-to-your-trained-model]/config.yml --output-path [path-to-your-output-folder] --image-format png --split train

  2. Use an external tool, e.g. ffmpeg, to compile a video from the rendered images. For ffmpeg, something like this should work.

ffmpeg -framerate 1 -pattern_type glob -i '*.png' -c:v libx264 -pix_fmt yuv420p video.mp4
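
Putting the two steps together, a minimal shell sketch might look like the following. The config and output paths are placeholders you would replace with your own, and the exact subfolder the rendered images land in may vary (the comment above reports an 'rgb' folder):

#!/usr/bin/env bash
# Hypothetical paths -- replace with your own trained-model config and output folder.
CONFIG="outputs/my-scene/splatfacto/2024-01-01_000000/config.yml"
OUT="renders/train_views"

# Step 1: render the training-set views to PNG images.
ns-render dataset --load-config "$CONFIG" --output-path "$OUT" --image-format png --split train

# Step 2: stitch the rendered frames into a video with ffmpeg.
# The rendered RGB images should end up in an 'rgb' subfolder under $OUT;
# adjust the glob below if the layout differs on your setup, and pick a framerate to taste.
ffmpeg -framerate 1 -pattern_type glob -i "$OUT/rgb/*.png" -c:v libx264 -pix_fmt yuv420p video.mp4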

I've never used splats myself, but I believe this should work the same for splatfacto. Hope this helps!

BharathSeshadri commented 1 week ago

Hi @stanathong @MatteoFusconi, the ns-render dataset command seems to work for nerfacto models but throws an error for splatfacto models. Did you encounter the same?

__init__.py, line 608, in _pil_image: raise ValueError(f'Image shape {image.shape} is neither 2D nor 3D.')