avclubvids opened this issue 1 year ago
This might make sense as an option for the ExportAnimatedCamera node, but that node might not like dealing with camera arrays (different intrinsics per camera).
Is this the JSON format required by all the different NeRF tools? https://docs.nerf.studio/en/latest/quickstart/data_conventions.html
Yup! That's the NeRF Studio version; the NVIDIA version is extremely similar, and for the most part the two are interchangeable. Here are some scripts for creating the NVIDIA Instant Neural Graphics Primitives version of the JSON:
https://github.com/NVlabs/instant-ngp/blob/master/scripts/colmap2nerf.py
https://github.com/NVlabs/instant-ngp/blob/master/scripts/record3d2nerf.py
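For reference, a minimal transforms.json in that style looks roughly like the sketch below. The values are placeholders and the exact set of keys varies a bit between Nerfstudio and instant-ngp, so treat it as an illustration rather than a spec:

```python
import json

# Rough shape of a NeRF-style transforms.json (Nerfstudio data conventions /
# what instant-ngp's colmap2nerf.py writes). All values are placeholders.
transforms = {
    # Shared pinhole intrinsics (the format also allows per-frame intrinsics, see later in the thread)
    "fl_x": 1200.0, "fl_y": 1200.0,              # focal lengths in pixels
    "cx": 960.0, "cy": 540.0,                    # principal point
    "w": 1920, "h": 1080,                        # image resolution
    "k1": 0.0, "k2": 0.0, "p1": 0.0, "p2": 0.0,  # distortion (zero if images are undistorted)
    "camera_angle_x": 1.2,                       # horizontal FoV in radians (used by instant-ngp)
    "aabb_scale": 16,                            # instant-ngp-specific scene bound
    "frames": [
        {
            "file_path": "images/frame_0001.png",  # relative to the JSON, or absolute
            # 4x4 camera-to-world matrix in the OpenGL/Blender convention
            # (+X right, +Y up, +Z pointing backwards out of the camera)
            "transform_matrix": [
                [1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0],
            ],
        },
    ],
}

with open("transforms.json", "w") as f:
    json.dump(transforms, f, indent=2)
```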
Hi! I tried this: https://github.com/maximeraafat/BlenderNeRF. First I process with Meshroom, then the add-on in Blender creates the JSON file needed for NVIDIA NeRF. It works great!
@MarcoRos75 Yes, this is how I have been solving NeRFs for a while now; it certainly works, but there is a lot of room for improvement. For example, it does not support multiple camera intrinsics; for that you need to use the Agisoft converter.
Josiah Reeves has made a converter script that goes from Meshroom to the NeRF JSON: https://github.com/joreeves/mr2nerf
This is enough to do what is needed, but it would certainly be better if this workflow ended up inside MR so you could directly generate a NeRF JSON.
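To give a sense of what such a node (or mr2nerf) has to do, here is a rough, untested sketch of the core conversion from Meshroom's cameras.sfm to the NeRF JSON. It assumes the cameras.sfm pose stores a world-to-camera rotation (9 values, assumed row-major) plus a camera center, and that intrinsic field names like pxFocalLength match recent Meshroom versions; verify these conventions against mr2nerf before trusting the output:

```python
import json
import numpy as np

def meshroom_sfm_to_nerf(sfm_path, out_path):
    """Sketch: convert a Meshroom/AliceVision cameras.sfm into a NeRF-style transforms.json.

    Assumptions (check against mr2nerf): 'rotation' is a world-to-camera rotation
    stored row-major, 'center' is the camera center in world space, and the
    intrinsics expose a focal length in pixels.
    """
    with open(sfm_path) as f:
        sfm = json.load(f)

    intrinsics = {i["intrinsicId"]: i for i in sfm["intrinsics"]}
    poses = {p["poseId"]: p["pose"]["transform"] for p in sfm["poses"]}

    frames = []
    for view in sfm["views"]:
        pose = poses.get(view["poseId"])
        if pose is None:
            continue  # view was not reconstructed

        R_w2c = np.array([float(v) for v in pose["rotation"]]).reshape(3, 3)
        center = np.array([float(v) for v in pose["center"]])

        # Camera-to-world in the computer-vision convention (+Y down, +Z forward)
        c2w = np.eye(4)
        c2w[:3, :3] = R_w2c.T
        c2w[:3, 3] = center

        # NeRF tools expect the OpenGL/Blender convention (+Y up, +Z backward),
        # so flip the camera Y and Z axes.
        c2w = c2w @ np.diag([1.0, -1.0, -1.0, 1.0])

        intr = intrinsics[view["intrinsicId"]]
        fl = intr.get("pxFocalLength", intr.get("focalLength"))
        if isinstance(fl, list):  # some Meshroom versions store [fx, fy]
            fl_x, fl_y = float(fl[0]), float(fl[1])
        else:
            fl_x = fl_y = float(fl)

        frames.append({
            "file_path": view["path"],              # absolute source path from Meshroom
            "fl_x": fl_x, "fl_y": fl_y,             # per-frame intrinsics
            "cx": float(intr["width"]) / 2.0,       # crude: ignores the principal point offset
            "cy": float(intr["height"]) / 2.0,
            "w": int(intr["width"]),
            "h": int(intr["height"]),
            "transform_matrix": c2w.tolist(),
        })

    with open(out_path, "w") as f:
        json.dump({"frames": frames}, f, indent=2)
```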
Request: It is currently possible to solve camera intrinsics/extrinsics in MR, import them into something like Blender via .abc, and then convert them into the specific JSON file format required by iNGP, NeRF Studio, Turbo NeRF, etc. It would be great to be able to export this file directly from MR and skip a few extra steps.
Desired solution: A node for exporting solved camera poses to the NeRF JSON format.
Additional Info: NeRFs can work better with undistorted images, so it would be good to have the option to undistort the input images once their distortion has been calculated and point to them in the JSON. The JSON format needs either an absolute path for each image or a path relative to the JSON file; by default this is a folder named "images" next to the JSON. Here is a tool for converting from Agisoft XML to the JSON: https://github.com/joreeves/agi2nerf, one for going from RC to the JSON: https://github.com/joreeves/rc2nerf, and a Blender plugin that can do this conversion: https://github.com/maximeraafat/BlenderNeRF
Many NeRFs are captured using videos, so this could be a nice way to leverage MR's new video support (it would have to export the frames it calculates poses for and point to them in the JSON). There are also NeRFs being captured with camera arrays, so per-image intrinsics would be desirable (the JSON format supports this, but not every NeRF tool does yet).
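To illustrate the per-image-intrinsics point: the JSON format allows each frame entry to carry its own intrinsics instead of relying on the top-level values, which is what a camera-array export would need. A hypothetical two-camera rig could look like this (placeholder numbers, identity poses just for illustration):

```python
# Per-frame intrinsics: each frame carries its own focal length, principal
# point and resolution, so cameras in a rig can differ (placeholder values).
frames = [
    {
        "file_path": "images/camA_0001.png",
        "fl_x": 1400.0, "fl_y": 1400.0, "cx": 960.0, "cy": 540.0, "w": 1920, "h": 1080,
        "transform_matrix": [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
    },
    {
        "file_path": "images/camB_0001.png",
        "fl_x": 800.0, "fl_y": 800.0, "cx": 640.0, "cy": 360.0, "w": 1280, "h": 720,
        "transform_matrix": [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
    },
]
transforms = {"frames": frames}  # no top-level intrinsics needed in this case
```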