Closed poyodiaz closed 1 year ago
Hi @poyodiaz, thanks for your interest in BlenderNeRF!
I'm sorry to hear that you are experiencing issues with the plugin. Could you however provide further details on how you created the dataset? Which BlenderNeRF method did you use? Does the issue only appear once you import the images to NGP, or do the renders from the created dataset already not match the camera views from the Blender scene? Could you perhaps also give some details on what NGP is rendering?
One issue might be that you have selected the wrong test camera, if the issue appears only when rendering test views in NGP. Feel free to share any screenshots and/or the Blender file directly here; that would be very helpful for debugging your issue :)
@maximeraafat thanks for answering.
I am creating the dataset from a point cloud inside of Blender with an animated camera. I rendered every frame and exported the dataset with both the SOP method and the TTC method. I have the trained camera path and the images inside Instant NGP now, but the field of view doesn't match. Am I misunderstanding something?
By the way, the rendered images match the Blender file perfectly; it is the exported camera inside Instant NGP that is not matching.
Hi @poyodiaz, thanks for the clarification! Do the renderings in NGP look like a zoomed in or out version of the field of view in the original images? And are your training images in a square aspect ratio or a non-uniform aspect ratio (such as 16:9, or anything else)?
I just noticed after some quick experiments that for non-uniform aspect ratios, the Blender renders seem to not perfectly match the field of view of the NGP renders. I believe this is because of an error in the intrinsics matrix calculation in my plugin, specifically in the computation of `s_u` and `s_v` in the `blender_nerf_operator.py` file. I got the code for the intrinsics from here.
I am not yet sure what the fix is, but will investigate this further as soon as possible. Feel free to experiment by yourself as well and share any results or progress in this issue!
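For context, here is a minimal pinhole-intrinsics sketch (hypothetical helper names, not the actual code in `blender_nerf_operator.py`): `s_u` and `s_v` are the pixel densities along the image axes, i.e. how many pixels fit into one millimetre of sensor, and they scale the focal length from millimetres into pixels.

```python
# Minimal pinhole intrinsics sketch; hypothetical helper, not the exact
# code in blender_nerf_operator.py.
def intrinsics_matrix(f_mm, sensor_w_mm, sensor_h_mm, res_x, res_y):
    s_u = res_x / sensor_w_mm        # pixel density along x (pixels per mm)
    s_v = res_y / sensor_h_mm        # pixel density along y (pixels per mm)
    alpha_u = f_mm * s_u             # focal length in pixels, x axis
    alpha_v = f_mm * s_v             # focal length in pixels, y axis
    u_0, v_0 = res_x / 2, res_y / 2  # principal point at the image centre
    return [[alpha_u, 0.0, u_0],
            [0.0, alpha_v, v_0],
            [0.0, 0.0, 1.0]]
```

Note that in Blender only one of the two sensor dimensions is authoritative (which one depends on the camera's sensor fit setting), so an aspect-ratio-dependent choice has to be made somewhere, and that choice is exactly where a non-uniform aspect ratio can break the match with NGP.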
I've been investigating the issue and have noticed that changing `size_x >= size_y` to `size_x <= size_y` in line 38 of the `blender_nerf_operator.py` file fixes the issue for a toy example I've set up. This might be due to some coordinate changes from Blender to NGP, but I still need to verify this on different examples before committing the edit.
@poyodiaz, would you be so kind and test this change in order to verify whether the NGP renders then match the Blender renders in your scene?
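To illustrate why flipping that comparison matters (a toy reconstruction with hypothetical names, not the plugin's actual code): under sensor fit `auto`, the comparison decides whether the sensor width or the sensor height anchors the focal length in pixels, and for a non-square render the two choices yield different fields of view.

```python
import math

# Toy reconstruction (hypothetical names, not the plugin's actual code):
# the size_x / size_y comparison picks which sensor dimension anchors the
# focal length in pixels under sensor fit auto.
def horizontal_fov_deg(f_mm, sensor_w_mm=36.0, sensor_h_mm=24.0,
                       size_x=1920, size_y=1080, flipped=False):
    fit_horizontal = (size_x <= size_y) if flipped else (size_x >= size_y)
    if fit_horizontal:
        focal_px = f_mm * size_x / sensor_w_mm  # width anchors the FOV
    else:
        focal_px = f_mm * size_y / sensor_h_mm  # height anchors the FOV
    # horizontal field of view, as NGP would reconstruct it from camera_angle_x
    return math.degrees(2 * math.atan(0.5 * size_x / focal_px))

# For a 16:9 render the two branches disagree, which shows up in NGP as a
# zoomed-in or zoomed-out version of the Blender render
fov_original = horizontal_fov_deg(50)               # width-anchored
fov_flipped = horizontal_fov_deg(50, flipped=True)  # height-anchored
```

With a 50 mm lens on a 36 x 24 mm sensor at 1920 x 1080, the width-anchored FOV is roughly 39.6 degrees while the height-anchored one is roughly 46.2 degrees, so a mismatch between the branch Blender uses and the branch the exporter uses looks exactly like a zoom.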
Hi, thanks for your time.
I am rendering at a 16:9 aspect ratio and the result is a zoomed out version of the original file. I'll give the change in the operator you mention a try.
I suppose you mean line 37?
Yes, sorry, I meant line 37. I am not yet 100% sure that this is the issue though. I will therefore perform a few experiments over the next few days and then commit a fix to the main code branch.
Hey! I tried the fix and it worked, thanks a lot! Also, I think that if you set Blender's camera sensor fit to "Vertical", the issue doesn't happen.
Hi @poyodiaz, sorry for the delayed reply!
I'm glad to hear it worked out. It turns out though that the code bug actually affects many more camera settings. I've been performing exhaustive experiments over the last few days to verify what exactly the issue is, and I will soon release an updated version of the add-on containing a fix (probably as BlenderNeRF v5), such that the Blender renders match the NGP renders with any type of camera (with sensor fit `auto`, `vertical` or `horizontal`) and any aspect ratio.
If you notice any other issue, please feel free to let me know! :)
@maximeraafat that's very good news, because your software is very good for integrating renders with Instant NGP!
Thanks a lot @poyodiaz, that means a lot to me and I'm glad that this tool is helpful :)
Hi, first I want to thank you for your work on this one. When I open the dataset, the camera looks different from the original camera position in Blender. I don't know if it is a zoom or a scale issue? Do you have any suggestions? Thanks in advance.