Closed: SergioMOrozco closed this issue 1 year ago.
UPDATE:
So I managed to make some progress. In my pybullet simulation environment, I have the following coordinate system:
Which I interpret as:
And I have the following coordinate system in Instant Ngp in the camera frame:
Which I interpret as (it's hard to tell from the image, but Z points straight out of the camera):

- X = Right
- Y = Down
- Z = Forward
Therefore the conversion should be (X,Y,Z)->(Z,-X,-Y).
During the simulation, I capture multiple images and their corresponding transformation matrices. Here is a quick video that demonstrates the motion:
Screencast from 05-26-2023 12:15:02 PM.webm
So, it goes from left to right in a semi-circular motion. I am able to replicate this motion like so:
But the rotation is completely off. It seems odd to me that the translation would be correct, but the rotation wouldn't be. Here is the matrix conversion code I used to get the above image:
```python
# Remap the rows of the 4x4 matrix so that (X, Y, Z) -> (Z, -X, -Y).
tmp_x = transformation_matrix[0, 0:4].copy()
tmp_y = transformation_matrix[1, 0:4].copy()
tmp_z = transformation_matrix[2, 0:4].copy()

transformation_matrix[0, 0:4] = tmp_z
transformation_matrix[1, 0:4] = -tmp_x
transformation_matrix[2, 0:4] = -tmp_y
```
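The same row swap can be expressed as a single left-multiplication by a permutation/sign-flip matrix, which makes the intended basis change explicit and avoids the temporaries. A minimal sketch, assuming `transformation_matrix` is a 4x4 NumPy array (the helper name `convert` is my own, not from the repo):

```python
import numpy as np

# Matrix whose rows pick out (Z, -X, -Y) from (X, Y, Z); the last row
# leaves the homogeneous part of the 4x4 transform untouched.
P = np.array([
    [0.0,  0.0, 1.0, 0.0],
    [-1.0, 0.0, 0.0, 0.0],
    [0.0, -1.0, 0.0, 0.0],
    [0.0,  0.0, 0.0, 1.0],
])

def convert(transformation_matrix: np.ndarray) -> np.ndarray:
    """Apply the axis remap (X, Y, Z) -> (Z, -X, -Y) as one multiply."""
    return P @ transformation_matrix
```

Left-multiplying by `P` reproduces exactly the in-place row swaps above, but returns a new matrix instead of mutating the input.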
I don't know why this is proving so difficult, but I'm really struggling with it.
We might be struggling with what boils down to the same problem. https://github.com/NVlabs/instant-ngp/issues/1364#issuecomment-1584306093
My theory is that the camera poses produced by colmap2nerf.py are flipped/rotated, and that they get transformed again when loaded into instant-ngp... I haven't been able to find out what that conversion is and can't seem to find the answer anywhere. I'm not well versed in C++, so I'm not confident about finding the answer by reading through the .cu files...
edit: there are functions in nerf_loader.h for transforming back and forth between the ngp and nerf coordinate systems
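For reference, my reading of the NeRF-to-NGP pose conversion in nerf_loader.h (`nerf_matrix_to_ngp`) is that it cycles the axes, flips the sign of two rotation columns, and rescales/offsets the translation. A Python sketch of that understanding follows; the default `scale=0.33` and `offset=0.5` match what I believe instant-ngp's defaults to be, but please verify against your copy of the source rather than trusting this snippet:

```python
import numpy as np

def nerf_matrix_to_ngp(pose: np.ndarray, scale: float = 0.33, offset: float = 0.5) -> np.ndarray:
    """Sketch of instant-ngp's nerf_matrix_to_ngp (verify against nerf_loader.h):
    cycle the rows (x, y, z) -> (y, z, x), negate the second and third rotation
    columns, and rescale/offset the translation. `pose` is a 3x4 or 4x4
    camera-to-world matrix; the result is 3x4."""
    return np.array([
        [pose[1, 0], -pose[1, 1], -pose[1, 2], pose[1, 3] * scale + offset],
        [pose[2, 0], -pose[2, 1], -pose[2, 2], pose[2, 3] * scale + offset],
        [pose[0, 0], -pose[0, 1], -pose[0, 2], pose[0, 3] * scale + offset],
    ])
```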
@Student204161 I actually found the issue in my code, which was a simple missing matrix inversion:

```python
np.linalg.inv(transformation_matrix)
```
along with some scaling. You can take a look at my whole GitHub repo; hopefully it helps: https://github.com/FezTheImmigrant/pybullet_playground. The file worth looking into is kuka.py.
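For anyone hitting the same wall: PyBullet's `computeViewMatrix` returns a world-to-camera (view) matrix as a flat, column-major list of 16 floats, whereas transforms.json expects camera-to-world matrices, hence the inversion. A minimal sketch of that step (the function name is mine; the scaling mentioned above is setup-specific and omitted here):

```python
import numpy as np

def pybullet_view_to_c2w(view_matrix_flat) -> np.ndarray:
    """Convert PyBullet's flattened, column-major view matrix (world-to-camera)
    into a camera-to-world matrix."""
    # Reshape the 16 floats, transposing because PyBullet is column-major.
    w2c = np.asarray(view_matrix_flat, dtype=float).reshape(4, 4).T
    # Inverting world-to-camera yields camera-to-world.
    return np.linalg.inv(w2c)
```

Any axis-convention fix (like the row remap discussed earlier in this thread) would then be applied on top of the resulting camera-to-world matrix.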
Thanks for the reply. I'm glad you worked out your problem; I fortunately solved mine a few days ago. I ended up spending almost a month on it, but I'm just glad it works now.
I had some code errors as well, but in the end I was able to convert the raw volumes from the nerf coordinate system into the ngp coordinate system (and so I was able to do the reprojection).
Have a good day
I am using a PyBullet simulation environment to generate a sequence of images, along with camera poses, as input to Instant-NGP in the form of the transforms.json file. However, I cannot figure out how to convert between the PyBullet coordinate system and the Instant-NGP coordinate system.
The PyBullet camera I am using has the following coordinate system:
In my mind, this should be a conversion between the two camera coordinate systems, where the Instant Ngp coordinate system uses:
Therefore, something like this should suffice:
Admittedly, my understanding of linear algebra is comically terrible. If somebody could at least guide me in the right direction, I would really appreciate it. I've read through every open/closed issue that discusses the weird coordinate system that Instant Ngp uses, but I haven't found any answers to be satisfactory.
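For anyone else working through the same question: once each camera convention is written down as signed axis labels, the conversion between them is just a fixed change-of-basis matrix. A small generic sketch (the helper and its spec format are my own invention for illustration, not part of any library; the example spec matches the `(X, Y, Z) -> (Z, -X, -Y)` mapping discussed earlier in this thread):

```python
import numpy as np

# World-frame unit vectors for each axis label.
AXES = {
    "x": np.array([1.0, 0.0, 0.0]),
    "y": np.array([0.0, 1.0, 0.0]),
    "z": np.array([0.0, 0.0, 1.0]),
}

def basis(spec: str) -> np.ndarray:
    """Turn a spec like '+z,-x,-y' into a 3x3 matrix whose rows are the
    (signed) source axes that become the target's X, Y, Z axes."""
    rows = []
    for token in spec.split(","):
        sign = -1.0 if token[0] == "-" else 1.0
        rows.append(sign * AXES[token.strip("+-")])
    return np.array(rows)

# Example: the conversion (X, Y, Z) -> (Z, -X, -Y) from this thread.
C = basis("+z,-x,-y")
```

Multiplying a vector (or the rotation part of a pose) by `C` then re-expresses it in the target convention; whether you apply it on the left or the right depends on whether the pose is camera-to-world or world-to-camera.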