Closed — AyonRRahman closed this issue 11 months ago
Hi, it's the other way around, as can be seen in the code: https://github.com/ClementPinard/SfmLearner-Pytorch/blob/master/inverse_warp.py#L183
If we have a point expressed in the target frame's coordinate system, we can compute its coordinates in the reference frame's system, and thus know where it would be projected onto the reference image plane (and therefore what color to sample).
So it's not the pose per se, but rather the inverse pose.
If you are only interested in training depth, this doesn't matter, but if you are also interested in odometry, you will need to invert the predicted transformations. See how it's done in the pose evaluation: https://github.com/ClementPinard/SfmLearner-Pytorch/blob/master/test_pose.py#L75
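To make the inversion concrete, here is a minimal sketch (not the repo's code) of inverting a rigid transform, assuming the network's 6-DoF prediction has already been converted to a rotation matrix `R` and translation vector `t`; the `invert_pose` name is illustrative. If `x' = Rx + t`, then the inverse is `x = Rᵀx' - Rᵀt`:

```python
import numpy as np

def invert_pose(R, t):
    """Invert the rigid transform x' = R x + t.

    The inverse maps x' back to x via x = R.T x' - R.T t,
    using the fact that a rotation matrix satisfies R^-1 = R.T.
    """
    R_inv = R.T
    t_inv = -R.T @ t
    return R_inv, t_inv

# Example: rotation of 30 degrees about the z-axis plus a translation
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 2.0, 3.0])

R_inv, t_inv = invert_pose(R, t)

# Round trip: transforming a point forward then backward recovers it
x = np.array([0.5, -1.0, 2.0])
x_fwd = R @ x + t
x_back = R_inv @ x_fwd + t_inv
```

So if the network outputs the target-to-reference transform, applying `invert_pose` gives you the reference-to-target motion needed for odometry evaluation.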
Thank you. Understood.
I'm confused about what the predicted poses represent. Do they map points from the reference image frame to the target image frame? That is, say we get T = [R, t] from one of the predictions and apply it to a 3D point x as Rx + t. Are we going from the reference image frame to the target image frame, or the other way around, from target to reference? Thank you.