ethz-asl / ethzasl_ptam

Modified version of Parallel Tracking and Mapping (PTAM)
http://wiki.ros.org/ethzasl_ptam

Transformation matrix and units of pose_world #86

Open yvtheja opened 8 years ago

yvtheja commented 8 years ago

Hi,

1> I am trying to draw the trajectory of the camera on the frame taken by the camera at the initial position. When the odometry starts, I store the initial pose and compute its transformation matrix (T1). As the camera moves, I compute a transformation matrix (T2) for each pose and form T1*T2.inverse() to express the pose T2 with respect to T1. I then push the x, y, z of the resulting translation through the camera intrinsics to get the image point of pose T2 in the frame taken at T1, multiplying fx and cx by my image width and fy and cy by my image height. However, the pixel values I get look essentially random and are not accurate; when I plot them on the initial frame, they all pile up in the top-right corner. Am I missing a step in this procedure?
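Roughly, my procedure looks like the sketch below (Python with numpy, placeholder values; whether the poses are camera-to-world or world-to-camera is exactly the part I am unsure about, so the composition of T1 and T2 here is an assumption):

```python
import numpy as np

# Placeholder image size and normalized intrinsics scaled by it (made-up values).
width, height = 640, 480
fx, fy = 0.9 * width, 1.2 * height   # normalized focal lengths * image size
cx, cy = 0.5 * width, 0.5 * height   # normalized principal point * image size
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project_pose2_into_frame1(T1_wc, T2_wc, K):
    """Project the position of camera 2 into the image taken at camera 1.

    Assumes T1_wc and T2_wc are 4x4 camera-to-world transforms. Under that
    assumption, the transform from camera-2 to camera-1 coordinates is
    inv(T1_wc) @ T2_wc; if the poses were world-to-camera instead, it would
    be T1 @ inv(T2), which is what I am currently using.
    """
    T_c1_c2 = np.linalg.inv(T1_wc) @ T2_wc
    p_c1 = T_c1_c2[:3, 3]            # camera-2 origin expressed in camera-1 frame
    if p_c1[2] <= 0.0:               # behind camera 1, cannot be projected
        return None
    u, v, w = K @ p_c1
    return u / w, v / w              # pixel coordinates in the initial frame
```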

2> I am a little confused about pose_world. Is it in metres, or in some other unit?

My project is training a terrain estimator. A rover or an astronaut carries a monocular camera. At every step, the camera can see part of the region in front of it, but not the terrain directly below or immediately ahead. So I have to recover the terrain exactly under the foot or wheel from previous frames taken by the camera, and we can annotate that terrain patch with the readings we get from the accelerometer (or, for now, from the astronaut via push buttons). Both the terrain patch and its reading are then fed to a regression model. So, for every world point on the camera trajectory, we can get approximate world points of the astronaut's feet or the rover's wheels. We check whether a particular world point is visible in the previous frames and, if it is, we collect the terrain patches from all frames in which that world point is visible.
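The visibility check I have in mind is roughly this (again only a sketch, under the same assumed camera-to-world pose convention; the bounds test is my own addition):

```python
def world_point_visible_in_frame(p_w, T_wc, K, width, height):
    """Check whether a 3D world point projects inside a given earlier frame.

    p_w is the approximate foot/wheel contact point in world coordinates and
    T_wc is the (assumed camera-to-world) pose of that earlier frame. Returns
    pixel coordinates if the point lies in front of the camera and inside the
    image bounds, otherwise None.
    """
    p_c = (np.linalg.inv(T_wc) @ np.append(p_w, 1.0))[:3]   # point in camera frame
    if p_c[2] <= 0.0:
        return None
    u, v, w = K @ p_c
    u, v = u / w, v / w
    if 0.0 <= u < width and 0.0 <= v < height:
        return u, v
    return None
```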

Thank you.

stephanweiss commented 8 years ago

Hi Vishnu,

My guess is that you have a transformation wrong somewhere. I think the only way to find the bug is to go step by step through your transformations and verify each step with sample data...
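For example, something along these lines (just a sketch with made-up sample poses) would tell you which pose convention your matrices actually use:

```python
import numpy as np

# Sample data: camera 1 at the world origin, camera 2 shifted 1 m along the
# world x axis, both with identity rotation.
T1 = np.eye(4)
T2 = np.eye(4)
T2[:3, 3] = [1.0, 0.0, 0.0]

# If T1, T2 are camera-to-world poses, camera 2 expressed in camera-1
# coordinates is inv(T1) @ T2, and its translation should be (1, 0, 0):
print(np.linalg.inv(T1) @ T2)

# If they are world-to-camera poses, the same relative transform is
# T1 @ inv(T2). Whichever composition reproduces the expected sample
# result is the convention your poses follow.
print(T1 @ np.linalg.inv(T2))
```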

Best,

Stephan

