tensorflow / models

Models and examples built with TensorFlow

Objectmotion and Egomotion #6465

Closed knightzazh closed 5 years ago

knightzazh commented 5 years ago

Describe the problem

Good morning. I have been using inference.py to get the egomotion and the object motion (for object motion I added some modifications to the code), and I got egomotion.txt and objectmotion.txt. But I don't understand what the values in these files mean. I have already read the code, so I know the values come from the egomotion network and the objectmotion network, but I still can't interpret them. Does anyone know what these values mean? I wonder whether they are the x, y, z motion, and also the velocity along x, y, z.

I attached the example of egomotion and objectmotion output

Egomotion : ego

Objectmotion : obj

ymodak commented 5 years ago

@aneliaangelova Can you please take a look? Thanks!

aneliaangelova commented 5 years ago

Each line contains a frame number and two transforms of 6 numbers each. The first 6 numbers are the transformation from frame 1 to frame 2, and the next 6 are from frame 2 to frame 3. Within each 6-number transform, I assume the order is x, y, z, then rot_around_x, rot_around_y, rot_around_z. You can check out the code and confirm that too; it is easy to track. Thanks!
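
For reference, here is a minimal sketch of parsing one such line, assuming whitespace-separated values and exactly the column order described above (hypothetical; confirm against inference.py before relying on it):

```python
# Hypothetical parser for one line of egomotion.txt / objectmotion.txt,
# assuming a frame id followed by two 6-DoF transforms
# (tx, ty, tz, rot_x, rot_y, rot_z). Verify the column order in the code.
import numpy as np

def parse_motion_line(line):
    values = line.split()
    frame_id = values[0]
    numbers = np.array([float(v) for v in values[1:]])
    assert numbers.size == 12, "expected two 6-DoF transforms per line"
    transform_1_2 = numbers[:6]  # frame 1 -> 2
    transform_2_3 = numbers[6:]  # frame 2 -> 3
    return frame_id, transform_1_2, transform_2_3
```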

knightzazh commented 5 years ago

> Each line contains a frame number and two transforms of 6 numbers each. The first 6 numbers are the transformation from frame 1 to frame 2, and the next 6 are from frame 2 to frame 3. Within each 6-number transform, I assume the order is x, y, z, then rot_around_x, rot_around_y, rot_around_z. You can check out the code and confirm that too; it is easy to track. Thanks!

Oh I see, so if I want to visualize the movement I must process the x, y, z, rot_x, rot_y, and rot_z values, right? Thank you for the information.

na-hri commented 4 years ago

@aneliaangelova Great work! I have a few questions:

  1. Are the intrinsic parameters of the camera required for getting object motion at inference time?
  2. To get object motion at inference time, is it enough to use the code starting here in the model.py file, or is further modification required?
  3. The network predicts 2 transforms (of 6 numbers each) for a given triplet of images; for example, for an input triplet of frames 1_2_3 you get transformations for frame 1_2 and frame 2_3. We can also get a transformation for frame 2_3 from the input triplet of frames 2_3_4. Ideally, these two transformations for frame 2_3 should be the same, right?
  4. For converting the 6-DoF output vector to a trajectory w.r.t. the first frame of a sequence, for either object motion or egomotion, what exact transformation should be used? I'm asking because I've seen differences in the way people do this, so I just want to confirm directly with you.

aneliaangelova commented 4 years ago

@knightzazh Correct. That gives the delta movement, and you can apply the rotation and translation to the current position/orientation to obtain the new one.
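
As an illustration of "apply the rotation and translation to the current position/orientation", here is a minimal numpy sketch; the Euler-angle order and the right-multiplication convention are assumptions to verify against the repository's code:

```python
# Minimal sketch: turn one predicted delta (tx, ty, tz, rx, ry, rz) into a
# 4x4 homogeneous transform and apply it to the current pose. The Euler
# convention used here (Rz @ Ry @ Rx) is an assumption, not confirmed.
import numpy as np

def euler_to_rotation_matrix(rx, ry, rz):
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_delta(pose, delta):
    """pose: 4x4 homogeneous matrix; delta: (tx, ty, tz, rx, ry, rz)."""
    tx, ty, tz, rx, ry, rz = delta
    T = np.eye(4)
    T[:3, :3] = euler_to_rotation_matrix(rx, ry, rz)
    T[:3, 3] = [tx, ty, tz]
    return pose @ T  # right-multiply: the delta is expressed in the current frame
```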

aneliaangelova commented 4 years ago

@na-hri

  1. No, you don't need the intrinsics for that. However, the egomotion or motion xyz outputs are in "network" units, i.e. they are not tied to a global metric system.
  3. Correct, they should be the same, though I expect a small amount of noise if you swap their places like that.

For questions 2 and 4: @VincentCa

VincentCa commented 4 years ago

Hi @na-hri,

  2. Our example inference.py code currently instantiates the model with handle_motion set to False (the default), which means the object motion network is not included as part of the graph. You can simply change this flag and use model.inference_objectmotion(...) to run object motion inference. Make sure your input data is prepared as it would be for networks trained with motion masks, i.e. use_masks=True; otherwise your results will be far off/unreasonable. Keep in mind that if you want object motion estimates, you do need to provide aligned segmentation masks for the data, just as you would for training.
  4. You could simply build homogeneous transformation matrices from the transform vector estimates and apply them sequentially to an initial pose (see the sketch after this comment). However, the other approaches you may have encountered are designed to mitigate the numerical issues that cause drift, and I would recommend using them instead. Hope this helps! Vincent
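
A minimal sketch of the straightforward variant Vincent describes (chain one homogeneous matrix per 6-DoF vector), reusing the hypothetical apply_delta() from the earlier sketch; the assumption that each vector maps frame i to frame i+1 in the camera's own frame should be verified against the code:

```python
# Sketch: accumulate the per-frame 6-DoF deltas into a trajectory relative
# to the first frame. apply_delta() is the hypothetical helper defined in
# the earlier sketch; frame conventions remain assumptions.
import numpy as np

def build_trajectory(deltas):
    """deltas: iterable of (tx, ty, tz, rx, ry, rz) vectors."""
    pose = np.eye(4)                     # pose of frame 0, the reference
    positions = [pose[:3, 3].copy()]
    for delta in deltas:
        pose = apply_delta(pose, delta)  # compose with the next transform
        positions.append(pose[:3, 3].copy())
    return np.stack(positions)           # (N+1, 3) xyz positions to plot
```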

na-hri commented 4 years ago

Hi @VincentCa @aneliaangelova

Thanks for your response!

1) While training the object motion network, your code starting here uses the intrinsic parameters, the estimated egomotion, and depth to modify the input to the object motion network. To get the object motion at test time, I'm assuming we need to follow the same modification steps? But this would require the intrinsic parameters, right?

2) I looked at 2 different repositories for generating the trajectory:

I believe the output vector from the above models has the same format as yours, but the two repositories use different transformations for generating the trajectory. Am I correct? Why is that, and which one would you recommend for generating trajectories from your transformation vector estimates?

na-hri commented 4 years ago

@VincentCa @aneliaangelova Just a gentle reminder. Please let me know. Basically, I'm interested in plotting object and egomotion trajectories using the output vectors from your network.

poornimajd commented 4 years ago

> @VincentCa @aneliaangelova Just a gentle reminder. Please let me know. Basically, I'm interested in plotting object and egomotion trajectories using the output vectors from your network.

@na-hri have you been able to do this? I am also trying to interpret the output for object and ego motion. Any suggestions are greatly appreciated! Thank you

mikibella commented 2 years ago

@aneliaangelova

> @na-hri 1. No, you don't need the intrinsics for that. However, the egomotion or motion xyz outputs are in "network" units, i.e. they are not tied to a global metric system. 3. Correct, they should be the same, though I expect a small amount of noise if you swap their places like that. For questions 2 and 4: @VincentCa

Regarding 1: is there a way to convert from the "network units" to the global metric system? Thank you
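
One common workaround in monocular egomotion work, offered here purely as an assumption rather than something confirmed by the authors, is to fit a single scale factor between the predicted translations and a metric reference such as ground-truth translations:

```python
# Hedged sketch (an assumption, not an answer from the authors): align
# "network unit" translations to metric by a single least-squares scale
# factor fitted against ground-truth translation norms.
import numpy as np

def metric_scale(pred_translations, gt_translations):
    """Both (N, 3); returns the scalar s minimizing ||s * pred - gt||^2 on norms."""
    pred_norms = np.linalg.norm(pred_translations, axis=1)
    gt_norms = np.linalg.norm(gt_translations, axis=1)
    return float(np.dot(pred_norms, gt_norms) / np.dot(pred_norms, pred_norms))

# Usage: metric_positions = metric_scale(pred_t, gt_t) * predicted_positions
```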