Unity-Technologies / com.unity.perception

Perception toolkit for sim2real training and validation in Unity

How to get perception camera's pose #523

Closed FancyChen085 closed 1 year ago

FancyChen085 commented 1 year ago

Hello, this is great work! I would like to get the Perception Camera's pose as ground truth, with corresponding frames, for my camera relocalization training, but I found that I cannot add a Labeling component to the Perception Camera. Thank you for your reply.

StevenBorkman commented 1 year ago

Hi, thanks for reaching out. Extrinsic and intrinsic data are reported for the camera every frame in the Perception output JSON file, including translation, rotation, velocity, acceleration, and the camera intrinsic matrix. Is this data enough for pose, or do you need additional information? I'm sure I can help you with a labeler to capture that data.
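For reference, a minimal Python sketch of pulling those per-frame fields out of a capture record. The exact field names (`captures`, `sensor`, `translation`, `rotation`, `camera_intrinsic`) follow the Perception package's `captures_*.json` schema but may differ between package versions, and the values below are purely illustrative:

```python
import json

# Illustrative capture record; field names assume the Perception
# dataset's captures_*.json schema and may vary by package version.
sample = json.loads("""
{
  "captures": [
    {
      "id": "capture_0",
      "sensor": {
        "sensor_id": "camera",
        "translation": [0.0, 1.5, 0.3],
        "rotation": [0.0, 0.0, 0.0, 1.0],
        "camera_intrinsic": [[1.6, 0.0, 0.0],
                             [0.0, 2.8, 0.0],
                             [0.0, 0.0, 1.0]]
      }
    }
  ]
}
""")

# Each capture carries the camera's pose and intrinsics for that frame.
for capture in sample["captures"]:
    sensor = capture["sensor"]
    print("translation:", sensor["translation"])
    print("rotation (x, y, z, w):", sensor["rotation"])
    print("intrinsic matrix:", sensor["camera_intrinsic"])
```

In practice you would load the real `captures_*.json` files from the dataset output folder instead of the inline string.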

FancyChen085 commented 1 year ago

Hello! Thanks for your reply! I have already found the translation and intrinsic data in the output JSON file, and it is enough for my training, but something seems wrong with the output: when the Perception Camera is fixed on a moving agent, the translation and rotation of the "step 0" frame do not match (or even come close to) the values shown in the Transform field in the UI, although they do match when the agent stays still. Why is there a difference between the moving and non-moving cases, and how can I obtain the correct translation and rotation of a moving Perception Camera? I would appreciate your help. Thank you very much!


FancyChen085 commented 1 year ago

Hi, here is a screenshot to make this easier to understand.


StevenBorkman commented 1 year ago

In the case of a camera attached to a moving entity, the camera's transform should stay fairly constant, but the ego's transform values will change. In our system, we consider anything a camera might be mounted to as an ego, for example the vehicle in an autonomous driving setup. The camera's transform is reported relative to the car (the ego), so it will probably stay constant, while the car's transform is updated every frame.

You can get the ego information from the same captures JSON file, and the camera's world-space pose is the composition of the two transforms: the ego pose applied to the camera's local pose.
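A minimal sketch of that composition in Python, assuming quaternions are stored as (x, y, z, w) as in the captures file, and using hypothetical ego and camera values for illustration:

```python
import math

def quat_mul(q1, q2):
    # Hamilton product of quaternions stored as (x, y, z, w).
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return (
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2,
        w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    )

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q: v' = q * (v, 0) * conj(q).
    qc = (-q[0], -q[1], -q[2], q[3])
    x, y, z, _ = quat_mul(quat_mul(q, (v[0], v[1], v[2], 0.0)), qc)
    return (x, y, z)

def compose_pose(ego_t, ego_q, cam_t, cam_q):
    # World pose of the camera = ego pose applied to the camera's local pose.
    world_t = tuple(e + r for e, r in zip(ego_t, quat_rotate(ego_q, cam_t)))
    world_q = quat_mul(ego_q, cam_q)
    return world_t, world_q

# Hypothetical values: ego at (10, 0, 5), rotated 90 degrees about the y axis;
# camera mounted 1.5 up and 0.3 forward on the ego, with no local rotation.
s = math.sin(math.pi / 4)
ego_q = (0.0, s, 0.0, math.cos(math.pi / 4))
world_t, world_q = compose_pose((10.0, 0.0, 5.0), ego_q,
                                (0.0, 1.5, 0.3), (0.0, 0.0, 0.0, 1.0))
print(world_t)  # camera's world-space translation, here (10.3, 1.5, 5.0)
```

The same composition could be done inside Unity with `Transform.TransformPoint` and quaternion multiplication; this standalone version is just for post-processing the exported JSON.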

FancyChen085 commented 1 year ago

Thank you for your detailed explanation! I get it now and will continue my work under your guidance. Thanks again!


StevenBorkman commented 1 year ago

No problem. I am going to close this issue now, but feel free to reopen or create a new one if needed. Best of luck.