I don't really understand your question; can you show me some images?
"had a coordinate frame with Z pointing up"
=> After you import your 3d object model into UE4, you get a static mesh in the UE4 coordinate system. How you place it in the scene to be captured is up to you. The annotation data uses the OpenCV coordinate system: Z forward, X right, Y down.
Sorry for being unclear. So irrespective of how the static mesh's coordinate frame ends up after importing, will the annotation always use the OpenCV coordinate system?
Yes, the output annotation data will always be in the OpenCV coordinate system (object poses are also given in the camera's coordinate system, i.e., from the camera's perspective).
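For intuition, here is a minimal sketch of the axis remapping implied by those two conventions (UE4: X forward, Y right, Z up; OpenCV camera: X right, Y down, Z forward). This is my own illustration of the stated axes, not code from the plugin, and it ignores any unit conversion:

```python
import numpy as np

def ue4_point_to_opencv(p_ue4):
    """Remap a 3D point from UE4's convention (X forward, Y right, Z up)
    to OpenCV's camera convention (X right, Y down, Z forward).

    OpenCV X = UE4 Y, OpenCV Y = -UE4 Z, OpenCV Z = UE4 X.
    """
    x, y, z = p_ue4
    return np.array([y, -z, x])

# Example: a point 100 units in front of the camera in UE4
# lands on the +Z axis in the OpenCV camera frame.
print(ue4_point_to_opencv(np.array([100.0, 0.0, 0.0])))  # -> [0. -0. 100.]
```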
Thanks. And what does the fixed object transform contain in such cases?
The annotation data describes the static mesh as you imported it into UE4, including its import transform. We include the 3d model transform in the objects.json for the visualization tool: https://github.com/NVIDIA/Dataset_Utilities. The reason is that when we worked with the YCB 3d models we didn't want to modify the original 3d models themselves; we only apply the import transform when importing them into UE4. That way, when visualizing, people can use the original, unmodified 3d models for convenience.
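As a sketch of how that transform can be consumed, one might apply it to the original model's vertices before comparing against the annotated poses. The structure below ("exported_objects", "fixed_model_transform") follows what the Dataset_Utilities exports appear to use, but verify the exact file and field names against your own export:

```python
import json
import numpy as np

# Assumed structure of the exported object settings; the matrix here is
# just an example 90-degree rotation, not a real export.
settings = json.loads("""
{
  "exported_objects": [
    { "class": "example_object",
      "fixed_model_transform": [[1, 0, 0, 0],
                                [0, 0, -1, 0],
                                [0, 1, 0, 0],
                                [0, 0, 0, 1]] }
  ]
}
""")

fixed_tf = np.array(settings["exported_objects"][0]["fixed_model_transform"])

def apply_transform(vertices, tf):
    """Apply a 4x4 transform to an (N, 3) array of model vertices
    (row-vector convention; transpose tf if yours is column-vector)."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homo @ tf)[:, :3]

# Original, unmodified 3d model vertices (placeholder values), brought
# into the frame the annotations were generated in:
model_vertices = np.array([[0.0, 0.0, 1.0]])
print(apply_transform(model_vertices, fixed_tf))
```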
Thank you for the clarification, and for this useful feature in the tool. I also wanted to ask: where is the world origin in the plugin? And is it possible to change it?
You can't change the world origin of a UE4 map. Why do you want to change it? Also, the location and rotation of objects in the annotation data are all in the camera coordinate system (local to the camera), not in the world coordinate system (global), so changing the world origin wouldn't help.
I was trying to understand the camera location and orientation that are printed along with the pose annotation data, and how the camera is oriented with respect to the world.
Is that the goal of your research? To teach the network the camera pose from the image? Or do you just want to understand how the system works? Either way, in a UE4 map you have full control over where to place the objects and the camera; I think that would be enough to achieve what you want.
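If you do need object poses in the world frame, a common approach is to compose the camera's world pose (which the annotation records) with the object's camera-frame pose. A minimal sketch; the pose values below are hypothetical stand-ins for whatever your frame annotation actually contains, and the quaternion order (x, y, z, w) is scipy's convention:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_to_matrix(location, quat_xyzw):
    """Build a 4x4 homogeneous transform from a location and quaternion."""
    T = np.eye(4)
    T[:3, :3] = R.from_quat(quat_xyzw).as_matrix()
    T[:3, 3] = location
    return T

# Hypothetical values read from a frame's annotation:
T_world_cam = pose_to_matrix([0.0, 0.0, 50.0], [0, 0, 0, 1])  # camera in world
T_cam_obj = pose_to_matrix([10.0, -5.0, 80.0], [0, 0, 0, 1])  # object in camera

# Object pose in the world frame is the composition of the two:
T_world_obj = T_world_cam @ T_cam_obj
print(T_world_obj[:3, 3])  # object location in world coordinates
```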
Thanks. I was trying to understand the system better, to see how tuning an algorithm on datasets from the Dataset Synthesizer translates to the real world.
Are the axes of the object model transformed before the ground-truth pose is written to the annotation file? I observed that in some cases, when the object model had a coordinate frame with Z pointing up, the ground-truth pose annotation had a different orientation.