Closed — aaronsng closed this issue 3 years ago
Thank you!
ae_train your_group/your_ae -d
The reconstructions it shows should be clear and fully visible; otherwise you will get garbage in, garbage out.
Otherwise it looks fine, and if the model has texture it should work reasonably well. At a distance of 10 m an RGB image might not be sufficient to accurately estimate the distance, so you might look into pose refinement methods that use depth data, such as ICP. But if it is just for navigation, it should be enough.
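For reference, the core of each ICP iteration is a least-squares rigid alignment (Kabsch/Procrustes) between corresponded points. A minimal NumPy sketch of that step, with synthetic data standing in for the model and depth clouds (full ICP would additionally re-estimate correspondences each iteration, e.g. via nearest neighbours):

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst,
    given one-to-one correspondences (the core step of each ICP iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Toy check: a cloud rotated 10 deg about z and shifted is recovered exactly.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.10])
dst = src @ R_true.T + t_true
R, t = best_fit_transform(src, dst)
```

In practice a library implementation (e.g. Open3D's `registration_icp`) is the more robust choice; the sketch is only meant to show what the depth data buys you over the RGB-only estimate.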
Thanks MartinSmeyer!
Just one last clarification: if the vertex scale is 1000, will the radius be in metres? And likewise, with a value of 1, will the radius be in millimetres?
The radius will always be in mm!
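In other words (my reading of the exchange above, as a quick arithmetic sketch): the vertex scale converts the CAD file's native units into millimetres, and the radius is then specified in millimetres regardless of which scale was used.

```python
# Sketch of the unit logic discussed above (values are illustrative).
cad_vertex = 0.25               # a vertex coordinate stored in metres in the CAD file
vertex_scale = 1000             # metres -> millimetres
vertex_mm = cad_vertex * vertex_scale   # 250.0 mm after scaling

radius_mm = 1000                # render/camera distance: always mm, here 1 m
```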
Hi, I've been using the AAE and I must say it is a pretty remarkable algorithm that you have developed here. I've recently attempted to deploy it in a real-world situation, to identify an object from a distance in an uncontrolled environment. Due to an NDA, I'm not allowed to disclose the CAD file or what the object specifically is, but I can say that it is something like the photo that follows.
The object is placed within 10 metres of the camera. After deploying and testing the AAE on a live video feed, the AAE is consistently unable to provide a steady orientation. I'm not sure whether this is down to the AAE's inability to be deployed in an uncontrolled environment, or to my misunderstanding of the training parameters. Hence, I would like to clarify the following parameters (apologies, I don't have much experience with OpenGL or CAD modelling in general):
Here are the notable changes I have made to the training parameters:
Bootstrap Ratio: 8
Radius: 1000
Iterations: 50,000
Vertex Scale: 10.8 // As described above
Batch Size: 64
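For context, here is roughly how those values map onto the AAE training config file (an INI-style sketch; the exact section names and key placement should be double-checked against the template config shipped in the repo):

```ini
[Dataset]
MODEL_PATH: /path/to/your/model.ply
VERTEX_SCALE: 10.8
RADIUS: 1000

[Network]
BOOTSTRAP_RATIO: 8

[Training]
NUM_ITER: 50000
BATCH_SIZE: 64
```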
After performing inference, I take the inferred rotation matrix and convert it to its equivalent quaternion representation, but it has occurred to me that by doing so it might lose information, such as the order of multiplication. I used the pysixd library that you included in the repo to convert it.
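On the information-loss worry: a rotation matrix and a unit quaternion encode exactly the same rotation (up to the sign ambiguity q vs. -q), so nothing is lost in the conversion itself. What does trip people up is the convention mismatch between libraries (scalar-first [w, x, y, z] vs. scalar-last [x, y, z, w]). A sketch using SciPy as an alternative check against the repo's utilities (SciPy is scalar-last):

```python
# Round-trip check: matrix -> quaternion -> matrix recovers the same rotation.
# Note SciPy's as_quat() returns scalar-LAST [x, y, z, w]; other libraries
# (including many transformations.py-style utilities) are scalar-first.
import numpy as np
from scipy.spatial.transform import Rotation

R = Rotation.from_euler("z", 90, degrees=True).as_matrix()  # example rotation
q = Rotation.from_matrix(R).as_quat()                       # [x, y, z, w]

# q and -q encode the same rotation, so compare reconstructed matrices,
# not raw quaternion components.
R_back = Rotation.from_quat(q).as_matrix()
assert np.allclose(R, R_back)
```

If the reconstructed matrix matches, the jitter you see is coming from the pose estimate itself, not from the quaternion conversion.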