LukasBergs closed this issue 7 months ago
Update:
I initially used the dimensions of the butter object, which made the object appear much smaller than expected. Correcting the object_name parameter resolved that. However, the results still differ from those obtained with the inference script.
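For context, the 3D bounding box DOPE publishes is constructed from the per-object dimensions entry in the config, so selecting the wrong object_name scales every corner of the box. A minimal sketch of that effect (all dimension values below are placeholders, not the real config values):

```python
# Sketch: the published cuboid is built from the selected object's
# "dimensions" entry, so a wrong object_name shrinks/grows the whole box.

def cuboid_corners(dx: float, dy: float, dz: float):
    """Eight corners of an axis-aligned cuboid centred at the origin."""
    return [(sx * dx / 2, sy * dy / 2, sz * dz / 2)
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]

butter_dims = (5.0, 9.0, 12.0)      # placeholder values, cm
battery_dims = (20.0, 15.0, 10.0)   # placeholder values, cm

# Using the butter dimensions for the battery pack yields a much smaller box:
small_box = cuboid_corners(*butter_dims)
correct_box = cuboid_corners(*battery_dims)
```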
The disparity between the two sets of results is demonstrated in the following videos:
Using inference script:
Result: The inference results look good for most of the images.
Using ROS2 node on synthetic data:
Result: I cannot guarantee that the 3D bounding box drawn in the image is correct. Regardless, the 3D pose shows strange rotations.
Using ROS2 node on real data:
Result: The distance to the camera in the z direction looks fine, but the rotation is off.
Any ideas what could go wrong?
Best, Lukas
Hi @LukasBergs ,
I've attached a document that outlines the process we have used to validate DOPE inference quality using the toy Ketchup model. Could you please walk through these steps and verify whether you can match the inference-script and ROS 2 node results for the toy model?
Once we're sure that you're able to produce correct results with the NVIDIA-provided model, we can isolate where the error might be with your custom model.
Hi @jaiveersinghNV ,
Thank you for guiding me with the provided markdown file!
Upon re-evaluating the specified values for the image and network dimensions, I've identified an issue with the launch parameter "output_height" in the launch file.
It seems that both parameters, output_width and output_height, are assigned the value of network_image_width. I assume this is a mistake; could you please verify?
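The suspected bug amounts to a one-line mapping error. A minimal sketch of the corrected parameter assignment, written as a plain dict as it would appear in the launch file (the resolution values are placeholders, and the parameter names are taken from the thread):

```python
# Sketch of the corrected decoder parameters (placeholder resolution values).
network_image_width = 640   # example value
network_image_height = 480  # example value

decoder_parameters = {
    'network_image_width': network_image_width,
    'network_image_height': network_image_height,
    'output_width': network_image_width,
    # Bug in the original launch file: output_height was also assigned
    # network_image_width. Fix: assign the height value instead.
    'output_height': network_image_height,
}
```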
After updating the parameter to network_image_height, my results align much more closely with those of the inference Python script. For the time being, I am closing the issue.
Thank you for your help!
Hello,
I am currently working on training a custom DOPE model on a battery pack object, using a dataset generated with NVIDIA Isaac Sim. The training process was successful, and the inference results using the script (https://github.com/andrewyguo/dope_training/tree/master/inference) look promising.
However, when I use the trained model in the ROS2 node for real-time pose estimation, I encounter issues with the pose results. While the orientation appears correct, the position is significantly off, leading to inaccurate pose estimates.
Supplementary material:
Example training image:
JSON file:
Inference result using the python script:
JSON file:
config_pose.yaml:
camera_info.yaml:
ROS2 node pose topic output
ROS2 DOPE config file:
In the training data, as well as in the inference result from the Python script, the object is about 100 cm from the camera in the z direction, which is correct. When running the ROS2 node, however, the pose topic reports the object as only 0.07 m from the camera, which is definitely far closer than it should be. What could cause this discrepancy?
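One pattern worth checking when the orientation looks right but z is far too small is a scale mismatch: under the pinhole camera model, the depth recovered by a PnP solve scales linearly with both the assumed physical object size (the cuboid dimensions in the config) and the focal length (from camera_info.yaml). A quick sanity check, using placeholder numbers rather than the actual values from this issue:

```python
# Sanity check (pinhole-camera approximation, not the actual DOPE code):
# PnP depth scales linearly with the assumed object size and the focal
# length, so a mismatch in either shifts z while orientation can stay sane.

def approx_depth(focal_px: float, object_width_m: float,
                 projected_width_px: float) -> float:
    """Approximate z distance of an object under the pinhole model."""
    return focal_px * object_width_m / projected_width_px

# Placeholder numbers: object spans 120 px in the image.
f_px = 600.0     # focal length from camera_info.yaml, in pixels
width_m = 0.20   # cuboid width from the DOPE config, in metres
span_px = 120.0

z_correct = approx_depth(f_px, width_m, span_px)        # 1.0 m
# If the config states the width at the wrong scale (or the intrinsics
# are off by the same factor), z shrinks proportionally:
z_wrong = approx_depth(f_px, width_m / 14.0, span_px)   # ~0.07 m
```

So comparing the dimension units in config_pose.yaml against what the ROS2 node expects, and the intrinsics in camera_info.yaml against the real camera, would be a first step.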
I would greatly appreciate any guidance or assistance in debugging and resolving this issue. Thank you in advance for your time and support. If additional information is needed for further analysis, please let me know.
Best, Lukas