Hi, the depth images are not in the right format; can you save them as uint16 in millimeter scale? From a skim of the data, the robot arm seems to move relatively fast, so make sure max_rot and max_trans are >= the maximum possible per-frame motion https://github.com/wenbowen123/BundleTrack/blob/3df16853c745e7f216ea45df386a57fff0fc9c39/config_ycbineoat.yml#L59
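For reference, a minimal conversion sketch, assuming the raw depth is a float array in meters (the synthetic array and file name below are placeholders):

```python
import cv2
import numpy as np

# Placeholder: substitute your real float depth map, in meters.
depth_m = np.full((480, 640), 1.25, dtype=np.float32)

# Convert meters -> millimeters and cast to uint16.
depth_mm = np.round(depth_m * 1000.0).astype(np.uint16)

# A 16-bit PNG preserves the full uint16 range.
cv2.imwrite("depth_0000.png", depth_mm)
```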
Also, increasing LOG allows more detailed visualization to check what's wrong (see some old issues for discussions on this) https://github.com/wenbowen123/BundleTrack/blob/3df16853c745e7f216ea45df386a57fff0fc9c39/config_ycbineoat.yml#L6
Thank you for your reply! What is the meaning of the four relevant parameters? Are their units degrees and meters respectively? I've been playing around with their values but have had no luck.
The translations are in meters, the rotations in degrees.
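As an illustrative sanity check (the speeds and frame rate below are made up, not taken from this dataset), the per-frame motion the tracker must tolerate is the peak arm speed divided by the camera frame rate:

```python
# Back-of-envelope check with made-up numbers; substitute your own.
fps = 30.0               # camera frame rate (Hz)
peak_trans_speed = 0.5   # fastest end-effector translation (m/s)
peak_rot_speed = 90.0    # fastest object rotation (deg/s)

trans_per_frame = peak_trans_speed / fps   # ~0.017 m between consecutive frames
rot_per_frame = peak_rot_speed / fps       # ~3.0 deg between consecutive frames

# max_trans and max_rot in the config should exceed these, with some margin.
print(trans_per_frame, rot_per_frame)
```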
The most important thing is that the depth images you uploaded here were wrong. If you have fixed them but still have problems, this issue might be helpful for debugging: it has visualizations and outputs to check where things went wrong.
Hey Bowen -- working with @ChrisSun99 here. Thanks for your debugging help so far. We've got more information for you to help figure out where we're going wrong. Here are the 6 things we will get to you -- @ChrisSun99 will send the data along shortly, but here's a high-level overview of what we are providing and why:
`LOG: 3` as you suggested.

Hi @ChrisSun99 @ebianchi, there are multiple challenges in your data: severe motion blur, erroneous segmentation, fast rotations, and a far distance from the camera.
I'd recommend slowing your robot arm motion if possible and also mounting the camera closer to the workspace. There also seems to be a constant shift in the segmentation.
@ebianchi, point 5 sounds reasonable to me, but left/right multiplication mix-ups are a common pitfall; https://github.com/wenbowen123/BundleTrack/issues/38 discusses this. Starting from sim would be a nice idea.
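To illustrate the pitfall (the transform names below are hypothetical, not BundleTrack's API):

```python
import numpy as np

# Convention: T_a_b maps points expressed in frame b into frame a.
T_cam_obj = np.eye(4)    # object pose in the camera frame (e.g. tracker output)
T_world_cam = np.eye(4)  # camera pose in the world frame (e.g. hand-eye calibration)

# Correct: the inner frames line up (world <- cam, cam <- obj).
T_world_obj = T_world_cam @ T_cam_obj

# Pitfall: T_cam_obj @ T_world_cam also runs without error, but it chains
# the transforms through mismatched frames and yields a silently wrong pose.
```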
For point 6, I'm not sure I fully understand the shift. The mask should ideally cover the correct object region in each image; then BundleTrack will provide the right poses. It is fine if the object moves to different locations in the image over the course of the video, as long as each frame's segmentation is in the ballpark. The current segmentation looks poor: in some images the mask is more than half the object's size away from the true object region.
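A quick way to spot such a shift is to tint the masked pixels on top of each RGB frame and scrub through the sequence; a sketch, with placeholder file names:

```python
import cv2
import numpy as np

# Placeholders: one RGB frame and its corresponding binary mask.
rgb = cv2.imread("rgb_0000.png")
mask = cv2.imread("mask_0000.png", cv2.IMREAD_GRAYSCALE)

# Blend red (BGR) into the masked region; the tint should sit on the object.
overlay = rgb.copy()
red = np.array([0, 0, 255], dtype=np.float64)
overlay[mask > 0] = (0.5 * overlay[mask > 0] + 0.5 * red).astype(np.uint8)
cv2.imwrite("overlay_0000.png", overlay)
```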
Sorry, I'm fully occupied for the coming month. I can check your data in more detail afterwards (thanks for your patience).
Hi @ebianchi @ChrisSun99, it seems your plot is not correct. I visualized the point clouds by transforming them with their estimated poses, and they are reasonably aligned. The first image is the point cloud of the first frame. The second is the merged point cloud of the first and 200th frames. But in your plot, the pose at timestamp 200 is already deviating a lot.
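This kind of check can be reproduced with something like the sketch below. It assumes each estimated pose is a 4x4 transform mapping the object's canonical frame into that frame's camera frame (if your convention is the inverse, drop the `np.linalg.inv` calls); the clouds and poses are placeholders:

```python
import numpy as np

def transform(points, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) point cloud."""
    return points @ T[:3, :3].T + T[:3, 3]

# Placeholders: back-project the depth images of frames 0 and 200 into
# (N, 3) camera-frame point clouds and load the corresponding 4x4 poses.
cloud_0, cloud_200 = np.random.rand(1000, 3), np.random.rand(1000, 3)
pose_0, pose_200 = np.eye(4), np.eye(4)

# Mapping both clouds into the object's canonical frame should make them
# overlap if the estimated poses are consistent.
merged = np.vstack([transform(cloud_0, np.linalg.inv(pose_0)),
                    transform(cloud_200, np.linalg.inv(pose_200))])
```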
@wenbowen123 Thank you very much! It turned out our transformation was incorrect. We followed #38 and the result looks reasonable now.
Hi @wenbowen123, thank you for your nice work. I've been trying to run BundleTrack on my own RGBD data (linked here), but I wasn't able to get a good result. I've attached our previous results below. Do you have any intuition about problems with the data? Or is it possible for you to achieve better results with it? Thank you.