mrtnbm opened 4 days ago
There is an error with the segmentation, have you changed the segmentation model?
Hey Ammar, thanks for your fast response!
I actually used the default settings with SAM2. I ran the conda script to install everything automatically.
Could the bad lighting in that picture be the cause? I use the RealSense node for ROS 2 to get the RealSense topics, like you did.
No, the output mask image does not seem to be the right one.
Have you used the "sam2_b.pt" model? The one you showed looks like the FastSAM ("FastSAM-s.pt") model.
I used the default script foundationpose_ros_multi.py, which loads SAM2:
self.seg_model = SAM("sam2_b.pt")
That's why I don't understand why the results are so weird.
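As a side note, the loaded model can be sanity-checked outside the ROS node with a minimal Ultralytics snippet. This is only a sketch with placeholder paths ("sam2_b.pt" and "test_frame.png" are assumptions, not taken from the repo):

```python
# Standalone sanity check (not part of foundationpose_ros_multi.py).
# "sam2_b.pt" and "test_frame.png" are placeholders for your own files.
from ultralytics import SAM

seg_model = SAM("sam2_b.pt")   # downloads the SAM2-base weights on first use
seg_model.info()               # prints the model summary; should report SAM2, not FastSAM

# Prompt-based inference: one positive click roughly on the keyboard.
results = seg_model("test_frame.png", points=[[640, 360]], labels=[1])
results[0].show()              # visualize the predicted mask
```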
Can you show the window of "click on objects to track"?
Thank you very much for your help. I'm not at home today, I will provide a screenshot tomorrow.
It does look a lot different, though, from what I remember a few days ago. There are no overlapping segmentations, and the keyboard is segmented cleanly compared to the segmentation mask, with the exception of some keys being segmented as well.
Hello @ammar-n-abbas,
here is the screenshot of the GUI to select the object:
The mask is:
Some keys get segmented individually, but the whole keyboard itself is also segmented quite accurately.
Can you try to use the back of the keyboard for the first segmentation? Later on you can turn it around.
Unfortunately, after some tests using the back of the keyboard, it is still not tracking the object after I select it in the segmentation GUI.
This is the mask and segmentation in selection GUI for the back of the keyboard:
I also tried different RealSense D435 settings: I changed the resolution to 1280x720 on both the RGB and depth outputs and tried different presets like HighDensity, MidDensity, and HighAccuracy, with no effect.
I set up an .obj of my keyboard that has the right dimensions and similar colors. Still, I can't seem to get a successful pose estimation.
Here is the .obj, zipped so that I could upload it here: logi_wo_mtl_vert_col.zip
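For reference, the mesh dimensions can be double-checked quickly with trimesh. This is only a sketch; the file name comes from the zip above, and the expected extents assume the mesh should be in the same units as the depth stream (meters), which I am treating as an assumption:

```python
# Quick check of the exported keyboard mesh (file name from the zip above).
# Assumption: the mesh should be in meters, so a keyboard would have extents
# on the order of 0.4 x 0.15 x 0.03 rather than hundreds of units.
import trimesh

mesh = trimesh.load("logi_wo_mtl_vert_col.obj", force='mesh', process=False)
print("vertices:", mesh.vertices.shape, "faces:", mesh.faces.shape)
print("extents (x, y, z):", mesh.extents)   # bounding-box size per axis
print("centroid:", mesh.centroid)           # where the model sits relative to its origin
```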
The mask looks like this:
RGB looks like this (the lighting, and therefore the video quality, might be bad; could that be the cause?):
I only selected this object with a mouse click and pressed Enter afterwards. As soon as the tracking GUI starts, I do not see any pose estimation happening.
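To narrow down whether the problem is the mask or the depth, a quick offline check along these lines might help. It is only a sketch with assumed file names (rgb_frame.png, depth_frame.png, mask.png) and assumes the depth is saved as a single-channel 16-bit image in millimeters; without valid depth inside the mask, no pose can be estimated:

```python
# Offline check with placeholder file names: does the saved mask cover the
# keyboard, and is there usable depth inside it?
import cv2
import numpy as np

rgb = cv2.imread("rgb_frame.png")                              # color frame (assumed path)
depth = cv2.imread("depth_frame.png", cv2.IMREAD_UNCHANGED)    # 16-bit depth in mm (assumed)
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)            # the mask shown above (assumed)

print("mask pixels:", int(np.count_nonzero(mask)))
valid = depth[mask > 0]
valid = valid[valid > 0]                                       # drop missing-depth pixels
print("valid depth pixels inside mask:", valid.size)
if valid.size:
    print("median depth [mm]:", float(np.median(valid)))

# Overlay the mask on the RGB image to see whether it lines up with the keyboard.
overlay = rgb.copy()
overlay[mask > 0] = (0.5 * overlay[mask > 0] + 0.5 * np.array([0, 0, 255])).astype(np.uint8)
cv2.imwrite("mask_overlay.png", overlay)
```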
I use Ubuntu 22.04 with an RTX 4090 Mobile GPU, a conda env with Python 3.10, and an Intel RealSense D435 with firmware 5.13.0.5 (5.16.0.1 has been released, but I have not tried it yet because Isaac ROS needs this firmware):
I noticed that I get some errors right after selecting the object in the segmentation mask window:
After hitting Enter and starting the tracking module, the errors come up again:
Also, I had to change the trimesh.load() line to add arguments forcing a mesh: trimesh.load(mesh, force='mesh', process=False). Otherwise I got AttributeErrors about the missing vertices attribute, because trimesh loaded the file as a Scene (I only have one model in the file, so I do not know why trimesh wrapped it in a scene). I do not know if process=False is really necessary.
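For reference, a sketch of two ways to handle this (not the repo's code); the second keeps the default load and concatenates the Scene's geometries by hand, in case force='mesh' is not desired:

```python
# Two ways to get a single Trimesh from the .obj (sketch, not the repo's code).
import trimesh

# 1) Force trimesh to return a mesh instead of a Scene (the change described above).
mesh = trimesh.load("logi_wo_mtl_vert_col.obj", force='mesh', process=False)

# 2) Load normally and merge the Scene's geometries into one mesh.
loaded = trimesh.load("logi_wo_mtl_vert_col.obj", process=False)
if isinstance(loaded, trimesh.Scene):
    mesh = trimesh.util.concatenate(list(loaded.geometry.values()))
else:
    mesh = loaded

print(type(mesh), mesh.vertices.shape)
```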