Closed: Sai-Yarlagadda closed this issue 1 month ago.
Hi @Sai-Yarlagadda ,
Did you forget to set `--use-depth 0`? As we state clearly in our papers, Grasp-Anything and Grasp-Anything++ do not support depth images. However, an upcoming dataset will be released to resolve this problem. Please stay tuned!
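For example, the evaluation command can be rerun with depth disabled. This is a sketch based on the flags mentioned in this thread; the dataset path is illustrative and the exact flag spelling should be checked against your local checkout:

```shell
# Disable depth input, since Grasp-Anything / Grasp-Anything++ ship RGB only.
python evaluate.py --network lgrconvnet --dataset grasp-anything \
    --dataset-path /path/to/dataset --use-depth 0 --iou-eval
```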
Also, I would like to directly visualize the grasp position when I pass in the image and the prompt. Is there a file that can help me run inference from the picture, prompt, and weights?
You can look at the grasp_generator.py file; it provides some snippets to transform the predicted angle, position, etc. into a condensed 2D grasp pose.
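The idea behind that conversion can be sketched as follows: the network predicts per-pixel quality, angle, and width maps, and a condensed 2D grasp pose is read off at the quality peak. This is a minimal illustration, not the repo's actual API; the function name, signature, and map layout are assumptions:

```python
import numpy as np

def grasp_from_maps(quality, angle, width):
    """Condense per-pixel grasp maps into a single 2D grasp pose.

    quality, angle, width: 2D arrays of the same shape, as a grasp
    network like LGR-ConvNet might predict (layout assumed here).
    Returns (x, y, theta, w) at the highest-quality pixel.
    """
    # Locate the peak of the quality map (row, col order).
    y, x = np.unravel_index(np.argmax(quality), quality.shape)
    # Read the predicted rotation and gripper width at that pixel.
    return int(x), int(y), float(angle[y, x]), float(width[y, x])
```

In the real code the quality map is usually smoothed first and several local maxima may be kept, but the pose at each peak is extracted the same way.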
Closed due to inactivity.
I have been trying to run the evaluate.py file on the grasp-anywhere dataset, and I get an error. Command I ran:
python evaluate.py --network lgrconvnet --dataset grasp-anywhere --dataset-path /home/sai/robotool/LGD/dataset --iou-eval
Error:
The traceback says that `self.depth_files` is not defined. Am I running something wrong? Also, I would like to directly visualize the grasp position when I pass in the image and the prompt. Is there a file that can help me run inference from the picture, prompt, and weights?