Fsoft-AIC / LGD

Dataset and Code for CVPR 2024 paper "Language-driven Grasp Detection."
https://airvlab.github.io/grasp-anything/
MIT License

Inferencing from a picture #3

Closed Sai-Yarlagadda closed 1 month ago

Sai-Yarlagadda commented 3 months ago

I have been trying to run evaluate.py on the grasp-anywhere dataset, but it fails with the error below.

Command I ran:

python evaluate.py --network lgrconvnet --dataset grasp-anywhere --dataset-path /home/sai/robotool/LGD/dataset --iou-eval

Error:

Traceback (most recent call last):
  File "/home/sai/robotool/LGD/evaluate.py", line 130, in <module>
    for idx, (x, y, didx, rot, zoom, prompt, query) in enumerate(test_data):
  File "/home/sai/anaconda3/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
    data = self._next_data()
           ^^^^^^^^^^^^^^^^^
  File "/home/sai/anaconda3/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1346, in _next_data
    return self._process_data(data)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sai/anaconda3/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1372, in _process_data
    data.reraise()
  File "/home/sai/anaconda3/lib/python3.11/site-packages/torch/_utils.py", line 705, in reraise
    raise exception
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/sai/anaconda3/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)  # type: ignore[possibly-undefined]
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/sai/anaconda3/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sai/anaconda3/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
            ~~~~~~~~~~~~^^^^^
  File "/home/sai/robotool/LGD/utils/data/language_grasp_data.py", line 65, in __getitem__
    depth_img = self.get_depth(idx, rot, zoom_factor)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sai/robotool/LGD/utils/data/grasp_anywhere_data.py", line 97, in get_depth
    depth_img = image.DepthImage.from_tiff(self.depth_files[idx])
                                           ^^^^^^^^^^^^^^^^
AttributeError: 'GraspAnywhereDataset' object has no attribute 'depth_files'

The error says that self.depth_files is not defined. Am I running something wrong? Also, I would like to directly visualize the grasp position when I pass in an image and a prompt. Is there a file that can help me run inference from a picture, a prompt, and the weights?

andvg3 commented 3 months ago

Hi @Sai-Yarlagadda ,

Did you forget to set use-depth to 0? As mentioned in our papers, Grasp-Anything and Grasp-Anything++ do not support depth images. However, an upcoming dataset will be released to resolve this. Please stay tuned!
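For example, re-running the original command with depth disabled would look roughly like this (assuming the flag is spelled --use-depth; check the argument parser in evaluate.py for the exact name):

```
python evaluate.py --network lgrconvnet --dataset grasp-anywhere \
    --dataset-path /home/sai/robotool/LGD/dataset --iou-eval --use-depth 0
```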

> Also I would directly like to visualize the grasp position when I pass the image and the prompt. Is there any file that can help me do the inferencing based on the picture, prompt and weights?

You can look at the grasp_generator.py file; it provides snippets to transform the predicted angle, position, etc. into a condensed 2D grasp pose.
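If it helps, here is a rough, generic sketch of that post-processing step. It is not the repo's actual API: the function names, the peak_local_max parameters, and the map names q_img / cos_img / sin_img / width_img are illustrative. It picks the highest-quality pixel from the network's output maps and draws the corresponding 2D grasp on the RGB image:

```python
# Generic sketch (not the repo's exact grasp_generator.py) of turning predicted
# quality/angle/width maps into a single 2D grasp pose and visualizing it.
import numpy as np
from skimage.feature import peak_local_max
import matplotlib.pyplot as plt

def best_grasp(q_img, cos_img, sin_img, width_img):
    """Pick the highest-quality pixel and read off the angle and width there."""
    peaks = peak_local_max(q_img, min_distance=20, num_peaks=1)
    if len(peaks) == 0:
        return None
    y, x = peaks[0]
    angle = 0.5 * np.arctan2(sin_img[y, x], cos_img[y, x])  # grasp rotation in radians
    width = width_img[y, x]                                  # gripper opening in pixels
    return (x, y), angle, width

def draw_grasp(rgb, center, angle, width):
    """Overlay the grasp as a rotated line segment centered on the predicted pixel."""
    x, y = center
    dx = 0.5 * width * np.cos(angle)
    dy = 0.5 * width * np.sin(angle)
    plt.imshow(rgb)
    plt.plot([x - dx, x + dx], [y - dy, y + dy], 'r-', linewidth=2)
    plt.scatter([x], [y], c='yellow', marker='x')
    plt.axis('off')
    plt.show()
```

Feeding the post-processed network outputs for your image and prompt into something like this gives a quick visualization of the predicted grasp.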

andvg3 commented 1 month ago

Closed due to inactivity.