xukechun / Vision-Language-Grasping

[ICRA 2023] A Joint Modeling of Vision-Language-Action for Target-oriented Grasping in Clutter

import pybullet as p ImportError: numpy.core.multiarray failed to import #4


nourmorsy commented 10 months ago

I set up the requirements, pointnet2, and KNN successfully. However, when I tried to run the demo, there is an error between the pybullet version and the numpy version, as shown in error.txt, and I checked that all packages are correct, as listed in packages.txt. Is there any solution for this? error.txt packages.txt

xukechun commented 9 months ago

Hi,

This problem might arise from an old version of numpy; updating numpy may solve it. Strangely, though, your numpy and pybullet versions are the same as in my tested environment, and there was no problem in the setup process of knn and pointnet2. I provide the package list of my tested environment here. The problem probably arises from the versions of other packages installed by conda. Hope that can help you.
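One way to act on this advice is to diff your environment against the maintainer's package list with a small script. This is a minimal sketch: the `check_versions` helper and the pinned version numbers are illustrative, not the repo's exact requirements.

```python
from importlib import metadata

def check_versions(pins):
    """Return {package: (installed, wanted)} for every mismatch.

    `pins` maps distribution names to wanted version strings; an
    installed value of None means the package is missing entirely.
    """
    mismatches = {}
    for name, wanted in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != wanted:
            mismatches[name] = (installed, wanted)
    return mismatches

# Illustrative pins -- substitute the versions from the maintainer's list.
pins = {"numpy": "1.20.3", "pybullet": "3.2.1"}
```

Running `check_versions(pins)` on both machines and comparing the outputs narrows the problem down to the packages that actually differ.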

Best, Kechun Xu

nourmorsy commented 9 months ago

Thank you for replying. I found the problem: I had set up all these packages on Ubuntu 20.04, and when I switched to Ubuntu 18.04 it works just fine. But I have a question: how can I change the scene and the query (the prompt) to pick up specific objects?

xukechun commented 9 months ago

Hi,

We provide some language templates and keywords in constants.py. During training, we sample a language template and a keyword to form a language instruction that assigns the target object(s). If you want to add other grasping prompts, they could be, for example, "Bring me a xxx", "Pass me a xxx", "Can you please pick up a xxx", etc. https://github.com/xukechun/Vision-Language-Grasping/blob/27e4ee43070b93585099c46926c5214c15890e75/constants.py#L7-L38
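The sampling described above can be sketched as follows. The template and keyword lists here are illustrative stand-ins for the ones in constants.py, and `sample_instruction` is a hypothetical helper, not a function from the repo:

```python
import random

# Illustrative stand-ins for the template/keyword lists in constants.py.
TEMPLATES = [
    "give me the {keyword}",
    "I need a {keyword}",
    "grasp a {keyword} object",
    "bring me a {keyword}",  # an example of a newly added prompt
]
KEYWORDS = ["banana", "red block", "round object"]

def sample_instruction(rng=random):
    """Sample one template and one keyword to form a language instruction."""
    template = rng.choice(TEMPLATES)
    keyword = rng.choice(KEYWORDS)
    return template.format(keyword=keyword)
```

Adding a new prompt then amounts to appending another `{keyword}`-style template to the list in constants.py.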

nourmorsy commented 9 months ago

Thank you for answering. I have another question: where can I find the part where you extract the bounding box encoding and the text encoding using CLIP?

xukechun commented 9 months ago

Hi, they can be found in models/networks.py. https://github.com/xukechun/Vision-Language-Grasping/blob/27e4ee43070b93585099c46926c5214c15890e75/models/networks.py#L236-L251
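As a rough outline of that step, bounding-box regions are typically cropped from the observation before being encoded, alongside the text, by CLIP. The crop helper below is an assumption-laden sketch in plain numpy, not the repo's actual code; the CLIP calls appear only as comments because they need the installed `clip` package and a loaded model:

```python
import numpy as np

def crop_boxes(image, boxes):
    """Crop patches from an HxWxC image for each (x0, y0, x1, y1) box.

    In models/networks.py, patches like these would be preprocessed and
    passed through CLIP's image encoder, while the language instruction
    goes through CLIP's text encoder, e.g. (sketch only):
    #   image_feats = clip_model.encode_image(preprocessed_patches)
    #   text_feats  = clip_model.encode_text(clip.tokenize(instruction))
    """
    return [image[y0:y1, x0:x1] for (x0, y0, x1, y1) in boxes]
```

The linked lines in models/networks.py show the repo's actual implementation of both encodings.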