Andyyoung0507 opened 2 months ago
Hi, thank you for your interest. Sure, I just updated the readme with some more information, please check it out. For our main experiment (Table II in the paper), we use the same objects and scenes as GIGA, but we do some processing, e.g. rendering our own RGB images.
We also provide an additional experiment where we train and evaluate on the GraspNet-1Billion dataset (Table III in the paper).
Thanks for your quick reply!
Thanks for your quick reply! I installed the environment and tried to run the evaluation, but got the error "TypeError: unsupported operand type(s) for |: 'type' and 'type'". I upgraded Python to 3.10 to solve this problem, but then a new error occurred: "ModuleNotFoundError: No module named 'mplib.pymp.fcl'; 'mplib.pymp' is not a package". I did not find a file named fcl under the package's path in my environment. Is there a problem with the version of the package I installed? The version of mplib in my env is 0.2.0. @chisarie Thanks!
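For context on the first error: PEP 604 union syntax (`int | str` in annotations) only works at runtime from Python 3.10 onward; on older interpreters it raises exactly this TypeError. A minimal sketch of the portable spelling (the function here is purely illustrative, not from the repo):

```python
from typing import Optional, Union

# On Python < 3.10, `def parse(x: int | str) -> int | None` would raise
#   TypeError: unsupported operand type(s) for |: 'type' and 'type'
# because the annotation is evaluated at definition time.
# typing.Union / typing.Optional work on all supported versions:
def parse(x: Union[int, str]) -> Optional[int]:
    # return the value unchanged for ints, None otherwise
    return x if isinstance(x, int) else None
```

Alternatively, `from __future__ import annotations` postpones annotation evaluation, which also avoids the error on Python 3.7+.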
Hi, thank you for the feedback. Yes, you are right: both issues are caused by changes in the newer versions of mplib. I have now pushed a change that sets the correct version of all packages in setup.py. Please try installing again, it should work correctly now.
After I downgraded mplib to 0.0.9, another error occurred:
```
Traceback (most recent call last):
  File "/home/axe/Downloads/repository/CenterGrasp/scripts/evaluate_shape.py", line 197, in <module>
    main(**vars(args))
  File "/home/axe/Downloads/repository/CenterGrasp/scripts/evaluate_shape.py", line 121, in main
    GigaPipeline(giga_mode, seed, visualize=not headless, real_robot=False),
  File "/home/axe/Downloads/repository/CenterGrasp/centergrasp/pipelines/giga_pipeline.py", line 21, in __init__
    fx=ZED2HALF_PARAMS.f_xy,
AttributeError: 'CameraParams' object has no attribute 'f_xy'
```
Is there an error in the class `CameraParams`? Thanks!
Ah good catch, yes I refactored that class last minute for the last experiments I did, and didn't notice that it broke a couple of other scripts. It's fixed now!
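For readers hitting the same AttributeError: the pipeline expects per-axis focal lengths (`fx`, `fy`) rather than a single `f_xy`. A hypothetical reconstruction of such a camera-parameters class (names and values are my assumptions, not the repo's actual code):

```python
from dataclasses import dataclass

@dataclass
class CameraParams:
    """Pinhole camera intrinsics (illustrative sketch, not the repo's code)."""
    width: int
    height: int
    fx: float  # focal length along x, in pixels
    fy: float  # focal length along y, in pixels
    cx: float  # principal point x, in pixels
    cy: float  # principal point y, in pixels

# e.g. half-resolution ZED2-like parameters (illustrative values only)
ZED2HALF_PARAMS = CameraParams(width=960, height=540,
                               fx=530.0, fy=530.0, cx=480.0, cy=270.0)
```

With a shape like this, the pipeline would read `ZED2HALF_PARAMS.fx` / `.fy` instead of the removed `f_xy` attribute.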
Thanks for your help! When I try to evaluate, some errors about the dataset file paths come up. In config.py, in the `Directories` class, the FRANKA, GIGA, TEXTURES, YCB, and GRASPNET folders are not in the pre-generated data file. Are there specific URLs to obtain the relevant data?
```python
class Directories:
    DATA = pathlib.Path.home() / "datasets"
    GIGA = DATA / "giga"
    TEXTURES = DATA / "textures"
    YCB = DATA / "maniskill_ycb"
    FRANKA = DATA / "franka"
    GRASPNET = DATA / "graspnet"
    GRASPS = DATA / "centergrasp_g" / "grasps"
    SGDF = DATA / "centergrasp_g" / "sgdf"
    SGDF_GRASPNET = DATA / "centergrasp_g" / "graspnet/sgdf"
    RGBD = DATA / "centergrasp_g" / "rgbd"
    RGBD_GRASPNET = DATA / "centergrasp_g" / "graspnet/rgbd"
    EVAL_GRASPNET = DATA / "centergrasp_g" / "graspnet/dump_eval"
    EVAL_GRASPNET_OLD = DATA / "centergrasp_g" / "graspnet/dump_eval_old"
    ROOT = pathlib.Path(__file__).parent.parent
    GIGA_REPO = ROOT.parent / "GIGA"
    CONFIGS = ROOT / "configs"
```
Thanks a lot for all your rapid replies!
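As a quick triage step for errors like this, a small hypothetical helper (not part of the repo) can report which of the configured folders are still missing on disk:

```python
import pathlib

class Directories:  # trimmed stand-in for the config class above
    DATA = pathlib.Path.home() / "datasets"
    GIGA = DATA / "giga"
    YCB = DATA / "maniskill_ycb"

def missing_dirs(cls) -> list:
    """Return the pathlib.Path class attributes that do not exist on disk."""
    return [p for p in vars(cls).values()
            if isinstance(p, pathlib.Path) and not p.exists()]
```

Running `missing_dirs(Directories)` before starting an evaluation shows at a glance which datasets still need to be downloaded.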
Ah yes you are right, I forgot to specify how to get those other folders, I just updated the readme, check it out! Also please download the pre-generated data folder again. I noticed it was incomplete, I uploaded the full version now.
Thank you, the file-reading problem has been solved, but the following error was reported.
I tried to solve these problems, but I did not find the libkdtree-related content in the conda environment setup, and I also did not find the pyrender-related statement in the mesh_to_sdf location anywhere in the repository. At the same time, there seems to be a problem with my NVIDIA driver installation: the system cannot find the libGLX_nvidia.so.0 file. Is there any solution besides downgrading and reinstalling the NVIDIA driver? I am worried that it would bring other system-related package problems. My setup is Ubuntu 20.04 with an NVIDIA GeForce RTX 4070, driver version 550.76, and CUDA 11.8. Thanks for all your help!
Yes it seems that some library is trying to load some Vulkan extension and fails at it. Are you able to pinpoint which library or which line of code throws the error?
Also, can you try running `vkcube` in a terminal? Does it work (i.e. can you see the rotating cube)?
When I start debugging in VS Code, the error comes up as follows:
When I run `vkcube` in a terminal, it shows the rotating cube.
I used `locate icd.json` to find the icd.json files and received the following in the terminal:

```
/etc/vulkan/icd.d/nvidia_icd.json
/home/axe/miniconda3/envs/lerobot/lib/python3.10/site-packages/sapien/vulkan_library/nvidia_icd.json
/usr/share/code/vk_swiftshader_icd.json
```
I used `sudo apt install libvulkan1 libvulkan-dev` to install the Vulkan driver and `vulkaninfo` to get detailed Vulkan info, but none of this solved the problem.
Thanks for your help!
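One thing worth trying before touching the driver (an assumption on my side, not a confirmed fix): when several ICD JSON files are present, the standard Vulkan loader variable `VK_ICD_FILENAMES` can force the NVIDIA one instead of, e.g., the SwiftShader fallback:

```shell
# Point the Vulkan loader at the NVIDIA ICD explicitly
# (path taken from the `locate icd.json` output above; adjust if yours differs):
export VK_ICD_FILENAMES=/etc/vulkan/icd.d/nvidia_icd.json

# Then re-run the failing script in the same shell, e.g.:
# python scripts/evaluate_shape.py
```

If the error disappears with this set, the problem is ICD selection rather than the driver itself.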
Ok, the error is thrown when launching the SapienRenderer. Can you try running Sapien's example scripts? Do they work? Note that we use sapien==2.2 in this project. You can try their hello-world as well as their rendering example scripts.
If those also give you an error, try opening an issue in the Sapien repository, they will know better how to help you.
Yes, thanks so much for your help!
Hi, thank you for your excellent work. Following your suggestions, I used the downloaded data and the corresponding pre-trained model to run the evaluation. I still have the following questions:
```
FileNotFoundError: [Errno 2] No such file or directory: '.../centergrasp_g/graspnet/scenes/scene_0100/object_id_list.txt'
```
The latest dataset download link you released does not seem to include the relevant part of the data. Is there a more detailed tutorial? Thanks!
`data.zip` folder and you should be good to go.

Thanks for your reply!
I downloaded the dataset and pretrained models in data.zip, and ran `python scripts/train_giga.py` to test the training process successfully.
I then used the command `python evaluation_runs.py` in the directory `./centergrasp/graspnet`, and the error below came up:
```
Traceback (most recent call last):
  File "/home/axe/Downloads/repository/CenterGrasp/centergrasp/graspnet/evaluation_runs.py", line 58, in <module>
    main(rgb_model="el6oa23g")
  File "/home/axe/Downloads/repository/CenterGrasp/centergrasp/graspnet/evaluation_runs.py", line 23, in main
    rgbd_reader = RGBDReader(mode="test")
  File "/home/axe/Downloads/repository/CenterGrasp/centergrasp/graspnet/rgb_data.py", line 38, in __init__
    self.graspnet_api = GraspNet(root=Directories.GRASPNET, camera=self.camera, split=mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/axe/Downloads/datasets/centergrasp_new/centergrasp_g/graspnet/scenes/scene_0100/object_id_list.txt'
```
I also found that the data downloaded from the pre-generated data link does not include the graspnet folder:

```
$ ls
grasps  rgbd  sgdf
```

Are there any other omissions? Thanks for your help!
Ah yes, I see: you need to download the GraspNet dataset as explained here: https://graspnet.net/datasets.html. I added an additional remark on this at the bottom of my readme file.
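Once that download finishes, a quick sanity check can confirm the expected scene files are in place. This is a hypothetical helper whose layout is inferred from the error message above (GraspNet-1Billion ships 190 scenes, 100 train and 90 test, so the default count below is an assumption based on the public dataset description):

```python
import pathlib

def missing_scene_files(root: pathlib.Path, n_scenes: int = 190) -> list:
    """Return the object_id_list.txt paths missing under root/scenes/scene_XXXX."""
    missing = []
    for i in range(n_scenes):
        path = root / "scenes" / f"scene_{i:04d}" / "object_id_list.txt"
        if not path.exists():
            missing.append(path)
    return missing
```

Calling it with your `Directories.GRASPNET` root before evaluation would show exactly which scenes are incomplete.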
Thanks, I downloaded the GraspNet-1Billion dataset and reran the code, and found that the Bmask and heatmap files do not exist. Is there a way to generate or download these files?
Yes, you need to run `centergrasp/graspnet/make_heatmaps.py`.
Great! Thanks.
When I tried to reproduce the train_sgdf.py script, I found that in the class SgdfPathsLoader, the get_list function reads the mesh file paths without passing along whether train or valid was requested, so the same mesh list is always used. This means the training mesh files are also read when building the validation dataset. Will this lead to inaccurate validation accuracy? Thank you!
```python
@staticmethod
def get_list(mode: str, num_obj: Optional[int] = None) -> List[pathlib.Path]:
    assert mode in ["train", "valid"]
    # mode is not forwarded here, so the full mesh list is always returned?
    full_mesh_paths = MeshPathsLoader.get_list("all")
```
Hello, I only had a quick look, but if I remember correctly, this is because the sgdf values are only calculated for the training meshes. The train/validation split for the sgdf decoder is done by points, i.e. by sampling many points for each mesh and using some for the train split and some for the validation split. Hope it makes sense!
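For future readers, the point-based split described here might look roughly like this (a sketch under my own assumptions, not the repo's actual implementation):

```python
import random

def split_points(points, valid_fraction=0.1, seed=0):
    """Shuffle query points sampled from one mesh and split them train/valid."""
    rng = random.Random(seed)
    idx = list(range(len(points)))
    rng.shuffle(idx)
    n_valid = int(len(points) * valid_fraction)
    valid = [points[i] for i in idx[:n_valid]]
    train = [points[i] for i in idx[n_valid:]]
    return train, valid

# e.g. 1000 sampled points per mesh -> 900 train, 100 valid
pts = [(i, i, i) for i in range(1000)]
train, valid = split_points(pts)
```

The key property is that every mesh contributes to both splits, while each individual point lands in exactly one of them.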
> The train and validation split for the sgdf decoder is done by points, i.e. by sampling many points for each mesh and using some for the train split and some for the validation split. Hope it makes sense
This explanation really solved my question. Thanks!
It is nice work! Can you provide a more detailed readme file explaining how to reproduce the results of the paper? I roughly browsed the paper. Is the same dataset as GIGA used for training? And does the paper also conduct inference verification on GraspNet?