renezurbruegg / icg_net

Implementation of the Paper: ICG-Net: A unified approach for instance centric grasping
MIT License

Hello author, I used depth map data I collected myself and found that the gripper prediction results were very poor. I wonder if you have an example NPY file. In addition, I encountered a warning when loading the network, indicating that the network layers do not correspond #8

Open · Jingranxia opened this issue 14 hours ago

Jingranxia commented 14 hours ago

[image]

Jingranxia commented 14 hours ago

The upright object in the picture is my bottle, but no grasps were generated on it. Instead, the predicted gripper ran in the opposite direction and landed on the missing-depth region of my desktop. Is there a problem with how I am reproducing this on my side?

Jingranxia commented 14 hours ago

2024-09-23 13:32:37.987 | WARNING | icg_net.utils.checkpoint:load_checkpoint_with_missing_or_exsessive_keys:51 - Key not found, it will be initialized randomly: criterion.empty_weight
2024-09-23 13:32:37.990 | WARNING | icg_net.utils.checkpoint:load_checkpoint_with_missing_or_exsessive_keys:73 - excessive key: criterion.empty_weight

renezurbruegg commented 14 hours ago

The criterion warning can safely be ignored, since this weight is only used during training (it is the loss weight for the "no object" class).

For the pointcloud, we follow the convention of Edge Grasp Net, which expects the table plane to be filtered out and located at a certain height. You can have a look at this code: https://github.com/renezurbruegg/icg_benchmark/blob/master/scripts/test_icgnet.py and export the pointclouds to a file to see the correct, expected format.
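
In case it helps, here is a minimal sketch (not code from the repository) of how the point cloud could be exported from that script for comparison; the names `points` and `normals` are assumptions for whatever arrays the script builds:

```python
# Hypothetical export snippet to drop into scripts/test_icgnet.py where the
# point cloud is assembled. `points` and `normals` are assumed (N, 3) arrays.
import numpy as np

def export_cloud(points, normals, path="reference_cloud.npy"):
    """Save xyz and normals side by side so the expected format can be inspected."""
    np.save(path, np.concatenate([points, normals], axis=1))  # shape (N, 6)

# Later, compare against your own capture:
# ref = np.load("reference_cloud.npy")
# print(ref[:, :3].min(0), ref[:, :3].max(0))  # should sit inside the workspace, table removed
```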

Jingranxia commented 13 hours ago

Thank you very much for taking the time to answer my questions. That is great.

Jingranxia commented 12 hours ago

I have another question. The predicted grippers end up on the cluttered items, and their direction is opposite to the actual Z-axis orientation of my camera. Where should I set this option?

renezurbruegg commented 12 hours ago

In this case, the orientation of the surface normals of the pointcloud is probably wrong. Either make sure you have the correct surface normals in the pointcloud or provide the correct camera location here: https://github.com/renezurbruegg/icg_net/blob/main/scripts/show.py#L64
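
If you estimate the normals yourself, an Open3D snippet along these lines (the camera position here is only a placeholder for your own extrinsics) will orient them towards the camera:

```python
import numpy as np
import open3d as o3d

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)  # your (N, 3) point array

# Estimate normals, then flip them so they point back towards the camera origin.
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
camera_location = np.array([0.0, 0.0, 0.6])  # placeholder, use your camera position
pcd.orient_normals_towards_camera_location(camera_location)
```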

renezurbruegg commented 12 hours ago

Note: if you want to use your own custom normals, make sure to uncomment the section that assigns the normals here: https://github.com/renezurbruegg/icg_net/blob/main/scripts/show.py#L48
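
For reference, assigning pre-computed normals to an Open3D cloud can look like the line below (a sketch, not necessarily the exact code in show.py; `my_normals` is an assumed (N, 3) array that is already oriented towards the camera):

```python
pcd.normals = o3d.utility.Vector3dVector(my_normals)  # use your own normals instead of re-estimating
```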

Jingranxia commented 11 hours ago

Strictly speaking, the viewpoint position read by the camera should be [0, 0, 0], so why do you use a different value for the "eye" in your data?

Jingranxia commented 11 hours ago

[image] These are the normals and the camera-origin coordinate axes from when I collected the data.

renezurbruegg commented 11 hours ago

We assume the pointcloud to be in the world frame (with the z-axis pointing upwards) and define the workspace as [0, 0.3 m]^3, not in the camera frame. Additionally, the surface normals should point towards the camera origin.

You can check the evaluation scripts with pybullet and compare the resulting pointcloud with yours, to make sure your pointcloud adheres to this convention.
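
As a rough sketch of that convention (not code from the repository; `T_world_cam`, the table height, and the workspace size are placeholders to replace with your own setup):

```python
import numpy as np

# pts_cam: your (N, 3) points in the camera frame.
# Transform to the world frame with the z-axis pointing up.
T_world_cam = np.eye(4)  # placeholder: your camera-to-world extrinsics
pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
pts_world = (T_world_cam @ pts_h.T).T[:, :3]

# Keep only points above the table plane and inside the ~0.3 m cubic workspace.
table_height = 0.05                         # placeholder table height in the world frame
workspace = np.array([0.3, 0.3, 0.3])
keep = np.all((pts_world >= 0.0) & (pts_world <= workspace), axis=1)
keep &= pts_world[:, 2] > table_height
pts_ws = pts_world[keep]
```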

Jingranxia commented 10 hours ago

Hello author, may I ask if there are plans to open source the training code? Could you please provide a link?

Jingranxia commented 10 hours ago

> We assume the pointcloud to be in the world frame (with the z-axis pointing upwards) and define the workspace as [0, 0.3 m]^3, not in the camera frame. Additionally, the surface normals should point towards the camera origin.
>
> You can check the evaluation scripts with pybullet and compare the resulting pointcloud with yours, to make sure your pointcloud adheres to this convention.

Understood. I should have read the paper first; that caused many of my mistakes.

Jingranxia commented 8 hours ago

[image] Hello author, I have created data according to your format again. After transforming the point cloud into the base frame of the robotic arm, I used a mask to extract the point cloud of a bottle of water for prediction, but no gripper was generated. Why is this?
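
A quick check like the sketch below (the filename and camera position are placeholders) can show whether the masked cloud still follows the convention above, e.g. whether it lies inside the workspace and the normals face the camera:

```python
import numpy as np

cloud = np.load("bottle_cloud.npy")       # placeholder file: assumed (N, 6) = xyz + normals
pts, nrm = cloud[:, :3], cloud[:, 3:6]
cam = np.array([0.0, 0.0, 0.6])           # placeholder: camera position in the same frame

print("num points:", len(pts))                        # very sparse clouds often yield no grasps
print("bounds min/max:", pts.min(0), pts.max(0))      # should lie roughly inside [0, 0.3]^3
facing = np.einsum("ij,ij->i", cam - pts, nrm) > 0    # normals should point towards the camera
print("fraction of normals towards camera:", facing.mean())  # should be close to 1.0
```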