Jingranxia opened this issue 14 hours ago
The upright object in the picture is my bottle, but no grasps were generated on the bottle itself. Instead, the predicted gripper points in the opposite direction, onto the region of my desktop where the depth is missing. Is there a problem with how I am reproducing the results on my setup?
2024-09-23 13:32:37.987 | WARNING | icg_net.utils.checkpoint:load_checkpoint_with_missing_or_exsessive_keys:51 - Key not found, it will be initialized randomly: criterion.empty_weight
2024-09-23 13:32:37.990 | WARNING | icg_net.utils.checkpoint:load_checkpoint_with_missing_or_exsessive_keys:73 - excessive key: criterion.empty_weight
The criterion warning can safely be ignored, since this weight is only used during training (it is the loss weight for the no-object class).
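For context, this is roughly how such warnings arise when a checkpoint is loaded non-strictly. The snippet below is a toy sketch with a stand-in nn.Linear model, not the actual ICG-Net loading code; it only assumes a standard PyTorch state dict.

```python
import torch
import torch.nn as nn

# Toy stand-in for the real model, just to illustrate the mechanism.
model = nn.Linear(4, 2)

# Fake checkpoint: the regular weights plus one training-only key,
# analogous to criterion.empty_weight in the ICG-Net checkpoint.
state = dict(model.state_dict())
state["criterion.empty_weight"] = torch.ones(2)

# strict=False lets loading proceed; the mismatched key is only reported,
# which is what the WARNING lines in the log correspond to.
missing, unexpected = model.load_state_dict(state, strict=False)
print("unexpected keys:", unexpected)  # -> ['criterion.empty_weight']
```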
For the point cloud, we follow the convention of Edge Grasp Network, which expects the table plane to be filtered out and the point cloud to be located at a certain height. You can have a look at this code: https://github.com/renezurbruegg/icg_benchmark/blob/master/scripts/test_icgnet.py and export the point clouds to a file to see the correct, expected format.
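As a concrete illustration, here is a minimal sketch of filtering out the table plane and exporting the cloud for inspection, assuming Open3D; the input file name and the 2 mm height threshold are placeholders you would adapt to your own data and sensor noise.

```python
import numpy as np
import open3d as o3d

# Load a depth-derived point cloud (Nx3, world frame); file name is a placeholder.
points = np.load("my_pointcloud.npy")

# Filter out the table plane: keep only points above a small height
# threshold (2 mm here; tune this to your depth sensor's noise).
points = points[points[:, 2] > 0.002]

# Export to a file so you can compare it against the benchmark's clouds.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
o3d.io.write_point_cloud("filtered_cloud.ply", pcd)
```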
Thank you very much for taking the time to answer my questions. It helps a lot.
I have another question. The predicted gripper lands on the cluttered objects, and its approach direction is opposite to the actual Z-axis orientation of my camera. Where should I set this?
In this case, the orientation of the surface normals in the point cloud is probably wrong. Either make sure you have correct surface normals in the point cloud, or provide the correct camera location here: https://github.com/renezurbruegg/icg_net/blob/main/scripts/show.py#L64
Note: if you want to use your own custom normals, make sure to uncomment the section that assigns the normals here: https://github.com/renezurbruegg/icg_net/blob/main/scripts/show.py#L48
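If you go the custom-normals route, a minimal sketch of estimating normals and orienting them towards the camera with Open3D might look like the following; the file name and camera location are placeholders for your own setup.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("filtered_cloud.ply")  # placeholder file

# Estimate normals from local neighborhoods...
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30)
)

# ...then flip them so they point towards the camera origin, which is
# the orientation the model expects.
camera_location = np.array([0.4, 0.0, 0.5])  # your camera position in the world frame
pcd.orient_normals_towards_camera_location(camera_location)
```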
Strictly speaking, the viewpoint position as seen by the camera should be [0, 0, 0], so why do you use a different value for the "eye" in your data?
These are the normals and the camera-origin coordinate axes from when I collected the data.
We assume the point cloud to be in the world frame (with the z axis pointing upwards), not in the camera frame, and define the workspace as [0, 0.3 m]^3. Additionally, the surface normals should point towards the camera origin.
You can run the evaluation scripts with PyBullet and compare their point clouds with yours to make sure your point cloud adheres to the convention.
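To check the convention on your side, here is a rough sketch of bringing a camera-frame cloud into the world frame and cropping it to the assumed [0, 0.3 m]^3 workspace; the calibration and point-cloud file names are hypothetical.

```python
import numpy as np

# 4x4 camera-to-world transform from your calibration (placeholder file).
T_world_cam = np.load("T_world_cam.npy")

points_cam = np.load("points_cam.npy")  # Nx3, camera frame
points_h = np.c_[points_cam, np.ones(len(points_cam))]
points_world = (T_world_cam @ points_h.T).T[:, :3]

# Crop to the assumed [0, 0.3 m]^3 workspace in the world frame.
mask = np.all((points_world >= 0.0) & (points_world <= 0.3), axis=1)
points_world = points_world[mask]
```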
Hello author, may I ask whether there are plans to open-source the training code? If so, could you please provide a link?
Understood. I should have read the paper first; skipping it caused many of my mistakes.
Hello author, I have created data following your format again. After transforming the point cloud into the base frame of the robotic arm, I used a mask to extract the point cloud of a water bottle for prediction, but no gripper was generated. Why is this?