[Open] li-source opened this issue 4 years ago
Hi @li-source, actually I did not split the dataset in the object-wise setting. When you use the code to generate data, the split is image-wise. The intuition is:

- Image-wise split: train with all objects while some views remain unseen. This evaluates the intra-object generalization capability of the model.
- Object-wise split: train on all available views of each training object and test on entirely new objects. This evaluates the inter-object generalization capability of the model.

Following the notes above, you can split the data with whatever method you want. Please read the data description on the web carefully: http://pr.cs.cornell.edu/grasping/rect_data/data.php
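The two splits described above can be sketched as follows. This is only an illustration, not the repo's actual code: `split_grasp_data` is a hypothetical helper, and it assumes you already have `(image_id, object_id)` pairs (in the Cornell dataset, the image-to-object mapping is published separately on the dataset page linked above).

```python
import random

def split_grasp_data(samples, mode="image-wise", test_ratio=0.2, seed=0):
    """Split a list of (image_id, object_id) pairs into train/test lists.

    Hypothetical sketch of image-wise vs object-wise splitting; not the
    actual code from this repository.
    """
    rng = random.Random(seed)
    if mode == "image-wise":
        # Shuffle individual images: an object can appear in both splits,
        # but some of its views stay unseen during training.
        shuffled = list(samples)
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * (1 - test_ratio))
        return shuffled[:cut], shuffled[cut:]
    if mode == "object-wise":
        # Hold out whole objects: every view of a test object is unseen,
        # so the model is evaluated on entirely new objects.
        objects = sorted({obj for _, obj in samples})
        rng.shuffle(objects)
        cut = int(len(objects) * (1 - test_ratio))
        train_objects = set(objects[:cut])
        train = [s for s in samples if s[1] in train_objects]
        test = [s for s in samples if s[1] not in train_objects]
        return train, test
    raise ValueError(f"unknown mode: {mode}")
```

With an object-wise split, the train and test object sets are disjoint by construction, which is exactly what makes it a test of inter-object generalization.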
Thank you for your reply! I have another question: how do you generate the bounding box of the object from the target image and the corresponding background image?
Hi @li-source,
Please check `./dataset/*.m` and the dataloader `grasp_dataset.py`.
I think you can:
- Ask @mouad47 in #1. He is working on the same problem.
- Check relevant issues here: https://github.com/ivalab/grasp_multiObject_multiGrasp/issues
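For what it's worth, the usual idea behind extracting a bounding box from a target/background image pair is background subtraction: difference the two images, threshold, and take the extent of the changed pixels. The sketch below assumes that approach; `bbox_from_background` and its `thresh` parameter are hypothetical, and the repo's `./dataset/*.m` scripts may implement this differently.

```python
import numpy as np

def bbox_from_background(target, background, thresh=30):
    """Estimate an axis-aligned bounding box via background subtraction.

    Sketch only: difference the images, threshold, and bound the changed
    pixels. `target` and `background` are numpy arrays of equal shape,
    grayscale or RGB.
    """
    # Cast to a signed type so the difference of uint8 images cannot wrap.
    diff = np.abs(target.astype(np.int16) - background.astype(np.int16))
    if diff.ndim == 3:                  # collapse color channels
        diff = diff.max(axis=2)
    ys, xs = np.nonzero(diff > thresh)  # pixels that changed noticeably
    if ys.size == 0:
        return None                     # no object detected
    # (x_min, y_min, x_max, y_max) in pixel coordinates
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

In practice you would likely also denoise the mask (e.g. morphological opening) before taking the extent, since sensor noise can produce stray changed pixels.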
ok, thanks!
Hi, how do you split the Cornell dataset in the image-wise and object-wise settings? Thanks!