chengkaiAcademyCity / EnvAwareAfford

Official repository of the NeurIPS 2023 paper "Learning Environment-Aware Affordance for 3D Articulated Object Manipulation under Occlusions"
MIT License

Question about generating offline data #1

Open goxq opened 2 weeks ago

goxq commented 2 weeks ago

Hi, thanks for your excellent work. I recently ran your code and have some questions.

  1. When I run `bash scripts/run_gen_offline_data.sh`, the terminal output (see attached screenshot) shows both 'cmd fail' and 'cmd succ'. Is that the expected behavior?

  2. Some of the output of the command built in datagen.py, `xvfb-run -a python collect_data.py %s %s %s %s %d %s --out_dir %s --trial_id %d --random_seed %d --data_split %s --no_gui >> "my_collect_data.log" 2>&1`, is shown in the attached screenshots.

Is this output expected? I noticed errors indicating that no file was found (highlighted in the red box in the screenshot), even though the data is placed in the data folder as you suggested.

Thank you very much for your assistance.

chengkaiAcademyCity commented 2 weeks ago

Hi Xianqiang,

Thank you for reaching out and for your interest in our work. We appreciate your feedback.

  1. Regarding the output you see when running `bash scripts/run_gen_offline_data.sh`: yes, it is expected in our experiments. 'cmd fail' means that the data point encountered object clipping in the scene. Our scene configuration places objects one by one, so they may clip into each other; when that happens, we discard the data point and print 'cmd fail'.

  2. Based on the images you provided, the command `xvfb-run -a python collect_data.py ...` appears to be failing because of missing commands, which is usually caused by incorrect paths or permissions. If you are running on a server without a display environment, please make sure the xvfb tool is installed. Otherwise, the issue may be due to your SAPIEN dataset version. If the failure is rare, you can safely ignore it and still collect enough data for training.
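On a headless server, a quick check for xvfb might look like this (a sketch; the `apt-get` install hint assumes a Debian/Ubuntu-style system):

```shell
# Check whether the xvfb-run wrapper used by datagen.py is on PATH
# before launching headless data collection.
if command -v xvfb-run >/dev/null 2>&1; then
    xvfb_status="found"
else
    xvfb_status="missing"   # on Debian/Ubuntu: sudo apt-get install -y xvfb
fi
echo "xvfb-run: $xvfb_status"
```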

Thanks, and I hope your experiments go well!

goxq commented 2 weeks ago

Thanks a lot for your reply! I've finished running the `scripts/run_gen_offline_data.sh` script, and it seems to have generated enough data points, although there were some error messages during generation. I'd like to confirm the following:

For the pushing task, the final filter_tuple_list.txt (the intersection of cpbs, cpct, and cpsm) includes 7981 entries: 6960 false and 1021 true, as shown in the attached screenshot. Does this exceed the amount required for "push" training (the paper mentions 900 successful and failed interactions for pushing)? Can I simply select 900 successful and 900 failed data points from the 1021 true and 6960 false entries for training, as described in your paper?
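If it helps, the balanced selection I have in mind could be sketched as below (hypothetical: `sample_balanced` is my own helper name, and it assumes each line of filter_tuple_list.txt ends in a True/False token, which should be checked against the actual file format):

```python
import random

def sample_balanced(path, n_per_class=900, seed=0):
    """Sample n_per_class successful (True) and failed (False) entries
    from a filter list file, without replacement."""
    with open(path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    # Assumption: the last whitespace-separated token marks success.
    true_lines = [ln for ln in lines if ln.split()[-1] == "True"]
    false_lines = [ln for ln in lines if ln.split()[-1] == "False"]
    rng = random.Random(seed)  # fixed seed for a reproducible split
    return (rng.sample(true_lines, n_per_class),
            rng.sample(false_lines, n_per_class))
```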

Thanks a lot!