xukechun / Efficient_goal-oriented_push-grasping_synergy

[RAL & IROS 2021] Efficient learning of goal-oriented push-grasping synergy in clutter
60 stars 10 forks

Not working repo #4

Closed KamalMokhtar closed 2 years ago

KamalMokhtar commented 2 years ago

Dear Xu,

This project does not work; there are many bugs and issues. I have tried to reproduce your work for three months now and have not been able to get your results. I hope you can reproduce and share the results.

Hoping to hear from you! Kamal

xukechun commented 2 years ago

Hi Kamal:

  1. What do you mean by "many bugs and issues"? Is anything confusing you, or does any command confuse you?
  2. I can share our [reproduced model] tomorrow.
  3. Also, as I mentioned in the README, we found that exploring with a pre-trained goal-agnostic grasp model can effectively speed up the training process. You can try this technique.
Kamalnl92 commented 2 years ago

Hi xukechun,

  1. I will make a report and show all the problems. I should have it done by tomorrow.
  2. That would be great. Could you share all the models and say how many episodes or iterations you trained them for? I have created a shared folder where you can place all the models and pictures (logs); that will give me more insight into where things go wrong.

Thank you!

Kamalnl92 commented 2 years ago

Hi xukechun,

This happens after many training iterations, at any stage, usually between 200 and 2000 iterations.

image

image

The above two problems are related; I believe I solved this issue: https://github.com/xukechun/Efficient_goal-oriented_push-grasping_synergy/issues/3

The training stops when this RuntimeError appears.

image

image

image

image

This project is quite unstable (not working). I would like to cooperate to solve all the problems; if you would like to know more (e.g. via print statements in the source code), I would gladly provide that.

I would like to see the models, the terminal output, and the log/ files from your training runs. Could you upload them to the link below? That would help me see where our results differ.

https://drive.google.com/drive/folders/1-hluT4AHMCYn0QHmrFC50WSVdIAWKwOF?usp=sharing

xukechun commented 2 years ago

Thanks for your problem report. I think you can refer to the repo of VPG to solve most of your problems:

Kamalnl92 commented 2 years ago

Thank you for your reply.

xukechun commented 2 years ago
Kamalnl92 commented 2 years ago

I believe that if you reproduce the work using the code in this GitHub repo, you would face the same problems. While training, I would appreciate it if you could save the terminal output, which can easily be done with ' > terminaloutput.txt'; that will save everything printed to the terminal during training. Also, please share the models that have been trained — again, you may use the shared folder I created above to upload everything, including the log/ directory.

I hope you can do that. That would make things clear for everyone. Thank you.
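As a side note, a slightly more robust variant of the redirection mentioned above also captures stderr (where Python tracebacks go) while still showing live output. This is a sketch: `python main.py --is_sim` stands in for the repo's actual training command and flags.

```shell
# Save everything the training run prints (stdout AND stderr) to a log file
# while still displaying it live in the terminal.
# 'python main.py --is_sim' is illustrative; substitute the real command.
python main.py --is_sim 2>&1 | tee terminaloutput.txt
```

Plain `> terminaloutput.txt` would capture stdout only, so the RuntimeError tracebacks discussed above could be lost.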

xukechun commented 2 years ago

Sorry, I haven't tried headless mode before. The solutions to problems 3 and 4 might need a display window, since you have to close the popup window three times.

Kamalnl92 commented 2 years ago

I understand. Would it be possible for you to train the models again using this repo's code, please? I would appreciate that.

Kamalnl92 commented 2 years ago

Another issue, in addition to those mentioned above (problem 6).

The mask of the Q-values does not fit the objects; see the image below. This example happened while training the grasp models 554 and 556.

Screenshot 2021-12-26 at 17 20 21
Kamalnl92 commented 2 years ago

Problem 7: pushes are executed nowhere near the objects. Iteration 27, push net.

000027 best_push_direction 000027 grasp 000027 mask 000027 push

Kamalnl92 commented 2 years ago

Hello,

Please do not close the issue while it is still open (not solved). Your project should be reproducible, and so far this is not the case.

Kamalnl92 commented 2 years ago

I see you have closed the issue. I have noticed that the masking is not working; maybe that is the cause of some of the problems.

xukechun commented 2 years ago

Oh, so sorry for closing it by mistake. Do you mean that the mask as input is not necessary?

xukechun commented 2 years ago

> Another issue next to the above-mentioned issues (problem 6).
>
> The mask of the q values are not fitting the objects, see the image below, this example happened while training the grasp model 554 and 556 Screenshot 2021-12-26 at 17 20 21

Is this the issue you mean? Is there any report about this problem? I will carefully check this repo's code for the mask generation.

Kamalnl92 commented 2 years ago

The mask is necessary! I believe that if you run it, you will see that it sometimes does not work. As you can see in the image, it happened at iteration 556, so it does not occur from the first iteration. I am not sure what would be helpful to you from my side.
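For context, the role of the mask being discussed can be sketched as follows. This is an illustrative NumPy sketch, not the repo's actual code: `mask_q_values` is a hypothetical helper showing why a broken mask matters — the goal object's mask restricts the Q-value map so the selected action lands on the target's pixels.

```python
import numpy as np

def mask_q_values(q_map, goal_mask):
    """Zero out Q-values everywhere the goal-object mask is empty,
    so argmax can only pick pixels on the target object."""
    assert q_map.shape == goal_mask.shape
    return np.where(goal_mask > 0, q_map, 0.0)

# Toy 2x2 example: the globally best Q-value (0.9) lies off the target,
# so masking redirects the action to the best on-target pixel (0.5).
q = np.array([[0.2, 0.9],
              [0.5, 0.1]])
mask = np.array([[1, 0],
                 [1, 1]])
masked = mask_q_values(q, mask)
best = np.unravel_index(np.argmax(masked), masked.shape)  # → (1, 0)
```

If the mask does not fit the object (as in the screenshot above), the argmax can fall on background pixels, which would explain grasps and pushes landing "nowhere".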

xukechun commented 2 years ago

Thanks for your comment! I'll run and check carefully.

Kamalnl92 commented 2 years ago

Thank you for your reply!

xukechun commented 2 years ago

Hi, we have just reproduced the mask problem and corrected it. We modified the functions get_obj_mask and get_obj_masks in robot.py to match get_test_obj_mask and get_test_obj_masks. We feel really sorry for this negligence when reorganizing the code. The key code is these two lines: https://github.com/xukechun/Efficient_goal-oriented_push-grasping_synergy/blob/feb5dd8196e60f2e454ecf2a26c438043eb65ab1/robot.py#L236-L237 We also made another modification in main.py to make the push action more stable. I hope my reply helps you, and again, sorry for our negligence.
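For readers following along, the general shape of such a mask function can be sketched as below. This is a hypothetical illustration: the function here only shares its name with the repo's `get_obj_mask`, and the per-pixel segmentation-image input is an assumption, not the repo's actual implementation.

```python
import numpy as np

def get_obj_mask(seg_image, obj_id):
    """Illustrative sketch (NOT the repo's code): return a binary mask of
    the pixels in a per-pixel segmentation image that belong to obj_id."""
    return (seg_image == obj_id).astype(np.uint8)

# Toy segmentation image: 0 = background, 2 and 3 = object ids.
seg = np.array([[0, 2],
                [2, 3]])
mask = get_obj_mask(seg, 2)  # → [[0, 1], [1, 0]]
```

The bug described above illustrates why the training-time and test-time mask paths should share one implementation: any drift between them silently corrupts the Q-value masking during training.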

Kamalnl92 commented 2 years ago

Thank you, I will check it :)