Hello,

Thank you for sharing your excellent research. I have a few questions regarding the details:
In Table 1, could you please clarify which method was used for policy learning in “Ours”? Is the corresponding code publicly available?
In the Policy Distillation section, it is mentioned that ACT policies were trained on self-collected data. I was wondering whether this is the method used in your work.
Regardless of the specific approach, I would like to confirm whether code is available for directly training and evaluating tasks with the framework you developed.
Thank you for your time and for providing such valuable research.

Best regards,
Jiwon
The "Ours" method is the RAM we proposed. The code is publicly available in the repo.
As stated in Section 4.5, our zero-shot RAM can collect high-quality data for behavior cloning. For the BC implementation, you can check out ACT, Diffusion Policy, or any other method you are interested in.
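For reference, here is a minimal, framework-agnostic sketch of behavior cloning on RAM-collected demonstrations. The file name `demos.npz`, its `observations`/`actions` keys, and the small MLP policy are placeholders rather than our actual data format or policy; in practice you would swap in ACT or Diffusion Policy and your own dataset loader.

```python
# Hedged sketch: a plain behavior-cloning loop on RAM-collected demonstrations.
# "demos.npz" and its array keys are assumptions, not the repo's data format;
# the MLP below is only a stand-in for ACT / Diffusion Policy.
import numpy as np
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

data = np.load("demos.npz")                       # hypothetical demo file
obs = torch.tensor(data["observations"], dtype=torch.float32)
act = torch.tensor(data["actions"], dtype=torch.float32)
loader = DataLoader(TensorDataset(obs, act), batch_size=256, shuffle=True)

policy = nn.Sequential(                           # stand-in policy network
    nn.Linear(obs.shape[-1], 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, act.shape[-1]),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

for epoch in range(100):
    for o, a in loader:
        loss = nn.functional.mse_loss(policy(o), a)   # supervised action regression
        opt.zero_grad()
        loss.backward()
        opt.step()
```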
Our method is training-free. You can use RAM in any environment you want (simulated or real) as a plug-and-play module to perform zero-shot object manipulation.
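As a rough illustration of what "plug-and-play" could look like in an environment loop: the `RAM.predict_affordance` call and the `env`/`controller` interfaces below are placeholders, not the repo's actual API, so please refer to the repository for the real entry points.

```python
# Hedged sketch of training-free, plug-and-play use inside a generic env loop.
# ram.predict_affordance and the env/controller interfaces are hypothetical names.

def run_episode(env, ram, controller, max_steps=200):
    obs = env.reset()                              # e.g. an RGB-D observation
    target = ram.predict_affordance(obs)           # zero-shot: no task-specific training
    for _ in range(max_steps):
        action = controller.servo_to(target, obs)  # move the end-effector toward the target
        obs, done = env.step(action)
        if done:
            break
    return obs
```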
I hope the messages above answer your questions. If you have any further questions, feel free to let me know.