Open sjtuyinjie opened 2 weeks ago

Very solid work! Do you have plans to release the code for refinement and visualization? By the way, I noticed the amazing performance in your real-world experiments. How do you close the sim2real gap?

Thank you. I am currently very busy with another project and hardly have time for refinement, sorry about that. For visualization, what kind of visualization do you want: the environment, as in our GIF, or the training process? For sim2real, there are visual and dynamic gaps. The visual gap is small because we use only the point cloud and robot state in sim, and a fused point cloud in the real world. For the dynamic gap, we mainly use position-based control, and since our task is human-assisting, the arm is controlled by a human, so we do not need to consider the gap for the arm.

Very nice of you to reply so quickly! I'm not in a hurry, so please deal with your own project first. If you have time afterwards, you could spend some time replying to me. Respect!

Thanks again for your answer. You mention that you fuse the point clouds from four cameras and clip them manually. Do you mean that you reduced the point-cloud gap between sim and reality and then zero-shot deployed the trained policy in the real world?

Yes, we do not finetune on real data.
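The fusion-and-clipping step discussed above could look roughly like the following. This is only an illustrative sketch, not the authors' released code: the function name, the calibrated camera extrinsics, and the workspace bounds are all assumptions. It transforms each camera's cloud into a common base frame, concatenates them, and crops to an axis-aligned box (the "manual clipping").

```python
import numpy as np

def fuse_and_crop(clouds, extrinsics, bounds_min, bounds_max):
    """Fuse per-camera point clouds into one cloud in the base frame.

    clouds: list of (N_i, 3) arrays, points in each camera's frame.
    extrinsics: list of (4, 4) homogeneous camera-to-base transforms.
    bounds_min / bounds_max: (3,) workspace box corners in the base frame.
    """
    fused = []
    for pts, T in zip(clouds, extrinsics):
        # Homogenize (N, 3) -> (N, 4), apply the transform, drop the w column.
        ones = np.ones((pts.shape[0], 1))
        pts_base = (np.hstack([pts, ones]) @ T.T)[:, :3]
        fused.append(pts_base)
    fused = np.concatenate(fused, axis=0)
    # Manual clipping: keep only points inside the workspace box.
    mask = np.all((fused >= bounds_min) & (fused <= bounds_max), axis=1)
    return fused[mask]

# Toy usage: four cameras with identity extrinsics, unit-cube workspace.
clouds = [np.array([[0.5, 0.5, 0.5], [2.0, 2.0, 2.0]]) for _ in range(4)]
extrinsics = [np.eye(4) for _ in range(4)]
cropped = fuse_and_crop(clouds, extrinsics,
                        np.zeros(3), np.ones(3))
print(cropped.shape)  # the four in-bounds points survive: (4, 3)
```

In a real setup the extrinsics would come from camera calibration, and the box bounds would be tuned by hand to match the sim workspace, which is presumably what reduces the visual gap for zero-shot transfer.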