
Octo is a transformer-based robot policy trained on a diverse mix of 800k robot trajectories.
https://octo-models.github.io/

Some technical and academic questions on Octo #78


oym1994 commented 6 months ago

Hi, thanks for this great work! After a long, tough, but exciting trial, we finally reproduced some good results on our own robot and task. In the process, a few questions came to mind:

  1. When we fine-tune on just one task (pick a cup, 100 demos), the performance is quite good. But when we fine-tuned on 4 tasks together (pick a cup, place a cup, wipe a dirty area, insert a block; 100 demos each), the performance was worse than before. What are the strategies and key considerations for multi-task fine-tuning (e.g., data ratios, task relationships)? See the data-mixing sketch after this list.

  2. As mentioned, there are three fine-tuning modes: head only (`head_only`), head + MLP only (`head_mlp_only`), and full. So far we have only used "full". What is each mode suited for, and what are the advantages of each? A sketch of our understanding follows the list.

  3. In many cases, the robot data used for fine-tuning is completely different from the data used for pre-training. In that case, what benefit does pre-training bring? Or would it be better to train from scratch on only the new data?

  4. What does the model learn from large-scale pre-training? For example, scene representations, or something else?

  5. As a general manipulation model, can it cope effectively with the differences between robot embodiments? What are some ideas for bridging this gap?
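
For context on question 1, this is roughly how we currently mix the four tasks (a minimal `tf.data` sketch with toy stand-in datasets, not our actual loading code):

```python
import tensorflow as tf

# Toy stand-ins for the four per-task datasets; in our real setup these
# are the trajectory datasets for pick_cup, place_cup, wipe_area, and
# insert_block (100 demos each).
task_datasets = [
    tf.data.Dataset.from_tensor_slices({"task_id": [i] * 100})
    for i in range(4)
]

# Uniform mixing across tasks; these weights are the data-ratio knob
# we are asking about in question 1.
weights = [0.25, 0.25, 0.25, 0.25]

mixed = tf.data.Dataset.sample_from_datasets(
    task_datasets, weights=weights, seed=0
)

for example in mixed.take(5):
    print(example["task_id"].numpy())
```

Is uniform mixing reasonable here, or should the weights be skewed toward the harder tasks (e.g., insertion)?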
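
For question 2, here is our mental model of the three modes, expressed as a generic optax parameter-freezing sketch (the "heads" / "octo_transformer" split in the toy parameter tree is our assumption about the layout, not Octo's actual code; `head_mlp_only` would presumably also unfreeze the input projection MLPs):

```python
import jax.numpy as jnp
import optax
from flax import traverse_util

def make_finetune_tx(params, mode="head_only", lr=3e-4):
    """Train only a subset of the parameters, depending on the mode."""
    flat = traverse_util.flatten_dict(params)

    def label(path):
        if mode == "full":
            return "train"
        # Assumption: the action heads live under a top-level "heads" key.
        return "train" if path[0] == "heads" else "freeze"

    labels = traverse_util.unflatten_dict({p: label(p) for p in flat})
    return optax.multi_transform(
        {"train": optax.adamw(lr), "freeze": optax.set_to_zero()},
        labels,
    )

# Toy parameter tree standing in for the real model parameters.
params = {
    "octo_transformer": {"kernel": jnp.ones((4, 4))},
    "heads": {"action": {"kernel": jnp.ones((4, 2))}},
}
tx = make_finetune_tx(params, mode="head_only")
opt_state = tx.init(params)  # frozen params receive zero updates
```

Is this roughly what the three modes do under the hood, and when would you recommend each?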

Thanks for your attention; we look forward to your kind response!