Closed — Yeager-101 closed this issue 3 weeks ago
This repo mainly builds the whole pipeline on top of the language-table simulation environment. If you want to test on real robots, you need to carry out the following steps:
- You should collect a custom dataset, via teleoperation or scripted policy code, for grasping a specific object at different positions (we have tried this; it is easy to obtain). To start, you can collect roughly 100 episodes for a single task, e.g. grasping a red block placed at random positions on a table.
- Convert your custom dataset to the RLDS format following the instructions in this repo.
- Then train the model on your custom dataset with RT-1's TensorFlow code. If you prefer PyTorch, you can follow the instructions of other repos such as the PyTorch port, but you will also need to change the dataset format to match the PyTorch code; I think that is easier.
- After you have trained for enough steps that the loss decreases, test the checkpoints in your environment. You need to write code that handles communication between the robot and your host server: the robot's camera sensor sends images to the server, the server runs model inference to produce an action and sends it back, and the robot executes it.
- I strongly recommend starting with calvin or another simulation environment to learn the whole pipeline. Many factors in a real environment affect the final result. Our team tested RT-1 in a real environment and the result was not good enough for fine-grained control, so you may want to try other recent open-source models.
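To make the first two steps concrete, here is a minimal sketch of how one teleoperated episode can be laid out in the step-based structure that RLDS expects (observation / action / boundary flags). All field names below (`rgb`, `instruction`, the 7-dimensional action) are placeholder assumptions; match them to whatever your conversion script actually defines.

```python
# Hypothetical layout of one collected episode in an RLDS-style
# step structure. Field names are assumptions, not the repo's schema.

def make_step(rgb, instruction, action, is_first=False, is_last=False):
    """One timestep: camera image, language command, commanded action."""
    return {
        "observation": {"rgb": rgb, "instruction": instruction},
        "action": action,      # e.g. 7-DoF end-effector delta + gripper
        "is_first": is_first,  # True only on the first step
        "is_last": is_last,    # True only on the terminal step
    }

def make_episode(steps):
    """Wrap a list of steps, marking the episode boundaries."""
    steps[0]["is_first"] = True
    steps[-1]["is_last"] = True
    return {"steps": steps}

# Example: a toy 3-step episode for "pick up the red block".
episode = make_episode([
    make_step(rgb=f"frame_{t}.png",
              instruction="pick up the red block",
              action=[0.0] * 7)
    for t in range(3)
])
```

A real conversion script would then feed episodes in this shape to a TFDS dataset builder; the point here is only the per-step structure and the `is_first`/`is_last` boundary flags.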
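One detail worth knowing before the training step: RT-1 outputs actions as discrete tokens, with each continuous action dimension clipped to a fixed range and binned into 256 buckets. A minimal sketch of that tokenization, assuming placeholder bounds of ±1.0 (use your robot's real limits):

```python
import numpy as np

# RT-1-style action discretization: clip each dimension to a fixed
# range, then bin it into 256 buckets. Bounds here are assumptions.
NUM_BINS = 256
LOW, HIGH = -1.0, 1.0

def tokenize(action):
    """Map continuous values in [LOW, HIGH] to integer bins 0..255."""
    a = np.clip(np.asarray(action, dtype=np.float64), LOW, HIGH)
    frac = (a - LOW) / (HIGH - LOW)  # normalize to [0, 1]
    return np.minimum((frac * NUM_BINS).astype(int), NUM_BINS - 1)

def detokenize(tokens):
    """Map bin indices back to bin-center continuous values."""
    return LOW + (np.asarray(tokens) + 0.5) * (HIGH - LOW) / NUM_BINS

tokens = tokenize([-1.0, 0.0, 1.0])  # -> bins 0, 128, 255
```

The round-trip error of `detokenize(tokenize(a))` is at most half a bin width, which is why the action bounds you choose matter: too wide a range wastes resolution.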
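The robot-to-server communication in the deployment step can be sketched as a plain TCP loop with length-prefixed JSON messages. This is one possible transport, not the repo's code: in a real deployment the `"image"` field would carry an encoded camera frame and the server would run model inference instead of returning a zero action.

```python
import json
import socket
import struct
import threading

def send_msg(sock, obj):
    """Serialize to JSON and prefix with a 4-byte big-endian length."""
    data = json.dumps(obj).encode()
    sock.sendall(struct.pack(">I", len(data)) + data)

def recv_msg(sock):
    """Read one length-prefixed JSON message."""
    (length,) = struct.unpack(">I", sock.recv(4))
    buf = b""
    while len(buf) < length:
        buf += sock.recv(length - len(buf))
    return json.loads(buf)

def serve_once(server_sock):
    """Accept one connection, read an observation, reply with an action."""
    conn, _ = server_sock.accept()
    with conn:
        obs = recv_msg(conn)           # camera image + instruction
        action = [0.0] * 7             # placeholder for model inference
        send_msg(conn, {"action": action, "echo": obs["instruction"]})

# Server side (would run on the inference host).
server = socket.socket()
server.bind(("127.0.0.1", 0))          # OS-assigned free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Robot side: send an observation, receive an action, execute it.
robot = socket.create_connection(("127.0.0.1", port))
send_msg(robot, {"image": "<encoded frame>", "instruction": "pick the red block"})
reply = recv_msg(robot)
robot.close()
server.close()
```

Keeping the model on a separate host like this is what makes the latency budget explicit: image encoding, network transfer, and inference time all sit between sensing and acting.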
Maybe after some time I will try to open-source the whole pipeline, test the limits of each algorithm in a real environment, and make a tutorial. However, I have some research work to do right now. I hope the information above is helpful, and I wish you good luck.
God, thank you very much for your reply. I will study further according to your suggestions, and I look forward to the open-sourcing of your entire pipeline. Thank you again for your suggestions.
Thank you very much for your work. I am a novice in this field and really want to know how to use it. I have a robotic arm and a RealSense camera, but it seems that you are testing with simulated data here. I would like to know how to actually use it on a hardware system.