Closed · SiyuanHuang95 closed this 1 month ago
Hi, thanks for sharing this repo! I'd like to suggest some of our papers on Embodied AI; feel free to add any you find useful.
Instruct2Act: Mapping Multi-Modality Instructions to Robotic Actions with Large Language Model. S. Huang, Z. Jiang, H. Dong, Y. Qiao, P. Gao, H. Li. arXiv preprint arXiv:2305.11176.
ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models. S. Huang, I. Ponomarenko, Z. Jiang, X. Li, X. Hu, P. Gao, H. Li, H. Dong. IROS 2024.
A3VLM: Actionable Articulation-Aware Vision Language Model. S. Huang, H. Chang, Y. Liu, Y. Zhu, H. Dong, P. Gao, A. Boularias, H. Li. arXiv preprint arXiv:2406.07549.
Thanks a lot for the suggestions! We have added these papers.