cshizhe / VLN-HAMT

Official implementation of History Aware Multimodal Transformer for Vision-and-Language Navigation (NeurIPS'21).
MIT License

Could you specify the models that can reproduce the reported results? #6

Closed Xin-Ye-1 closed 2 years ago

Xin-Ye-1 commented 2 years ago

Hi Shizhe,

Thank you for releasing the code! Could you specify or release the models needed to reproduce the reported results? I would greatly appreciate any additional instructions!

cshizhe commented 2 years ago

Hi, the pretrained models have been released; see https://github.com/cshizhe/VLN-HAMT#installation.