bdaiinstitute / vlfm

This repository provides the code associated with the paper VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation (ICRA 2024).
http://naoki.io/portfolio/vlfm.html
MIT License

Release of training code #43

Closed. Hongbin0411 closed this issue 1 month ago.

Hongbin0411 commented 1 month ago

Hi,

Could you release the code that you folks used for training the models?

Thanks!

naokiyokoyama commented 1 month ago

This is a zero-shot approach, so there is no training code.