bdaiinstitute / vlfm

This repository provides the code associated with the paper VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation (ICRA 2024).
http://naoki.io/portfolio/vlfm.html
MIT License

Refactoring for better abstractions #5

Closed · naokiyokoyamabd closed this 1 year ago

naokiyokoyamabd commented 1 year ago

Made things much neater by clarifying which parts of the code use Habitat and which don't. Also switched from using just position and yaw to a full camera transformation matrix, which allows for cameras that also have pitch (like the Spot arm camera); see the sketch below.
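To illustrate the idea (this is a minimal sketch, not the PR's actual code): a (position, yaw) pair can only describe a camera looking parallel to the ground, whereas a 4x4 camera-to-world matrix can also encode pitch. The function name `yaw_pitch_to_transform` is hypothetical, and a z-up world frame is assumed; the repo's/Habitat's axis conventions may differ.

```python
import numpy as np

def yaw_pitch_to_transform(position: np.ndarray, yaw: float, pitch: float = 0.0) -> np.ndarray:
    """Build a 4x4 camera-to-world transform from position, yaw, and pitch.

    Hypothetical helper assuming a z-up world frame. With pitch=0 this
    reduces to the old (position, yaw) representation; a nonzero pitch
    covers tilted cameras such as a gripper-mounted one.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    # Rotation about the world z-axis (yaw)...
    rot_yaw = np.array([
        [cy, -sy, 0.0],
        [sy,  cy, 0.0],
        [0.0, 0.0, 1.0],
    ])
    # ...composed with a rotation about the camera's y-axis (pitch).
    rot_pitch = np.array([
        [cp, 0.0, sp],
        [0.0, 1.0, 0.0],
        [-sp, 0.0, cp],
    ])
    tf = np.eye(4)
    tf[:3, :3] = rot_yaw @ rot_pitch
    tf[:3, 3] = position
    return tf
```

Code that still only needs a planar pose can recover it from the matrix, e.g. `position = tf[:3, 3]` and `yaw = np.arctan2(tf[1, 0], tf[0, 0])`, so the matrix representation strictly generalizes the old one.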