maytusp / tta-nav

The official implementation of "TTA-Nav: Test-time Adaptive Reconstruction for Point-Goal Navigation under Visual Corruptions"

Problems with running script #1

Open kozhukovv opened 3 months ago

kozhukovv commented 3 months ago

Hi! I read your article; it is very impressive and interesting work. However, from the repository description it is not entirely clear how to run the model and evaluate its performance. I use the pre-trained weights linked in the README, but at startup:

python -u -m habitat_baselines.run habitat_baselines.evaluate=True --config-name pointnav/tta-nav/adapt/lighting_5.yaml

the script asks for completely different weights.


Besides that, I tried to run train_ae.sh, but the training .py script hard-codes paths to mount points on your local machine. Even after pointing it to the Gibson dataset, training still does not start; it seems the model expects a specific set of images rather than the raw Gibson dataset.


Given this, I would like to know the detailed steps for successfully launching your model. Perhaps the problem could be solved with a Dockerfile that pins all the dependencies. I forked the repository and added a Dockerfile; feel free to use it as a draft if you wish.

Thank you very much for your work; I look forward to your response!

maytusp commented 2 months ago

Hello, thank you for your interest in our work, and sorry for the very late reply; I have been on vacation.

  1. The weights used in config.yaml can be downloaded from the checkpoints linked in the README (SE-ResNeXt-50). The main agent is DD-PPO (BatchNorm version). See the example command after this list for pointing the evaluation at a downloaded checkpoint.

  2. The Gibson dataset here refers to a set of 112k images collected by a robot navigating in Gibson. Due to the Gibson terms of use, I may not be able to distribute these images, but you can create your own Gibson image dataset by saving the camera frames of a navigating agent in Gibson scenes (see the sketch at the end of this reply). You can download the Gibson scenes here: https://docs.google.com/forms/d/e/1FAIpQLScWlx5Z1DM1M-wTSXaa6zV8lTFkPmTHW1LqMsoCBDWsTDjBkQ/viewform

  3. If you have never used Habitat before, I suggest first running simple navigation agents with the official Habitat code and getting familiar with the essential scripts in habitat-baselines (especially the ppo and ddppo folders). Our code is a minor modification of the official code (only the neural-network inference parts). Please let me know if you have further questions.

  4. A Dockerfile can be found in the official Habitat repository.
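
Regarding point 1, once the checkpoint is downloaded you should be able to point the evaluation at it through the config. The exact key can depend on your habitat-baselines version; `habitat_baselines.eval_ckpt_path_dir` is the usual override, and the path below is just a placeholder:

python -u -m habitat_baselines.run habitat_baselines.evaluate=True --config-name pointnav/tta-nav/adapt/lighting_5.yaml habitat_baselines.eval_ckpt_path_dir=/path/to/downloaded/checkpoint.pth

Regarding point 2, here is a minimal sketch of how camera frames can be saved while an agent moves through Gibson scenes. It is not the exact script we used: the config name, output directory, and random-action policy are placeholders that you would adapt to your habitat-lab version and to a trained navigation agent.

```python
# Minimal frame-collection sketch for building a Gibson image dataset.
# Assumptions: habitat-lab with the Gibson PointNav benchmark config and
# Gibson scene data installed; CONFIG_PATH / OUT_DIR are placeholders.
import os

import imageio
import habitat

CONFIG_PATH = "benchmark/nav/pointnav/pointnav_gibson.yaml"  # adjust to your habitat-lab version
OUT_DIR = "data/gibson_images"                               # hypothetical output directory
NUM_EPISODES = 100                                           # number of episodes to record

os.makedirs(OUT_DIR, exist_ok=True)
frame_id = 0

with habitat.Env(config=habitat.get_config(CONFIG_PATH)) as env:
    for _ in range(NUM_EPISODES):
        observations = env.reset()
        while not env.episode_over:
            # Save the agent's RGB camera frame.
            imageio.imwrite(os.path.join(OUT_DIR, f"{frame_id:07d}.png"), observations["rgb"])
            frame_id += 1
            # A random action stands in here for a trained navigation policy.
            observations = env.step(env.action_space.sample())
```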