devendrachaplot / Neural-SLAM

Pytorch code for ICLR-20 Paper "Learning to Explore using Active Neural SLAM"
http://www.cs.cmu.edu/~dchaplot/projects/neural-slam.html
MIT License

assert PointNavDatasetV1.check_config_paths_exist(config) AssertionError #13

CloudWave818 closed this issue 4 years ago

CloudWave818 commented 4 years ago

When running:

    python main.py --split val_mt --eval 1 \
        --auto_gpu_config 0 -n 14 --num_episodes 71 --num_processes_per_gpu 7 \
        --map_size_cm 4800 --global_downscaling 4 \
        --load_global pretrained_models/model_best.global --train_global 0 \
        --load_local pretrained_models/model_best.local --train_local 0 \
        --load_slam pretrained_models/model_best.slam --train_slam 0

I get:

Dumping at ./tmp//models/exp1/
Namespace(alpha=0.99, auto_gpu_config=0, camera_height=1.25, clip_param=0.2, collision_threshold=0.2, cuda=True, du_scale=2, dump_location='./tmp/', entropy_coef=0.001, env_frame_height=256, env_frame_width=256, eps=1e-05, eval=1, exp_loss_coeff=1.0, exp_name='exp1', frame_height=128, frame_width=128, gamma=0.99, global_downscaling=2, global_hidden_size=256, global_lr=2.5e-05, goals_size=2, hfov=90.0, load_global='pretrained_models/model_best.global', load_local='pretrained_models/model_best.local', load_slam='pretrained_models/model_best.slam', local_hidden_size=512, local_optimizer='adam,lr=0.0001', local_policy_update_freq=5, log_interval=10, map_pred_threshold=0.5, map_resolution=5, map_size_cm=2400, max_episode_length=1000, max_grad_norm=0.5, no_cuda=False, noise_level=1.0, noisy_actions=1, noisy_odometry=1, num_episodes=71, num_global_steps=40, num_local_steps=25, num_mini_batch=7, num_processes=14, num_processes_on_first_gpu=0, num_processes_per_gpu=7, obs_threshold=1, obstacle_boundary=5, pose_loss_coeff=10000.0, ppo_epoch=4, pretrained_resnet=1, print_images=0, proj_loss_coeff=1.0, randomize_env_every=1000, save_interval=1, save_periodic=500000, save_trajectory_data='0', seed=1, short_goal_dist=1, sim_gpu_id=0, slam_batch_size=72, slam_iterations=10, slam_memory_size=500000, slam_optimizer='adam,lr=0.0001', split='val_mt', task_config='tasks/pointnav_gibson.yaml', tau=0.95, total_num_scenes='auto', train_global=0, train_local=0, train_slam=0, use_deterministic_local=0, use_gae=False, use_pose_estimation=2, use_recurrent_global=0, use_recurrent_local=1, value_loss_coef=0.5, vis_type=1, vision_range=64, visualize=0)
Traceback (most recent call last):
  File "main.py", line 769, in <module>
    main()
  File "main.py", line 119, in main
    envs = make_vec_envs(args)
  File "/home/thomastao/SLAM/Neural-SLAM/env/__init__.py", line 7, in make_vec_envs
    envs = construct_envs(args)
  File "/home/thomastao/SLAM/Neural-SLAM/env/habitat/__init__.py", line 40, in construct_envs
    scenes = PointNavDatasetV1.get_scenes_to_load(basic_config.DATASET)
  File "/home/thomastao/SLAM/habitat-api/habitat/datasets/pointnav/pointnav_dataset.py", line 45, in get_scenes_to_load
    assert PointNavDatasetV1.check_config_paths_exist(config)
AssertionError

My data is placed in /home/thomastao/SLAM/Neural-SLAM/data/datasets/pointnav/gibson/v1/train, and my scene_datasets is placed in /home/thomastao/SLAM/Neural-SLAM/data/scene_datasets/gibson. Do I need to modify the config file at /home/thomastao/SLAM/habitat-api/configs/datasets/pointnav/gibson.yaml?
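For reference, the assertion that fails here only checks that the paths derived from the dataset config exist on disk. A minimal sketch of the same check (the `<split>/<split>.json.gz` filename pattern is my assumption based on the standard habitat-api PointNav layout, not taken from this repo):

```python
import os

def check_split_paths(data_root, split):
    """Mirror the existence checks behind the failing assertion.

    Assumes the habitat-api PointNav layout
    datasets/pointnav/gibson/v1/<split>/<split>.json.gz; adjust if
    your layout differs.
    """
    episodes_file = os.path.join(
        data_root, "datasets", "pointnav", "gibson", "v1",
        split, split + ".json.gz")
    scenes_dir = os.path.join(data_root, "scene_datasets")
    return os.path.isfile(episodes_file), os.path.isdir(scenes_dir)

if __name__ == "__main__":
    found_episodes, found_scenes = check_split_paths(
        "/home/thomastao/SLAM/Neural-SLAM/data", "val_mt")
    print("episode file found:", found_episodes)
    print("scene_datasets found:", found_scenes)
```

If either check prints False, the AssertionError above is expected, since only the train split is present and val_mt has not been generated yet.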

devendrachaplot commented 4 years ago

Hi, you should not need to modify the config file. Did you run the convert_datasets.py script to create the val_mt split, as described here: https://github.com/devendrachaplot/Neural-SLAM/blob/master/docs/INSTRUCTIONS.md#converting-datasets

    python scripts/convert_datasets.py
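After running the conversion, it can help to sanity-check that episode files actually landed under the new split directory. A small sketch (the directory layout and .json.gz naming are assumptions based on the linked instructions, not the script's documented output):

```python
import glob
import os

def list_split_episode_files(data_root, split):
    """List gzipped episode files for a converted split.

    Assumes converted episodes live somewhere under
    datasets/pointnav/gibson/v1/<split>/; treat this layout as an
    assumption and adjust to your setup.
    """
    pattern = os.path.join(
        data_root, "datasets", "pointnav", "gibson", "v1",
        split, "**", "*.json.gz")
    return sorted(glob.glob(pattern, recursive=True))

if __name__ == "__main__":
    files = list_split_episode_files("data", "val_mt")
    print(len(files), "episode file(s) found for val_mt")
```

An empty result would suggest the conversion did not run or wrote to a different location.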

CloudWave818 commented 4 years ago

> Hi, You should not need to modify the config file. Did you run the convert_datasets.py script to create the val_mt split as described here: https://github.com/devendrachaplot/Neural-SLAM/blob/master/docs/INSTRUCTIONS.md#converting-datasets python scripts/convert_datasets.py

Thank you, now I can run the evaluation command successfully. Thank you very much.

zhanghua7099 commented 3 years ago

Hi!

I get the same error. I have run the command:

    python scripts/convert_datasets.py

But I still get this error when I run:

    python main.py -n1 --auto_gpu_config 0 --split val_mini

Can you give me some suggestions?