devendrachaplot / Neural-SLAM

Pytorch code for ICLR-20 Paper "Learning to Explore using Active Neural SLAM"
http://www.cs.cmu.edu/~dchaplot/projects/neural-slam.html
MIT License

Regarding why greater noise (0-1) gives better performance #38

Closed small-zeng closed 2 years ago

small-zeng commented 3 years ago

When evaluating performance on the large scenes of the Gibson dataset, I find the following. When I run:

```
CUDA_VISIBLE_DEVICES=1 nohup python main.py --split val_mt_large --eval 1 \
    --auto_gpu_config 0 -n 4 --num_episodes 71 --num_processes_per_gpu 4 \
    --load_global pretrained_models/model_best.global --train_global 0 \
    --load_local pretrained_models/model_best.local --train_local 0 \
    --load_slam pretrained_models/model_best.slam --train_slam 0 \
    -na 0 -no 0 -d random/large/na_no/10/ --exp_name exp1 &
```

the result is:

```
INFO:root:Final Exp Area: 7.39262, 11.80636, 15.54326, 18.67932, 21.43694, 24.30778, 26.60989, 28.66229, 30.52884, 31.91293, 33.23227, 34.40311, 35.57489, 36.52914, 37.40491, 38.11405, 38.83797, 39.46507, 39.93938, 40.40798, 40.79878, 41.14798, 41.52818, 41.94619, 42.22427, 42.41448, 42.58310, 42.80809, 42.97796, 43.14965, 43.28536, 43.45905, 43.63735, 43.75960, 43.83435, 43.87316, 43.91094, 43.96829, 44.00480, 44.09988,
Final Exp Ratio: 0.12119, 0.19422, 0.25642, 0.30809, 0.35348, 0.40149, 0.44013, 0.47382, 0.50535, 0.52876, 0.55067, 0.57085, 0.59045, 0.60642, 0.62087, 0.63281, 0.64491, 0.65493, 0.66285, 0.67065, 0.67727, 0.68301, 0.68929, 0.69624, 0.70096, 0.70419, 0.70707, 0.71077, 0.71370, 0.71644, 0.71862, 0.72155, 0.72463, 0.72675, 0.72795, 0.72859, 0.72925, 0.73024, 0.73083, 0.73222,
```

But when I run the same command with noise enabled (i.e. without `-na 0 -no 0`):

```
CUDA_VISIBLE_DEVICES=2 nohup python main.py --split val_mt_large --eval 1 \
    --auto_gpu_config 0 -n 4 --num_episodes 71 --num_processes_per_gpu 4 \
    --load_global pretrained_models/model_best.global --train_global 0 \
    --load_local pretrained_models/model_best.local --train_local 0 \
    --load_slam pretrained_models/model_best.slam --train_slam 0 \
    -d random/large/not_na_no/10/ --exp_name exp1 &
```

the result is:

```
INFO:root:Final Exp Area: 7.31249, 11.80577, 15.85805, 19.26320, 22.18673, 24.92665, 27.21441, 29.24679, 31.09027, 32.88026, 34.60589, 35.94048, 37.32295, 38.54152, 39.78651, 40.78866, 41.66500, 42.57974, 43.40369, 44.00271, 44.68042, 45.28838, 45.92671, 46.72835, 47.31259, 47.82757, 48.24503, 48.59364, 49.01058, 49.43538, 49.96648, 50.30362, 50.61988, 50.87675, 51.09861, 51.33011, 51.57324, 51.85214, 52.06178, 52.21830,
Final Exp Ratio: 0.12001, 0.19400, 0.26143, 0.31787, 0.36630, 0.41165, 0.44932, 0.48264, 0.51294, 0.54286, 0.57141, 0.59298, 0.61599, 0.63587, 0.65632, 0.67327, 0.68791, 0.70288, 0.71635, 0.72616, 0.73729, 0.74750, 0.75810, 0.77142, 0.78094, 0.78921, 0.79620, 0.80199, 0.80882, 0.81587, 0.82439, 0.82988, 0.83487, 0.83878, 0.84236, 0.84617, 0.85020, 0.85468, 0.85803, 0.86051,
```

My question: why does greater noise lead to better performance? Maybe there are some other details I'm missing?

Thanks

devendrachaplot commented 3 years ago

You should turn off the pose estimator if you are turning off motion and actuation noise. Try adding the `--use_pose_estimation 0` argument to the first command.
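For reference, the noise-free command from the first run with the suggested flag added would look something like this (a sketch based on the flags used above; paths and `-n`/GPU settings are the asker's, not verified here):

```shell
# Noise disabled (-na 0 -no 0), so also disable the learned pose estimator:
# with no actuation/sensor noise, its learned "corrections" would only
# inject error into otherwise exact poses.
CUDA_VISIBLE_DEVICES=1 nohup python main.py --split val_mt_large --eval 1 \
    --auto_gpu_config 0 -n 4 --num_episodes 71 --num_processes_per_gpu 4 \
    --load_global pretrained_models/model_best.global --train_global 0 \
    --load_local pretrained_models/model_best.local --train_local 0 \
    --load_slam pretrained_models/model_best.slam --train_slam 0 \
    --use_pose_estimation 0 \
    -na 0 -no 0 -d random/large/na_no/10/ --exp_name exp1 &
```

The intuition is that the pretrained SLAM module was trained to predict pose corrections under noisy actuation; when the noise is removed but the estimator stays on, its corrections are applied to already-accurate poses and degrade the map, which would explain why the noisy run appears to perform better.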