devendrachaplot / Neural-SLAM

PyTorch code for the ICLR 2020 paper "Learning to Explore using Active Neural SLAM"
http://www.cs.cmu.edu/~dchaplot/projects/neural-slam.html
MIT License

num_mini_batch = 0 when using one GPU to train #9

Closed: pamzerbhu closed this issue 4 years ago

pamzerbhu commented 4 years ago

Hi, Neural-SLAM's author,

  1. When I first train this network with the command `python main.py`, an error occurs at line 137 of rollout_storage.py: num_processes is divided by zero. My GPU is a Titan Xp with 12 GB of memory, so I modified line 245 of arguments.py to set `args.num_mini_batch = args.num_processes // 1`. Did I do it right? (See the sketch after this list.)

  2. Another problem: when I train with the modified code, memory keeps increasing as time goes on (see the attached screenshots of memory usage). Would you mind helping me solve this problem? Thanks, Pamzerbhu
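For context, here is a minimal sketch of the failure mode in question 1. The variable names mirror the arguments discussed in this thread, but the snippet is illustrative rather than the repo's actual code: PPO-style training splits num_processes rollouts into num_mini_batch mini-batches, so a value of 0 triggers the division-by-zero error.

```python
num_processes = 8  # number of parallel environment processes

# On a single GPU, the default heuristic reportedly yields 0:
num_mini_batch = 0
# mini_batch_size = num_processes // num_mini_batch  # raises ZeroDivisionError

# Setting num_mini_batch to 1 puts all rollouts into one mini-batch:
num_mini_batch = 1
mini_batch_size = num_processes // num_mini_batch
print(mini_batch_size)  # 8
```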

devendrachaplot commented 4 years ago

Hi,

  1. You can set the num_mini_batch argument by just adding --num_mini_batch 1 as a command-line argument instead of modifying the code.
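For example, combining the command from the original post with the flag:

```
python main.py --num_mini_batch 1
```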

  2. Memory is increasing because the code keeps a buffer of the last 500,000 frames for training the Neural-SLAM module. Memory usage will keep growing until this buffer is filled. You can reduce it with the slam_memory_size argument, for example --slam_memory_size 100000.
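For intuition, here is a minimal sketch of a size-capped FIFO buffer (illustrative only; the class and method names are assumptions, not the repo's actual implementation). Once the buffer reaches capacity, the oldest frames are evicted, so memory usage plateaus instead of growing without bound:

```python
from collections import deque

class SlamMemory:
    """Illustrative frame buffer: memory grows until `capacity`
    frames are stored, then plateaus as old frames are evicted."""

    def __init__(self, capacity=500000):
        # A deque with maxlen drops the oldest element on overflow.
        self.frames = deque(maxlen=capacity)

    def add(self, frame):
        self.frames.append(frame)

# A smaller capacity, analogous to passing --slam_memory_size 100000:
memory = SlamMemory(capacity=100000)
for step in range(200000):
    memory.add(step)          # stand-in for a frame
print(len(memory.frames))     # 100000: capped at capacity
```

The two flags from this thread can also be combined, e.g. `python main.py --num_mini_batch 1 --slam_memory_size 100000`.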