pamzerbhu closed this issue 4 years ago.
Hi,
You can set the num_mini_batch argument by just adding --num_mini_batch 1
as a command-line argument instead of modifying the code.
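For example, assuming the rest of your setup uses the defaults, the full invocation would simply be:

    python main.py --num_mini_batch 1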
Memory usage increases because the code keeps a buffer of the last 500000 frames for training the Neural-SLAM module, and it will keep growing until this buffer is filled. You can reduce the memory footprint with the slam_memory_size argument, for example --slam_memory_size 100000.
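Both flags can be passed together in a single run; for example, using the example values above (which are not tuned recommendations):

    python main.py --num_mini_batch 1 --slam_memory_size 100000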
Hi Neural-SLAM author,
When I first train this network with the command "python main.py", an error is raised at line 137 of rollout_storage.py: num_processes is divided by zero. My GPU is a Titan Xp with 12 GB of memory, so I modified line 245 of arguments.py to set args.num_mini_batch = args.num_processes // 1. Did I do it right?
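To check my understanding, here is a rough sketch of what I think is happening; the concrete values and the line that derives num_mini_batch are my guesses rather than the repository's exact code:

    # My rough understanding of the error (not the repository's exact code).
    # I assume arguments.py derives num_mini_batch from num_processes with
    # integer division, so a small num_processes can round it down to zero.
    num_processes = 1                      # hypothetical value on my machine
    num_mini_batch = num_processes // 2    # integer division gives 0

    # rollout_storage.py (around line 137) then splits the processes into
    # mini-batches, roughly like this, which fails when num_mini_batch == 0:
    try:
        num_envs_per_batch = num_processes // num_mini_batch
    except ZeroDivisionError:
        print("ZeroDivisionError: num_mini_batch is 0")

    # My edit at line 245 of arguments.py makes the divisor non-zero:
    num_mini_batch = num_processes // 1    # i.e. num_mini_batch == num_processes
    num_envs_per_batch = num_processes // num_mini_batch   # now works (== 1)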
Another problem: when I train with the modified code, memory usage keeps increasing over time. Would you mind helping me solve this problem? Thanks, Pamzerbhu