rpng / calc

Convolutional Autoencoder for Loop Closure
BSD 3-Clause "New" or "Revised" License

about training #13

Closed SummerTime2777 closed 5 years ago

SummerTime2777 commented 5 years ago

Hello, I read your paper and understand that the training dataset is the Places dataset, but which version of the dataset did you use for training, and how many iterations did you train for? There are two versions: the full 1.8 TB one and a resized one. I trained the network on the resized version for 400,000 iterations, but the experimental results were very poor. Can you tell me which version you used and the number of training iterations? Thank you!

nmerrill67 commented 5 years ago

That is the dataset I used. The results will vary greatly with the batch size, not to mention the random seed. I set random seeds in the dataset preprocessing, but I believe Caffe has its own random seeds, so the best iteration number may not be exactly reproducible. I trained for over 500k iterations, then used the testNet script to find the best snapshot, which turned out to be iteration 220k.
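The snapshot-selection step described above can be sketched as a simple argmax over validation scores per saved iteration. This is a hypothetical illustration, not the actual testNet script; the function name and the scores below are placeholders.

```python
# Hypothetical sketch of snapshot selection: evaluate each saved
# training iteration on a validation metric and keep the best one.
def best_snapshot(scores):
    """Return (iteration, score) for the highest validation score."""
    it = max(scores, key=scores.get)
    return it, scores[it]

# Placeholder validation scores per saved iteration (not real results).
scores = {100_000: 0.71, 220_000: 0.88, 500_000: 0.84}
print(best_snapshot(scores))  # -> (220000, 0.88)
```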

SummerTime2777 commented 5 years ago

Thank you for your reply. I find that the number of iterations needed is proportional to the size of the training set. I will try setting the number of iterations to 220k and verify the test results. One remaining question: did you use the default batch size (256) or another value? I trained on a single GPU.
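The proportionality mentioned above is just epoch bookkeeping: at a fixed batch size, the iterations needed to cover the dataset once grow linearly with the dataset size. A minimal sketch, with purely illustrative numbers:

```python
import math

def iterations_per_epoch(dataset_size, batch_size):
    """Iterations needed to see every sample once at a fixed batch size."""
    return math.ceil(dataset_size / batch_size)

def epochs_covered(iterations, dataset_size, batch_size):
    """Approximate number of passes over the data after `iterations` steps."""
    return iterations * batch_size / dataset_size

# e.g. a hypothetical 2,000,000-image training set with batch size 256:
print(iterations_per_epoch(2_000_000, 256))  # -> 7813
```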

nmerrill67 commented 5 years ago

Sorry for the delay. The batch size we used to train the released model is 768 per GPU, with two GPUs, for an effective batch size of 1536. This was chosen to saturate the memory of our GTX 1080 Ti GPUs.
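The multi-GPU arithmetic above follows the usual data-parallel convention: the effective batch size is the per-GPU batch times the number of GPUs. A minimal sketch:

```python
def effective_batch_size(per_gpu_batch, num_gpus):
    """Effective batch size under data-parallel training."""
    return per_gpu_batch * num_gpus

# The configuration described above: 768 per GPU on two GPUs.
print(effective_batch_size(768, 2))  # -> 1536
```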