Closed — WANGYINGYU closed this issue 5 years ago
Hi. The default model we provide in this code base is indeed memory-consuming. You can start with batch size of 1 or 2.
@fangchangma Thank you for your reply. When I train the model with a batch size of 1 or 2, the program prints "warning: diff.nelement()==0 in PhotometricLoss (this is expected during early stage of training, try larger batch size)", so I wonder whether a small batch size will affect the accuracy of the trained model. If it does, what do you think the minimum batch size should be?
What do you mean by batch-size of 1/2?
The warning appears when the inverse-warped RGB image is entirely black (i.e., no warped pixel falls within the field of view). This usually happens at initialization, when the depth prediction is still far off from ground truth.
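For reference, the condition behind that warning can be sketched as follows. This is a hypothetical `photometric_loss`, not the repository's exact implementation: it assumes out-of-view pixels in the warped image are filled with zeros, masks them out, and skips the loss when the masked difference tensor is empty.

```python
import torch

def photometric_loss(warped, target, eps=1e-6):
    # Hypothetical sketch: pixels where the inverse-warped image is black
    # are assumed to have fallen outside the source field of view.
    # valid: (N, 1, H, W) mask of pixels that received a warped value
    valid = warped.abs().sum(dim=1, keepdim=True) > eps
    diff = (warped - target) * valid
    # Keep only the valid entries; this tensor is empty when every
    # warped pixel was out of view (the situation the warning reports).
    diff = diff[valid.expand_as(diff)]
    if diff.nelement() == 0:
        print("warning: diff.nelement()==0 in PhotometricLoss")
        # Return a zero loss so training can continue for this batch.
        return torch.zeros((), requires_grad=True)
    return diff.abs().mean()
```

With a larger batch it is less likely that *every* image in the batch warps entirely out of view, which is why the message suggests trying a larger batch size.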
@fangchangma I mean this warning appears when the batch size is set to 1 or 2, so I wonder whether such a small batch size will lead to a bad result. What do you think is the minimum batch size needed to get good results?
Hi, I also get this warning. Does it actually affect the final result? And do you know what the proper batch size is? Thanks!
Hello, when I am training the model, I get a "CUDA: out of memory" error. I have tried reducing the batch size, but even a small batch size does not seem to fit in memory for this work. Can you give me some advice about the minimum batch size?